sentences: sequence
labels: sequence
[ "Knowledge Graphs (KG) are multi-relational graphs consisting of entities as nodes and relations among them as typed edges.", "Goal of the Question Answering over KG (KGQA) task is to answer natural language queries posed over the KG.", "Multi-hop KGQA requires reasoning over multiple edges of the KG to arrive at the right answer.", "KGs are often incomplete with many missing links, posing additional challenges for KGQA, especially for multi-hop KGQA.", "Recent research on multihop KGQA has attempted to handle KG sparsity using relevant external text, which isn't always readily available.", "In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.", "Such KG embedding methods, even though highly relevant, have not been explored for multi-hop KGQA so far.", "We fill this gap in this paper and propose EmbedKGQA.", "EmbedKGQA is particularly effective in performing multi-hop KGQA over sparse KGs.", "EmbedKGQA also relaxes the requirement of answer selection from a pre-specified neighborhood, a sub-optimal constraint enforced by previous multi-hop KGQA methods.", "Through extensive experiments on multiple benchmark datasets, we demonstrate EmbedKGQA's effectiveness over other state-of-the-art baselines.", "Knowledge Graphs (KG) are multi-relational graphs consisting of millions of entities (e.g., San Jose , California , etc.) and relationships among them (e.g., San Jose-cityInState-California ).", "Examples of a few large KGs include Wikidata (Google, 2013), DBPedia (Lehmann et al., 2015), Yago (Suchanek et al., 2007), and NELL (Mitchell Equal contribution EmbedKGQA's source code is available at https://github.com/malllabiisc/EmbedKGQA Figure 1: Challenges with Multi-hop QA over Knowledge Graphs (KGQA) in sparse and incomplete KGs : Absence of the edge has genre(Gangster No. 1, Crime) in the incomplete KG makes it much harder to answer the input NL question, as the KGQA model potentially needs to reason over a longer path over the KG (marked by bold edges).", "et al., 2018).", "Question Answering over Knowledge Graphs (KGQA) has emerged as an important research area over the last few years (Zhang et al., 2018; Sun et al., 2019a).", "In KGQA systems, given a natural language (NL) question and a KG, the right answer is derived based on analysis of the question in the context of the KG.", "In multi-hop KGQA , the system needs to perform reasoning over multiple edges of the KG to infer the right answer.", "KGs are often incomplete, which creates additional challenges for KGQA systems, especially in case of multi-hop KGQA.", "Recent methods have used an external text corpus to handle KG sparsity (Sun et al., 2019a, 2018).", "For example, the method proposed in (Sun et al., 2019a) constructs a question-specific sub-graph from the KG, which is then augmented with supporting text documents.", "Graph CNN (Kipf and Welling, 2016) is then applied over this augmented sub-graph to arrive at the final answer.", "Unfortunately, availability and identification of relevant text corpora is a challenge on its own which limits broad-coverage applicability of such methods.", "Moreover, such methods also impose pre-specified heuristic neighborhood size limitation from which the true answer needs to be selected.", "This often makes the true answer out of reach of the model to select from.", "In order to illustrate these points, please consider the example shown in Figure", "1. 
In this example, Louis Mellis is the head entity in the input NL question, and Crime is the true answer we expect the model to select.", "If the edge has_genre(Gangster No. 1, Crime) were present in the KG, then the question could have been answered rather easily.", "However, since this edge is missing from the KG, as is often the case with similarly incomplete and sparse KGs, the KGQA model potentially has to reason over a longer path in the KG (marked by bold edges in the graph).", "Moreover, the KGQA model imposes a neighborhood size of 3 hops, which puts the true answer Crime out of reach.", "In a separate line of research, there has been a large body of work that utilizes KG embeddings to predict missing links in the KG, thereby reducing KG sparsity (Bordes et al., 2013; Trouillon et al., 2016; Yang et al., 2014a; Nickel et al., 2011).", "KG embedding methods learn high-dimensional embeddings for entities and relations in the KG, which are then used for link prediction.", "In spite of their high relevance, KG embedding methods have not been used for multi-hop KGQA; we fill this gap in this paper.", "In particular, we propose EmbedKGQA, a novel system which leverages KG embeddings to perform multi-hop KGQA.", "We make the following contributions in this paper:", "1. We propose EmbedKGQA, a novel method for the multi-hop KGQA task.", "To the best of our knowledge, EmbedKGQA is the first method to use KG embeddings for this task.", "EmbedKGQA is particularly effective in performing multi-hop KGQA over sparse KGs.", "2. EmbedKGQA relaxes the requirement of answer selection from a pre-specified local neighborhood, an undesirable constraint imposed by previous methods for this task.", "3. Through extensive experiments on multiple real-world datasets, we demonstrate EmbedKGQA's effectiveness over state-of-the-art baselines.", "We have made EmbedKGQA's source code available to encourage reproducibility.", "KGQA: In prior work (Li et al., 2018), TransE (Bordes et al., 2013) embeddings have been used to answer factoid-based questions.", "However, this requires ground-truth relation labeling for each question, and it does not work for multi-hop question answering.", "In another line of work, (Yih et al., 2015) and (Bao et al., 2016) proposed extracting a particular sub-graph to answer the question.", "In the method presented in (Bordes et al., 2014a), the sub-graph generated for a head entity is projected into a high-dimensional space for question answering.", "Memory Networks have also been used to learn high-dimensional embeddings of the facts present in the KG to perform QA (Bordes et al., 2015).", "Methods like (Bordes et al., 2014b) learn a similarity function between the question and the corresponding triple during training, and score the question against all the candidate triples at test time.", "(Yang et al., 2014b) and (Yang et al., 2015) utilize embedding-based methods to map natural language questions to logical forms.", "Methods like (Dai et al., 2016; Dong et al., 2015; Hao et al., 2017; Lukovnikov et al., 2017; Yin et al., 2016) utilize neural networks to learn scoring functions to rank the candidate answers.", "Some works like (Mohammed et al., 2017; Ture and Jojic, 2016) consider each relation as a label and model the QA task as a classification problem.", "Extending these kinds of approaches to multi-hop question answering is non-trivial.", "Recently, there has been some work in which a text corpus is incorporated as a knowledge source in addition to the KG to answer complex questions on
KGs (Sun et al., 2018, 2019a).", "Such approaches are useful in case the KG is incomplete.", "However, this leads to another level of complexity in the QA system, and text corpora might not always be available.", "KG completion methods: Link prediction in Knowledge Graphs using KG embeddings has become a popular area of research in recent years.", "The general framework is to define a scoring function over the triples (h, r, t) of a KG and to constrain it in such a way that the score for a correct triple is higher than the score for an incorrect triple.", "RESCAL (Nickel et al., 2011) and DistMult (Yang et al., 2015) learn a scoring function containing a bi-linear product between the head and tail entity vectors and a relation matrix.", "ComplEx (Trouillon et al., 2016) represents entity vectors and relation matrices in the complex space.", "SimplE (Kazemi and Poole, 2018) and TuckER (Balazevic et al., 2019) are based on Canonical Polyadic (CP) decomposition (Hitchcock, 1927) and Tucker decomposition (Tucker, 1966), respectively.", "TransE (Bordes et al., 2013) embeds entities in a high-dimensional real space and relations as translations between the head and tail entities.", "RotatE (Sun et al., 2019b), on the other hand, projects entities into the complex space, and relations are represented as rotations in the complex plane.", "ConvE (Dettmers et al., 2018) utilizes Convolutional Neural Networks to learn a scoring function between the head entity, tail entity, and relation.", "InteractE (Vashishth et al., 2019) improves upon ConvE by increasing feature interaction.", "In this section, we formally define a Knowledge Graph (KG) and then describe the link prediction task on incomplete KGs.", "We then describe KG embeddings and explain the ComplEx embedding model.", "Given a set of entities E and relations R, a Knowledge Graph G is a set of triples K such that K ⊆ E × R × E.", "A triple is represented as (h, r, t), with h, t ∈ E denoting the subject and object entities respectively and r ∈ R the relation between them.", "In link prediction, given an incomplete Knowledge Graph, the task is to predict which unknown links are valid.", "KG embedding models achieve this through a scoring function that assigns a score s = φ(h, r, t) ∈ ℝ, which indicates whether a triple is true, with the goal of being able to score all missing triples correctly.", "For each e ∈ E and r ∈ R, Knowledge Graph Embedding (KGE) models generate e_e ∈ ℝ^{d_e} and e_r ∈ ℝ^{d_r}, where e_e and e_r are d_e- and d_r-dimensional vectors, respectively.", "Each embedding method also has a scoring function φ : E × R × E → ℝ to assign a score φ(h, r, t) to a possible triple (h, r, t), with h, t ∈ E and r ∈ R.", "Models are trained such that for every correct triple (h, r, t) ∈ K and incorrect triple (h′, r′, t′) ∉ K, the model assigns scores such that φ(h, r, t) > 0 and φ(h′, r′, t′) < 0.", "A scoring function φ is generally a function of (e_h, e_r, e_t).", "ComplEx (Trouillon et al., 2016) is a tensor factorization approach that embeds relations and entities in the complex space.", "Given h, t ∈ E and r ∈ R, ComplEx generates e_h, e_r, e_t ∈ ℂ^d and defines the scoring function φ(h, r, t) = Re(⟨e_h, e_r, ē_t⟩) = Re(∑_{k=1}^{d} e_h^{(k)} e_r^{(k)} ē_t^{(k)}) (1), where ē_t is the complex conjugate of e_t, such that φ(h, r, t) > 0 for all true triples and φ(h, r, t) < 0 for false triples (a code sketch of this scoring function follows this sentence list).", "Re denotes the real part of a complex number.", "In this section, we first define the problem of KGQA and then
describe our model.", "Let E and R be the set of all entities and relations respectively in a KG G , and K E R E is the set of all available KG facts.", "The problem in KGQA involves, given a natural language question q and a topic entity e h E present in the question, the task is to extract an entity e t E that correctly answers the question q .", "We work in a setting where there is no fine-grained annotation present in the dataset, such as the question type or the exact logic reasoning steps.", "For example, co-actor is a combination of starred actor 1 and starred actor relations, but our model does not require this annotation.", "EmbedKGQA uses Knowledge Graph embeddings to answer multi-hop natural language questions.", "First it learns a representation of the KG in an embedding space.", "Then given a question it learns a question embedding.", "Finally it combines these embedding to predict the answer.", "In the following sections, we introduce the EmbedKGQA model.", "It consists of 3 modules:", "1. KG Embedding Module creates embeddings for all entities in the KG.", "2. Question Embedding Module finds the embedding of a question", "3. Answer Selection Module reduces the set of candidate answer entities and selects the final answer 4.2 KG Embedding Module ComplEx embeddings are trained for all h, t E and all r R in the KG such that e h , e r , e t C d .", "The entity embeddings are used for learning a triple scoring function between the head entity, question, and answer entity.", "Based on the coverage of the KG entities in the QA training set, the entity embeddings learned here are either kept frozen or allowed to be fine-tuned in the subsequent steps.", "This module embeds the natural language question q to a fixed dimension vector e q C d .", "This is done using a feed-forward neural network that first embeds the question q using RoBERTa (Liu et al., 2019) into a 768-dimensional vector.", "This is then passed through 4 fully connected linear layers with ReLU activation and finally projected onto the complex space C d .", "Given a question q , topic entity h E and set of answer entities A E , it learns the question embedding in a way such that ( e h , e q , e a ) > 0 a A ( e h , e q , e a ) < 0 a / A where is the ComplEx scoring function (1) and e a , e a are entity embeddings learnt in the previous step.", "For each question, the score ( . 
) is calculated with all the candidate answer entities a′ ∈ E.", "The model is learned by minimizing the binary cross-entropy loss between the sigmoid of the scores and the target labels, where the target label is 1 for the correct answers and 0 otherwise.", "Label smoothing is done when the total number of entities is large.", "At inference, the model scores the (head, question) pair against all possible answers a′ ∈ E.", "For relatively smaller KGs like MetaQA, we simply select the entity with the highest score.", "However, if the knowledge graph is large, pruning the candidate entities can significantly improve the performance of EmbedKGQA.", "The pruning strategy is described in the following section.", "Similar to PullNet (Sun et al., 2019a), we learn a scoring function S(r, q) which ranks each relation r ∈ R for a given question q.", "Let h_r be the embedding of a relation r and q′ = (<s>, w_1, ..., w_|q|, </s>) be the sequence of words in question q which are input to RoBERTa.", "The scoring function is defined as the sigmoid of the dot product of the final output of the last hidden layer of RoBERTa (h_q) and the embedding of relation r (h_r).", "Among all the relations, we select those which have a score greater than 0.5.", "This set is denoted R_a.", "For each candidate entity a′ that we have obtained so far (Section 4.4), we find the relations in the shortest path between the head entity h and a′.", "Let this set of relations be R_a′.", "The relation score for each candidate answer entity is then defined as the size of the intersection |R_a ∩ R_a′|.", "We use a linear combination of the relation score and the ComplEx score to find the answer entity, i.e., a_ans = argmax_{a′ ∈ E} (φ(e_h, e_q, e_{a′}) + γ · RelScore(a′)),", "where γ is a tunable hyperparameter (a code sketch of this answer-selection step follows this record).", "1. MetaQA (Zhang et al., 2018) is a large-scale QA dataset with more than 400k questions in the movie domain.", "It has 1-hop, 2-hop, and 3-hop questions.", "In our experiments, we used the vanilla version of the questions.", "Along with the QA data, MetaQA also provides a KG with 135k triples, 43k entities, and nine relations.", "2.
WebQuestionsSP (Yih et al., 2016) is a smaller QA dataset with 4,737 questions.", "The questions in this dataset are 1-hop and 2-hop questions and are answerable through the Freebase KG.", "For ease of experimentation, we restrict the KB to be a subset of Freebase which contains all facts that are within 2 hops of any entity mentioned in the questions of WebQuestionsSP.", "We further prune it to contain only those relations that are mentioned in the dataset.", "This smaller KB has 1.8 million entities and 5.7 million triples.", "We compare our model with the Key-Value Memory Network (Miller et al., 2016), GraftNet (Sun et al., 2018), and PullNet (Sun et al., 2019a) on the WebQuestionsSP dataset.", "For the MetaQA dataset, we also compare with VRN (Zhang et al., 2018).", "These methods implement multi-hop KGQA and, except VRN, use an additional text corpus to mitigate the KG sparsity problem.", "Key-Value Memory Network (KVMem) (Miller et al., 2016) is one of the first models that attempts to do QA over incomplete KBs by augmenting them with text.", "It maintains a memory table which stores KB facts and text encoded into key-value pairs and uses this for retrieval.", "GraftNet (Sun et al., 2018) uses heuristics to create a question-specific subgraph containing KG facts, entities, and sentences from the text corpora and then uses a variant of graph CNN (Kipf and Welling, 2016) to perform reasoning over it.", "PullNet (Sun et al., 2019a) also creates a question-specific sub-graph, but instead of using heuristics, it learns to pull facts and sentences from the data to create a more relevant sub-graph. (Table 2, results on the MetaQA dataset, 1-hop/2-hop/3-hop under KG-Full and then KG-50: VRN 97.5/89.9/62.5 and not reported; GraftNet 97.0/94.8/77.7 and 64.0 (91.5)/52.6 (69.5)/59.2 (66.4); PullNet 97.0/99.9/91.4 and 65.1 (92.4)/52.1 (90.4)/59.7 (85.2); KV-Mem 96.2/82.7/48.9 and 63.6 (75.7)/41.8 (48.4)/37.6 (35.2); EmbedKGQA (Ours) 97.5/98.8/94.8 and 83.9/91.8/70.3.)", "The complete KG setting is the easiest setting for QA because the datasets are created in such a way that the answer always exists in the KG, and there is no missing link in the path.", "However, it is not a realistic setting, and the QA model should also be able to work on an incomplete KG.", "So we simulate an incomplete KB by randomly removing half of the triples in the KB (we randomly drop a fact with probability = 0.5).", "We call this setting KG-50, and we call the full KG setting KG-Full in the text.", "In the next section, we will answer the following questions: Q1.", "Can Knowledge Graph embeddings be used to perform multi-hop KGQA?", "(Section 5.3)", "Q2.", "Can EmbedKGQA be used to answer questions when there is no direct path between the head entity and the answer entity?", "(Section 5.4)", "Q3.", "How much does the answer selection module help in the final performance of our model?", "(Section 5.5) 5.3 KGQA results: In this section, we compare our model with baseline models on the MetaQA and WebQuestionsSP datasets.", "MetaQA has different partitions of the dataset for 1-hop, 2-hop, and 3-hop questions.", "In the full KG setting (MetaQA KG-Full), our model is comparable to the state of the art for 2-hop questions and establishes the state of the art for 3-hop questions.", "EmbedKGQA performs similarly to the state of the art in the case of 1-hop questions, which is expected because the answer node is directly connected to the head node and the model is able to learn the corresponding relation embedding from the question.", "On the other hand, performance on 2-hop and 3-hop
questions suggests that EmbedKGQA is able to infer the correct relation from the neighboring edges, because the KG embeddings can model the composition of relations.", "PullNet and GraftNet also perform similarly well because the answer entity lies in the question sub-graph most of the time.", "We have also tested our method in the incomplete KG setting, as explained in the previous section.", "Here we find that the accuracy of all baselines decreases significantly compared to the full KG setting, while EmbedKGQA achieves state-of-the-art performance.", "This is because the MetaQA KG is fairly sparse, with only 135k triples for 43k entities.", "So when 50% of the triples are removed (as is done in MetaQA KG-50), the graph becomes very sparse, with an average of only 1.66 links per entity node.", "This causes many head entity nodes of questions to have much longer paths (> 3) to their answer node.", "Hence, models that require question-specific sub-graph construction (GraftNet, PullNet) are unable to recall the answer entity in their generated sub-graph and therefore perform poorly.", "Their performance improves only after including additional text corpora.", "On the other hand, EmbedKGQA does not limit itself to a sub-graph; utilizing the link prediction properties of the KG embeddings, EmbedKGQA is able to infer relations along missing links.", "WebQuestionsSP has a relatively small number of training examples but uses a large KG (Freebase) as background knowledge.", "This makes multi-hop KGQA much harder.", "Since not all entities of the KG are covered in the training set, freezing the (Table 3, performance on the WebQuestionsSP dataset, KG-Full and KG-50: KV-Mem 46.7 and 32.7 (31.6); GraftNet 66.4 and 48.2 (49.7); PullNet 68.1 and 50.1 (51.9); EmbedKGQA 66.6 and 53.2)", "entity embeddings after learning them during the KG embedding learning phase (Section 4.2) is necessary.", "Results on WebQuestionsSP (Table 3) highlight the fact that, even with a small number of training examples, EmbedKGQA can learn good question embeddings that can infer the multi-hop path required to answer the questions.", "Our method on WebQSP KG-50 outperforms all baselines, including PullNet, which uses extra textual information and is the state-of-the-art model.", "Even though WebQuestionsSP has fewer questions, EmbedKGQA is able to learn good question embeddings that can infer missing links in the KG.", "This can be attributed to the fact that relevant and necessary information is being captured through KG embeddings, implicitly.", "State-of-the-art KGQA models like PullNet and GraftNet require a path between the head entity and the answer entity to be present in the Knowledge Graph to answer the question.", "For example, in PullNet, the answer is restricted to be one of the entities present in the extracted question subgraph.", "For the incomplete KG case where only 50% of the original triples are present, PullNet (Sun et al., 2019a) reports a recall of 0.544 on the MetaQA 1-hop dataset.", "This means that only for 54.4 percent of questions are all the answer entities present in the extracted question subgraph, and this puts a hard limit on how many questions the model can answer in this setting.", "EmbedKGQA, on the other hand, uses Knowledge Graph embeddings rather than a localized sub-graph to answer the question.", "It uses the head embedding and question embedding, which implicitly capture the knowledge of all observed and unobserved links around the head node.", "This is possible because of the link prediction
property of KG embeddings. (Table 4, QA accuracy on MetaQA 1-hop for the experiments in which there is no link between head entity and answer entity: ComplEx 20.1, EmbedKGQA 29.9.)", "So unlike other QA systems, even if there is no path between the head and answer entity, our model should be able to answer the question if there is sufficient information in the KG to be able to predict that path (see Fig. 1).", "We design an experiment to test this capability of our model.", "For all questions in the validation set of the MetaQA 1-hop dataset, we removed all the triples from the Knowledge Graph that can be directly used to answer the question.", "For example, given the question 'what language is [PK] in' in the validation set, we removed the triple (PK, in_language, Hindi) from the KG.", "The dataset also contains paraphrases of the same question, e.g., 'what language is the movie [PK] in' and 'what is the language spoken in the movie [PK]'.", "We also removed all paraphrases of validation set questions from the training dataset, since we only want to evaluate the KG completion property of our model and not linguistic generalization.", "In such a setting, we expect models that rely only on sub-graph retrieval to achieve 0 hits@1.", "However, our model delivers a significantly better 29.9 hits@1 in this setting.", "This shows that our model can capture the KG completion property of ComplEx embeddings and apply it to answer questions that would otherwise be impossible to answer.", "Further, if we know the relation corresponding to each question, then the problem of 1-hop KGQA is the same as KG completion in an incomplete Knowledge Graph.", "Using the same training KG as above and using the removed triples as the test set, we do tail prediction using KG embeddings.", "Here we obtain 20.1 hits@1.", "The lower score can be attributed to the fact that the ComplEx embedding uses only the KG, while our model uses the QA data as well, which in itself represents knowledge.", "Our model is first trained on the KG and then uses these embeddings to train the QA model, and thus it can leverage the knowledge present in both the KG and the QA data.", "We analyse the effect of the answer selection module (Section 4.4) on EmbedKGQA on the WebQuestionsSP dataset by ablating the relation matching module.", "Furthermore, in order to compare with other methods that restrict the answer to a neighbourhood in the KG (Sun et al. (2019a), Sun et al.
(2018)), we experimented with restricting the candidate set of answer entities to only the 2-hop neighbourhood of the head entity.", "The results can be seen in Table 5.", "As we can see, relation matching has a significant impact on the performance of EmbedKGQA in both the WebQSP KG-Full and WebQSP KG-50 settings.", "Also, as mentioned earlier, the WebQSP KG (Freebase subset) has an order of magnitude more entities than MetaQA (1.8M versus 134k in MetaQA), and the number of possible answers is large.", "So reducing the set of answers to a 2-hop neighbourhood of the head entity showed improved performance in the case of WebQSP KG-Full.", "However, this caused a degradation in performance on WebQSP KG-50.", "This is because restricting the answer to a 2-hop neighbourhood on an incomplete KG may cause the answer to not be present in the candidates (see Figure 1).", "In summary, we find that relation matching is an important part of EmbedKGQA.", "Moreover, we suggest that n-hop filtering during answer selection may be included on top of EmbedKGQA for KGs which are reasonably complete.", "In this paper, we propose EmbedKGQA, a novel method for multi-hop KGQA.", "KGs are often incomplete and sparse, which poses additional challenges for multi-hop KGQA methods.", "Recent methods for this problem have tried to address the incompleteness problem by utilizing an additional text corpus.", "However, the availability of a relevant text corpus is often limited, thereby reducing the broad-coverage applicability of such methods.", "In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.", "EmbedKGQA utilizes the link prediction properties of KG embeddings to mitigate the KG incompleteness problem without using any additional data.", "It trains the KG entity embeddings and uses them to learn question embeddings; during evaluation, it scores the (head entity, question) pair against all entities, and the highest-scoring entity is selected as the answer.", "EmbedKGQA also overcomes the shortcomings of the limited-neighborhood-size constraint imposed by existing multi-hop KGQA methods.", "EmbedKGQA achieves state-of-the-art performance in multiple KGQA settings, suggesting that the link prediction properties of KG embeddings can be utilized to mitigate the KG incompleteness problem in multi-hop KGQA.", "We would like to thank the anonymous reviewers for their constructive feedback, and Ashutosh Kumar, Aditya Rastogi, and Chandrahas from the Indian Institute of Science for their insightful comments.", "This research is supported in part by a grant from Intel and the Ministry of Human Resource Development, Government of India." ]
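The ComplEx scoring function of Eq. (1) in the sentence list above is compact enough to spell out in code. Below is a minimal sketch in PyTorch; the function and variable names (complex_score, e_h, e_r, e_t) are illustrative and are not taken from the EmbedKGQA release.

```python
import torch

def complex_score(e_h: torch.Tensor, e_r: torch.Tensor, e_t: torch.Tensor) -> torch.Tensor:
    """ComplEx triple score, Eq. (1): phi(h, r, t) = Re(<e_h, e_r, conj(e_t)>).

    Each argument is a complex embedding of dimension d; the result is a real
    score that should be positive for true triples and negative for false ones.
    """
    return torch.real(torch.sum(e_h * e_r * torch.conj(e_t), dim=-1))

# Toy usage with random d-dimensional complex embeddings.
d = 200
e_h = torch.randn(d, dtype=torch.cfloat)
e_r = torch.randn(d, dtype=torch.cfloat)
e_t = torch.randn(d, dtype=torch.cfloat)
print(complex_score(e_h, e_r, e_t))  # one real-valued score
```

Because the product broadcasts over the trailing embedding dimension, scoring all candidate tails at once only requires passing a (num_entities, d) matrix as e_t, which is what makes exhaustive answer scoring tractable.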
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "2 Department of Computing, The Hong Kong Polytechnic University 3 Department of Computer Science and Technology, Tongji University 4 Microsoft 5 Peng Cheng Laboratory", "Abstract", "Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning.", "The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets.", "After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.", "Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text.", "To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder.", "Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.", "We release our code and model at https://github.com/microsoft/ SpeechT5 .", "Starting with ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), substantial work has shown that pre-trained models can significantly improve in various natural language processing (NLP) tasks", "(Radford et al., 2019; CONNEAU and Lample, 2019; Yang et al., 2019; Dong et al., 2019; Lewis et al., 2020).", "Following the pre-training techniques in NLP, self-supervised speech representation learning has also been investigated and shown promising results, benefiting from richly learned representations (Chung and Glass, 2018; Chuang et al., 2020; Song et al., 2019; Baevski et al., 2020; Wang et al., 2021; Hsu et al., 2021; Chung et al., 2021a), such as wav2vec 2.0 (Baevski et al., 2020) and HuBERT (Hsu et al., 2021).", "However, previous speech pre-training work suffers from two problems: (1) most of them learn the speech representation with only unlabeled speech data but ignore the importance of textual data to spoken language tasks (e.g., automatic speech recognition) which require the modality transformation; (2) most of these models solely rely on a pre-trained speech encoder for various downstream tasks, leaving the decoder not pre-trained for the sequence-to-sequence generation tasks.", "How to design a unified encoder-decoder model that can take advantage of both unlabeled speech and text data to improve various spoken language processing tasks is not well explored.", "we attempt to formulate each spoken language processing task as a speech/text to speech/text problem via an encoder-decoder framework, which enables us to use the same pre-trained model with bimodal data across diverse tasks, as shown in Figure 1.", "To achieve this, we propose a unified-modal pre-training framework, SpeechT5, containing an encoder-decoder backbone network and modal-specific pre/post-nets.", "With the pre-nets, the input speech/text is embedded in a shared space, and the encoder-decoder backbone network 
models the sequence-to-sequence conversion, from which the modal-specific post-nets generate the speech/text output.", "Particularly, SpeechT5 is mainly pre-trained with a denoising sequence-to-sequence method by leveraging large-scale unlabeled text and speech corpora.", "To align the textual and acoustic information into a unified semantic space, the proposed SpeechT5 model (1) maps text and speech representations into a shared vector quantization space, and (2) randomly mixes up the quantized latent representations and the contextual states, which can better guide the quantizer to learn the cross-modal features.", "We fine-tune SpeechT5 on a wide variety of downstream spoken language processing tasks, including automatic speech recognition (ASR), text-to-speech (TTS), speech translation (ST), voice conversion (VC), speech enhancement (SE), and speaker identification (SID).", "Extensive experiments show that the proposed SpeechT5 model achieves a significant improvement on these spoken language processing tasks compared with state-of-the-art baselines.", "Specifically, the proposed SpeechT5 outperforms wav2vec 2.0 (Baevski et al., 2020) and HuBERT (Hsu et al., 2021) with the BASE model on the ASR task and also performs better than the state-of-the-art voice Transformer network (Huang et al., 2021) on the VC task.", "Besides, SpeechT5 is significantly superior to SpeechNet (Chen et al., 2021b) and pre-trained models from SUPERB (Yang et al., 2021) and achieves state-of-the-art performance (i.e., 96.49%) on the SID task.", "We further provide an empirical comparison of the pre-training tasks and modules, and the ablation study demonstrates the effectiveness of the proposed joint speech-text pre-training method.", "The contributions of this paper are summarized as follows.", "We propose SpeechT5, a unified-modal encoder-decoder framework for various spoken language processing tasks.", "We propose a cross-modal vector quantization approach, which learns the implicit alignment between acoustic and textual representations with large-scale unlabeled speech and text data.", "Extensive experiments on spoken language processing tasks demonstrate the effectiveness and superiority of the proposed SpeechT5 model.", "In this section, we propose SpeechT5, a unified-modal framework for learning joint contextual representations for speech and text data via a shared encoder-decoder structure.", "Figure", "2(a) shows the model architecture of the proposed SpeechT5 model.", "It consists of an encoder-decoder module and six modal-specific pre/post-nets.", "The pre-nets convert the input speech X^s ∈ D^s or text X^t ∈ D^t to a unified space of hidden representations and then feed them into the shared encoder-decoder to perform the sequence-to-sequence conversion.", "Finally, the post-nets generate the output in the speech or text modality, based on the decoder output.", "Input/Output Representations: To train a single model for a diverse set of spoken language processing tasks, we formulate them as speech/text to speech/text tasks, where the model is fed with speech/text as the input and generates the corresponding output in the speech/text format.", "Specifically, a text is split into a sequence of characters X^t = (x^t_1, ..., x^t_{N^t}) as the input and output.", "For the speech modality, the raw waveform X^s = (x^s_1, ..., x^s_{N^s}) is used as the input, and a sequence of log Mel-filterbank features X^f = (x^f_1, ..., x^f_{N^f}) extracted from the raw audio using the librosa tool is adopted as the target output.", "A vocoder (Kong et al., 2020) is leveraged to
generate the final waveform from the generated features.", "For the Transformer encoder-decoder backbone, we refer to Vaswani et al. (2017) for more details.", "We employ the relative position embedding (Shaw et al., 2018) to help capture the relative position differences between elements in the input.", "Specifically, we only add the relative position embedding to the dot-product weights of the self-attention.", "Speech Pre/Post-Net: The convolutional feature extractor of wav2vec 2.0 (Baevski et al., 2020) serves as the speech-encoder pre-net to downsample the raw waveform X^s and produce a sequence H = (h_1, ..., h_{N^h}) for a speech utterance.", "The speech-decoder pre-net is a neural network composed of three fully connected layers with ReLU activation, fed with the log Mel-filterbank X^f.", "To support multi-speaker TTS and VC, the speaker embedding extracted with the x-vector (Snyder et al., 2018) is concatenated with the output of the speech-decoder pre-net, followed by a linear layer.", "The speech-decoder post-net consists of two modules.", "The first module uses a linear layer fed with the decoder output to predict the log Mel-filterbank Y^f = (y^f_1, ..., y^f_{N^f}), followed by five 1-dimensional convolutional layers to produce a residual to refine the predicted Y^f.", "Another linear module is added to project the decoder output to a scalar for predicting the stop token.", "Text Pre/Post-Net: We use shared embeddings as the text-encoder pre-net and text-decoder pre/post-nets.", "The pre-net transforms a token index into an embedding vector.", "The post-net transforms the hidden state into the probability distribution of tokens, normalized by the softmax function.", "The proposed SpeechT5 model can be pre-trained with large-scale collections of unlabeled speech and text corpora.", "The proposed joint pre-training method can align the textual and acoustic information into a unified semantic space.", "Speech Pre-Training: Leveraging unlabeled speech data D^s to learn general speech representations for both classification and generation tasks, SpeechT5 is trained with two types of tasks: bidirectional masked prediction and sequence-to-sequence generation.", "Following HuBERT (Hsu et al., 2021), the bidirectional masked prediction leverages a masked language model similar to BERT (Devlin et al., 2019) for the encoder, in which an acoustic unit discovery model provides the frame-level targets Z = (z_1, ..., z_{N^h}).", "Specifically, we apply span mask strategies to the output H from the speech-encoder pre-net, where 8% of timesteps are randomly selected as start indices, and spans of 10 steps are masked.", "The Transformer encoder takes the masked H as input and produces hidden representations U = (u_1, ..., u_{N^h}).", "Based on these hidden representations, the cross-entropy loss is computed over masked timesteps as L^s_mlm = ∑_{n ∈ M} log p(z_n | H̃, n), (1) where H̃ denotes the masked version of H and M denotes the set of masked timesteps (a code sketch of this masked loss follows this record). (Footnote: the target labels are generated by clustering outputs of the 6-th Transformer layer of the first-iteration HuBERT BASE model via the k-means clustering method with 500 clusters.)", "Furthermore, we propose to reconstruct the original speech via a sequence-to-sequence generation task, given the randomly masked input as introduced in bidirectional masked prediction.", "Following seq2seq TTS models (Li et al., 2019), we enforce the corresponding predicted output Y^f, which is generated through the speech-decoder pre-net, Transformer decoder, and speech-decoder post-net, to be close to the original X^f by minimizing
their L1 distance as L^s_1 = ∑_{n=1}^{N^f} ‖y^f_n - x^f_n‖_1, (2) where x^f_n denotes the n-th 80-dimensional log Mel-filterbank feature from X^f.", "Besides, we use the binary cross-entropy (BCE) loss L^s_bce for the stop token.", "Text Pre-Training: With unlabeled text data D^t, SpeechT5 is trained to reconstruct the model output Y^t = (y^t_1, ..., y^t_{N^t}) to the original text X^t, using the corrupted text X̃^t = (x̃^t_1, ..., x̃^t_M) as the input, generated with a mask-based noising function.", "Following the text infilling approach in BART (Lewis et al., 2020), we randomly sample 30% of text spans to mask, where the span length is drawn from a Poisson distribution (λ = 3.5), and each span is replaced with a single mask token.", "Formally, SpeechT5, including the text-encoder pre-net, encoder-decoder model, and text-decoder pre/post-nets, is optimized to generate the original sequence with maximum likelihood estimation as L^t_mle = ∑_{n=1}^{N^t} log p(y^t_n | y^t_{<n}, X̃^t). (3) Joint Pre-Training: The above pre-training methods can only leverage speech or text data to model acoustic or language information individually.", "To build a cross-modality mapping between speech and text, which is essential for tasks such as ASR and TTS, we propose a cross-modal vector quantization method to learn representations capturing the modality-invariant information.", "Specifically, we utilize vector quantized embeddings as a bridge to align the speech representation and text representation through a shared codebook (footnote: we conducted experiments to compare the BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) mask strategies, which can be found in Appendix A), as shown in Figure", "2(b).", "Inspired by VQ-VAE (Oord et al., 2017) and SemFace (Ren et al., 2021), we first use the quantizer to convert the continuous speech/text representations u_i from the output of the encoder into discrete representations c_i from a fixed-size codebook C^K, which contains K learnable embeddings.", "Then, a nearest-neighbor search is performed between the encoder output and the embedding of each latent code via the L2 distance as c_i = arg min_{j ∈ [K]} ‖u_i - c_j‖_2, (4) where c_j is the j-th quantized vector in the codebook (a code sketch of this quantization step follows this sentence list).", "Note that we do the same operation for the output of the speech and text encoders with a shared codebook.", "Then, we randomly replace a proportion (10%) of the contextual representations with quantized latent representations at the corresponding time steps and calculate the cross-attention upon the mixed representations, which can explicitly guide the quantizer to utilize the cross-modal information.", "The diversity loss is used to encourage sharing more codes by maximizing the entropy of the averaged Softmax distribution as L_d = (1/K) ∑_{k=1}^{K} p_k log p_k, (5) where p_k is the averaged probability of choosing the k-th code in the codebook.", "The final pre-training loss with unlabeled speech and text data can be formulated as L = L^s_mlm + L^s_1 + L^s_bce + L^t_mle + γ L_d,", "(6) where γ is set to", "0.1 during pre-training.", "After pre-training, we fine-tune the encoder-decoder backbone via the loss of the downstream task.", "The goal is to measure the learning abilities of SpeechT5, and we study the performance on a diverse set of downstream tasks such as ASR, TTS, ST, VC, SE, and SID.", "All of the spoken language processing tasks that we consider can be learned by concatenating the outputs of the encoder-decoder backbone and
corresponding pre-net and post-net.", "Taking ASR as an example, the final model consists of the speech-encoder pre-net, encoder-decoder, text-decoder pre-net, and text-decoder post-net, 5726 Model LM dev-clean dev-other test-clean test-other wav2vec 2.0 BASE (Baevski et al., 2020) -6.1 13.5 6.1 13.3 HuBERT BASE (Hsu et al., 2021) -5.5 13.1 5.8 13.3 Baseline (w/o CTC) -5.8 12.3 6.2 12.3 Baseline -4.9 11.7 5.0 11.9 SpeechT5 (w/o CTC) -5.4 10.7 5.8 10.7 SpeechT5 -4.3 10.3 4.4 10.4 DiscreteBERT (Baevski et al., 2019) 4-gram 4.0 10.9 4.5 12.1 wav2vec 2.0 BASE (Baevski et al., 2020) 4-gram 2.7 7.9 3.4 8.0 HuBERT BASE (Hsu et al., 2021) 4-gram 2.7 7.8 3.4 8.1 wav2vec 2.0 BASE (Baevski et al., 2020) Transf.", "which are initialized by SpeechT5 and fine-tuned via the cross-entropy loss on the corresponding training data.", "The baseline systems have the same architecture as SpeechT5, but the weights of the baseline encoder are initialized by the HuBERT BASE model (Hsu et al., 2021) if the input data of the downstream tasks is speech.", "It allows raw waveform as the model input and can provide a strong baseline.", "All models are implemented in Fairseq 4 (Ott et al., 2019).", "The encoder-decoder backbone contains 12 Transformer encoder blocks and 6 Transformer decoder blocks, where the model dimension is 768, the inner dimension (FFN) is 3,072, and the number of attention heads is 12.", "The above encoder setting is the same as that in wav2vec 2.0 BASE and HuBERT BASE .", "The speech-encoder pre-net contains 7 blocks of temporal convolutions, each of which is composed of 512 channels with strides (5 , 2 , 2 , 2 , 2 , 2 , 2) and kernel sizes (10 , 3 , 3 , 3 , 3 , 2 , 2) .", "For the speech-decoder pre-net and post-net, we use the same setting as the pre-net and post-net in Shen et al. 
(2018), except that the number of channels of the post-net is 256.", "For the text-encoder/decoder pre/post-nets, a shared embedding layer with dimension 768 is used.", "For the vector quantization, we use two codebooks with 100 entries each for the shared codebook module, resulting in a theoretical maximum of K = 10^4 code entries.", "For text pre-training, we use the normalized language model training text of LibriSpeech as unlabeled data, which contains 400M sentences.", "We optimize the model with Adam (Kingma and Ba, 2014) by warming up the learning rate for the first 8% of updates to a peak of 2 × 10^-4, which is linearly decayed for the following updates.", "We pre-train the proposed SpeechT5 model on 32 V100 GPUs with a batch size of around 90s of samples per GPU for speech and 12k tokens per GPU for text, and set the update frequency to 2 for 500k steps.", "We fine-tune the ASR model with the LibriSpeech 100/960 hours data and train the language model (LM) with the LibriSpeech LM text data, which is used for shallow fusion (Gulcehre et al., 2015) during ASR inference.", "Besides the cross-entropy loss for the decoder, we add an extra linear layer to calculate the connectionist temporal classification (CTC) loss on top of the encoder (Watanabe et al., 2017), so that we can apply joint CTC/attention decoding (Hori et al., 2017) to boost the performance.", "We measure the performance of ASR by the word error rate (WER).", "The implementation details can be found in Appendix B.1.", "The results of ASR on the 100 hours set of LibriSpeech are reported in Table 1.", "We compare with several state-of-the-art self-supervised approaches, including DiscreteBERT (Baevski et al., 2019), wav2vec 2.0 (Baevski et al., 2020), and HuBERT (Hsu et al., 2021).", "Without LM fusion, the baseline outperforms wav2vec 2.0 BASE and HuBERT BASE with the help of joint CTC/attention decoding, which shows the importance of the decoder.", "The proposed SpeechT5 model achieves significant improvements in all settings compared to wav2vec 2.0 BASE, HuBERT BASE, and our strong baselines, demonstrating the superiority of the proposed pre-training method.", "Furthermore, when decoding with LM fusion, SpeechT5 obtains lower WERs than wav2vec 2.0 BASE on all sets and achieves state-of-the-art performance.", "Due to space constraints, the results of the 960h fine-tuning experiments are reported in Appendix C.
3.3 Evaluation on TTS: We fine-tune the pre-trained model on the 460-hours LibriTTS clean sets (Zen et al., 2019) with the L^s_1 loss, the L^s_bce loss, and an attention loss (Tachibana et al., 2018).", "We utilize the HiFi-GAN (Kong et al., 2020) vocoder to convert the log Mel-filterbank to the raw waveform.", "We evaluate Naturalness with the open-source NISQA-TTS (Mittag and Möller, 2020), and the mean opinion score (MOS) and comparative mean opinion score (CMOS) with native speakers on 200 randomly selected sentences of various lengths (no overlap with training data) generated by different models, where we keep the text content consistent.", "More details can be found in Appendix B.2.", "Table 3 shows the experimental results of TTS.", "The proposed SpeechT5 trained without L^s_mlm is considered because the bidirectional masked prediction loss is proposed to help the encoder learn to encode the speech signal, and this variant achieves superior Naturalness, as shown in Table 13 (in Appendix D).", "The proposed SpeechT5 model performs better than the baseline and achieves a performance of 2.91 Naturalness and 3.65 MOS.", "Furthermore, our proposed SpeechT5 obtains a gain of +0.29 in CMOS with respect to the baseline model, which suggests the proposed pre-training method significantly improves the speech generation quality.", "We evaluate the ST task on the MUST-C dataset (Di Gangi et al., 2019), including the English-German (EN-DE) and English-French (EN-FR) translation tasks.", "We use the default training setting of speech translation in Fairseq ST (Wang et al., 2020), and we also average the last 10 checkpoints and use a beam size of 5 for decoding.", "Translation results are evaluated with case-sensitive BLEU (Papineni et al., 2002).", "Details about the dataset and the fine-tuning setting are introduced in Appendix B.3.", "We list the BLEU scores of ST in Table 4.", "The result of SpeechT5 without initializing the decoder is also reported, since we do not pre-train the decoder with German or French data, and it outperforms the strong baseline whose encoder is initialized by the HuBERT encoder.", "The proposed SpeechT5 further beats the SpeechT5 without initializing the decoder, and achieves significant improvements of 1.75 and 1.54 BLEU over the baseline in the EN-DE and EN-FR tasks, respectively, which demonstrates the effectiveness and superiority of our method.", "Besides, our SpeechT5 model outperforms existing models such as Fairseq ST (Wang et al., 2020), ESPnet ST (Inaguma et al., 2020), and Adapter Tuning (Le et al., 2021), which employs adapter modules to further specialize in each language pair from different pre-trained models.", "VC aims to convert a speaker-dependent source speech waveform into a different one while preserving the linguistic information of the source speech waveform.", "We follow the many-to-many setting and utilize speech recordings of four speakers in CMU Arctic (Kominek and Black, 2004): clb, bdl, slt, and rms.", "For the waveform synthesis, we use Parallel WaveGAN (Yamamoto et al., 2020), a non-autoregressive variant of the WaveNet vocoder.", "We employ the average MCD (Mel-Cepstral Distortion) and WER as the metrics for the VC task.", "More details about the dataset and the fine-tuning setting are given in Appendix B.4.", "We show the results of VC in Table 2, where we list the conversions from speaker bdl to slt and clb to slt, as used in the voice Transformer network (VTN) (Huang et al., 2021).", "The experimental results demonstrate that the
proposed SpeechT5 model achieves a significant gain over the strong baseline model.", "The proposed SpeechT5 model also outperforms the state-of-the-art VTN variants in terms of MCD, including VTN fine-tuned from ASR or TTS (Huang et al., 2021) and many-to-many VTN (Kameoka et al., 2021).", "SE is the task of removing background noise from a degraded speech signal and improving the intelligibility and perceived quality of the signal.", "We use the WSJ0 Hipster Ambient Mixtures (WHAM!) dataset (Wichern et al., 2019) and conduct the 16 kHz max enhance-single task, which recovers the signal from a mixture of only the first WSJ0 speaker and noise.", "We utilize HiFi-GAN to transform the log Mel-filterbank to the raw waveform.", "Since the input and output lengths are possibly different in the encoder-decoder model, we cannot evaluate it with PESQ (Rix et al., 2001) and ESTOI (Jensen and Taal, 2016), so we evaluate the negative impact on ASR performance by WER.", "The implementation details of SE are in Appendix B.5.", "As shown in Table 5, our strong baseline model recovers content from the noisy speech, improving from 76.1% WER to 10.9% WER.", "Moreover, the proposed SpeechT5 model achieves a relative 9% WER reduction compared to the strong baseline model.", "The results suggest that although the noisy speech with WHAM! is challenging, as summarized in Table 12 (in Appendix B.5), the proposed encoder-decoder framework can effectively suppress the noise and recover the content.", "We convert SID, a multi-class classification task of classifying each utterance by its speaker identity, into a speech-to-text task via the sequence-to-sequence model.", "Compared to the ASR task, the text embedding table is replaced by a speaker embedding table, and the decoder predicts speaker identities at the first step.", "We adopt the VoxCeleb1 dataset (Nagrani et al., 2017), which contains over 100,000 speech records uttered by 1,251 celebrities, extracted from videos uploaded to YouTube.", "The top-1 speaker classification accuracy (ACC) is used as the evaluation metric of SID.", "Refer to Appendix B.6 for more details about the dataset and fine-tuning.", "As shown in Table 6, our baseline is superior to existing Transformer-based methods such as SpeechNet (Chen et al., 2021b) and pre-trained models from SUPERB (Yang et al., 2021).", "Moreover, it outperforms ResNet-based architectures such as Thin ResNet-34 (Chung et al., 2020), indicating the superiority of the encoder-decoder architecture for the SID task.", "SpeechT5 further improves the performance compared to the baseline and achieves state-of-the-art performance (i.e., 96.49% accuracy), which demonstrates the effectiveness of the proposed pre-training technique.", "To better understand why the proposed SpeechT5 model is effective, we investigate the influence of the pre-training methods by removing each of them independently.", "As shown in Table 7, we can draw the following conclusions: (1) The pre-training methods, including speech pre-training, text pre-training, and the joint pre-training method, are important to SpeechT5, since without each of them, the performance on all tasks degrades significantly; (2) Speech pre-training is more critical than text pre-training for tasks that need to encode speech, and the ASR model fine-tuned from SpeechT5 without speech pre-training even fails to converge; (3) Without the joint pre-training method, the performance of the ASR model decreases, which demonstrates that the learned alignment from joint
pre-training brings benefits for cross-modality tasks; (4) The masked language model learning L^s_mlm on speech data is mainly responsible for extracting acoustic features and learning better speech representations, which is beneficial to the ASR and SID tasks.", "Large-scale pre-training models such as BERT (Devlin et al., 2019), T5 (Raffel et al., 2020), wav2vec 2.0 (Baevski et al., 2020), and HuBERT (Hsu et al., 2021) have drawn much attention in the NLP and speech communities, due to their strong", "capability of generalization and efficient usage of large-scale data (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Lewis et al., 2020; Chen et al., 2021c; Baevski et al., 2020; Lakhotia et al., 2021; Kharitonov et al., 2021; Chen et al., 2021a).", "However, the research mentioned above is geared towards single-modal learning; hence, it can only be used for either text or speech modeling.", "Although some speech-language pre-training work (Chung et al., 2021b; Kim et al., 2021; Qian et al., 2021) attempts to improve spoken language understanding tasks, these methods only focus on an encoder with task-specific layers for different tasks and do not pre-train a decoder for generation tasks such as speech synthesis or text generation.", "Besides, a series of works has begun to investigate joint text and speech training (Han et al., 2021; Ye et al., 2021; Tang et al., 2021a; Zheng et al., 2021; Tang et al., 2021b), but they are mainly designed for speech-to-text tasks.", "The proposed SpeechT5 method is most related to T5 (Raffel et al., 2020).", "The core idea of the T5 model, a unified framework for a variety of text-based language problems, is to treat every text processing problem as a text-to-text problem.", "SpeechT5 is also related to Speech Chain (Tjandra et al., 2020), which leverages an ASR model and a TTS model to build a closed-loop machine speech chain to train models on the concatenation of both labeled and unlabeled data, and SpeechNet (Chen et al., 2021b), which designs a universal modularized model to perform multiple speech processing tasks with multi-task learning.", "The differences from the above models are that (1) SpeechT5 is a shared cross-modal encoder-decoder framework, whose input and output are speech or text through multiple pre/post-nets; and (2) SpeechT5 attempts to pre-train and improve the universal model with large-scale unlabeled text and speech data.", "Another related work is SUPERB (Yang et al., 2021), a benchmark to examine the capability of pre-trained models such as HuBERT (Hsu et al., 2021).", "SUPERB focuses on investigating a simple framework for learning SUPERB tasks with a frozen, shared pre-trained encoder and lightweight prediction modules fine-tuned for each task.", "In contrast, the goal of SpeechT5 is to learn all spoken language processing tasks by fine-tuning a unified-modal encoder-decoder model, which is pre-trained on unlabeled speech and text corpora.", "In this paper, we have proposed SpeechT5 as a pre-trained encoder-decoder model for various spoken language processing tasks.", "We convert all spoken language processing tasks into a speech/text to speech/text format and propose a novel joint pre-training method to utilize cross-modal information by leveraging unlabeled speech and text data.", "The proposed unified encoder-decoder model can support generation tasks such as speech translation and voice conversion.", "Extensive experiments show that SpeechT5 significantly outperforms all baselines on several spoken
"In the future, we plan to pre-train SpeechT5 with larger models and more unlabeled data.", "We are also interested in extending the proposed SpeechT5 framework to address multilingual spoken language processing tasks.", "We thank Yanqing Liu and Sheng Zhao for their help in the TTS human evaluation.", "We also want to thank the anonymous reviewers for their insightful comments and suggestions." ]
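To make the SID conversion described above concrete, here is a minimal, hypothetical sketch of scoring speaker identities from the decoder's first-step state with a speaker embedding table; the module name, dimensions and wiring are illustrative assumptions, not SpeechT5's actual implementation.

```python
# Minimal, hypothetical sketch: casting speaker identification (SID) as a
# one-step sequence-to-sequence decoding problem, as described above. Names
# and dimensions are illustrative assumptions, not SpeechT5's implementation.
import torch
import torch.nn as nn

class SIDHead(nn.Module):
    def __init__(self, d_model: int = 768, num_speakers: int = 1251):
        super().__init__()
        # The text embedding table is replaced by a speaker embedding table;
        # tying it to the output projection scores every speaker identity.
        self.speaker_emb = nn.Embedding(num_speakers, d_model)

    def forward(self, decoder_state: torch.Tensor) -> torch.Tensor:
        # decoder_state: (batch, d_model), the decoder's first-step hidden
        # state; returns (batch, num_speakers) logits over speakers.
        return decoder_state @ self.speaker_emb.weight.T

head = SIDHead()
state = torch.randn(4, 768)           # stand-in for real decoder states
probs = head(state).softmax(dim=-1)   # top-1 accuracy is computed from this
```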
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "method", "objective", "other", "other" ]
[ "We address the problem of speech act recognition (SAR) in asynchronous conversations (forums, emails).", "Unlike synchronous conversations (e.g., meetings, phone), asynchronous domains lack large labeled datasets to train an effective SAR model.", "In this paper, we propose methods to effectively leverage abundant unlabeled conversational data and the available labeled data from synchronous domains.", "We carry out our research in three main steps.", "First, we introduce a neural architecture based on hierarchical LSTMs and conditional random fields (CRF) for SAR, and show that our method outperforms existing methods when trained on in-domain data only.", "Second, we improve our initial SAR models by semi-supervised learning in the form of pretrained word embeddings learned from a large unlabeled conversational corpus.", "Finally, we employ adversarial training to improve the results further by leveraging the labeled data from synchronous domains and by explicitly modeling the distributional shift in two domains.", "With the ever-increasing popularity of Internet and mobile technologies, communication media like emails and forums have become an integral part of people's daily life where they discuss events, issues and experiences.", "Participants interact with each other asynchronously in these media by writing at different times, generating a type of conversational discourse that is different from synchronous conversations such as meeting and phone conversations (Louis and Cohen, 2015).", "In the course of the interactions, the participants perform certain communicative acts like asking questions, requesting information, or suggesting something, which are known as speech acts (Austin, All authors contibuted equally.", "C 1 : hoping to do the XinJiang Tibet Highway.", "[Statement] Has anyone done it?", "[Question] Am hoping to hire a 4-wheel drive.", "[Statement] I know the roads are bad and I would need an experienced driver and guide any recommendations?", "[Question] C 2 : I never done this routine,however i been to Xinjiang twice,in my opinion the local people not friendly, not safe to do this.", "[Response] I still have relative stay in Xinjiang, however don't know what they can offer for help... [Response] C 3 : I'm not sure if travelling overland from Xinjiang to Tibet is officially legal yet.", "[Response] You might want to post your question on the NorthEast Asia branch of Lonely Planet's ThornTree forum for more (useful) answers.", "[Suggestion] C 4 : a frend and i are trying this route as well, we will likely be in urumuqi and northern part of xinjiang from 8th apr to end apr; looking at doing the xin jiang tibet highway from end apr. (truncated) [Statement] contact me at [email] if you want to hook up for possible transport sharing [Suggestion] cheers.", "[Polite] Figure 1: Example of speech acts in a forum thread.", "1962).", "For example, consider the forum conversation in Figure", "1. 
"The participant who posted the initial comment C1 describes his situation and asks a couple of questions.", "Other participants respond to the initial post with more information and provide suggestions.", "In this process, the participants get into a conversation by taking turns, each of which consists of one or more speech acts.", "Speech act recognition (SAR) is an important step towards deep conversational analysis, and can benefit many downstream applications.", "The availability of large labeled datasets such as the Switchboard-DAMSL (SWBD) (Jurafsky et al., 1997) and the Meeting Recorder Dialog Act (MRDA) (Dhillon et al., 2004) corpora has fostered research in data-driven SAR methods in synchronous domains.", "However, such large corpora are not available in the asynchronous domains, and many of the existing (small-sized) ones use task-specific tagsets as opposed to a standard one.", "The unavailability of large annotated datasets with standard tagsets is one of the main reasons why SAR has not received much attention in asynchronous domains, and it is often quite expensive to annotate such datasets for each domain of interest.", "SAR methods proposed before the 'neural tsunami', e.g., (Qadir and Riloff, 2011; Jeong et al., 2009; Tavafi et al., 2013), mostly used bag-of-ngram representations (e.g., unigrams, bigrams) of a sentence, and most of these methods disregard the conversational dependencies (discourse structure) between sentences.", "Recently, Joty and Hoque (2016) proposed a neural-CRF framework for SAR in forum conversations.", "In their approach, a bi-LSTM (trained on the SAR task) first encodes the sentences separately into task-specific embeddings, which are then used in a separate CRF model to capture the conversational dependencies between sentences.", "They also use labeled data from the MRDA meeting corpus, without which their LSTMs perform worse than simple feed-forward networks.", "Although their method attempts to model sentence structure (using the LSTM) and conversational dependencies (using the CRF), the approach has several limitations.", "First, the LSTM-CRF framework is disjoint, and thus cannot be trained end-to-end.", "Second, when using the MRDA meeting data, their method simply concatenates it with the target domain data, assuming the two have the same distribution.", "However, asynchronous domains (forum, email) differ from synchronous ones (MRDA) in their underlying conversational structure (Louis and Cohen, 2015), in style (spoken vs. written), and in vocabulary usage (meetings on a focused agenda vs. conversations on any topics of interest in a public forum).",
"Therefore, we hypothesize that to make the best use of labeled data from synchronous domains, one needs to model the shift between the domains.", "In this work, we advance the state of the art of SAR in asynchronous conversations in three main steps.", "First, we introduce an end-to-end neural architecture based on a hierarchical LSTM encoder with a Softmax or CRF output layer.", "Second, we improve our initial SAR model by semi-supervised learning in the form of word embeddings learned from a large unlabeled conversational corpus.", "Most importantly, we adapt our hierarchical LSTM encoder using domain adversarial training (Ganin et al., 2016) to leverage the labeled data from synchronous domains by explicitly modeling the shift between the two domains.", "We evaluate our models on three different asynchronous datasets containing forum and email conversations, and on the MRDA meeting corpus.", "Our main findings are: (i) the hierarchical LSTMs outperform existing methods when trained on in-domain data for both synchronous and asynchronous domains, setting a new state of the art; (ii) conversational word embeddings yield significant improvements over off-the-shelf ones; and (iii) domain adversarial training improves the results by inducing domain-invariant features.", "The source code, the conversational word embeddings, and the datasets are available at https://ntunlpsg.github.io/demo/project/speech-act/.", "Previous studies on SAR in asynchronous conversation have used supervised, semi-supervised and unsupervised methods.", "Cohen et al. (2004) classify emails into acts like 'deliver' and 'meeting'.", "Their approach, however, does not take the email context into account.", "Carvalho and Cohen (2005) use an iterative algorithm containing two different classifiers: the content classifier, which only looks at the content of the message, and the context classifier, which takes into account both the content and the contextual speech acts in the email thread structure.", "Other supervised approaches use classifiers and sequence taggers with hand-crafted features (Qadir and Riloff, 2011; Tavafi et al., 2013).", "Jeong et al. (2009) use semi-supervised boosting to induce informative patterns from labeled spoken domains (MRDA, SWBD).",
"Given a sentence represented as a set of trees (dependency, POS tags, n-grams), the boosting algorithm iteratively learns the sub-tree features.", "This approach does not consider the dependencies between the act types, something we successfully exploit in our work.", "Also, we leverage labeled data from synchronous conversations while adapting our model to account for the domain shift.", "Joty and Hoque (2016) use a bi-LSTM to encode a sentence, and then use a separate CRF to model conversational dependencies.", "To learn an effective bi-LSTM model, they use the MRDA meeting data, however without modeling the domain differences.", "The unsupervised methods use variations of Hidden Markov Models (HMM), including HMM-Topic (Ritter et al., 2010), HMM-Mix (Joty et al., 2011), and Mixed Membership (Paul, 2012).", "Several neural methods have been proposed in recent years for SAR in synchronous conversations.", "Kalchbrenner and Blunsom (2013) use a simple recurrent neural network (RNN) to model sequential dependencies between act types in phone conversations.", "They use a convolutional network to compose sentence representations from word vectors.", "Lee and Dernoncourt (2016) use a similar model, but also experiment with RNNs to compose sentence representations.", "Khanpour et al. (2016) use a stacked LSTM to compose word vectors into a sentence vector.", "Kumar et al. (2018) also use a hierarchical LSTM-CRF.", "However, none of these methods were applied to asynchronous conversations, where not much labeled data is available.", "Also, to the best of our knowledge, no prior work has attempted domain adaptation from synchronous conversations, which is our main contribution in this paper.", "We use a bidirectional long short-term memory network, or bi-LSTM (Hochreiter and Schmidhuber, 1997), to encode each sentence into a vector representation.", "Given an input sentence x_i = (w_1, ..., w_m) of length m, we first map each word w_t to its corresponding vector representation v_t by looking up the word embedding matrix.", "The LSTM recurrent layer then computes a compositional representation z_t at every time step t by performing nonlinear transformations of the current input v_t and the output of the previous time step z_{t-1}.", "The output of the last time step, z_m, is considered as the representation of the sentence.", "A bi-LSTM composes a sentence in two directions, left-to-right and right-to-left, yielding the representation h_i = [z_m^f ; z_m^b], where z_m^f and z_m^b are the final states of the forward and backward passes and ';' denotes concatenation.", "Similar to (Joty and Hoque, 2016), we could use h_i to classify sentence x_i into one of the speech act types using a Softmax output layer.", "However, in that case, we would disregard the discourse-level dependencies between sentences in a conversation.", "To take conversational dependencies into account, we explore the two methods described below.", "Given a conversation with sentences X = (x_1, ..., x_n), the sentence-level bi-LSTM generates a sequence of n vectors H = (h_1, ..., h_n).", "To consider interdependencies between sentences, we place another bi-LSTM layer on top of H to connect the sentence vectors sequentially in both directions, and encode each sentence within its left and right contexts.", "As shown in Figure 2, the upper bi-LSTM combines the current input h_i with its previous hidden state u_{i-1}^f (resp., u_{i+1}^b) to generate the forward representation u_i^f (resp., the backward representation u_i^b) for the current sentence.", "The hierarchically encoded sentence vectors U = (u_1, ..., u_n) (where u_i = [u_i^f ; u_i^b]) are fed into a Softmax classifier for speech act classification.",
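As a concrete illustration of the hierarchical encoder just described, here is a minimal PyTorch sketch, not the authors' implementation: a word-level bi-LSTM produces a vector per sentence, a sentence-level bi-LSTM contextualizes the sentence vectors, and a linear layer with a cross-entropy (Softmax) objective predicts one speech act per sentence; the vocabulary size, hidden sizes and tag count are illustrative assumptions.

```python
# Minimal sketch of the hierarchical bi-LSTM encoder (H-LSTM) with a Softmax
# output layer; sizes are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True,
                                 bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_tags)  # Softmax variant

    def forward(self, conv):  # conv: (n_sentences, max_words) word ids
        v = self.emb(conv)                         # (n, m, emb_dim)
        z, _ = self.word_lstm(v)                   # (n, m, 2*hidden)
        # h_i = [z_m^f ; z_m^b]: concatenate the two directions' final states.
        h = torch.cat([z[:, -1, :z.size(-1) // 2],
                       z[:, 0, z.size(-1) // 2:]], dim=-1)
        u, _ = self.sent_lstm(h.unsqueeze(0))      # contextualize sentences
        return self.classifier(u.squeeze(0))       # per-sentence tag logits

enc = HierarchicalEncoder(vocab_size=10000)
logits = enc(torch.randint(0, 10000, (6, 20)))    # 6 sentences, 20 words each
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (6,)))
```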
"The Softmax layer computes p(y_i = k | X, W, θ) ∝ exp(W_k^T u_i), where W are the classifier weights and θ are the parameters of the hierarchical LSTM encoder.", "We train the model by minimizing the cross entropy: L_c(W, θ) = − Σ_{i=1}^{n} Σ_{k=1}^{K} y_{i,k} log p(y_i = k | X, W, θ), with y_{i,k} being the one-hot encoding of the label.", "The hierarchical LSTM (H-LSTM) captures contextual information by propagating information through its hidden layers, and has been shown to be effective in similar tasks such as context encoding in dialog systems (Serban et al., 2016).", "Despite this, its modeling strength is limited compared to structured models that use global inference to model consistency in the output, especially when there are strong dependencies between output labels (Collobert et al., 2011).", "Therefore, instead of classifying sentences independently with a Softmax layer, our second method is to model them jointly with a CRF layer (Lafferty et al., 2001).", "For an input-output sequence pair (X, y), we define the joint probability distribution p(y | X) = (1 / Z(U, A, V, θ)) · Π_{i=1}^{n} ψ_n(y_i | u_i, V) · Π_{i=0}^{n} ψ_e(y_{i,i+1} | A), where U = (u_1, ..., u_n) are the hierarchically encoded sentence vectors as before, ψ_n(y_i = k | u_i, V) = exp(V_k^T u_i) is the node-level score with V being the weight matrix, ψ_e is the edge-level score parameterized by the transition matrix A, and Z(·) is the global normalization constant that ensures a valid probability distribution.", "The cross entropy loss for the (X, y) sequence pair can then be written as L_c(V, A, θ) = − Σ_{i=1}^{n} log ψ_n(y_i | u_i, V) − Σ_{i=0}^{n} log A_{i,i+1} + log Z.", "We use Viterbi decoding to infer the most probable tag sequence for an input sequence of sentences, y* = argmax_y p(y | X, V, A, θ).", "We will demonstrate later in our experiments that a CRF layer helps the H-LSTM adapt quickly (i.e., with less labeled data) to a target domain by exploiting the tag dependencies in the source domain.",
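The CRF alternative can be sketched as follows, assuming per-sentence emission scores (the node scores ψ_n over the hierarchical sentence vectors) and a K×K transition matrix A; start/stop transitions are omitted for brevity, and this is a didactic sketch rather than the authors' code. It computes the negative log-likelihood via the forward algorithm and decodes with Viterbi.

```python
# Didactic sketch of the linear-chain CRF layer described above.
import torch

def crf_nll(emissions, tags, trans):
    """Negative log-likelihood of one tag sequence: log Z - score(y).
    emissions: (n, K) node scores; tags: (n,) gold tags; trans: (K, K) = A."""
    n, K = emissions.shape
    score = emissions[0, tags[0]]
    for i in range(1, n):
        score = score + trans[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    alpha = emissions[0]                       # forward algorithm for log Z
    for i in range(1, n):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + trans, dim=0) + emissions[i]
    return torch.logsumexp(alpha, dim=0) - score

def viterbi(emissions, trans):
    """Most probable tag sequence y* = argmax_y p(y | X)."""
    n, K = emissions.shape
    score, back = emissions[0], []
    for i in range(1, n):
        best, idx = (score.unsqueeze(1) + trans).max(dim=0)
        back.append(idx)
        score = best + emissions[i]
    path = [int(score.argmax())]
    for idx in reversed(back):                 # follow the back-pointers
        path.append(int(idx[path[-1]]))
    return path[::-1]

em, A = torch.randn(7, 5), torch.randn(5, 5)   # 7 sentences, 5 act tags
nll = crf_nll(em, torch.randint(0, 5, (7,)), A)
best_path = viterbi(em, A)
```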
"The hierarchical models have many parameters.", "Given enough training data, they should be able to encode a sentence, capturing its syntactic and semantic properties, as well as the discourse-level dependencies.", "However, when it comes to SAR in asynchronous domains, not many large annotated corpora are available.", "Because of the large number of parameters, the models usually overfit when trained on small datasets of asynchronous conversations (shown in Sec. 6).", "We propose two solutions to address this problem.", "Our first (simple but effective) solution is to leverage a large unlabeled conversational corpus to learn better task-agnostic word embeddings, and to use them to initialize our models for better generalization.", "In the interest of coherence, we present this method in Section 5.", "Our second solution is to leverage data from synchronous domains, for which large annotated corpora are available (e.g., the MRDA corpus).", "However, as we will see, simple concatenation of the datasets is not very effective in our case, because the conversations in synchronous and asynchronous domains differ in their conversational structures, modality (spoken vs. written), and vocabulary usage.", "To get the best out of the available synchronous domain data, we need to adapt our models by explicitly modeling the domain shift.", "More precisely, our goal is to adapt the hierarchical encoder so that it learns to encode sentence representations U (i.e., the features used for classification) that are not only discriminative for act classification, but also invariant across the domains.", "We propose to use the domain adversarial training of Ganin et al. (2016).", "Let D_S = {X^p, y^p}_{p=1}^{P} denote the set of P labeled training conversations in the source domain (MRDA).", "We consider two adaptation scenarios.", "(i) Unsupervised adaptation: in this scenario, we have only unlabeled examples in the target domain (e.g., forum).", "Let D_T^u = {X^p}_{p=P+1}^{Q} be the set of unlabeled training instances in the target domain, with Q being the total number of training instances in the two domains.", "(ii) Semi-supervised/supervised adaptation: in addition to the unlabeled instances D_T^u, here we have access to some labeled training instances in the target domain, D_T^l = {X^p, y^p}_{p=Q+1}^{R}, with R being the total number of training examples in the two domains.", "Depending on the amount of labeled data in the target domain, this setting is referred to as semi-supervised or supervised adaptation.", "The dashed lines in Figure 2 show the extension of our base model for adaptation.", "The input conversation X is sampled either from a synchronous domain (e.g., meeting) or from an asynchronous domain (e.g., forum).", "Our goal is to adapt the H-LSTM encoder (parameterized by θ) to generate U such that it is not only informative for the SAR task but also invariant across domains.", "Upon achieving this, we can use the adapted encoder to encode a target sentence, and use the source classifier (Softmax or CRF) to classify the sentences.", "We achieve this by adding a domain discriminator (dashed lines in Figure 2), another neural network that takes U as input and tries to discriminate the domain of the input conversation X (e.g., meeting vs. forum).", "The output of the discriminator is defined by a sigmoid function: d̂ = p(d = 1 | u_i, θ, Ω) = sigm(w_d^T h_d), (1) where d ∈ {0, 1} denotes the domain (1 for meeting, 0 for forum), w_d are the final-layer weights of the discriminator, and h_d = g(U_d u_i) defines the hidden layer of the discriminator, with U_d being the layer weights and g(·) being the activations.",
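A common way to realize this adversarial game in practice is a gradient reversal layer (Ganin et al., 2016): the discriminator is trained normally, while the gradients flowing back into the shared encoder are negated and scaled by λ. The sketch below assumes g(·) is a tanh hidden layer and uses illustrative layer sizes; it is a minimal sketch, not the paper's exact architecture.

```python
# Sketch of domain-adversarial training with gradient reversal; sizes and the
# tanh activation are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the
    backward pass, so the encoder is updated adversarially."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainDiscriminator(nn.Module):
    def __init__(self, in_dim=256, hidden=128):
        super().__init__()
        # h_d = g(U_d u_i) with g = tanh, then the sigmoid output of Eq. (1)
        # is realized implicitly by BCEWithLogitsLoss below.
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))
    def forward(self, u, lam=1.0):
        return self.net(GradReverse.apply(u, lam)).squeeze(-1)

disc = DomainDiscriminator()
u = torch.randn(8, 256, requires_grad=True)          # stand-in encoder output
d = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])   # 1 = meeting, 0 = forum
loss_d = nn.BCEWithLogitsLoss()(disc(u, lam=0.5), d)
loss_d.backward()   # gradients reaching u are reversed (and scaled by 0.5)
```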
"We use cross entropy as the discrimination loss: L_d(θ, Ω) = − d log d̂ − (1 − d) log(1 − d̂). (2)", "The composite network has three players: the hierarchical LSTM encoder, the classifier (Softmax or CRF), and the domain discriminator.", "During training, the encoder and the classifier play a cooperative game, while the encoder and the discriminator play an adversarial game.", "The training objective of the composite model is L(W, θ, Ω) = Σ_{p=1}^{P} L_c^p(W, θ) − λ [ Σ_{p=1}^{P} L_d^p(θ, Ω) + Σ_{p=P+1}^{Q} L_d^p(θ, Ω) ], (3) where the first term is the act classification loss on the source domain, the bracketed terms are the domain discrimination losses on the source and target data, θ are the parameters of the encoder, W are the classifier weights, and Ω = {U_d, w_d} are the parameters of the discriminator.", "The hyper-parameter λ controls the relative strength of the act classifier and the discriminator.", "We learn the parameters that optimize the following min-max criterion: θ* = argmin_{W,θ} max_{U_d,w_d} L(W, θ, Ω). (4)", "Note that the updates of the shared encoder for the two networks (classifier and discriminator) work adversarially with respect to each other.", "Algorithm 1 provides the pseudocode of our training method.", "The main challenge in adversarial training is to balance the networks (Arjovsky et al., 2017).", "In our experiments, we found the discriminator to be weaker initially.", "To balance the two components, we need the error signals from the discriminator to be fairly weak initially, with full power unleashed only as the classification errors start to dominate.", "We follow the weighting schedule proposed in (Ganin et al., 2016, p. 21), which initializes λ to 0 and then changes it gradually to 1 as training progresses.", "It is straightforward to extend our adaptation method to the semi-supervised/supervised setting.", "Similar to the instances in the source domain, the labeled instances in the target domain D_T^l are used for both act classification and domain discrimination.", "The total training loss in this case is L(W, θ, Ω) = Σ_{p=1}^{P} L_c^p(W, θ) + Σ_{p=Q+1}^{R} L_c^p(W, θ) − λ [ Σ_{p=1}^{P} L_d^p(θ, Ω) + Σ_{p=P+1}^{R} L_d^p(θ, Ω) ], (5) where the second term is the classification loss on the target dataset D_T^l, and the last sum includes the discrimination loss on both the labeled and the unlabeled data in the target domain.", "We now describe the datasets and the act tagset that we use, and the conversational word embeddings that we learn from a large unlabeled corpus.", "As mentioned, asynchronous domains lack large corpora annotated with a standard speech act tagset.", "Jeong et al. (2009) annotated sentences in TripAdvisor (TA) forum threads with the standard 12 act types defined in MRDA.", "They also remapped the BC3 email corpus (Ulrich et al., 2008) according to these tags.", "Subsequent studies (Tavafi et al., 2013; Oya and Carenini, 2014; Joty and Hoque, 2016) used these datasets but grouped the 12 acts into 5 coarser classes.",
"Joty and Hoque (2016) also created a new dataset of QatarLiving (http://www.qatarliving.com/) forum threads, called QC3 (available at https://ntunlpsg.github.io/project/speech-act/).", "We use these three asynchronous datasets in our experiments.", "For our experiments on synchronous domains, we use the MRDA meeting corpus, which was also used in related studies (Jeong et al., 2009; Joty and Hoque, 2016).", "Tables 1 and 2 show some basic statistics of the datasets and the tag distributions.", "Table 1 (basic statistics about our corpora; columns: TA, BC3, QC3 (asynchronous), MRDA (synchronous)): total number of conversations 200 / 39 / 47 / 73; average number of comments per conversation 4.02 / 6.54 / 13.32 / N.A.; average number of sentences per conversation 18.56 / 34.15 / 33.28 / 955.10; average number of words per sentence 14.90 / 12.61 / 19.78 / 10.11.", "Table 2 (distribution of speech acts in our corpora, in %; columns: TA, BC3, QC3, MRDA): SU Suggestion 7.71 / 5.48 / 17.38 / 5.97; R Response 2.4 / 3.75 / 5.24 / 15.63; Q Questions 14.71 / 8.41 / 12.59 / 8.62; P Polite 9.57 / 8.63 / 6.13 / 3.77; ST Statement 65.62 / 73.72 / 58.66 / 66.00.", "Note that the tagset used by us and other related studies in asynchronous (written) conversation is different from the one used in synchronous spoken conversations (Lee and Dernoncourt, 2016; Khanpour et al., 2016; Kumar et al., 2018).", "The latter tagset contains acts like backchannel, filter and disruption that are more specific to speech.", "The train-dev-test splits of the asynchronous datasets are done uniformly at random at the conversation level.", "Since the asynchronous datasets are quite small in size, to have a reliable test set we create the train:test splits with an equal number of conversations (Table 3).", "Joty and Hoque (2016) also created conversation-level datasets to train and test their CRF models.", "Their test sets, however, contain only 20% of the conversations, providing only 5 conversations for QC3 and BC3, and 20 for TA.", "Our experiments on these small test sets showed unstable results for all the models.", "Therefore, we use a larger test set (50%), and we report more general results on the whole corpus based on 2-fold cross-validation, where the second fold was created by interchanging the train and test splits in Table 3.", "The same development set was used to tune the hyperparameters of the models for the experiments on each fold.", "For experiments on MRDA, we use the same train:test:dev split as in (Jeong et al., 2009; Joty and Hoque, 2016).", "One simple and effective approach to semi-supervised learning is to use word embeddings pretrained from a large unlabeled corpus.", "In our work, we use generic off-the-shelf pretrained embeddings to boost the performance of our models.", "In addition, we have also trained word embeddings on a large conversational corpus to get more relevant conversational word embeddings.", "We use Glove (Pennington et al., 2014) to train our word embeddings on a corpus that contains 24K email threads from W3C (w3c.org), 25K threads from TripAdvisor, 220K threads from QatarLiving, and all conversations from SWBD and MRDA (a total of 120M tokens).", "Table 4 shows some statistics of the datasets used for training the conversational word embeddings.", "We also trained skip-gram word2vec (Mikolov et al., 2013), but its performance was worse than Glove's.", "We followed similar preprocessing steps as Joty and Hoque (2016); specifically, we normalize all characters to lower case, spell out digits and URLs, and tokenize the texts using TweetNLP (Gimpel et al., 2011).", "For performance comparison, we use accuracy and macro-F1.", "Like other related studies, we consider macro-F1 as the main metric (more appropriate when class distributions are imbalanced), and select our model based on the best F1 on the development set.", "Due to space limitations, we report only macro-F1 here.", "Please refer to the Appendix for the accuracy numbers.", "We first evaluate our base models on in-domain datasets by comparing with state-of-the-art models.",
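Before turning to the experiments, here is a small sketch illustrating how the conversational embeddings described above can initialize the models: it loads GloVe-format vectors into an embedding matrix and keeps the U(-0.05, 0.05) random initialization for out-of-vocabulary words; the file path and the 100-d dimensionality are hypothetical.

```python
# Sketch: initializing the embedding layer with conversational GloVe vectors;
# words absent from the file keep the U(-0.05, 0.05) random initialization.
import numpy as np
import torch
import torch.nn as nn

def load_glove(path, word2id, dim=100):
    emb = np.random.uniform(-0.05, 0.05, (len(word2id), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in word2id and len(parts) == dim + 1:
                emb[word2id[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return torch.from_numpy(emb)

# Usage (hypothetical file): the embeddings stay trainable so they can be
# fine-tuned on the SAR task, as done in the paper.
# weights = load_glove("conversational_glove.txt", word2id, dim=100)
# embedding = nn.Embedding.from_pretrained(weights, freeze=False)
```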
"In the next subsection, we evaluate our adaptation method in the three adaptation scenarios.", "Settings.", "To validate the efficacy of our model, we compare it with two baselines: a Support Vector Machine (SVM) and a feed-forward network (FFN).", "In one setting, we use the concatenated word vectors as the input sentence representation, while in another, we use the pretrained skip-thought vectors (Kiros et al., 2015).", "We also compare our models with the bi-LSTM (B-LSTM) model of Joty and Hoque (2016) and the stacked LSTM (S-LSTM) of Khanpour et al. (2016).", "We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001, and use dropout to avoid over-fitting.", "We use the Xavier initializer (Glorot and Bengio, 2010) to initialize the weights, and uniform U(-0.05, 0.05) to initialize the word vectors randomly.", "For pretrained word embeddings, we experiment with off-the-shelf embeddings that come with Glove as well as with our conversational word embeddings.", "For both random and pretrained initialization, we fine-tune our word embeddings on the SAR task.", "We construct sequences from the chronological order of the sentences in a conversation.", "Since MRDA conversations are much longer than those in asynchronous domains (955 vs. 18-34 sentences in Table 1), we split the MRDA conversations into smaller parts containing a maximum of 100 sentences.", "(In a different setting, we created sequences by connecting each non-initial comment with the initial comment, generating many 2-comment sequences; this reflects the fact that in many QA forums, users mostly answer the questions asked in the initial post. In our experiments on in-domain training, we found this competitive with our 'one long chain' structure; however, the adaptation in this setting was much worse because of the mismatch in the discourse structures of synchronous and asynchronous conversations.)", "The number of epochs and batch size were fixed to 30 and 5 (conversations), respectively.", "We ran each experiment 5 times, each time with a different random seed, and report the average of the (2 folds × 5 = 10) runs along with the standard deviation.", "Recently, Crane (2018) showed that the main source of variability in the results of neural models comes from the random seed, and recommended reporting the distribution of results from a range of seeds.", "Results.", "We present the results in Table 5.",
"From the first block of results, we notice that both the SVM and FFN baselines perform poorly compared to the other models, which tune the word embeddings and learn the sentence representation on the SAR task.", "The second block contains five LSTM variants: (i) B-LSTM_rand, a bi-LSTM with random initialization; (ii) B-LSTM_gl, a bi-LSTM initialized with off-the-shelf Glove embeddings; (iii) B-GRU_c-gl, a bidirectional Gated Recurrent Unit (Cho et al., 2014) initialized with our conversational Glove; (iv) B-LSTM_c-gl, a bi-LSTM initialized with conversational Glove; and (v) S-LSTM_c-gl, a 2-layer stacked LSTM with conversational Glove (increasing the number of layers in S-LSTM_c-gl did not give any gain; see Table 2 in the Appendix).", "From the results, we can make the following conclusions.", "First, B-LSTM_rand overfits extremely on the asynchronous datasets, giving the worst results among the LSTMs.", "Second, pretrained vectors help to achieve better results; however, compared to the off-the-shelf vectors, our conversational word vectors yield much higher F1, especially on the asynchronous datasets that are smaller in size (5-11% absolute gains).", "This demonstrates that pretrained word embeddings provide an effective method for semi-supervised learning when they are learned from relevant datasets.", "The last block shows the results of our models.", "It is evident that both H-LSTM and H-LSTM-CRF outperform the other models on all the datasets except QC3, where the difference is very small.", "They also give the best F1 reported so far on MRDA, outperforming the B-LSTM models of Joty and Hoque (2016) and the S-LSTM model of Khanpour et al. (2016).",
"When we compare the two models, we notice that H-LSTM outperforms H-LSTM-CRF on all the datasets.", "A reason for this could be that the contextual dependency is already captured by the upper LSTM layer, and the data may be too small for the CRF to offer anything more.", "Settings.", "We compare our adversarial adaptation method with three baseline methods: Transfer, Merge and Fine-tune.", "Transfer models are trained on the source (MRDA) and tested on the target (QC3, TA, BC3).", "Our adversarial unsupervised adaptation method is comparable to the Transfer method, as both use labeled data only from the source domain.", "In Merge, models are trained on the concatenated training sets of the source and target datasets.", "Fine-tune is a widely used adaptation method for neural models (Chu and Wang, 2018).", "In this method, we first train a model on the source domain until convergence, and then fine-tune it on the target by training it further on the target dataset.", "Both Merge and Fine-tune are comparable to our semi-supervised/supervised adaptation, as these methods use labeled data from the target domain.", "For the semi-supervised experiments, we take smaller subsets (e.g., 25%, 50%, and 75% of the labeled data) from the target domain.", "We also compare our method with Neural SCL (Ziser and Reichart, 2017), another domain adaptation method in the neural framework.", "We used the implementation made available by the authors (https://github.com/yftah89/structural-correspondence-learning-SCL).", "For training our adaptation models,", "Results.", "The adaptation results are shown in Table 6.", "We observe that without any labeled data from the target (Unsup. adap), our adversarially adapted models (Adv-H-LSTM, Adv-H-LSTM-CRF) perform worse than the Transfer baseline on all three datasets.", "In this case, since the out-of-domain labeled dataset (MRDA) is much larger, it overwhelms the model, inducing features that are not relevant for the task in the target domain.", "However, when we provide the models with some labeled in-domain examples in the semi-supervised (50%) setting, we observe about 11% absolute gains in QC3 and BC3 over the corresponding Merge baselines, and 7-8% gains over the corresponding Fine-tune baselines.", "As we add more target labels (100%), the performance of our adapted models (Sup. adap) improves further, yielding sizable improvements (3% absolute) over the corresponding baselines on all datasets.",
"Also notice that our adversarial adaptation outperforms the Merge and Fine-tune methods for all models over all datasets, showing its effectiveness.", "Figure 3 presents the F1 scores of our adapted models with varying amounts of labeled data in the target domain.", "We notice that the largest improvements for all three datasets come from the first 25% of the target labels.", "The gains from the second quartile are also relatively higher than those of the last two quartiles for TA and BC3.", "Another interesting observation is that H-LSTM-CRF performs better in the unsupervised and semi-supervised settings (i.e., with fewer target labels).", "In other words, H-LSTM-CRF adapts better than H-LSTM with small target datasets by exploiting the tag dependencies in the source.", "As we include more labeled data from the target, H-LSTM catches up with H-LSTM-CRF.", "Surprisingly, Neural SCL performs the worst.", "We suspect this is due to the mismatches between the pivot features of the source and target domains.", "If we compare our adaptation results with the in-domain results in Table 5, we notice that using the same amount of labeled data in the target, our supervised adaptation gives 3-4% gains across the datasets.", "Our semi-supervised adaptation using half of the target labels (50%) also outperforms the in-domain models that use all the target labels.", "To further analyze the cases where our adapted models make a difference, Figure 4 shows the confusion matrices for the adapted H-LSTM and the non-adapted H-LSTM on the concatenated test sets of QC3, TA, and BC3.", "In general, our classifiers confuse Response with Statement, and Suggestion with Statement, the most.", "We noticed similar phenomena in the human annotations, where the annotators had difficulties with these three acts.", "It is however noticeable that the adapted H-LSTM is less affected by class imbalance, and it can detect the Suggestion and Polite acts more correctly than the non-adapted one.", "We proposed an adaptation framework for speech act recognition in asynchronous conversation.", "Our base model is a hierarchical LSTM encoder with a Softmax or CRF output layer, which achieves state-of-the-art results for in-domain training.", "Crucial to its performance are the conversational word embeddings.", "We adapted our base model with adversarial training to effectively leverage out-of-domain meeting data and to improve the results further.", "A comparison with existing methods and baselines in different training scenarios demonstrates the effectiveness of our approach.", "Shafiq Joty would like to thank the funding support from MOE Tier-1 (Grant M4011897.020)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "method", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "objective", "other" ]
[ "The automatic text-based diagnosis remains a challenging task for clinical use because it requires appropriate balance between accuracy and interpretability.", "In this paper, we attempt to propose a solution by introducing a novel framework that stacks Bayesian Network Ensembles on top of Entity-Aware Convolutional Neural Networks (CNN) towards building an accurate yet interpretable diagnosis system.", "The proposed framework takes advantage of the high accuracy and generality of deep neural networks as well as the interpretability of Bayesian Networks, which is critical for AI-empowered healthcare.", "The evaluation conducted on the real Electronic Medical Record (EMR) documents from hospitals and annotated by professional doctors proves that, the proposed framework outperforms the previous automatic diagnosis methods in accuracy performance and the diagnosis explanation of the framework is reasonable.", "The automatic diagnosis of diseases has drawn the increasing attention from both research communities and industrial companies in the recent years due to the advancement of artificial intelligence (AI) (Liang et al., 2019; Esteva et al., 2019; Liu et al., 2018).", "As reported in (Anandan et al., 2019), AI-enabled analysis software is helping to guide doctors and other health-care workers through diagnostic processes and questioning to arrive at treatment decisions with greater speed and accuracy. Although the image-based diagnosis has been well studied using PACS (Picture Archiving and Communication Systems) data (Lit-jens et al., 2017), the text-based diagnosis for Clinical Decision Support (CDS) (Berner, 2007) remains difficult due to the rare access to reliable clinical corpus and the difficulty in balancing between accuracy and interpretability.", "Diagnosis (Acute tonsillitis) There have been attempts to study automatic text-based diagnosis with Electronic Medical Record (EMR) documents integrated in the Hospital Information System (Mullenbach et al., 2018; Yang et al., 2018; Girardi et al., 2018).", "Basically, an EMR document is written by a doctor and consists of several sections that describe the illness of the patient.", "Besides the patient's basic information like name, age and gender, an EMR document contains Chief Complaint (CC), History of Present Illness (HPI), Physical Examination (PE), Test Reports (TR, e.g. lab test reports and PACS reports), Diagnosis, etc.", "Table 1 shows a real outpatient EMR document from a hospital.", "These sections describe the patient's medical situation from different aspects: CC summarizes the patient's main discomforts of this visit.", "HPI extends CC by adding more details and findings from the conversation between doctor and patient.", "PE shows the findings by physically examining the patient's body, e.g. 
"TR are the objective findings from the lab test reports or the PACS reports.", "In the hospitals, the doctors make a comprehensive analysis mainly based on CC, HPI, PE, TR and the basic information, and make a diagnosis.", "However, it is very hard for computers to automatically understand all the diverse sections and capture the key information before making an appropriate diagnosis.", "Besides, an inpatient EMR document is similar to that in Table 1, except that HPI, PE and TR are usually more lengthy and detailed.", "The framework proposed in this work can be applied to both the outpatient and the inpatient EMR documents, and we will not distinguish them later.", "In this study, we bring forward a novel framework of automatic diagnosis with EMR documents for CDS.", "(Different from an Electronic Health Record (EHR), where the illnesses of a patient's multiple visits are combined together, an EMR only contains the patient's illness of this particular visit; EMRs are more generally used in the hospitals in China.)", "Specifically, we propose to predict the main diagnosis based on the patient's current illness.", "Different from the previous works (Yang et al., 2018; Sha and Wang, 2017; Li et al., 2017; Girardi et al., 2018; Mullenbach et al., 2018) that solely rely on end-to-end neural models, we propose to stack Bayesian Network (BN) ensembles on top of Entity-aware Convolutional Neural Networks (ECNN) in automatic diagnosis, where ECNN improves the accuracy of the prediction and the BN ensembles explain the prediction.", "The proposed framework attempts to bring some interpretability to the predictions by incorporating the knowledge encoded in the BN ensembles.", "The main contributions of this work are as follows: We propose a novel framework that stacks the Bayesian network ensembles on top of the entity-aware convolutional neural networks to bring interpretability into automatic diagnosis without compromising the accuracy of deep learning.", "Interpretability is very important in AI-empowered healthcare studies.", "We bring forward three variants of Bayesian Networks for disease inference that provide interpretability.", "Moreover, we ensemble these BNs towards more robust diagnosis results.", "The evaluation conducted on real EMR documents from hospitals shows that the proposed framework outperforms the previous automatic diagnosis methods with EMRs.", "The proposed framework has been used as a critical component in the clinical decision support system developed by Baidu, which assists physicians in diagnosis in hundreds of primary healthcare facilities in China.", "We publish the Chinese medical knowledge graph of Gynaecology and Respiration used in our Bayesian Network for disease inference with this paper, for reproducibility.", "The data is available at https://github.com/PaddlePaddle/Research/tree/master/KG/ACL2020_SignOrSymptom_Relationship.", "Due to the rapid advancement of machine intelligence, the text-based automatic diagnosis is becoming one of the most important applications of machine learning and natural language processing in recent years (Anandan et al., 2019; Koleck et al., 2019).", "Different from diagnosis or question answering on the Web (Chen et al., 2019), diagnosis for the CDS takes place in the hospitals and clinics, and the predictive algorithm is integrated into the Hospital Information System to assist doctors and physicians in the diagnosis.", "Liang et al. (2019) proposes a top-down hierarchical classification method towards diagnosing pediatric diseases.",
"From the root to the leaf, each level on the diagnostic hierarchy is a logistic regression model that performs classification on labels from coarse granularity to fine-grained granularity, e.g., from organ systems down to respiratory systems and to upper respiratory systems.", "This method requires heavy manual annotation of training samples at different levels of the hierarchy.", "Zhang et al. (2017) combines the variational auto-encoder and the variational recurrent neural network together to make diagnoses based on laboratory test data.", "However, laboratory test data are not the only resources considered in this paper.", "Prakash et al. (2017) introduces memory networks into diagnostic inference based on free-text clinical records, with an external knowledge source from Wikipedia.", "Sha and Wang (2017) proposes a hierarchical GRU-based neural network to predict the clinical outcomes based on the medical code sequences of the patient's previous visits.", "It deals with the sequential disease forecasting problem with EHR data rather than the diagnosis problem for the current visit with an EMR document.", "Similarly, Choi et al. (2016a) studies the RNN-based model for clinical event prediction.", "Baumel et al. (2017) investigates the multi-label classification problem for discharge summaries of EHR with a hierarchical attention-bidirectional GRU.", "The most similar works to ours are (Yang et al., 2018; Li et al., 2017), which train an end-to-end convolutional network model to predict diagnoses based on EMRs.", "Besides, Girardi et al. (2018) improves the CNN model with the attention mechanism in automatic diagnosis.", "Moreover, Mullenbach et al. (2018) studies a label-wise attention model to further improve the accuracy of diagnosis at the cost of more computation time.", "Choi et al. (2016b) proposes a reverse time attention mechanism for interpretable healthcare studies.", "Different from the previous studies, the novelty of this paper is to bring interpretability into automatic diagnosis by stacking the ensembles of Bayesian networks on top of the entity-aware convolutional neural networks.", "Automatic diagnosis can be formally considered as a classification problem where the proposed method outputs a probability distribution Pr(d | S) over all diseases d ∈ D based on the illness description S.", "In this study, S corresponds to the patient's EMR document, i.e., S consists of several sections of texts and some structured data like age, gender and medical department.",
"We bring forward a new framework that combines black-box deep learning and white-box knowledge inference to diagnose diseases with EMR documents.", "Figure 1 shows the architecture of the proposed framework.", "Firstly, the medical entities are extracted from the EMR contents.", "Then, the EMR document is fed into the entity-aware convolutional networks to generate the disease prior probability.", "Next, the Bayesian network ensembles perform disease inference based on the prior probability and the probabilistic graphical models (PGMs) before ensembling the final predictions.", "Before introducing the convolutional and the Bayesian networks, we first discuss a basic component of this framework: named entity recognition (NER).", "NER extracts the entities as well as their types from text sentences, which is very important to capture the key information of the texts.", "In our experiments, we used Baidu's enterprise Chinese medical NER system, which integrates the advanced NER models (Dai et al., 2019; Jia et al., 2019) and extracts entities of symptoms, vital signs, diseases and test report findings.", "The F1 score of the NER system we use is 91% in a separate evaluation conducted on 1000 deduplicated sentences from real EMR documents by 10 certificated physicians in China.", "Meanwhile, the polarity (positive (+), negative (-) or unknown (?)) of entities is also recognized.", "The polarity in this work objectively means the presence or absence of a finding in a given EMR.", "It is recognized in conjunction with a rule-based method (using a vocabulary of negative Chinese words) and a polarity detection model.", "Table 2 shows the NER results of the EMR document shown in Table 1.", "Please note that the disease (acute tonsillitis) from the diagnosis section is the ground-truth label to predict, and it will not be included in the input to the predictive model in the evaluation.", "In the offline processing of the EMR corpus, we preserved the top-K most frequent entities of all types as the entity vocabulary.", "In later experiments, we empirically set K = 10,000.", "The entity vocabulary will be used to construct the one-hot feature for each EMR document, which will be introduced later.", "Since NER is not the focus of this study, readers can choose the public Chinese NER API from Baidu for fast experiments.", "We will focus on the major contributions of the proposed framework in the next sections.",
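As an illustration of the entity vocabulary and the one-hot feature described above, here is a small sketch under simplified assumptions: entities are represented as (entity, type, polarity) tuples, and the age/gender encoding is a hypothetical choice, not the paper's specification.

```python
# Sketch: building the top-K entity vocabulary and the one-hot feature f
# with age and gender appended; the age normalization is an assumption.
from collections import Counter

def build_entity_vocab(emr_entities, k=10000):
    """emr_entities: iterable of per-EMR lists of (entity, type, polarity).
    Keeps the top-K most frequent entities of all types."""
    counts = Counter(e for doc in emr_entities for (e, _, _) in doc)
    return {e: i for i, (e, _) in enumerate(counts.most_common(k))}

def one_hot_feature(doc, vocab, age, gender):
    """K-dim indicator of positive findings, with age and gender appended."""
    f = [0.0] * len(vocab)
    for entity, _type, polarity in doc:
        if polarity == "+" and entity in vocab:
            f[vocab[entity]] = 1.0
    return f + [age / 100.0, 1.0 if gender == "F" else 0.0]

docs = [[("sore throat", "symptom", "+"), ("fever", "symptom", "+")],
        [("fever", "symptom", "-")]]
vocab = build_entity_vocab(docs, k=10000)
x = one_hot_feature(docs[0], vocab, age=24, gender="F")
```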
"The convolutional networks take as input the list of texts w.r.t. the sections of an EMR document, as well as the medical entities extracted from them, and output the probability distribution of the diseases.", "To distinguish from the previous CNN models without medical entities (Yang et al., 2018; Li et al., 2017), we use ECNN to denote the entity-aware CNN model proposed in this paper, where another branch of fully connected layers processes the medical entities and outputs the corresponding feature representation.", "Let N denote the number of sections (CC, HPI, PE, TR, etc.) selected from the EMR document to construct ECNN.", "ECNN consists of two parts: (1) N convolutional towers, each of which reads a unique section, and (2) one multi-layer perceptron (MLP) branch that reads a high-dimensional hand-crafted feature.", "Similar to the previous CNN method for text classification (Kim, 2014), each convolutional tower processes the input sequence with three kernels of various lengths, resulting in a multi-channel feature output.", "The three kernels process the input with 3-grams, 4-grams and 5-grams, respectively, and their outputs are concatenated as the output of a convolutional tower.", "Each kernel in the convolutional networks has 100 filters with stride 1.", "The input is padded with the 'valid' method, and the output is activated by ReLU.", "For the input of the MLP, we create the entity vocabulary that consists of the top-K frequent entities.", "Then, each EMR document is transformed into a K-dimensional one-hot feature f.", "That is, if the i-th entity in the entity vocabulary appears as a positive finding in the input EMR, then the i-th dimension of f is set to 1; otherwise, it is set to 0.", "Moreover, the patient's age and gender are appended to f to get the hand-crafted feature for the MLP.", "The MLP contains one dense layer with 128 hidden units, activated by the sigmoid function.", "ECNN is trained with the Adam optimizer (learning rate 0.001), 20 epochs and a batch size of 32.", "The output of each convolutional tower and the output of the MLP are further concatenated before passing through the dropout and the softmax layers.", "Similar to Kim (2014), the dropout rate is empirically set to 0.5.", "A |D|-dimensional feature is output by ECNN as the disease priors for the inference in the next step, where D is the disease set.", "In ECNN, the CNNs are supposed to capture the sequential signals in the section texts, and the MLP is supposed to encode the feature of the critical entities.", "By jointly modeling with CNNs and the MLP, the proposed ECNN is expected to have superior performance to either of them alone.",
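A minimal PyTorch sketch of the ECNN wiring described above follows. The sizes mirror the stated hyperparameters (3/4/5-gram kernels, 100 filters, 128 MLP units, dropout 0.5), but the global max-pooling after each convolution and the exact concatenation order are assumptions where the text leaves details open; this is a sketch, not the production model.

```python
import torch
import torch.nn as nn

class ECNN(nn.Module):
    """Sketch of the entity-aware CNN: one convolutional tower per EMR
    section plus an MLP over the K-dim entity feature (K = 10,000 entities
    plus age and gender here)."""
    def __init__(self, vocab, emb=128, sections=4, k=10002, diseases=33):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.towers = nn.ModuleList([
            nn.ModuleList([nn.Conv1d(emb, 100, n) for n in (3, 4, 5)])
            for _ in range(sections)])
        self.mlp = nn.Sequential(nn.Linear(k, 128), nn.Sigmoid())
        self.out = nn.Sequential(nn.Dropout(0.5),
                                 nn.Linear(sections * 300 + 128, diseases))

    def forward(self, texts, feat):
        # texts: list of `sections` tensors (batch, seq); feat: (batch, K)
        parts = []
        for tower, x in zip(self.towers, texts):
            e = self.emb(x).transpose(1, 2)           # (batch, emb, seq)
            parts += [torch.relu(conv(e)).max(dim=2).values  # max-over-time
                      for conv in tower]
        parts.append(self.mlp(feat))
        return self.out(torch.cat(parts, dim=1))      # disease logits/priors

model = ECNN(vocab=30000)
texts = [torch.randint(0, 30000, (2, 50)) for _ in range(4)]
logits = model(texts, torch.rand(2, 10002))
```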
"Although ECNN also outputs a probability distribution over all diseases, the result is not interpretable due to its end-to-end nature.", "However, the interpretability is very important in the CDS, to explain how the diagnosis is generated by machines.", "Thus, we propose the Bayesian network ensembles on top of the output of ECNN to explicitly infer diseases with PGMs.", "There are three steps.", "The first step is relation extraction: we extract the relations between disease and the other types of entities, (disease, finding), where a finding can be a symptom, vital sign, test report finding, etc.", "The rest of this paper will use finding to denote any type of entity other than disease.", "Relation extraction is performed in conjunction with (disease, finding) co-occurrence mining and the deep extraction model (Shi et al., 2019) over the EMR documents and the textbooks (the undergraduate teaching materials in most of the medical schools in China, authorized by the publisher).", "Then, the pairs with co-occurrence counts larger than a support threshold (e.g., 5) are preserved.", "The extracted relations are reviewed by 10 certificated physicians.", "The invalid extracted relations, which result from issues like incorrect recognition of entities or polarities by NER, or a symptom caused by the secondary diagnosis but incorrectly paired with the first diagnosis, are removed before being added to the medical knowledge graph.", "Therefore, the relation (disease, finding) in the medical knowledge graph can, to some extent, be interpreted as: disease causes finding.", "In our study, the pairs are mined from 275,797 EMR documents of two medical departments (Gynaecology and Respiration).", "On average, each disease of Gynaecology in our experiments is associated with 24 findings, and each disease of Respiration with 42.", "For Gynaecology, there are 33 diseases, 305 symptoms, 143 vital signs and 25 test report findings in the PGMs.", "For Respiration, there are 21 diseases, 263 symptoms, 187 vital signs and 31 test report findings in the PGMs.", "The second step computes the connection weights between diseases and findings, using six features.", "(1) Occurrence: the weight of finding i given disease j is w(i;j) = n(i,j) / Σ_k n(k,j), (1) where n(i,j) is the number of co-occurrences of finding i and disease j; w(i;j) is computed separately for each type of finding.", "(2) TF-IDF feature: similar to the TF-IDF feature in information retrieval, the weight of finding i given disease j is w(i;j) = n(i,j) · (log((|D|+1)/(n_i+1)) + 1), (2) where n_i is the number of diseases whose EMR documents contain finding i.", "(3) TFC feature: the TFC feature (Salton and Buckley, 1988) is a variant of TF-IDF, and it estimates the weight of finding i given disease j as w(i;j) = n(i,j) · log(|D|/n_i) / sqrt(Σ_k (n(k,j) · log(|D|/n_k))²). (3)", "(4) TF-IWF feature: the Term-Frequency Inverse-Word-Frequency (TF-IWF) feature (Basili et al., 1999) estimates the weight of finding i given disease j as w(i;j) = n(i,j) · (log(Σ_k t_k / t_i))², (4) where t_i represents the number of occurrences of word i in the whole training corpus.", "(5) CHI feature: the CHI feature (χ² test) measures how much a term is associated with a class from a statistical view; the CHI feature of finding i given disease j is (Yang and Pedersen, 1997) w(i;j) = N · (A·D − C·B)² / ((A+C)(B+D)(A+B)(C+D)), (5) where N, A, B, C and D are the number of all documents, the number of documents containing finding i and belonging to disease j, the number of documents containing i but not belonging to j, the number of documents belonging to j but not containing i, and the number of documents neither containing i nor belonging to j, respectively.", "(6) Mutual information: this feature assumes that the stronger the association between a finding and a disease, the higher their mutual information will be; with the same notation as the CHI feature, it is defined as w(i;j) ≈ log(A·N / ((A+C)(A+B))). (6)", "The above features are normalized by disease before being applied to the diagnosis inference.", "By default, the average of the six features is used as the connection weight.",
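For illustration, the sketch below computes two of the six weighting features, Eq. (1) and Eq. (2), from a finding-by-disease co-occurrence matrix and normalizes the averaged weights per disease; it is a didactic sketch, not the production pipeline, and the equal 0.5/0.5 averaging stands in for the six-feature average.

```python
# Didactic sketch of connection-weight features over a findings-by-diseases
# co-occurrence matrix n(i, j); only Eq. (1) and Eq. (2) are shown.
import numpy as np

def occurrence_weight(n):
    """Eq. (1): w(i;j) = n(i,j) / sum_k n(k,j)."""
    return n / np.maximum(n.sum(axis=0, keepdims=True), 1e-9)

def tfidf_weight(n):
    """Eq. (2): w(i;j) = n(i,j) * (log((|D|+1)/(n_i+1)) + 1), where n_i is
    the number of diseases whose EMR documents contain finding i."""
    D = n.shape[1]
    n_i = (n > 0).sum(axis=1, keepdims=True)
    return n * (np.log((D + 1) / (n_i + 1)) + 1.0)

def normalize_by_disease(w):
    # Features are normalized per disease before inference, as stated above.
    return w / np.maximum(w.sum(axis=0, keepdims=True), 1e-9)

counts = np.array([[30., 2.], [5., 20.], [0., 7.]])  # 3 findings, 2 diseases
w = normalize_by_disease(0.5 * occurrence_weight(counts)
                         + 0.5 * tfidf_weight(counts))
```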
"The third step performs the diagnosis inference: we propose the Bayesian network ensembles, in which a group of PGMs built with the extracted relations and weights are ensembled towards the final predictions.", "Firstly, multiple bipartite graphs between the disease nodes and each type of finding nodes are derived from the medical knowledge graph.", "For M types of findings, there will be M bipartite graphs.", "In later experiments, M = 3, i.e., (disease, symptom), (disease, vital sign) and (disease, test report finding).", "Based on the findings extracted from the EMR document, each bipartite graph can be independently used to infer the disease distribution: Pr(d | F+, F−) = Pr(d, F+, F−) / Pr(F+, F−), (7) where F+ and F− are the sets of the positive and the negative findings in the given EMR document, respectively.", "Following Eq. (7), it is straightforward to get Pr(d | F+_sym, F−_sym), Pr(d | F+_sign, F−_sign) and Pr(d | F+_test, F−_test), i.e., the predictions based on symptoms alone, vital signs alone and test report findings alone.", "To compute the joint probabilities Pr(d, F+, F−) and Pr(F+, F−), we refer the readers to the QuickScore method (Heckerman, 1990) and the deduction therein.", "To speed up the computation when a disease is associated with too many positive findings, the variational method on the PGMs is applied (Jordan et al., 1999).", "Next, we assemble these bipartite graphs in different ways to get three variants of PGMs (Fig. 1).", "(1) Parallel: this method independently performs inference with each type of finding and averages the results: Pr(d | F+, F−) = avg(Pr(d | F+_sym, F−_sym), Pr(d | F+_sign, F−_sign), Pr(d | F+_test, F−_test)). (8)", "Parallel assumes that the ways to diagnose a disease differ across the types of entities, and that their predictions can complement each other.", "An extension of Parallel is to perform a weighted sum of the three predictions.", "For simplicity, we experiment with equal weights in this paper.", "(2) Universal: this method mixes all types of findings together into a single network: Pr(d | F+, F−) = Pr(d | F+_sym, F−_sym, F+_sign, F−_sign, F+_test, F−_test). (9)", "It means that Universal does not distinguish the types of entities and performs type-free Bayesian inference.", "Compared with the other two PGM variants, the connections between diseases and findings in Universal are much denser.", "It assumes that the prediction benefits from the joint inference by seeing more findings of multiple types at the same time.", "(3) Cascade: this method constructs multilayer Bayesian networks with the finding types as layers and uses the output of the previous layer as the prior probability for the current layer: Pr(d_sym) = Pr(d | F+_sym, F−_sym) s.t. d ∼ Pr(d_CNN); Pr(d_sign) = Pr(d | F+_sign, F−_sign) s.t. d ∼ Pr(d_sym); Pr(d_BN) = Pr(d_test) = Pr(d | F+_test, F−_test) s.t. d ∼ Pr(d_sign), (10) where Pr(d_CNN) is the disease probability distribution computed by the convolutional networks in Sec. 3.2, and d ∼ Pr(d_x) means that variable d follows the prior probability distribution Pr(d_x).", "Cascade first infers the disease with symptoms alone, using the disease probability from ECNN as the prior.", "Then, it infers the disease with vital signs alone, using the disease probability from the symptom-based inference as the prior.", "Finally, it infers the disease with test report findings alone, using the disease probability from the previous output as the prior.", "We present the cascade approach in this order because it shows the best results compared with other orders in our experiments.", "Cascade assumes that each type of entity can be used to refine the previous predictions by incorporating additional information.", "The outputs of the above three PGMs are ensembled (e.g., by a weighted sum) to produce the final predictions.",
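The ensembling logic can be sketched as follows, with a simplified noisy-OR-style bipartite inference standing in for the exact QuickScore computation of Eq. (7); the Parallel and Cascade variants follow Eq. (8) and Eq. (10), and all inputs are illustrative assumptions.

```python
# Didactic sketch of the PGM ensembles; the posterior below is a simplified
# stand-in for the exact QuickScore computation, not the paper's algorithm.
import numpy as np

def noisy_or_posterior(prior, W, pos_idx, neg_idx):
    """Treat column d of W as the probability that disease d causes each
    finding, combine the evidence noisy-OR-style, and renormalize."""
    score = np.log(prior + 1e-12)
    for i in pos_idx:                    # positive findings favor diseases
        score += np.log(W[i] + 1e-12)    # that can cause them
    for i in neg_idx:                    # negative findings penalize causes
        score += np.log(1.0 - W[i] + 1e-12)
    p = np.exp(score - score.max())
    return p / p.sum()

def parallel_ensemble(prior, graphs, findings):
    """Eq. (8): average the posteriors of the per-type bipartite graphs."""
    posts = [noisy_or_posterior(prior, W, pos, neg)
             for W, (pos, neg) in zip(graphs, findings)]
    return np.mean(posts, axis=0)

def cascade_ensemble(prior, graphs, findings):
    """Eq. (10): each graph's posterior becomes the next graph's prior,
    ordered symptoms -> vital signs -> test report findings."""
    p = prior
    for W, (pos, neg) in zip(graphs, findings):
        p = noisy_or_posterior(p, W, pos, neg)
    return p

prior = np.array([0.5, 0.3, 0.2])                  # ECNN priors (stand-in)
W_sym = np.array([[0.8, 0.1, 0.3], [0.05, 0.7, 0.2]])  # 2 symptoms, 3 diseases
p = cascade_ensemble(prior, [W_sym], [([0], [1])])     # symptom 0 +, 1 -
```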
"In all, the proposed framework takes the raw EMR document and the NER results as input, and outputs the diagnosis predictions.", "Although we experiment with three types of entities in this paper, the proposed Bayesian network ensemble method is not limited to these entity types.", "It is easy to add more entity types to the proposed method when applicable.", "One of the major contributions of this work is to bring interpretability into automatic diagnosis by stacking the Bayesian network ensembles on top of the convolutional networks.", "We illustrate how the predictions are explained, i.e., interpretability, by the BN with Fig. 2.", "We use the symptom-based bipartite graph for illustration for simplicity; the other types of entities explain the predictions in the same way.", "[Figure 2: An example of the interpretability of the Bayesian network.]", "In Fig. 2, if only pharyngalgia is extracted from a patient's EMR, then upper respiratory infection (URI) will be predicted with high probability, but the probabilities of pneumonia and phthisis will be set to the minimum because neither of them is likely to cause pharyngalgia based on their co-occurrences in the corpus.", "The proposed method can explain the prediction of URI with the symptom pharyngalgia and their co-occurrence count, in addition to the prediction probability.", "If pharyngalgia and hemoptysis are both extracted from a patient's EMR, then URI as well as phthisis will be predicted with some positive probability (their rankings depend on both their prior probabilities and their connection weights to pharyngalgia and hemoptysis), but pneumonia will be predicted with the minimum probability.", "This is because the noisy-OR gate is used in the Bayesian inference (Heckerman, 1990).", "The proposed method explains the prediction of URI with the positive finding of the symptom pharyngalgia, and explains the prediction of phthisis with the positive finding of the symptom hemoptysis, as well as their co-occurrences.", "In this section, we introduce the data sets we experiment with and the evaluation results.", "The proposed framework is evaluated on real EMR documents (mostly admission records).", "We have collaborated with several top hospitals in China and are authorized to conduct experiments with 275,797 EMR documents of two medical departments for the evaluation (see Table 3).", "(Unfortunately, we have not yet obtained permission from the hospitals to make the evaluation data sets public at this moment, because EMR documents are legally protected by Chinese law and contain much sensitive information about the patients and the doctors.", "We are currently working with the hospitals on contributing benchmark EMR data sets for automatic diagnosis, but it takes time due to the legal issues.", "We suggest that readers focus their attention on the contribution of the novel automatic diagnosis framework in this paper.)", "[Table 3 (fragment): Respiration — 84,152 EMR documents, 214 test documents, 21 diseases.]", "The collected EMR documents are processed as follows: the main diagnosis in each EMR document is extracted as its disease label.", "Then, we select the top diseases from the collected EMR documents, which results in 33 diseases from Gynaecology (including Salpingitis, Cervical Carcinoma, Endometritis, Fibroid, etc.) and 21 diseases from Respiration (including Upper Respiratory Infection, Chronic Bronchitis, Pneumonia, Asthma, Lung Cancer, etc.) that cover over 90% of all EMR documents.", "There is a long-tail distribution of EMR documents by diseases, as shown in Fig. 3, and each of the selected diseases has over 100 EMR documents for training.",
"The other diseases are discarded in the experiments due to the lack of enough EMR documents to train a trustworthy model.", "Next, in order to ensure the validity of the disease labels in the test set, we recruit 10 professional physicians to review the labels by evenly sampling EMR documents under each disease.", "In this way, we collected 606 reviewed EMR documents for Gynaecology and 214 for Respiration as the test set (see the disease distribution in the supplemental files).", "The remaining EMR documents are used for training.", "Since we are not given the identity of the patient w.r.t. each EMR, the training and the testing sets are considered disjoint.", "In later experiments, we report the performance separately for the two departments.", "It is more important, and more difficult, to distinguish diseases within the same department than across departments, due to the overlapping symptoms, signs and test report findings among similar diseases.", "We conduct experiments on the collected data sets to evaluate the performance of the framework.", "In the experiments, we used four CNN towers ($N = 4$) w.r.t. CC, HPI, PE and TR, and each tower has three channels with kernel lengths 3, 4 and 5 (representing 3-grams, 4-grams and 5-grams).", "We use the Jieba package to perform Chinese word segmentation on the training set and remove the punctuation from the segmentation results.", "The segmented word corpus is used to train 100-dimensional word embeddings with the Word2Vec (Mikolov et al., 2013) method (window of 5, min support of 5) implemented in the gensim package.", "The top 100,000 frequent segmented words constitute the word vocabulary in the embedding layer of ECNN.", "Thus, the size of the embedding layer is (100000, 100).", "Besides, the top 10,000 frequent entities (not segmented words), as well as age and gender, are used to construct the one-hot feature fed into the MLP, which consists of one hidden dense layer (128 Sigmoid units) for efficiency reasons.", "Similar to Kim (2014), the dropout rate is empirically set to 0.5.", "By default, we use the average of all six relation weights in the experiments.", "The final predictions are the average of the three PGM variants.", "ECNN and PGMs are trained separately offline.", "Table 4 shows the Top-k sensitivity (the micro average of the per-disease Top-k sensitivity, commonly used as the accuracy measurement in healthcare studies (Liang et al., 2019)) under the two departments.",
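As a concrete reference for the preprocessing described above, a minimal sketch of the segmentation and embedding training is shown below; it assumes gensim >= 4 and uses placeholder documents and a toy punctuation set.

```python
import string
import jieba
from gensim.models import Word2Vec

PUNCT = set(string.punctuation) | set("，。！？；：、（）")   # toy punctuation set
train_docs = ["<raw EMR text 1>", "<raw EMR text 2>"]       # placeholders

# Jieba word segmentation with punctuation removed.
corpus = [[w for w in jieba.lcut(doc) if w not in PUNCT] for doc in train_docs]

# 100-dimensional Word2Vec embeddings, window 5, min count 5.
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=5)

# The top 100,000 frequent words form the ECNN embedding vocabulary
# (gensim orders index_to_key by descending frequency).
vocab = w2v.wv.index_to_key[:100000]
```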
"Generally, sensitivity is usually used in binary classification (mostly outputting yes or no).", "Similarly, when dealing with multi-class rather than binary classification, the proposed automatic diagnosis model outputs the probability distribution over $K$ diseases (classes) for a given EMR.", "Suppose the gold disease $d_i$ is included in the Top-k predictions (ranked by probability) for $l_i$ out of the $n_i$ EMRs of disease $d_i$.", "The Top-k sensitivity of the proposed model on disease $d_i$ is then $\frac{l_i}{n_i}$.", "Furthermore, in the overall evaluation of the proposed model on all diseases, we use the micro average over all classes as the overall Top-k sensitivity: $\text{sensitivity} = \frac{\sum_i l_i}{\sum_i n_i}$.", "CAML (Mullenbach et al., 2018) performs label-wise attention on top of a CNN model.", "CNN (Yang et al., 2018) concatenates CC, HPI and TR together before sending them to the multi-channel CNN model.", "ACNN (Girardi et al., 2018) incorporates gram-level attention with a CNN model.", "The empirical settings of the hyperparameters are taken from the original papers.", "Besides, they share the same training set, training epochs, learning rate and batch size with the proposed methods.", "Among the proposed methods, PGM-* (-C, -P, -U and -E represent Cascade, Parallel, Universal and Ensemble, respectively) are the methods that rely solely on the Bayesian networks, which use the disease distribution in the training set as the prior probability.", "ECNN is the proposed method without the BN ensembles.", "ECNN-PGM-* are the combined methods, and ECNN-PGM-E is the proposed method with ECNN and the Bayesian network ensembles in Figure 1.", "According to the results: (1) most of the proposed methods ECNN-PGM-* outperform the previous automatic diagnosis methods, which shows the effectiveness of the proposed methods.", "(2) ECNN outperforms CNN due to the incorporation of medical entities.", "Jointly modeling free texts and medical entities brings extra accuracy compared with modeling only either one.", "(3) Stacking Bayesian networks on top of the neural networks is very likely to further improve the performance, especially with the ensemble of the predictions from multiple PGMs.",
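For reference, the micro-averaged Top-k sensitivity reported in these tables can be computed with a short sketch like the following (all names are illustrative):

```python
import numpy as np

def topk_sensitivity(probs, labels, k=1):
    """probs: (num_emrs, num_diseases) predicted distributions;
    labels: (num_emrs,) gold disease indices. An EMR counts as a hit (l_i)
    if its gold disease is among the k highest-probability predictions;
    the micro average is sum_i l_i / sum_i n_i over all EMRs."""
    topk = np.argsort(-probs, axis=1)[:, :k]        # indices of the k best diseases
    hits = (topk == labels[:, None]).any(axis=1)    # per-EMR hit indicator
    return hits.mean()
```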
"Fig. 4 shows the Top-1 sensitivity on some diseases.", "The performances across diseases are quite different.", "For example, the Top-1 sensitivity of Salpingitis is 100%, but that of Endometriosis is 29% in the evaluation.", "Salpingitis can be identified by combining general symptoms and ultrasonic exam results.", "However, from the perspective of physicians, Endometriosis is difficult to diagnose by nature, because it shares common symptoms like dysmenorrhea and irregular menstruation with other gynecologic diseases.", "These shared findings misguide the classifier towards other similar diseases.", "Similarly, among the respiratory diseases, patients with Pulmonary Embolism, Respiratory Failure and Bronchiectasia share the symptom dyspnea, which makes it difficult to distinguish between them.", "In contrast, Upper Respiratory Infection (URI) is easy to diagnose because it causes throat pain and rhinorrhea, unlike the other respiratory diseases.", "Based on this analysis, the diagnosis performance on a disease is higher if it shares fewer findings with other diseases or if it has more specific findings.", "The interpretability is reflected in the observed findings in the EMR that connect to the predicted disease in the medical knowledge graph, as well as their co-occurrences.", "We generate the prediction explanation with the following template: The patient is diagnosed as disease d because (s)he is suffering from symptom s_i, and (s)he has the vital sign of v_j, and the lab test (or PACS report) shows (s)he has t_k. Besides, s_i, v_j and t_k have been found on the patients of d for n_i, n_j, n_k times, respectively, in the previous EMR documents that support this diagnosis.", "Since the extracted relations in the medical knowledge graph are reviewed by the certified physicians, the validity of the explanation is guaranteed from the clinical perspective.", "We randomly select 50 testing samples per department whose Top-1 diagnosis prediction is correct and generate the explanation for the diagnosis prediction with the above template.", "[Figure 5: The accuracy of ECNN-PGM-E using different types of features.]", "The explanation is evaluated by three certified physicians.", "The evaluation is subjective, but all of them agree that the prediction is well supported by the generated explanation.", "Figure 5 shows the accuracy performance using different types of features.", "We can see that in this evaluation, TFC, TF-IDF and the average of all features tend to lead to higher accuracy than the other features, with the accuracy of the Top-3 prediction exceeding 88%.", "In all, the above experiments demonstrate that the proposed framework can improve the accuracy of automatic diagnosis and bring reasonable interpretability into the predictions at the same time.", "In this paper, we investigate the problem of automatic diagnosis with EMR documents for clinical decision support.", "We propose a novel framework that stacks the Bayesian Network ensembles on top of the Entity-aware Convolutional Neural Networks.", "The proposed design brings interpretability into the predictions, which is very important for AI-empowered healthcare, without compromising the accuracy of the convolutional networks.", "The evaluation conducted on real EMR documents from hospitals validates the effectiveness of the proposed framework compared to the baselines in automatic diagnosis with EMR.",
"We thank all the professional physicians, led by Dr. Shi and Dr. Hu, who have contributed to the annotation tasks in our experiments." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "abstain", "objective", "objective", "method", "result", "objective", "objective", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "abstain", "other" ]
[ "With the rapid development of deep learning, Seq2Seq paradigm has become prevalent for end-to-end data-to-text generation, and the BLEU scores have been increasing in recent years.", "However, it is widely recognized that there is still a gap between the quality of the texts generated by models and the texts written by human.", "In order to better understand the ability of Seq2Seq models, evaluate their performance and analyze the results, we choose to use Multidimensional Quality Met-ric(MQM) to evaluate several representative Seq2Seq models on end-to-end data-to-text generation.", "We annotate the outputs of five models on four datasets with eight error types and find that 1) copy mechanism is helpful for the improvement in Omission and Inaccuracy Extrinsic errors but it increases other types of errors such as Addition; 2) pre-training techniques are highly effective, and pre-training strategy and model size are very significant; 3) the structure of the dataset also influences the model's performance greatly; 4) some specific types of errors are generally challenging for seq2seq models.", "Data-to-text generation is a task of automatically producing text from non-linguistic input (Gatt and Krahmer, 2018).", "The input can be in various forms such as databases of records, spreadsheets, knowledge bases, simulations of physical systems.", "Traditional methods for data-to-text generation (Kukich, 1983; Reiter and Dale, 2000; Mei et al., 2015) implement a pipeline of modules including content planning, sentence planning and surface realization.", "Recent neural generation systems (Le-bret et al., 2016; Wiseman et al., 2017a) are trained in an end-to-end fashion using the very successful encoder-decoder architecture (Bahdanau et al., 2014) as their backbone.", "Ferreira et al. 
"However, with the rapid development of Seq2Seq models, especially pre-trained models, more and more end-to-end architectures based on the Seq2Seq paradigm now achieve state-of-the-art results on data-to-text benchmarks.", "Although the BLEU score (Papineni et al., 2002), which is based on precision, has improved dramatically on standard data-to-text benchmarks such as WebNLG (Gardent et al., 2017), ToTTo (Parikh et al., 2020) and RotoWire (Wiseman et al., 2017b) over recent years, it is commonly accepted that, compared with human evaluation, the BLEU score cannot evaluate the models very well.", "It is too coarse-grained to reflect the different dimensions of the models' performance and is not always consistent with human judgment (Novikova et al., 2017a; Reiter, 2018; Sulem et al., 2018).", "Moreover, existing human evaluations of data-to-text generation are usually limited in sample size, in the number of datasets and models, or in the dimensions of evaluation.", "In this study, we aim to conduct a thorough and reliable manual evaluation of Seq2Seq-based end-to-end data-to-text generation based on multiple datasets and evaluation dimensions.", "We want to know the pros and cons of different Seq2Seq models on this task, and the factors influencing the generation performance.", "Particularly, following the Multidimensional Quality Metric (MQM) (Mariana, 2014) and similar to the work on summarization evaluation (Huang et al., 2020), we use 8 metrics on the Accuracy and Fluency aspects to count errors.", "Therefore, compared with existing manual evaluation reports, ours is more informative and objective.", "Using this method, we manually evaluate several representative models, including Transformer (Vaswani et al., 2017), Transformer with Pointer Generator (See et al., 2017), T5 (small & base) (Raffel et al., 2019), and BART (base) (Lewis et al., 2019) (due to limited computing resources, we did not evaluate the T5-large and BART-large models).", "We test these models on four common datasets, including E2E (Novikova et al., 2017b), WebNLG (Gardent et al., 2017), WikiBio (Lebret et al., 2016), and ToTTo (Parikh et al., 2020).", "Thus we can discuss the effectiveness of the pre-training method, some essential techniques, and the number of parameters.", "We can also compare the differences between datasets and how they influence the models' performance.", "Empirically, we find that:", "1. Pre-training: Pre-training is powerful and effective and greatly increases the ability of the Seq2Seq paradigm on the data-to-text task.", "2. Size: The size of the model makes a difference to the results.", "Particularly, T5-base achieves the best scores on both automatic and human evaluations.", "3. Essential Techniques: The copy mechanism can bring noticeable improvements to the basic Seq2Seq model, decreasing word-level errors such as Omission and Inaccuracy Extrinsic.", "However, it also slightly introduces more Addition errors.", "4. Dataset Structure: The structure of the dataset also greatly influences the model's understanding of the sequence.", "Content-controlled generation is still somewhat hard for Seq2Seq models.",
"5. Error Type: The most common mistakes of Seq2Seq models on the data-to-text task are Omission, Inaccuracy Intrinsic and Inaccuracy Extrinsic, indicating the directions in which we need to improve the effectiveness of the models.", "On the other hand, models perform well in fluency.", "Data-to-Text Generation. Traditional methods for data-to-text generation (Kukich, 1983; Mei et al., 2015) implement a pipeline of modules including content planning, sentence planning and surface realization.", "Recent neural generation systems (Lebret et al., 2016; Wiseman et al., 2017a) are trained in an end-to-end fashion using the very successful encoder-decoder architecture (Bahdanau et al., 2014) as their backbone.", "Many Seq2Seq models have demonstrated their effectiveness on data-to-text tasks.", "Since we want to make a general comparison of Seq2Seq models, we will focus on this paradigm.", "Moreover, with the development of pre-training methods, more and more work (Kale, 2020; Wang et al., 2021; Kale and Rastogi, 2020) has begun to introduce pre-trained models for data-to-text generation.", "There is some work evaluating and analyzing the data-to-text generation task.", "Perez-Beltrachini and Gardent (2017) propose a methodology to analyze data-to-text benchmarks and apply their method to the WikiBio, RNNLG (Wen et al., 2016) and IMAGEDESC (Novikova and Rieser, 2016) datasets.", "Ferreira et al. (2019) introduce a systematic comparison between pipeline and end-to-end architectures for neural data-to-text generation.", "Thomson and Reiter (2020) propose a methodology for humans to evaluate the accuracy of the generated texts.", "Sequence-to-Sequence. The Seq2Seq paradigm is a general and flexible paradigm that is typically implemented by an encoder-decoder framework.", "Sutskever et al. (2014) discuss sequence-to-sequence learning with neural networks.", "Furthermore, some representative architectures have been proposed, such as recurrent neural networks (Zaremba et al., 2014) and Transformer (Vaswani et al., 2017).", "The Seq2Seq paradigm can be naturally applied to any task, as long as its input and output can be represented as sequences.", "Therefore, there have been many attempts to apply Seq2Seq to different tasks.", "More recently, pre-trained models based on the Seq2Seq paradigm (Lewis et al., 2019; Raffel et al., 2019) have proved their power on many tasks (McCann et al., 2018; Yan et al., 2021).", "There has been much work analyzing Seq2Seq models, which is usually task-specific and based on automatic or human evaluation.", "For example, Huang et al. (2020) analyze the common models' performance on summarization.",
"To our knowledge, little work has been done to comprehensively evaluate the performance of Seq2Seq models on data-to-text generation.", "Moreover, much work is based on automatic metrics such as ROUGE or BLEU, which can differ from human evaluation, as some work (Novikova et al., 2017a; Reiter, 2018; Sulem et al., 2018) shows.", "Therefore, it is meaningful to manually evaluate representative Seq2Seq models on the data-to-text task.", "We conduct experiments using five representative Seq2Seq models on four commonly used data-to-text datasets and evaluate the generated texts accordingly (the code and annotated data are available at https://github.com/xunjianyin/Seq2SeqOnData2Text).", "Note that we do not use models that are designed for specific datasets or data structures (Moryossef et al., 2019; Rebuffel et al., 2020; Puduppully and Lapata, 2021), but adopt models that allow inputs of different formats and structures, which makes comparison across different datasets convenient.", "Besides, most specialized models for data-to-text generation are actually based on these typical Seq2Seq models (Ferreira et al., 2019; Rebuffel et al., 2020), which also justifies our selection of these models.", "We choose to explore and compare the performance of Transformer, Pointer Generator, BART and T5 on data-to-text generation, examining the role of the copy mechanism by comparing Transformer and Pointer Generator, the benefits brought by the pre-training technique by comparing Transformer with T5 and BART, the influence of different pre-training methods by comparing BART and T5, and the power of parameter size by comparing T5-base and T5-small.", "Transformer. Transformer (Vaswani et al., 2017) is widely used in natural language processing and has shown its potential on many tasks.", "It uses self-attention and multi-head attention, which let a model draw from the state at any preceding point along the sequence.", "The attention layer can access all previous states and weigh them according to a learned measure of relevance, providing relevant information about far-away tokens.", "There are also some experiments with Transformer as the baseline model (Zhao et al., 2020) for data-to-text generation.", "Moreover, many improved models for data-to-text generation are also based on Transformer (Wang et al., 2020; Zhu et al., 2019).", "Therefore, it is worthwhile and reasonable to explore the performance of Transformer on the data-to-text task.", "Pointer Generator. The Pointer Network was first proposed by Vinyals et al. (2015), and See et al. (2017) introduce the Pointer Generator based on it.", "The Pointer Generator can generate words from the vocabulary through the generator or copy content from the source through the pointer, which addresses the problem that Seq2Seq models tend to reproduce factual details inaccurately.", "The copy mechanism is widely used in data-to-text tasks and has achieved great success (Marcheggiani and Perez-Beltrachini, 2018; Rebuffel et al., 2020; Puduppully et al., 2019).", "Parikh et al. (2020) and much other work also use the Pointer Generator as a baseline model.",
"Therefore, the Pointer Generator is a representative model for data-to-text generation.", "We implement the Pointer Generator based on Transformer so it can take advantage of the copy mechanism.", "BART. BART (Lewis et al., 2019) uses a standard Seq2Seq Transformer architecture with a bidirectional encoder like BERT (Devlin et al., 2018) and a left-to-right decoder like GPT (Radford et al., 2018).", "The pre-training task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.", "With this novel pre-training method and a large number of parameters, BART achieves state-of-the-art results on many tasks (Lewis et al., 2020; Siriwardhana et al., 2021).", "Our results show that BART can perform very well on data-to-text generation too.", "T5. T5 (Raffel et al., 2019) is an encoder-decoder model, whose basic architecture is Transformer, pre-trained on a multi-task mixture of unsupervised and supervised tasks, with each task converted into a text-to-text format.", "It achieves state-of-the-art results on multiple tasks, which shows the power of large pre-trained models and the Seq2Seq paradigm.", "T5-3b (Kale, 2020) obtains the best result on the ToTTo dataset.", "T5-large with a two-step fine-tuning mechanism (Wang et al., 2021) achieves state-of-the-art results on the WebNLG benchmark.", "We carry out experiments on T5-small, which has 60M parameters, and T5-base, which has 220M parameters, to explore the power of model size.", "In the experiments we use the datasets commonly used in the data-to-text task, including E2E, WebNLG, WikiBio and ToTTo.", "They have different forms and characteristics, which enables a comprehensive comparison of the models.", "The summary of these data-to-text datasets is shown in Table 1.",
"E2E. The input of the E2E dataset (Novikova et al., 2017b) is information about a restaurant, and the output is its natural language description.", "It consists of more than 50K combinations, and the average length of the output text is 8.1 words.", "WikiBio. WikiBio (Lebret et al., 2016) is a personal biography dataset containing more than 70K examples.", "The input is the infobox from Wikipedia, and the output is the first sentence of the biography.", "The average length of the output text is 26.1 words.", "WebNLG. The WebNLG challenge (Gardent et al., 2017) consists of mapping sets of RDF triples to text.", "The latest WebNLG dataset contains more than 40K data-text pairs.", "The average length of the output text is 22.3 words.", "ToTTo. ToTTo (Parikh et al., 2020) is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.", "We first evaluate the models' performance using the automatic metric BLEU (Papineni et al., 2002), and the BLEU scores are comparable to those in mainstream research.", "Then, we use a human evaluation similar to PolyTope (Huang et al., 2020) to further analyze and evaluate the performance of the models on the different datasets.", "BLEU is a precision-based metric for evaluating the quality of generated text, and it is widely used by work on data-to-text generation.", "The Multidimensional Quality Metric (MQM) (Mariana, 2014) is a framework for describing and defining custom translation quality metrics.", "It defines flexible issue types and a method to generate quality scores.", "Based on MQM, Huang et al. (2020) introduce PolyTope, an error-oriented, fine-grained human evaluation method.", "It defines five issue types about accuracy, three issue types about fluency, syntactic labels, and three error severity rules.", "Note that we do not use the syntactic labels in PolyTope, as they are not the focus of our evaluation in this study.", "The definitions of our evaluation dimensions are very similar to those of Huang et al. (2020), but for the sake of the integrity of the paper and their specifics for the data-to-text task, we still explain them below.", "After annotating every generated sentence with these error types and severities, we finally calculate an overall score to evaluate the model's performance.", "According to the MQM principle, we define error types in two aspects: accuracy and fluency.", "Errors related to accuracy mean that the generated text is not faithful to the original data or does not fully reflect the critical information from the original data.", "This type consists of five sub-types: Addition, Omission, Inaccuracy Intrinsic, Inaccuracy Extrinsic and Positive-Negative Aspect (see Table 6 for the full list of the eight error types).", "Addition: the generated text contains unnecessary and irrelevant fragments from the source data.", "Inaccuracy Intrinsic: terms or concepts appearing in the original data are distorted in the output.", "Errors related to fluency concern the form of the text; the three fluency sub-types are Duplication, Word Form and Word Order.", "Word Form: problems related to the form of words, including consistency, part of speech, tense and so on.", "Word Order: problems with the order of words in outputs.", "One example output with errors on the WebNLG dataset is shown in Table 2.",
"4.2 Severity. Severity describes how severe a particular error is.", "There are three levels: Minor, Major and Critical.", "Each specific error in a sentence is allocated a severity.", "It is decided by the annotator and is used as a weight to automatically score the quality of the annotated sentence.", "Minor: errors that do not affect content availability or understandability.", "For example, we regard the repetition of function words as an error, but this error does not affect the understanding of the text, so we consider it Minor.", "Major: errors that affect content availability or comprehensibility but do not make the content unusable.", "For example, we think additional attributes do not make the content unsuitable for its purpose, although they may require the reader to make additional efforts to understand the intended meaning.", "Critical: errors that make the content thoroughly unsuitable for use.", "Any error type can make the text completely unusable when it is too severe.", "For example, when critical elements of the sentence are missing or too many errors mislead people's understanding, we consider the error Critical.", "Given the original data and the generated text, annotators are required to find all errors in the sentence and label them with error types and severities.", "After the work is done for all samples, the error score of every type and an overall system performance score are calculated automatically with the following equations: $EScore_t = \frac{\sum_{e \in E_t} \alpha_e \cdot L_e}{\text{word count}}$ (1) and $Score = \left(1 - \sum_{t \in T} EScore_t\right) \times 100$, (2) where $T$ is the set of error types and $E_t$ is the set of all error segments of type $t$.", "$\alpha_e$ is the deduction ratio, which is set to 1:3:7 for the three severity levels Minor, Major and Critical.", "$L_e$ is the word length of the error.", "word count is the total number of words in the samples.", "We can see that the highest system performance score can reach 100 if there is no error in the sentences, and higher is better.", "Through this method, we obtain Score, an overall evaluation of each model, and the error scores $EScore_t$, which indicate each error type's penalty on the overall score.", "After training and testing, we hire the five annotators with satisfactory reading levels from eight candidates.", "They are all educated highly enough to understand structured data and tables, and their English level is also high enough to understand the texts.", "Before the formal annotation, we conduct detailed training to give them a clear understanding of the various errors and severities of the PolyTope framework.", "Examples used in training do not appear in the final annotation.", "In order to ensure objectivity and impartiality, they know nothing about the name, architecture, or BLEU score of the model and dataset during annotation.", "During testing, annotators are asked to locate every error's position, point out the type of the error, choose the severity of the error, and explain the reason.", "We check their answers and score them.", "Based on the overall performance in this test, we select the best five annotators and ensure that all of them really understand our evaluation method and are able to do the annotation work.", "For each dataset, we select 80 data-text pairs and input them into each model respectively.", "There are four datasets and five models, so we have 1,600 texts to annotate.", "Each text is annotated by two different annotators, and if the difference between their error scores is too large, the text is abandoned and a new text is selected to join the evaluation.",
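Returning to Eqs. (1) and (2), the scoring procedure can be sketched as follows; the assumption is that the 1:3:7 ratios act directly as the severity multipliers.

```python
SEVERITY_RATIO = {"Minor": 1, "Major": 3, "Critical": 7}   # the 1:3:7 deduction ratios

def polytope_scores(errors, word_count):
    """errors: iterable of (error_type, severity, span_word_length) annotations;
    word_count: total number of words in the annotated samples.
    Returns the per-type EScore_t (Eq. 1) and the overall Score (Eq. 2)."""
    escore = {}
    for etype, severity, length in errors:
        penalty = SEVERITY_RATIO[severity] * length / word_count
        escore[etype] = escore.get(etype, 0.0) + penalty
    score = (1.0 - sum(escore.values())) * 100.0
    return escore, score

# Example: one Minor Duplication (2 words) and one Major Omission (3 words)
# in 200 annotated words give Score = (1 - (2/200 + 9/200)) * 100 = 94.5.
```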
"They are not allowed to communicate with each other during the annotation process.", "They can choose to abandon texts that confuse them, and these texts are replaced by candidate texts.", "Each annotator must label all five outputs generated by the five models for one input sequence at a time to keep the comparison fair.", "In general, we strive to balance the fairness and the quality of the evaluation.", "We evaluate the five models mentioned above on the four datasets using the above metrics.", "The overall human evaluation score and the BLEU score of each model on each dataset are shown in Table 3 and Table 4, respectively.", "The detailed error scores for the different error types are shown in Table 5.", "We can compare the performance of the models to see the influence of the pre-training technique, the copy technique and the model size.", "Comparing the results on different datasets using the same model, we can discover how the structured data input influences the performance of the Seq2Seq models.", "Moreover, we can also analyze the detailed error scores to find out the weaknesses and advantages of specific models.", "By comparing the results of the Pointer Generator and Transformer on all datasets, we can see that the copy mechanism has a noticeable effect on the improvement of the results.", "It improves the generation performance on all the datasets.", "Particularly, it reduces the Inaccuracy Intrinsic error score by about 3 or 4 points on three datasets (E2E, WebNLG and ToTTo), as shown in Table 5.", "This is easy to understand: using the copy mechanism, the model can generate words from the vocabulary through the generator or copy content from the source through the pointer.", "The Pointer Generator with copy mechanism reduces almost all types of errors compared with the vanilla Transformer, such as Duplication errors.", "The reason may be that the copy mechanism can interpolate the vocabulary-level probability with the copy probability, reducing reliance on previous outputs.", "We can observe that the improvement of the Pointer Generator over Transformer is the largest on the ToTTo dataset.", "This may be related to ToTTo's need to pay more attention to the highlighted part of the input sequence, which emphasizes controllability.", "Nevertheless, it is interesting that the Addition error increases slightly compared with Transformer.", "The likely reason is that the auto-regressive decoder tends to copy longer sequences from the source, and it is hard to interrupt the copy action.", "In Table 3, we can see that almost all the pre-trained models outperform the non-pre-trained models by a large margin on all the datasets except E2E, which may be too simple to evaluate the ability of the models.", "The reason why the pre-trained models achieve better scores may be that they have learned helpful knowledge from large amounts of raw text.", "The pre-training method also helps the models become more powerful.", "BART and T5 are both pre-trained on tasks where spans of text are replaced by masked tokens.", "The models must learn to reconstruct the original document.", "According to the average scores over all the datasets, T5-base may be the best Seq2Seq model among the models we experimented with, and BART-base is not far behind.", "The models achieve their highest scores on different datasets: BART-base is the best on ToTTo, and T5-base is the best on the other datasets.", "It is evident that the parameter quantity is a critical factor in a pre-trained model's performance.",
"BART-base has 139M parameters, T5-base has 220M parameters, and T5-small has only 60M parameters.", "With the same architecture and the same pre-training method, T5-base totally outperforms T5-small.", "Due to pre-training methods and other factors, T5-base and BART-base achieve the best results on different datasets.", "But on average, T5-base is the best.", "The relation between model size and performance on the different datasets is shown in Figure 1.", "The only exception mentioned above is ToTTo, where BART-base achieves the best results.", "One likely reason is the pre-training strategy of BART, which gives it better denoising and reconstruction ability.", "Another reason is mentioned in Section 5.4.", "We can compare the difficulty levels of the datasets by the average and the highest scores of all models.", "In Table 3, the ToTTo dataset has the lowest average score of 72.9.", "The highest score on it, achieved by BART-base, is 90.7, which is also the lowest among all the datasets.", "ToTTo is designed as a controlled generation task: given a Wikipedia table and a set of highlighted table cells, the model needs to produce a one-sentence description of the highlighted part.", "It is much more complicated than the other datasets, which describe all the given structured data.", "Maybe it is a bit confusing for models to find out what should actually be attended to, although the scores of the pre-trained models are still very high.", "The gap between pre-trained and non-pre-trained models is the biggest on ToTTo among all datasets, which indicates that the simple non-pre-trained models cannot handle complex controlled generation very well.", "Of course, the number of data-text pairs and the lengths of the input and output sequences also influence the models' performance.", "[Table 6: Average error score of each error type across all models and datasets (lower means better) — Addition 1.48, Duplication 0.63, Inaccuracy Extrinsic 2.30, Inaccuracy Intrinsic 5.25, Omission 5.97, Positive-Negative Aspect 0.17, Word Form 0.23, Word Order 0.56.]", "Table 6 shows the average error scores of each error type across all models and datasets.", "From Table 5 and Table 6, we can find that different types of errors have different effects on the performance of the models.", "We can find that the Omission error is the most frequent and severe error, and its error score is almost 6.", "The likely reason is that the input sequence is too long, so it is hard to encode all of its meaning.", "So the models tend to omit some information from the input.", "The Inaccuracy Intrinsic and Inaccuracy Extrinsic errors also cannot be ignored, at 5.25 and 2.31, respectively.", "From the perspective of the pre-trained models, this may be because they learn too much from the raw texts at the pre-training stage, and this knowledge makes them tend to generate inaccurate texts.", "It is encouraging that all the models perform very well in terms of fluency.", "The Duplication, Word Form and Word Order errors are very sporadic.", "This shows that the Seq2Seq models can generate fluent text from the structured input.", "We can see that the overall trend of the BLEU score is consistent with the human evaluation, so it can basically reflect the overall performance of a model.", "Many of the conclusions we drew above can also be supported by the BLEU score.", "For example, the largest pre-trained model, T5-base, also achieves the highest BLEU score among the selected models; the Pointer Generator with copy mechanism still performs better than Transformer; and ToTTo is still the most difficult dataset.",
"Although our primary goal is not to promote a human evaluation metric, our dataset with human annotations gives us a testbed to analyze the correlations and differences between automatic and human metrics.", "There have been many discussions in the community about the unreliability of the BLEU metric.", "Sulem et al. (2018) recommend not using BLEU for text simplification.", "They found that BLEU scores reflect neither grammaticality nor meaning preservation.", "Novikova et al. (2017a) show that BLEU and some other commonly used metrics do not agree well with human judgment when evaluating NLG tasks.", "We compute the Pearson correlation coefficients between the BLEU score and the manual evaluation in terms of Accuracy and Fluency.", "We categorize the error types into the accuracy and fluency aspects according to the definitions in Section 4.1, and use Equation 2 to calculate the Accuracy score and the Fluency score, respectively.", "The Pearson correlation coefficient between the BLEU score and Accuracy is 0.61, while that for Fluency is 0.08.", "There is a huge gap between them: BLEU can evaluate Accuracy to a certain extent, but it is poor at Fluency.", "Moreover, the BLEU metric is too coarse-grained to reveal a model's specific problems, which would enlighten us on how to improve the model.", "Our result is consistent with the views of other work.", "We empirically compared five representative Seq2Seq models on the data-to-text task using a fine-grained set of human evaluation metrics based on MQM.", "We aim to provide a systematic and comprehensive evaluation and analysis of end-to-end Seq2Seq models for the data-to-text task.", "We analyze the effect of milestone techniques such as copy and pre-training, the influence of the dataset and model size, and the models' performance in terms of different types of errors.", "Our evaluation shows that pre-trained models can generate quite good texts.", "But there is still much room for improvement on this task.", "Furthermore, the reduction of specific errors such as the Omission and Inaccuracy Intrinsic errors is also worth exploring in the future.", "This work was supported by the National Key R&D Program of China (No. 2018YFB1005100), the Beijing Academy of Artificial Intelligence (BAAI) and the State Key Laboratory of Media Convergence Production Technology and Systems.", "We would like to thank the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
[ "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "result", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "result", "objective", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "abstain", "other", "other", "other" ]
[ "Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step.", "Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M 3 C).", "In this study, we approach Procedural M 3 C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity.", "With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG).", "Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution.", "In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning.", "Comprehensive experiments across three Procedural M 3 C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.", "M ulti M odal M achine C omprehension (M 3 C) is a generalization of machine reading comprehension by introducing multimodality.", "Due to its differences from Visual Question Answering (VQA) (Antol et al., 2015) in the form of understanding multimodal contexts and conducting multimodal questions and answers, there has been a lot of attention in recent years devoted to this field.", "In this paper, we investigate a task that has been typical of M 3 C recently, named Procedural M 3 C, a task of reading comprehension of P rocedural M ultimodal D ocuments (PMDs).", "As shown in Figure 1, a recipe that contains successive multimodal instructions is a typical PMD.", "Reading a recipe seems trivial for humans but is still complex for a machine reading comprehension * Corresponding authors.", "Current Procedural M 3 C studies (Yagcioglu et al., 2018; Liu et al., 2020) comprehend PMDs by encoding text and images at each procedure step.", "These efforts, however, only scratch the surface and lack deep insight into the elementary unit of PMDs, that is, entity .", "From now on, we use entity to refer uniformly to entity in text and object in image.", "For instance, the recipe in Figure 1 involves multiple entities, i.e., Strawberry and Sugar Cookie Dough , etc.", "In this work, we target at approaching the Procedural M 3 C task at a fine-grained entity level.", "We observe that a PMD essentially assembles an evolution process of entities and the relations between them.", "Specifically, the relation between entities can be summarized in the following two categories: Temporal Relation.", "by step 4 they are washed and cut into pieces.", "We use temporal relation to depict the association between an entity's changing visual signals in images or changing contexts in text.", "Multimodal Relation.", "An entity is naturally and powerfully associated with other entities within a single modality.", "Meanwhile, the cross-modal association of an entity is worth exploiting to contribute to distinct modality understanding.", "For example, the visual signal and context about sugar cookie dough can be interpreted by each other at step 2.", "We generalize the intraand intermodal associations of entities with the multimodal relation.", "Based on the above observations, we believe that simultaneously modeling entities and the temporal and multimodal relations is a key challenge in understanding PMDs.", "Recent efforts (Amac et al., 2019; Huang et al., 2021) are devoted to encoding temporal relations of entities, 
"Heterogeneous graphs have become the preferred technology for representing, sharing, and fusing information in modern AI tasks, e.g., relation extraction (Christopoulou et al., 2019) and recommendation (Fan et al., 2019).", "Inspired by this, we construct a heterogeneous graph, with entities as nodes and relations as edges.", "Therefore, the research goal of the Procedural M^3C task shifts from understanding unstructured PMDs to learning structured graph representations.", "In this work, we propose a novel Temporal-Modal Entity Graph model, namely TMEG.", "Our model approaches Procedural M^3C at a fine-grained entity level by constructing and learning a graph with entities and temporal and multimodal relations.", "Specifically, TMEG consists of the following components:", "1) Node Construction, which extracts the token-level features in text and the object-level features in images as the initial entity embeddings;", "2) Graph Construction, which constructs the temporal, intra-modal, and cross-modal relations separately to form a unified graph; and", "3) Graph Aggregation, which utilizes a graph-based multi-head attention mechanism to perform fusion operations on graphs to model the evolution of entities and relations.", "Finally, the graph representation is fed into a graph-based reasoning module to evaluate the model's understanding ability.", "In addition, in order to further advance the research on Procedural M^3C, we release CraftQA, a multimodal, semantically enriched dataset that contains about 27k craft-product-making tutorials and 46k question-answer pairs for evaluation.", "We evaluate three representative subtasks, i.e., visual cloze, visual coherence, and visual ordering, on CraftQA and a public dataset RecipeQA (Yagcioglu et al., 2018).", "The quantitative and qualitative results show the superiority of TMEG compared to the state-of-the-art methods on all three tasks.", "The main contributions of this paper can be summarized as follows: We innovatively study the Procedural M^3C task at a fine-grained entity level.", "We comprehensively explore the relations between entities from both temporal and multimodal perspectives.", "We propose a Temporal-Modal Entity Graph model, TMEG, which constructs a graph with entities as nodes and relations as edges, and then learns the graph representation to understand PMDs.", "We release a dataset, CraftQA.", "The experimental results on CraftQA and RecipeQA show that TMEG outperforms several state-of-the-art methods.", "Procedural Text Comprehension.", "Procedural text comprehension requires accurate prediction of the state change and location information of each entity over successive steps.", "Several datasets have been proposed to evaluate procedural text comprehension, e.g., Recipe (Bosselut et al., 2018), ProPara (Mishra et al., 2019), and OPENPI (Tandon et al., 2020).", "To model entity evolution, KG-MRC (Das et al., 2019) constructs knowledge graphs via reading comprehension and uses them for entity location prediction.", "DYNAPRO (Amini et al., 2020) introduces a pre-trained language model to dynamically obtain the contextual embedding of procedural text and learn the attributes and transformations of entities.", "ProGraph (Zhong et al., 2020) enables state prediction from context by constructing a heterogeneous graph with various knowledge inputs.", "KoalA (Zhang et al., 2021b) utilizes external commonsense knowledge injection and data enhancement to reason about the states and locations of entities.",
"TSLM (Faghihi and Kordjamshidi, 2021) formulates the comprehension task as a question answering problem and adapts pre-trained transformer-based language models from other QA benchmarks.", "REAL (Huang et al., 2021) builds a general framework to systematically model the entity, action and location by using a graph neural network.", "[Figure 2: Overview of TMEG — node encoding with position and segment embeddings, L stacked graph-based multi-head attention blocks with add & norm and feed-forward layers, and a graph-based reasoning module; the example text input begins 'Step 1: Ingredients: 16 oz strawberry, 9 oz blueberries, ...' and 'Step 2: Make or buy your sugar cookie dough.']", "Inspired by these previous works, we propose a temporal-modal entity graph model, which is designed with temporal encoding and modal encoding to model multiple types of entities.", "Multimodal Graph.", "In recent multimodal research, graph structures have been utilized to model the semantic interaction between modalities.", "(Yin et al., 2020) propose a graph-based multimodal fusion encoder for neural machine translation, which converts a sentence and an image into a unified multimodal graph.", "(Khademi, 2020) converts image regions and the region-grounded captions into a graph structure and introduces graph memory networks for visual question answering.", "(Zhang et al., 2021a) propose a multimodal graph fusion approach for named entity recognition, which conducts graph encoding via multimodal semantic interaction.", "(Yang et al., 2021) focus on multimodal sentiment analysis and emotion recognition, unifying the video, audio, and text modalities into an attention graph and learning the interaction through graph fusion, dynamic pruning, and the read-out technique.", "In contrast to the above methods, we formulate our multimodal graph over temporal entities and successfully deploy it in Procedural M^3C.",
"3 Proposed Method. In this section, we introduce: (1) the problem definition of Procedural M^3C in Section 3.1; (2) the homogeneous graph of each textual instruction (image) and our TMEG in Section 3.2 and Section 3.3, respectively; and (3) the graph aggregation module to conduct graph encoding and reasoning in Section 3.4.", "Figure 2 gives a high-level overview of TMEG.", "Here, we define the task of Procedural M^3C, given: Context $S = \{s_t\}_{t=1}^{N_t}$ in the textual modality, which represents a series of coherent textual instructions to perform a specific skill or task (e.g., multiple steps to complete a recipe or a craft product).", "Question $Q$ and Answer $A$, which is either a single image or a series of images in a reasoning task (e.g., visual cloze, visual coherence, or visual ordering).", "Following (Liu et al., 2020), we combine the images contained in $Q$ and $A$ to form $N_c$ candidate image sequences $\{a_1, \ldots, a_j, \ldots, a_{N_c}\}$.", "Let $N_a$ be the length of the $j$-th candidate image sequence $a_j = \{I_{j,1}, \ldots, I_{j,N_a}\}$.", "Taking the visual cloze task as an example, we fill the placeholder of the question with candidate answers to form $N_c$ image sequences of length $N_a = 4$.", "The model is required to select the most relevant candidate by calculating the similarity between the text sequence $S = \{s_t\}_{t=1}^{N_t}$ and each image sequence $a_j$.", "As shown in Figure 2, we first extract the tokens (objects) in text (images) as the initial nodes of homogeneous graphs.", "Textual Node.", "Let $N_t$ be the number of textual instructions $S = \{s_t\}_{t=1}^{N_t}$.", "First, each instruction $s_t$ is tokenized into a token sequence $\{e^t_{[CLS]}, e^t_1, \ldots, e^t_{[SEP]}\}$, where [CLS] and [SEP] are special tokens introduced to mark the start and the end of each instruction.", "Then, we utilize an off-the-shelf POS tagger (Akbik et al., 2018) to identify all nouns and noun phrases in the token sequence.", "Finally, we concatenate all the token sequences of the textual instructions $S$ and feed the token embeddings into the graph aggregation module.", "Visual Node.", "For each image sequence $a_j$, we employ a pre-trained Faster-RCNN to extract a set $\{e^v_{[CLS]}, e^v_1, \ldots, e^v_k\}$ with $k$ object features as visual tokens.", "Following (Messina et al., 2020; Dosovitskiy et al., 2021), we reserve [CLS] as the beginning token for each image, whose final embedding is regarded as the representation of the whole image.", "Visual nodes are handled in a similar manner to textual nodes, and any two nodes in the same instruction (image) are connected to construct a homogeneous graph.", "Based on the homogeneous graph of each textual instruction (image), we introduce various types of edges to construct our heterogeneous graph TMEG.", "It is essential to model the temporal evolution of entities for comprehending procedural content.", "Let us revisit the example in Figure 1.", "When a human reads step 4, the connection between the entities (e.g., strawberries and oranges) and their descriptions in step 1 is naturally established.", "We design the temporal edge to model the evolution of entities in text and images.", "It can be seen that the temporal edge describes the evolution across different steps.", "For textual nodes, the same entity appearing in different steps is connected by a textual temporal edge (node-based).", "For visual nodes, we directly calculate the Euclidean distance between object features due to the absence of accurate object detection.", "Following (Song et al., 2021), if the distance between node (object) $i$ and node (object) $j$ is less than a threshold $t$, we treat them as the same object and connect node $i$ to node $j$ via a visual temporal edge (node-based).",
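A minimal sketch of the node-based visual temporal edges just described; the feature format and the threshold value are illustrative assumptions.

```python
import numpy as np

def visual_temporal_edges(objects, t=0.5):
    """objects: list of (step_index, feature_vector) pairs for all detected
    objects in an image sequence. Two objects from different steps are linked
    by a visual temporal edge (node-based) when the Euclidean distance
    between their features falls below the threshold t."""
    edges = []
    for i, (step_i, feat_i) in enumerate(objects):
        for j in range(i + 1, len(objects)):
            step_j, feat_j = objects[j]
            if step_i != step_j and np.linalg.norm(feat_i - feat_j) < t:
                edges.append((i, j))    # treated as the same object over time
    return edges
```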
"Based on the homogeneous graph of each textual instruction (image), we introduce various types of edges to construct our heterogeneous graph TMEG.", "It is essential to model the temporal evolution of entities for comprehending procedural content.", "Let us revisit the example in Figure 1.", "When a human reads step 4, the connection between entities (e.g., strawberries and oranges) and their descriptions in step 1 is naturally established.", "We design the temporal edge to model the evolution of entities in text and image.", "The temporal edge thus describes the evolution of an entity across different steps.", "For the textual nodes, occurrences of the same entity in different steps are connected by a textual temporal edge (node-based).", "For the visual nodes, we directly calculate the Euclidean distance between object features, due to the absence of accurate object detection.", "Following (Song et al., 2021), if the distance between node (object) $i$ and node (object) $j$ is less than a threshold $t$, we treat them as the same object and connect node $i$ to node $j$ via a visual temporal edge (node-based).", "Meanwhile, we consider that there may also be temporal evolution for edges, such as changes in the relationship between entities.", "Therefore, we also introduce the temporal edge (edge-based) to characterize the change of edges.", "As shown in Figure 1, the textual instruction of each image can be viewed as a noisy form of image annotation (Hessel et al., 2019).", "The association between an image and a sentence can be inferred through entity representations under different modalities.", "Correspondingly, we design the intra-modal edge and the inter-modal edge to represent the modal interactions.", "As described in Section 3.2, any two nodes in the same modality and the same instruction (image) are connected by an intra-modal edge.", "It is worth noting that for each instruction (image), the special [CLS] node is connected to all other nodes in order to aggregate graph-level features.", "On the other hand, the textual node representing an entity and the corresponding visual node are connected by an inter-modal edge.", "We employ a visual grounding toolkit (Yang et al., 2019) to detect visual objects for each noun phrase.", "Specifically, we predict the bounding box corresponding to the text entity and compute the Intersection over Union (IoU) between it and all visual objects.", "If the IoU between the prediction box and the visual box exceeds a threshold $m$, the textual node and the corresponding visual node are connected by an inter-modal edge (node-based).", "Similar to Section 3.3.1, considering the influence of entity relationships under different modalities, we also introduce the inter-modal edge (edge-based) to characterize the interaction between edges.",
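To make the edge construction concrete, here is a small sketch of how the visual temporal edges (Euclidean distance below the threshold t) and the node-based inter-modal edges (IoU above the threshold m) could be turned into integer edge-type matrices that later index the learned bias tables; the function names and the 0/1/2 encoding are illustrative assumptions.

```python
import numpy as np

def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def build_edge_ids(obj_feats, noun_boxes, obj_boxes, t=7.0, m=0.5):
    """Integer edge-type matrices indexing the learned bias tables:
    0 = no edge, 1 = visual temporal edge, 2 = inter-modal edge."""
    n = len(obj_feats)
    temporal = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # Objects closer than t are treated as the same entity.
            if np.linalg.norm(np.asarray(obj_feats[i]) -
                              np.asarray(obj_feats[j])) < t:
                temporal[i, j] = temporal[j, i] = 1
    modal = np.zeros((len(noun_boxes), n), dtype=int)
    for i, g in enumerate(noun_boxes):      # boxes grounded from noun phrases
        for j, o in enumerate(obj_boxes):   # Faster-RCNN object boxes
            if iou(g, o) > m:
                modal[i, j] = 2
    return temporal, modal
```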
"As described in Section 3.2, we have obtained the embeddings of the textual tokens and visual objects.", "Similar to (Li et al., 2020), all embeddings are mapped to a set of initial node embeddings, and each node embedding is the sum of", "1) a textual token embedding or visual object embedding;", "2) a position embedding that identifies the position of the token or object in the image (we exploit the bounding box feature extracted by Faster-RCNN as the position embedding of an object); and", "3) a segment embedding generated from the step number in the PMD, which indicates different textual instructions or images.", "To encode the structural information into TMEG, we consider the temporal encoding and the modal encoding separately.", "For any two nodes $v_i$ and $v_j$ in TMEG, we construct two mappings, $t(v_i, v_j) \in \mathbb{R}$ and $m(v_i, v_j) \in \mathbb{R}$, which encode the temporal edge and the modal edge between them.", "The temporal encoding and the modal encoding of the total graph are fed into the graph-based aggregation module.", "As shown in the right part of Figure 2, we first introduce two multi-layer perceptrons (MLP) with the Tanh activation function to project the node embeddings from the two modalities into the same space.", "Then, we extend VisualBERT (Li et al., 2020) to the graph-based fusion layer, which concatenates the node embeddings from the MLPs as input and outputs their graph-based joint representations.", "Specifically, in each fusion layer, updating the hidden states of the textual and visual nodes mainly involves the following steps.", "First, we exploit a graph-based multi-head attention mechanism to generate contextual representations of nodes.", "Formally, the output of the $h$-th attention head in the $(l-1)$-th layer can be obtained as follows: $$A(q, k, v)^{h,l-1}_j = \sum_{i=1}^{N} v^h_i \, \mathrm{Softmax}(e^h_{i,j}), \quad (1)$$ $$e^h_{i,j} = \frac{q^h_j (k^h_i)^T}{\sqrt{d}} + b^h_{m(i,j)} + b^h_{t(i,j)}, \quad (2)$$ where $q$, $k$, $v$ are the query, key, and value matrices generated from the hidden state $H^{(l-1)}$ of the nodes in the $(l-1)$-th layer.", "$t(i, j)$ and $m(i, j)$ denote the temporal encoding and the modal encoding of TMEG, which serve as bias terms in the attention module.", "It is worth noting that each head in the multi-head attention mechanism exhibits a broad range of behaviors (Vig, 2019); thus, we add different temporal and modal encodings separately for each attention head.", "Meanwhile, in order to model the relationship of edges, the temporal encoding and the modal encoding are learned separately for each layer.", "We concatenate the output of each head and pass it to a position-wise Feed Forward Network (FFN), which is preceded and succeeded by residual connections and a normalization layer (LN): $$\hat{H}^{(l)} = \mathrm{LN}\big(W[A^1, ..., A^h] + H^{(l-1)}\big), \quad H^{(l)} = \mathrm{LN}\big(\mathrm{FFN}(\hat{H}^{(l)}) + \hat{H}^{(l)}\big), \quad (3)$$ where $W$ is a learnable parameter and $[\cdot]$ denotes the concatenation operation.", "Algorithm 1: Graph Aggregation of TMEG. Input: the initial hidden states of TMEG $H^0$; the graph-based fusion layer number $N$; the temporal encoding $t$; the modal encoding $m$; the attention heads $A$. Output: the final hidden states $H^N$. For $l = 1$ to $N$ (graph-based fusion layer): for each attention head $A^{h,l}$ in layer $l$: generate $q$, $k$, $v$ from the hidden states $H^{l-1}$; obtain the edge encodings $b^h_{t(i,j)}$ and $b^h_{m(i,j)}$; calculate the attention weights $e^h_{i,j}$ (Eq. 2) and the head output (Eq. 1); then combine the heads and apply the FFN block (Eq. 3).", "Finally, based on TMEG, we stack multiple graph-based fusion layers to conduct graph encoding.", "Algorithm 1 shows the aggregation of TMEG in detail.",
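Pulling Eqs. (1)-(3) together, a minimal PyTorch sketch of one graph-based fusion layer with per-head temporal and modal edge biases might look as follows; the dimensions, the number of edge types, and the GELU/4x FFN shape are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GraphFusionLayer(nn.Module):
    """One fusion layer: multi-head attention with per-head temporal and
    modal edge biases (Eq. 2), followed by a residual FFN block (Eq. 3)."""
    def __init__(self, d_model=768, n_heads=8, n_edge_types=4):
        super().__init__()
        self.h, self.d = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar bias per (head, edge type), for temporal and modal edges.
        self.b_t = nn.Parameter(torch.zeros(n_heads, n_edge_types))
        self.b_m = nn.Parameter(torch.zeros(n_heads, n_edge_types))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, H, t_ids, m_ids):
        # H: (N, d_model); t_ids, m_ids: (N, N) integer edge-type tensors.
        N = H.size(0)
        q, k, v = self.qkv(H).chunk(3, dim=-1)
        q = q.view(N, self.h, self.d).transpose(0, 1)    # (h, N, d)
        k = k.view(N, self.h, self.d).transpose(0, 1)
        v = v.view(N, self.h, self.d).transpose(0, 1)
        e = q @ k.transpose(-2, -1) / self.d ** 0.5      # (h, N, N)
        e = e + self.b_t[:, t_ids] + self.b_m[:, m_ids]  # edge biases, Eq. (2)
        A = torch.softmax(e, dim=-1) @ v                 # Eq. (1)
        A = A.transpose(0, 1).reshape(N, -1)
        H = self.ln1(self.out(A) + H)                    # Eq. (3)
        return self.ln2(self.ffn(H) + H)
```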
"As mentioned in Section 3.2, we regard the hidden state of [CLS] as the representation of each instruction (image); the final hidden states $H^T$ and $H^V$ are passed into the graph reasoning module for task completion.", "First, we leverage the one-to-one correspondence between instructions and images, e.g., each instruction has an image to visualize it (Alikhani et al., 2021).", "TMEG involves a Contrastive Coherence Loss for keeping the alignment between instruction and image.", "Let $H^{V+}$ and $H^{V-}$ represent the positive and negative examples; the loss $\mathcal{L}^{Coh}$ of the $i$-th step can be defined as follows: $$\mathcal{L}^{Coh}_i = -\log \frac{\exp\{\mathrm{sim}(H^T_i, H^{V+}_i)/\tau\}}{\sum_{j=1}^{K} \exp\{\mathrm{sim}(H^T_i, H^{V-}_j)/\tau\}}, \quad (4)$$ where $K$ is the total number of negative samples (He et al., 2020) generated from the mini-batch, and $\mathrm{sim}(\cdot, \cdot)$ and $\tau$ are the standard cosine similarity function and the temperature.", "In a downstream reasoning task, the model needs to predict the correct candidate $a_j = \{I_{j,1}, ..., I_{j,N_a}\}$ based on the instructions $S = \{s_t\}_{t=1}^{N_t}$.", "Referring to the sentence-image prediction task in (Li et al., 2020), we concatenate all representations of each candidate image sequence to generate an instruction-candidate pair: $(S, a_j) = [CLS, H^T_1, ..., H^T_{N_t}, SEP, H^V_{j,1}, ..., H^V_{j,N_a}]$, where [CLS] and [SEP] are special tokens as used in (Li et al., 2020).", "We pass this input through a shallow transformer followed by a fully connected layer to obtain the prediction score $P(S, a_j)$ for the $j$-th candidate, and the prediction loss can be defined as $$\mathcal{L}^{Pre} = -\log \frac{\exp(P(S, a_j))}{\sum_{i=1, i \neq j}^{N_a - 1} \exp(P(S, a_i))}, \quad (5)$$", "where $a_j$ is the correct candidate and $N_a$ is the number of candidates.", "We get the final loss function and optimize it through the Adam optimizer: $$\mathcal{L} = \mathcal{L}^{Pre} + b \, \mathcal{L}^{Coh}, \quad (6)$$ where $b$ is the balance parameter.", "Unless otherwise specified, all the results in this paper use $b = 0.1$, which we find to perform best.", "RecipeQA.", "RecipeQA (Yagcioglu et al., 2018) is a multimodal comprehension dataset with approximately 20K recipes and more than 36K question-answer pairs.", "Unlike other multimodal reading comprehension datasets (Tapaswi et al., 2016; Iyyer et al., 2017; Kembhavi et al., 2017), which reason over movie clips or comics, RecipeQA requires reasoning over real-world cases.", "CraftQA.", "We collect CraftQA from Instructables (https://www.instructables.com/), an online community where people can share their tutorials for accomplishing a task in a step-by-step manner.", "Specifically, we collect the most visited tutorials and remove those that contain only text or video.", "For question and answer generation, we also remove the tutorials that contain fewer than 3 images.", "To construct the distractors for each task, we compute the Euclidean distance between image features extracted from a pretrained ResNet-50 (He et al., 2016).", "Taking the visual cloze task as an example, the distractor is sampled from the nearest neighbors of the ground-truth image based on Euclidean distance.", "Finally, CraftQA contains about 27k craft product-making tutorials and 46k question-answer pairs.", "We employ CraftQA to evaluate the reading comprehension performance of TMEG in different domains.", "(Table 1: dataset statistics for the train/valid/test splits; e.g., RecipeQA contains 15,847 / 1,963 / 1,969 recipes.)", "Metric.", "In the three Procedural M³C tasks tested in the following experiments (visual cloze, visual coherence, and visual ordering), we use classification accuracy as the evaluation metric, which is defined as the percentage of ground-truth answers yielded during testing (Yagcioglu et al., 2018; Amac et al., 2019; Liu et al., 2020).", "For visual node construction, we employ the pretrained Faster R-CNN (Ren et al., 2015) model provided by Detectron2 (Wu et al., 2019) and limit the number of objects to 36 for each image.", "Following (Yang et al., 2019; Song et al., 2021), we set the thresholds $t$ and $m$ to 7 and 0.5, respectively, for the temporal and the modal edge constructions.", "The framework of the graph-based fusion module is built on VisualBERT (Li et al., 2020), with its initialized parameters and tokenizer implemented by HuggingFace's transformers library (Wolf et al., 2020).", "The shallow transformer in the graph-based reasoning module is designed with 2 hidden layers of size 512 and 8 attention heads.", "During the training stage, the batch size is fixed to 16 and the number of negative samples $K$ is set to 8.", "The temperature parameter $\tau$ in Eq. (4) is set to 0.07.", "The balance parameter $b$ in Eq. (6) is set to 0.1.", "Adam with a learning rate of $5 \times 10^{-5}$ is used to update the parameters.", "We introduce an early stopping mechanism and set the patience value to 5, which means that training stops if the model performance has not improved for five consecutive evaluations.", "Our source code will be released online.",
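Using the hyperparameters just listed (tau = 0.07, b = 0.1, K in-batch negatives), a sketch of the combined objective of Eqs. (4)-(6) could be written as below; the tensor shapes and function names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def coherence_loss(h_text, h_img_pos, h_img_neg, tau=0.07):
    """Contrastive coherence loss (Eq. 4): h_text, h_img_pos are (B, d);
    h_img_neg is (B, K, d) with K in-batch negatives."""
    pos = F.cosine_similarity(h_text, h_img_pos, dim=-1) / tau          # (B,)
    neg = F.cosine_similarity(h_text.unsqueeze(1), h_img_neg, dim=-1) / tau
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                  # (B, 1+K)
    # The positive pair sits at index 0, so cross-entropy to target 0
    # reproduces the -log softmax form of Eq. (4).
    return F.cross_entropy(logits, torch.zeros(len(logits), dtype=torch.long))

def total_loss(pred_scores, gold, h_text, h_img_pos, h_img_neg, b=0.1):
    """Eq. (6): candidate prediction loss (Eq. 5 as softmax cross-entropy
    over candidate scores) plus b times the coherence loss."""
    l_pre = F.cross_entropy(pred_scores, gold)
    l_coh = coherence_loss(h_text, h_img_pos, h_img_neg)
    return l_pre + b * l_coh
```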
"We compare our model with the following models: (1) Hasty Student (HS) (Yagcioglu et al., 2018) discards the textual context and directly exploits the similarities and dissimilarities between answer images to rank candidates.", "(2) PRN (Amac et al., 2019) introduces external relational memory units to keep track of textual entities and employs a bi-directional attention mechanism to obtain a question-aware embedding for prediction.", "(3) MLMM-Trans (Liu et al., 2020) modifies the framework of the transformer (Vaswani et al., 2017) and conducts an intensive attention mechanism at multiple levels to predict correct image sequences.", "(4) VisualBERT (Li et al., 2020) consists of a stack of transformer layers that extend the traditional BERT (Devlin et al., 2019) model to a multimodal encoder.", "The performance of some baselines on RecipeQA has been previously reported in (Amac et al., 2019; Liu et al., 2020).", "As shown in Table 2, TMEG shows favorable performance on the different reasoning tasks, with an average accuracy of 69.73 on RecipeQA and 49.54 on CraftQA, trailing only human performance.", "Besides, the performance on the visual ordering task exceeds human accuracy for the first time, which indicates that the temporal and modal analysis in TMEG is effective for comprehending PMDs.", "MLMM-Trans performs comparably with VisualBERT but is inferior to TMEG, which may be attributed to their superficial consideration of entity information.", "(Figure 3: Experimental results of our model with different values of the balance parameter b in Eq. (6).)", "MLMM-Trans ignores the entity information contained in text (e.g., the correspondence between entities in text and images), and VisualBERT directly fuses textual and visual features without considering entity evolution.", "In TMEG, we explicitly identify and model entity evolution in the PMD, whereas MLMM-Trans and VisualBERT assume entity information to be learned implicitly alongside other data.", "Meanwhile, CraftQA has more images (20.14 vs 12.67) and tokens (535.88 vs 443.01) on average than RecipeQA.", "The more diverse and complex cases in CraftQA require better comprehension and reasoning capacities from both models and humans.", "We believe this explains the lower results on CraftQA.", "This emphasizes the necessity of comprehending entity coherence in a multimodal context.", "We evaluate the effects of the temporal encoding, the modal encoding, and the contrastive coherence loss.", "Edge Encoding.", "Table 2 also shows the ablation results of our model when each module is respectively removed.", "In terms of edge encoding, removing the temporal encoding has a more negative effect on TMEG than removing the modal encoding, reflecting the significance of modeling temporal entity evolution for Procedural M³C.", "Contrastive Coherence Loss.", "As shown in the last row of Table 2, we find that $\mathcal{L}^{Coh}$ can indeed improve TMEG.", "The reason is that $\mathcal{L}^{Coh}$ helps enhance the learning of the textual and visual entity representations in Procedural M³C.
4.5 Analysis of TMEG 4.5.1 Balance Parameter In Figure 3, we illustrate the influence of the balance parameter $b$ in Eq. (6), which balances the contrastive coherence loss $\mathcal{L}^{Coh}$ and the candidate prediction loss $\mathcal{L}^{Pre}$.", "We tune $b$ from 0 to 0.2 with 0.05 as the step size.", "We observe that the model achieves the highest accuracy when $b = 0.1$.", "Generally, (1) introducing the contrastive coherence loss can improve TMEG for better fitting downstream tasks, and (2) appropriately balancing the prediction loss $\mathcal{L}^{Pre}$ and the contrastive coherence loss $\mathcal{L}^{Coh}$ helps TMEG comprehend PMDs.", "To study the domain transfer capability of our framework, we evaluate TMEG in different domains, as shown in Table 3.", "Specifically, the model trained on RecipeQA is evaluated on CraftQA, and vice versa.", "Results show that, compared with the other baselines, our model achieves more generalized and better comprehension performance under domain transfer by incorporating TMEG.", "Figure 4 further presents a visual cloze example from RecipeQA, which requires selecting the correct image for the missing piece after reading the context.", "We compare the highest-scored candidate images respectively picked out by MLMM-Trans (Liu et al., 2020), VisualBERT (Li et al., 2020), and TMEG.", "By considering the temporal-modal entity evolution, TMEG can capture the salient entities (e.g., Strawberry and Sugar Cookie Dough) and trace their evolution at each step, thereby inferring the ground-truth answer.", "In this paper, we propose a novel temporal-modal entity graph (TMEG) to approach Procedural M³C. Based on TMEG, we introduce a graph-based fusion module and a reasoning module, which are used to aggregate node features and solve downstream reasoning tasks.", "Moreover, we introduce another Procedural M³C dataset called CraftQA to assist in evaluating the generalization performance of TMEG across different domains and under domain transfer.", "Extensive experiments on RecipeQA and CraftQA validate the superiority of TMEG.", "A promising future direction is to introduce temporal-modal entity graphs into the video understanding task (Lin et al., 2020; Xu et al., 2020), which also calls for an enhancement of the temporal and cross-modal reasoning capability.", "Intellectual Property.", "CraftQA contains question-answer pairs generated from copyright-free tutorials found online (https://www.instructables.com).", "All of the tutorials are licensed under the Creative Commons license (https://creativecommons.org/licenses/by-nc-sa/4.0), which helps share knowledge and creativity for common use.", "The collection of CraftQA is in accordance with the Terms of Service of Instructables (https://www.autodesk.com/company/legal-notices-trademarks/terms-of-service-autodesk360-web-services/instructables-terms-of-service-june-5-2013): by posting, providing, uploading, submitting, sharing, publishing, distributing, making available or allowing others to access and/or use Your Content to or through the Service You are solely responsible and liable for the consequences of doing so and you acknowledge and agree that Your Content can and may be viewed worldwide.", "We also construct experimental evaluations on the RecipeQA dataset.", "Referring to the official dataset description of RecipeQA (https://hucvl.github.io/recipeqa/recipeqa-datasheet.pdf), Legal and Ethical Considerations were taken into account during the construction of RecipeQA.", "We have cited the corresponding papers in this study.", "Privacy.",
"According to the Privacy Statement of Instructables 7 , users can choose whether or not to expose their information when publishing tutorials.", "Respecting personal privacy, we have removed all of the personal information of users from CraftQA and promise CraftQA isn't involved with any privacy issues.", "Acknowledgements : This work was supported in part by the National Natural Science Foundation of China (No. 62106091) and Shandong Provincial Natural Science Foundation (No. ZR2021MF054)." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "objective", "method", "objective", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain" ]
[ "The main obstacle to incremental sentence processing arises from right-branching constituent structures, which are present in the majority of English sentences, as well as from optional constituents that adjoin on the right, such as right adjuncts and right conjuncts.", "In CCG, many right-branching derivations can be replaced by semantically equivalent left-branching incremental derivations.", "The problem of right-adjunction is more resistant to solution, and has been tackled in the past using revealing -based approaches that often rely either on the higher-order unification over lambda terms (Pareschi and Steedman, 1987) or heuristics over dependency representations that do not cover the whole CCGbank (Ambati et al., 2015).", "We propose a new incremental parsing algorithm for CCG following the same revealing tradition of work but having a purely syntactic approach that does not depend on access to a distinct level of semantic representation.", "This algorithm can cover the whole CCGbank, with greater incrementality and accuracy than previous proposals.", "Combinatory Categorial Grammar (CCG) (Ades and Steedman, 1982; Steedman, 2000) is a mildly context sensitive grammar formalism that is attractive both from a cognitive and an engineering perspective.", "Compared to other grammar formalisms, the aspect in which CCG excels is incremental sentence processing.", "CCG has a very flex-ible notion of constituent structure which allows (mostly) left-branching derivation trees that are easier to process incrementally.", "Take for instance the derivation tree in Figure 1a.", "If we use a non-incremental shift-reduce parser (as done in the majority of transition-based parsers for CCG (Zhang and Clark, 2011; Xu et al., 2014; Xu, 2016)) we will be able to establish the semantic connection between the subject Nada and the verb eats only when we reach the end of the sentence.", "This is undesirable for several reasons.", "First, human sentence processing is much more incremental, so that the meaning of the prefix Nada eats is available as soon as it is read (Marslen-Wilson, 1973).", "Second, if we want a predictive modeleither for better parsing or language modellingit is crucial to establish relations between the words in the prefix as early as possible.", "To address this problem, a syntactic theory needs to be able to represent partial constituents like Nada eats and have mechanisms to build them just by observing the prefix.", "In CCG solutions for these problems come out of the theory naturally.", "CCG categories can represent partial structures and these partial structures can combine into bigger (partial) structures using CCG combinators recursively.", "Figure 1b shows how CCG can incrementally process the example sentence via a different derivation tree that generates the same semantics more incrementally by being left-branching.", "This way of doing incremental processing seems straightforward except for one obstacle: optional constituents that attach from the right, i.e. 
right adjuncts.", "Because they are optional, it is impossible to predict them with certainty.", "This forces an eager incremental processor to make an uninformed decision very early and, if later that decision turns out to be wrong, to backtrack to repair the mistake.", "This behaviour would imply that human processors have difficulty in processing right adjuncts, but that does not seem to be the case.", "For instance, let's say that after incrementally processing Nada eats apples we encounter the right adjunct regularly, as in Figure 2a.", "The parser will be stuck at this point because there is no way to", "(Figure 1a: the right-branching derivation of Nada eats apples, with categories NP, (S\\NP)/NP, and NP.)", "attach the right adjunct of a verb phrase to a sentence constituent.", "A simple solution would be some sort of limited backtracking where we would look at whether we could extract the verb phrase, attach its right adjunct, and then put the derivation back together.", "But how do we do the extraction of the verb phrase eats apples when that constituent was never built during the incremental left-branching derivation?", "Pareschi and Steedman (1987) proposed to reveal the constituent that is needed, the verb phrase in our example, by an elegant way of re-analysing the derivation.", "This reanalysis does not repeat parsing from scratch but instead runs a single CCG combinatory rule backwards.", "In the example at hand, first we recognise that right adjunction needs to take place because we have a category of shape X\\X (concretely (S\\NP)\\(S\\NP), but in the present CCG notation slashes associate to the left, so we drop the first pair of brackets).", "Thanks to the type of the adjunct we know that the constituent that needs to be revealed is of type X, in our case S\\NP.", "Now, we take the constituent on the left of the right adjunct, in our example the constituent S, and look for a CCG category Y and combinatory rule C that satisfy the following relation: C(Y, S\\NP) = S.", "The solution to this type equation is Y = NP and C = <.", "To confine revealing to delivering constituents that the parser could have built if it had been less greedy for incrementality, and to exclude revelation of unsupported types, such as PP in Figure 2a, the process must be constrained by the actual derivation.", "Pareschi and Steedman proposed to do so by accessing the semantic representation in parallel, using higher-order unification, which is in general undecidable and may be unsound unless defined over a specific semantic representation.",
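To illustrate the type equation C(Y, S\NP) = S, the toy sketch below inverts backward application on simplified string categories; it is illustrative pseudologic under our own category encoding, not a full CCG category parser.

```python
def reveal_candidates(result, target):
    """Toy solver for C(Y, target) = result with string categories,
    checking only backward application (<B0): target applied backward
    to Y yields result exactly when target == result + "\\" + Y."""
    options = []
    func, sep, arg = target.rpartition("\\")
    if sep and func == result:
        options.append((arg, "<B0"))   # e.g. Y = NP, C = <
    # Higher-order backward compositions (<B1, <B2, ...) would be
    # inverted analogously, one argument deeper each time.
    return options

print(reveal_candidates("S", "S\\NP"))   # [('NP', '<B0')]
```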
"Ambati et al. (2015) propose an alternative method for revealing where dependencies are used as a semantic representation (instead of first-order logic) and special heuristics are used for revealing (instead of higher-order unification).", "This is computationally a much more efficient approach and appears sound, but it requires distinct revealing rules for each constituent type and has specific difficulties with punctuation.", "In this paper we propose a method of revealing that does not depend on any specific choice of semantic representation, can discover multiple possible revealing options if they are available, is sound, complete, and computationally efficient, and gives state-of-the-art parsing results.", "The algorithm works by building left-branching derivations incrementally, but, following Niv (1993, 1994), as soon as a left-branching derivation is built, its derivation tree is rebalanced to be right-branching.", "When all such constituents' derivation trees are right-branching, revealing becomes a trivial operation where we just traverse the right spine looking for the constituent(s) of the right type to be modified by the right adjunct.", "We call this transformation rotation, since that is the technical term established in the field of data structures for the similar operation on balanced binary search trees (Adelson-Velskii and Landis, 1962; Guibas and Sedgewick, 1978; Okasaki, 1999; Cormen et al., 2009).", "Figure 2b shows the right-rotated derivation of Nada eats apples next to the adjunct.", "Here we can just look up the required S\\NP and attach the right adjunct to it, as in Figure 2c.", "CCG is a lexicalized grammar formalism where each lexical item in a derivation has a category assigned to it which expresses the ways in which the lexical item can be used in the derivation.", "These categories are put together using combinatory rules.", "Each binary combinatory rule has one primary and one secondary category as its inputs.", "The primary functor is the one that selects, while the secondary category is the one that is selected.", "In forward combinatory rules the primary functor is always the left argument, while in backward combinatory rules it is always the right.", "It is useful to look at the mentioned combinatory rules in a generalised way.", "For instance, if we look at forward combinatory rules we can see that they all follow the same pattern of combining X/Y with a category that starts with Y.", "The only difference among them is how many subcategories follow Y in the secondary category.", "In the case of forward function application there will be nothing following Y, so we can treat forward function application as a generalised forward composition combinator of the zeroth order, >B0.", "Standard forward function composition >B will be a generalised composition of first order, >B1, while second-order composition >B^2 will be >B2.", "The same generalisation can be applied to backward combinators.", "There is a low bound on the order of combinatory rules, around 2 or 3.", "Following Hockenmaier and Steedman (2007), the proclitic character of conjunctions is captured in a syncategorematic rule combining them with the right conjunct, with the result later combining with the left conjunct: conj X => X[conj] (>) and X X[conj] => X (<). (This notation differs unimportantly from Steedman (2000), who uses a ternary coordination rule, and from more recent work in which conjunctions are X\\X/X.)", "Some additional unary and binary type-changing rules are also needed to process the derivations in CCGbank (Hockenmaier and Steedman, 2007).", "We use the same type-changing rules as those described in (Clark and Curran, 2007).", "Among the unary combinatory rules the most important one is type-raising.",
"The first reason for that is that it allows CCG to handle constructions like argument cluster coordination in a straightforward way.", "Second, it allows CCG to be much more incremental, as seen from the example in Figure 1b.", "Type-raising rules are expressed in the following way: X => Y/(Y\\X) (>T) and X => Y\\(Y/X) (<T).", "Type-raising is strictly limited to applying to category types that are arguments, such as NP, PP, etc., making it analogous to grammatical case in languages like Latin and Japanese, in spite of the lack of morphological case in English.", "CCG derivations can be parsed with the same shift-reduce mechanism used for CFG parsing (Steedman, 2000).", "In the context of CFG parsing, the shift-reduce algorithm is not incremental, because CFG structures are mostly right-branching, but in CCG, by changing the derivation via the combinatory rules, we also change the level of incrementality of the algorithm.", "As usual, the shift-reduce algorithm consists of a stack of the constituents built so far and a buffer with words that are yet to be processed.", "Parsing starts with the stack empty and the buffer containing the whole sentence.", "The end state is a stack with only one element and an empty buffer.", "Transitions between parser states are: shift(X) moves the first word from the buffer to the stack and labels it with category X; reduceUnary(C) applies a unary combinatory rule C to the topmost constituent on the stack; reduceBinary(C) applies a binary combinatory rule C to the two topmost constituents on the stack.", "CCG shift-reduce parsers are often built over right-branching derivations that obey the Eisner normal form (Eisner, 1996).", "Processing left-branching derivations is not any different, except that it requires an opposite normal form.", "Our revealing algorithm adds a couple of modifications to this default shift-reduce algorithm.", "First, it guarantees that all the trees stored on the stack are right-branching; this still allows left-branching parsing and only adds the requirement of adjusting newly reduced trees on the stack to be right-leaning.", "Second, it adds revealing transitions that exploit the right-branching guarantee to apply right adjunction.", "Both tree rotation and revealing are performed efficiently, as described in the following subsections.",
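A minimal skeleton of the modified shift-reduce loop described above might look as follows; the oracle, the combination helper, and the Node fields are assumptions standing in for the paper's Scala implementation.

```python
from collections import deque

class Node:
    def __init__(self, cat, left=None, right=None, word=None, comb=None):
        self.cat, self.left, self.right = cat, left, right
        self.word, self.comb = word, comb

def parse(words, next_transition, combine, rotate_right):
    """Shift-reduce loop with the right-branching guarantee: every
    newly reduced binary tree is rotated before going back on the stack."""
    stack, buffer = [], deque(words)
    while buffer or len(stack) > 1:
        kind, arg = next_transition(stack, buffer)
        if kind == "shift":                    # shift(X)
            stack.append(Node(arg, word=buffer.popleft()))
        elif kind == "reduceUnary":            # reduceUnary(C)
            child = stack.pop()
            stack.append(combine(arg, child, None))
        elif kind == "reduceBinary":           # reduceBinary(C)
            right, left = stack.pop(), stack.pop()
            stack.append(rotate_right(combine(arg, left, right)))
        # revealing transitions would be dispatched here as well
    return stack[0]
```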
"A naive way of enforcing the right-branching guarantee would be to do a complete transformation of the subtree on the stack into a right-branching one.", "However, that would be unnecessarily expensive.", "Instead, we do incremental tree rotation to the right.", "If we assume that all the elements on the stack respect this right-branching form (our inductive case), this state can be disturbed only by the reduceBinary transition (shift just adds a single word, which is trivially right-branching, and reduceUnary does not influence the direction of branching).", "The reduceBinary transition will take the two topmost elements on the stack, which are already right-branching, and put them as children of some new binary node.", "We need to repair that potential imperfection on top of the tree.", "This is done by recursively rotating the nodes as in Figure 3a.", "This figure shows one of the sources of CCG's spurious ambiguity: a parent-child relation between combinatory rules with the same directionality.", "(Although we do not discuss the operations on the semantic predicate-argument structure that correspond to tree rotation, the combinatory semantics of the rules themselves guarantees that such operations can be done uniformly and in parallel.)", "Here we concentrate on forward combinators because they are the most frequent in our data (most backward combinators disappear with the addition of forward type-raising and of special right-adjunct transitions), but the same method can be applied to backward combinatory rules as a mirror image.", "Having two combinatory rules of the same directionality is a necessary but not sufficient condition for spurious ambiguity.", "As is visible in the side condition of Figure 3a, the lower combinator must not be >B0.", "The tree rotation function assumes that both of the children are perfect, meaning right-branching (by right-branching we mean as right-branching as is allowed by the CCG formalism and the predicate-argument structure), and that the only imperfection is at the root node.", "The method repairs this imperfection at the root by applying the tree rotation transformation, but it also creates a new node as a right child, and that node might be imperfect.", "That is why the method goes down the right node recursively until all the imperfections are removed and the whole tree becomes fully right-branching.", "In the worst case the method will reach the bottom of the tree, but often only 3 or 4 nodes need to be transformed to make the tree perfectly right-branching.", "The worst-case complexity of repairing the imperfection is O(n), which makes the complexity of the whole parsing algorithm O(n^2) for building a single derivation.", "As a running example we will use the derivation tree in Figure 4a, for which a transition sequence is given in Figure 4b.", "Here tree rotation is used in transitions 6 and 8, which introduce imperfections.", "In transition 6 a single tree rotation at the top was enough to correct the imperfection, while in transition 8 the recursive tree rotation function went to depth two.",
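The recursive repair just described can be sketched as below, assuming helper functions make_node (which combines two subtrees, recomputing the category and the generalised forward combinator), is_forward, and order; the side-condition check is a simplified reading of Figure 3a.

```python
def rotate_right(node, make_node, is_forward, order):
    """node's children are assumed right-leaning; the only possible
    imperfection is at the root: two nested forward combinators,
    subject to the side condition of Figure 3a."""
    l = node.left
    if (node.comb is not None and l is not None and l.comb is not None
            and is_forward(node.comb) and is_forward(l.comb)
            and order(l.comb) > 0):
        # (a b) c  ==>  a (b c): the fresh right child may itself be
        # imperfect, so push the imperfection down the right spine.
        new_right = rotate_right(make_node(l.right, node.right),
                                 make_node, is_forward, order)
        return make_node(l.left, new_right)
    return node
```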
"If the upper and lower combinators are both >B2, the topmost combinator on the right will become >B3, a combinatory rule that may be unnecessary for defining the competence grammar of human languages, but which is required if parsing performance is to be as incremental as possible.", "Fortunately, the configuration with two connected >B2 combinatory rules appears very rarely in CCGbank.", "Many papers have been published on using left-branching CCG derivations but, to the best of our knowledge, none of them explains how they are constructed from the right-branching CCGbank trees.", "A very simple algorithm for that can be made using our tree rotation function.", "Here we use rotation in the opposite direction, i.e. rotation to the left (Figure 3b).", "We cannot apply this operation from the top node of the CCGbank tree because that tree does not satisfy the assumption of the algorithm: its immediate children are not perfect (here perfect means being left-branching).", "That is why we start from the bottom of the tree, with terminal nodes that are trivially perfect, and apply the tree transformation to each node in a post-order traversal.", "This incremental tree rotation algorithm is inspired by AVL self-balancing binary search trees (Adelson-Velskii and Landis, 1962) and Red-Black trees (Guibas and Sedgewick, 1978; Okasaki, 1999).", "The main difference is that here we are trying to do the opposite of AVL trees: instead of making the tree perfectly balanced we are trying to make it perfectly unbalanced, i.e. leaning to the right (or left).", "Also, our imperfections start at the top and are pushed to the bottom of the tree, which is in contrast to AVL trees, where imperfections start at the bottom and get pushed to the top.", "The last important point about tree rotation concerns punctuation rules.", "All punctuation is attached to the left of the highest possible node in the case of left-branching derivations (Hockenmaier and Bisk, 2010), while in the right-branching derivations we lower the punctuation to the bottom left neighbouring node.", "Punctuation has no influence on the predicate-argument structure, so it is safe to apply this transformation.", "If the topmost element on the stack is of the form X\\X and the second topmost element on the stack has on its right edge one or more constituents of a type X|$, we allow the reveal transition.", "This is a more general way of revealing than the approaches of Pareschi and Steedman (1987) and Ambati et al. (2015), who attempt to reveal only constituents of type X, while we reveal any type that has X as its prime element (that is the meaning of the X|$ notation).", "We also treat X[conj] as a right adjunct of the left conjunct.", "Similarly to the previous case, if the topmost element on the stack is X[conj] and the right edge of the second topmost element on the stack has constituent(s) of type X, they are revealed for possible combination via the < combinator.", "If the reveal transition is selected, as in transition 14 in Figure 4b, the parser enters a mode of choosing among the different constituents labelled X|$ that could be modified by the right adjunct X\\X.", "After a particular X|$ node is chosen, X\\X is combined with it and the rest of the tree above the X node is rebuilt in the same way.", "This rebuild is fully deterministic and is done quickly, even though in principle it could take O(n) to compute.", "Even in the worst-case scenario, it does not make the complexity of the algorithm go higher than O(n^2).", "The ability of our algorithm to choose among different possible revealing options is unique among all the proposals for revealing.", "For transition 15 in Figure 4b the parser can choose whether to adjoin (coordinate) to a verb phrase that already contains a left modifier or to one without it.", "This is similar to the Selective Modifier Placement strategy from older Augmented Transition Network (ATN) systems (Woods, 1973), which finds all the attachment options that are syntactically legal and then allows the parser to choose among them using some criteria.", "Woods (1973) suggests using lexical semantic information for this selection, but in his ATN system only handwritten semantic selection rules were used.", "Here we will also use selection based on the lexical content, but it will be broad-coverage and learned from the data.", "This ability to semantically select the modifier's attachment point is essential for good parsing results, as will be shown.", "(The $ notation is from Steedman (2000), where $ is used as a (potentially empty) placeholder variable ranging over multiple arguments.)",
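Because the stack trees are guaranteed right-branching, collecting the reveal options reduces to a walk down the right spine, as in the sketch below; prime and the string category notation are illustrative assumptions layered on the Node class from the earlier skeleton.

```python
def prime(cat):
    # Leftmost target of a category: prime("S\\NP") == "S" (toy notation).
    return cat.replace("(", "").split("\\")[0].split("/")[0]

def reveal_options(root, adjunct_cat):
    """Collect the right-spine nodes whose category has the right
    prime element for an X\\X adjunct (or X for an X[conj] conjunct)."""
    x = prime(adjunct_cat)
    options, node = [], root
    while node is not None:
        if prime(node.cat) == x:
            options.append(node)
        node = node.right
    return options
```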
composed of concatenated ELMo embedding and supertag embedding.", "The representation of a subtree combines: span representation we subtract representation of the leftmost terminal from the representation of the rightmost terminal as done in LSTM-Minus architecture (Wang and Chang, 2016), combinator and category embeddings, head words encoding because each constituent can have a set of heads, for instance arising from coordination, we model representation of heads with DeepSet architecture (Zaheer et al., 2017) over representations of head terminals.", "We do not use recursive neural networks like Tree-LSTM (Tai et al., 2015) to encode subtrees because of the frequency of tree rotation.", "These operations are fast, but they would trigger frequent recomputation of the neural tree representation, so we opted for a mechanism that is invariant to rebranching.", "The stack representation is encoded using Stack-LSTM (Dyer et al., 2015).", "The configuration representation is the concatenation of the stack representation and the representation of the rightmost terminal in the stack.", "The next nonrevealing transition is chosen by a two-layer feed-forward network.", "If the reveal transition is triggered, the system needs to choose which among the candidate nodes X | $ to adjoin the right modifier X \\ X to.", "The number of these modifiers can vary so we cannot use a simple feed-forward network to choose among them.", "Instead, we use the mechanism of Pointer networks (Vinyals et al., 2015), which works in a similar way to attention (Bahdanau et al., 2014) except that attention weights are interpreted as probabilities of selecting any particular node.", "Attention is computed over representations of each candidate node.", "Because we expect that there Waiting time Connectedness Right-branching 4.29 5.01 Left-branching 2.32 3.15 Ambati et al. 
"Because we expect that there ...", "Table 1: Train set measures of incrementality (waiting time / connectedness): Right-branching 4.29 / 5.01; Left-branching 2.32 / 3.15; Ambati et al. (2015)* 0.69 / 2.15; Revealing (ours) 0.46 / 1.72.", "We optimize for maximum log-likelihood on the training set, using only the most frequent supertags and the most important combinators.", "To avoid discarding sentences with rare supertags and type-changing rules, we use all supertags and combinatory rules during training but do not add their probability to the loss function.", "The number of supertags used is 425, as in the EasyCCG parser, and the combinatory rules that are used are the same as in the C&C parser.", "The loss is minimised for 15 epochs on the training portion of CCGbank (Hockenmaier and Steedman, 2007) using Adam with learning rate 0.001.", "Dimensionality is set to 128 in all cases, except for ELMo, which is set to 300.", "Dropout is applied only to the ELMo input, with a rate of 0.2.", "The parser is implemented in Scala using the DyNet toolkit (Neubig et al., 2017) and is available at https://github.com/stanojevic/Rotating-CCG.", "To measure the incrementality of the proposed algorithm we use two evaluation metrics: waiting time and connectedness.", "Waiting time is the average number of nodes that need to be shifted before the dependency between two nodes is established.", "The minimal value for a fully incremental algorithm is 0 (the single shift that is always necessary is not counted).", "Connectedness is defined as the average stack size before a shift operation is performed (the initial two shifts are forced, so they are not included in the average).", "The minimal value for connectedness is 1.", "We have computed these measures on the training portion of the CCGbank for the standard non-incremental right-branching derivations, the more incremental left-branching derivations, and our revealing derivations.", "We also include the numbers for the previous revealing proposal of Ambati et al.
(2015), taken from their paper, but these numbers should be taken with caution, because it is not clear from the paper whether the authors computed them in the same way and on the same portion of the dataset as we did.", "Table 1 shows that our revealing derivations are significantly more incremental, even in comparison to previous revealing proposals, and barely use more than the minimal amount of stack memory.", "We have tested on the development set which of the parsing algorithms gives the best parsing accuracy.", "All the algorithms use the same neural architecture and training method, except for the revealing operations, which require additional mechanisms to choose the node for revealing.", "This allows us to isolate machine learning factors and see which of the parsing strategies works the best.", "There are two methods that are often used for evaluating CCG parsers.", "They are both based on deep dependencies extracted from the derivation trees.", "The first is from (Clark et al., 2002) and is closer to the categorial grammar view of dependencies.", "The second is from (Clark and Curran, 2007) and is meant to be more formalism-independent and closer to standard dependencies (Carroll et al., 1998).", "We opt for the first option for development, as we find it more robust and reliable, but we report both types on the test set.", "Table 2 shows the results on the development set (columns: heads, SMP, LF, UF, Sup.).", "The heads column shows whether the head-words representation is used for computing the representation of the nodes in the tree.", "The SMP column shows whether Selective Modifier Placement is used: whether we choose where to attach the right adjunct based only on the position embeddings or also on the node's lexical content.", "First, we can see that the Revealing approach that uses the head representation and does selective modifier placement outperforms all the other models, both on labelled and unlabelled dependencies.", "Ablation experiments show that SMP was the crucial component: without it the Revealing model is much worse.", "This is clear evidence that attachment heuristics are not enough, and also that previous approaches that extract only a single revealing option are sub-optimal.", "A possible reason why the Revealing model works better than the Left- and Right-branching models is that the Left and Right models need to commit early on whether there will be a right adjunct in the future or not.", "If they make a mistake during greedy decoding, there will be no way to repair that mistake.", "This is not an issue for the Revealing model, because it can attach right adjuncts at any point and does not need to forecast them.", "A natural question then is whether these improvements of the Revealing model will persist if we use a bigger beam.", "Figure 5 shows exactly that experiment.", "We see that the model that gains the most from the biggest beam is the Left-branching one, which is expected, since that is the model that commits to its predictions the most: it commits with type-raising, unlike the Right model, and it commits with predicting right adjunction, unlike the Revealing model.", "With an increased beam the Left model equals the greedy Revealing model.", "But if all the models use the same beam, the Revealing model remains the best.", "An interesting result is that a small beam of size 4 is enough to get the maximal improvement.", "This probably reflects the low degree of lexical ambiguity that is unresolved at each point during parsing.",
"Table 3: Test set F1 results for prediction of supertags (Tag), unlabelled (UF) and labelled (LF) CCG dependencies, extracted using scripts from the Hockenmaier (2003) parser. Tag / UF / LF: Lewis and Steedman (2014) 93.0 / 88.6 / 81.3; Ambati et al. (2015) 91.2 / 89.0 / 81.4; Hockenmaier (2003) 92.2 / 92.0 / 84.4; Zhang and Clark (2011) 93.1 / - / 85.5; Clark and Curran (2007) 94.3 / 93.0 / 87.6; Revealing (beam=1) 95.2 / 95.5 / 89.8; Revealing (beam=4) 95.4 / 95.8 / 90.2.", "We compute test set results for our Revealing model and compare them to most of the previous results on CCGbank, using both types of dependencies.", "Table 3 shows the results with (Clark et al., 2002)-style dependencies.", "Here we get state-of-the-art results by a large margin, probably mostly thanks to the machine learning component of our parser.", "An interesting comparison to be made is against the EasyCCG parser of Lewis and Steedman (2014).", "This parser uses a neural supertagger whose accuracy is not too far from ours, but the dependencies extracted by our parser are much more accurate.", "This shows that the richer probabilistic model that we use contributes more to the good results than the exact A* search that EasyCCG does with a more simplistic model.", "Another comparison of relevance would be with the revealing model of Ambati et al. (2015), but the comparison of the algorithms is difficult, since the machine learning component is very different: Ambati uses a structured perceptron, while our model is a heavily parametrized neural network.", "In Table 4 we show results with the second type of dependencies used for CCG evaluation.", "All the models, except Clark and Curran (2007), are neural and use external embeddings.", "Of the presented models, only Revealing and Xu et al. (2016) are transition-based.", "All other models have a global search, either via CKY or A* search.", "Our revealing-based parser, which does only greedy search, outperforms all of them, including those trained on large amounts of unlabelled data using semi-supervised techniques like tri-training (Lewis et al., 2016; Lee et al., 2016; Yoshikawa et al., 2017).", "In some sense, all the neural models in Table 4 are implicitly trained in a semi-supervised way, because they use pretrained embeddings that are estimated on unlabelled data.", "The quality of the ELMo embeddings is probably one of the reasons why our parser achieves such good results.", "However, another semi-supervised training method, namely tri-training, is particularly attractive because, unlike ELMo, it is trained on a CCG parsing objective, which is more closely aligned with what we want to do.", "All tri-training models are trained on a much larger dataset that, in addition to CCGbank, also includes a 43-million-word corpus automatically annotated with silver CCG derivations by Lewis et al.
"(2016).", "It is likely that incorporating tri-training into our training setup would further increase the improvement over the other models.", "Recurrent Neural Network Grammar (RNNG) (Dyer et al., 2016) is a fully incremental top-down parsing model.", "Because it is top-down, it has no issues with right-branching structures, but right adjuncts would still make parsing more difficult for RNNG, because they have to be predicted even earlier than in the Left- and Right-branching derivations in CCG.", "Left-corner parsers (which can be seen as a more constrained version of the CCG left-branching parsing strategy) seem more psychologically realistic than top-down parsers (Abney and Johnson, 1991; Resnik, 1992; Stanojevic and Stabler, 2018).", "Some proposals for handling right adjunction in left-corner parsing are based on extensions to generalized left-corner parsers (Demers, 1977; Hale, 2014) that can force some grammar rules (in particular right-adjunction rules) to be less incremental.", "Our approach does not decrease the incrementality of the parser in this way.", "On the contrary, having a special mechanism for right adjunction makes the parser both more incremental and more accurate.", "Revealing based on higher-order unification by Pareschi and Steedman (1987) was also proposed by Steedman (1990) as the basis for the CCG explanation of gapping.", "The present derivation-based mechanism for revealing does not extend to gapping, and targets only derivations that can be explained with a standard CCG grammar derived from CCGbank.", "While that guarantees that we stay in the safe zone of sound and complete standard CCG derivations, it would be good, as future work, to extend support to gapping and other types of derivations not present in CCGbank.", "Niv (1993, 1994) proposed an alternative to the unification-based account of Pareschi and Steedman, similar to our proposal for online tree rotation.", "Niv's parser is mostly a formal treatment of left-to-right rotations evaluated against psycholinguistic garden paths, but it lacks the wide-coverage implementation and statistical parsing model needed as a basis for resolving attachment ambiguities.", "We have presented a revealing-based incremental parsing algorithm that has special transitions for handling right adjunction.", "The parser is neutral with regard to the particular semantic representation used.", "It is computationally efficient, and can reveal all possible constituent types.", "It is the most incremental CCG parser yet proposed, and has state-of-the-art results against all published parsers trained on the CCGbank, under both dependency recovery measures that are in use for the purpose.", "This work was supported by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX grant." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "other", "result", "abstain", "abstain", "abstain", "other" ]
[ "We present a method for constructing taxonomic trees (e.g., WORDNET ) using pretrained language models.", "Our approach is composed of two modules, one that predicts parenthood relations and another that reconciles those predictions into trees.", "The parenthood prediction module produces likelihood scores for each potential parent-child pair, creating a graph of parent-child relation scores.", "The tree reconciliation module treats the task as a graph optimization problem and outputs the maximum spanning tree of this graph.", "We train our model on subtrees sampled from WORDNET , and test on nonoverlapping WORDNET subtrees.", "We show that incorporating web-retrieved glosses can further improve performance.", "On the task of constructing subtrees of English WORDNET , the model achieves 66.7 ancestor F 1 , a 20.0% relative increase over the previous best published result on this task.", "In addition, we convert the original English dataset into nine other languages using OPENMULTILINGUALWORDNET and extend our results across these languages.", "A variety of NLP tasks use taxonomic information, including question answering (Miller, 1998) and information retrieval (Yang and Wu, 2012).", "Taxonomies are also used as a resource for building knowledge and systematicity into neural models (Peters et al., 2019; Geiger et al., 2020; Talmor et al., 2020).", "NLP systems often retrieve taxonomic information from lexical databases such as WORDNET (Miller, 1998), which consists of taxonomies that contain semantic relations across many domains.", "While manually curated taxonomies provide useful information, they are incomplete and expensive to maintain (Hovy et al., 2009).", "Traditionally, methods for automatic taxonomy construction have relied on statistics of web-scale corpora.", "These models generally apply lexico-syntactic patterns (Hearst, 1992) to large corpora, and use corpus statistics to construct taxonomic trees (e.g., Snow et al., 2005; Kozareva and Hovy, 2010; Bansal et al., 2014; Mao et al., 2018; Shang et al., 2020).", "In this work, we propose an approach that c onstructs t axonomic trees using p retrained language models (CTP).", "Our results show that direct access to corpus statistics at test time is not necessary.", "Indeed, the re-representation latent in large-scale models of such corpora can be beneficial in constructing taxonomies.", "We focus on the task proposed by Bansal et al. 
(2014), where the task is to organize a set of input terms into a taxonomic tree.", "We convert this dataset into nine other languages using synset alignments collected in OPENMULTILINGUALWORDNET and evaluate our approach in these languages.", "CTP first finetunes pretrained language models to predict the likelihood of pairwise parent-child relations, producing a graph of parenthood scores.", "Then it reconciles these predictions with a maximum spanning tree algorithm, creating a tree-structured taxonomy.", "We further test CTP in a setting where models have access to web-retrieved glosses.", "We reorder the glosses and finetune the model on the reordered glosses in the parenthood prediction module.", "We compare model performance on subtrees across semantic categories and subtree depth, provide examples of taxonomic ambiguities, describe conditions under which retrieved glosses produce greater increases in tree construction F1 score, and evaluate generalization to large taxonomic trees (Bordea et al., 2016a).", "These analyses suggest specific avenues of future improvements to automatic taxonomy construction.", "Even without glosses, CTP achieves a 7.9 point absolute improvement in F1 score on the task of constructing WORDNET subtrees, compared to previous work.", "When given access to the glosses, CTP obtains an additional 3.2 point absolute improvement in F1 score.", "Overall, the best model achieves an 11.1 point absolute increase (a 20.0% relative increase) in F1 score over the previous best published results on this task.", "Our paper is structured as follows.", "In Section 2 we describe CTP, our approach for taxonomy construction.", "In Section 3 we describe the experimental setup, and in Section 4 we present the results for various languages, pretrained models, and glosses.", "In Section 5 we analyze our approach and suggest specific avenues for future improvement.", "We discuss related work and conclude in Sections 6 and 7.", "We define taxonomy construction as the task of creating a tree-structured hierarchy T = (V, E), where V is a set of terms and E is a set of directed edges representing hypernym relations.", "In this task, the model receives a set of terms V, where each term can be a single word or a short phrase, and it must construct the tree T given these terms.", "CTP performs taxonomy construction in two steps: parenthood prediction (Section 2.2) followed by graph reconciliation (Section 2.3).", "We provide a schematic description of CTP in Figure 2 and provide details in the remainder of this section.", "We use pretrained models (e.g., BERT) to predict the edge indicators I[parent(v_i, v_j)], which denote whether v_i is a parent of v_j, for all pairs (v_i, v_j).", "To generate training data from a tree T with n nodes, we create a positive training example for each of the n - 1 parenthood edges and a negative training example for each of the n(n-1)/2 - (n-1) pairs of nodes that are not connected by a parenthood edge.", "We construct an input for each example using the template \"v_i is a v_j\", e.g., \"A dog is a mammal.\"", "Different templates (e.g., [TERM_A] is an example of [TERM_B] or [TERM_A] is a type of [TERM_B]) did not substantially affect model performance in initial experiments, so we use a single template. The inputs and outputs are modeled in the standard format (Devlin et al., 2019).
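As a concrete illustration of this training-data construction, here is a minimal Python sketch. It assumes each subtree is given as a list of (parent, child) term pairs; the function and variable names are illustrative assumptions, not taken from the paper's released code.

```python
from itertools import permutations

def make_parenthood_examples(edges):
    """Build (text, label) pairs from one taxonomic subtree.

    `edges` is a list of (parent, child) term pairs.  Every gold
    parenthood edge yields a positive example; every other ordered pair
    of distinct terms yields a negative example, following Section 2.2.
    """
    positives = set(edges)
    terms = sorted({t for edge in edges for t in edge})
    examples = []
    for child, parent in permutations(terms, 2):
        # The paper's single template, e.g. "A dog is a mammal.":
        # the candidate hypernym (parent) fills the second slot.
        text = f"{child} is a {parent}"
        label = 1 if (parent, child) in positives else 0
        examples.append((text, label))
    return examples

# Tiny fragment of the "fly" subtree discussed later in Section 5.1.
print(make_parenthood_examples([("fly", "gadfly"), ("gadfly", "botfly")])[:3])
```

Each (text, label) pair would then be fed to the pretrained model as a standard sentence-level classification example.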
We fine-tune pretrained models to predict I[parent(v_i, v_j)], which indicates whether v_i is the parent of v_j, for each pair of terms using a sentence-level classification task on the input sequence. 2.3 Tree Reconciliation. We then reconcile the parenthood graph into a valid tree-structured taxonomy. We apply the Chu-Liu-Edmonds algorithm to the graph of pairwise parenthood predictions. This algorithm finds the maximum weight spanning arborescence of a directed graph. It is the analog of MST for directed graphs, and finds the highest scoring arborescence in O(n^2) time (Chu, 1965). 2.4 Web-Retrieved Glosses. We perform experiments in two settings: with and without web-retrieved glosses. In the setting without glosses, the model performs taxonomy construction using only the set of terms V. In the setting with glosses, the model is provided with glosses retrieved from the web. For settings in which the model receives glosses, we retrieve a list of glosses d_v^1, ..., d_v^n for each term v in V. [Footnote 1: We scrape glosses from wiktionary.com, merriam-webster.com, and wikipedia.org. For wiktionary.com and merriam-webster.com we retrieve a list of glosses from each site. For wikipedia.org we treat the first paragraph of the page associated with the term as a single gloss. The glosses were scraped in August 2020.] Many of the terms in our dataset are polysemous, and the glosses contain multiple senses of the word. For example, the term dish appears in the subtree we show in Figure 1. The glosses for dish include (1) (telecommunications) A type of antenna with a similar shape to a plate or bowl, (2) (metonymically) A specific type of prepared food, and (3) (mining) A trough in which ore is measured. [Figure 2: A schematic depiction of CTP. We start with a set of terms (A). We fine-tune a pretrained language model to predict pairwise parenthood relations between pairs of terms (B), creating a graph of parenthood predictions (C) (Section 2.2). We then reconcile the edges of this graph into a taxonomic tree (E) (Section 2.3). Optionally, we provide the model ranked web-retrieved glosses (Section 2.4), re-ordering the glosses based on relevance to the current subtree (D).] We reorder the glosses based on their relevance to the current subtree. We define the relevance of a given gloss d_v^i to subtree T as the cosine similarity between the average of the GloVe embeddings (Pennington et al., 2014) of the words in d_v^i (with stopwords removed) and the average of the GloVe embeddings of all terms v_1, ..., v_n in the subtree. This produces a reordered list of glosses d_v^(1), ..., d_v^(n). We then use the input sequence [CLS] v_i d_{v_i}^(1), ..., d_{v_i}^(n) [SEP] v_j d_{v_j}^(1), ..., d_{v_j}^(n) to fine-tune the pretrained models on pairs of terms (v_i, v_j). 3 Experiments. In this section we describe the details of our datasets (Section 3.1) and our evaluation metrics (Section 3.2). We ran our experiments on a cluster with 10 Quadro RTX 6000 GPUs. Each training run finishes within one day on a single GPU. 3.1 Datasets. We evaluate CTP using the dataset of medium-sized WORDNET subtrees created by Bansal et al. (2014). This dataset consists of bottomed-out full subtrees of height 3 (this corresponds to trees containing 4 nodes in the longest path from the root to any leaf) that contain between 10 and 50 terms. This dataset comprises 761 English trees, with 533/114/114 train/dev/test trees respectively.
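Looking back at the reconciliation module of Section 2.3, a minimal sketch is given below. It assumes networkx's Chu-Liu-Edmonds implementation (nx.maximum_spanning_arborescence), and the score argument stands in for the fine-tuned model's parenthood likelihoods; none of this is taken from the paper's released code.

```python
import networkx as nx

def reconcile(terms, score):
    """Turn pairwise parenthood scores into a tree.

    Builds the complete directed graph whose edge parent->child carries
    score(parent, child), then returns the maximum spanning arborescence
    (Chu-Liu-Edmonds), i.e. the tree that maximizes the summed scores.
    """
    graph = nx.DiGraph()
    for parent in terms:
        for child in terms:
            if parent != child:
                graph.add_edge(parent, child, weight=score(parent, child))
    # The paper cites O(n^2) for this step on the dense pairwise graph.
    return nx.maximum_spanning_arborescence(graph)
```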
3.1.1 Multilingual WORDNET. WORDNET was originally constructed in English, and has since been extended to many other languages such as Finnish (Lindén and Niemi, 2014), Italian (Magnini et al., 1994), and Chinese (Wang and Bond, 2013). Researchers have provided alignments from synsets in English WORDNET to terms in other languages, using a mix of automatic and manual methods (e.g., Magnini et al., 1994; Lindén and Niemi, 2014). These multilingual wordnets are collected in the OPENMULTILINGUALWORDNET project (Bond and Paik, 2012). The coverage of synset alignments varies widely. For instance, the alignment of ALBANET (Albanian) to English WORDNET covers 3.6% of the synsets in the Bansal et al. (2014) dataset, while the FINNWORDNET (Finnish) alignment covers 99.6% of the synsets in the dataset. We convert the original English dataset to nine other languages using the synset alignments. (We create datasets for Catalan (Agirre et al., 2011), Chinese (Wang and Bond, 2013), Finnish (Lindén and Niemi, 2014), French (Sagot, 2008), Italian (Magnini et al., 1994), Dutch (Postma et al., 2016), Polish (Piasecki et al., 2009), Portuguese (de Paiva and Rademaker, 2012), and Spanish (Agirre et al., 2011).) Since these wordnets do not include alignments to all of the synsets in the English dataset, we convert the English dataset to each target language using the alignments as follows. We first exclude all subtrees whose roots are not included in the alignment between the WORDNET of the target language and English WORDNET. For each remaining subtree, we remove any node that is not included in the alignment. Then we remove all remaining nodes that are no longer connected to the root of the corresponding subtree. We describe the resulting dataset statistics in Table 8 in the Appendix. 3.2 Evaluation Metrics. As in previous work (Bansal et al., 2014; Mao et al., 2018), we report the ancestor F1 score 2PR / (P + R), where P = |IS_A_predicted ∩ IS_A_gold| / |IS_A_predicted| and R = |IS_A_predicted ∩ IS_A_gold| / |IS_A_gold|; IS_A_predicted and IS_A_gold denote the sets of predicted and gold ancestor relations, respectively. We report the mean precision (P), recall (R), and F1 score, averaged across the subtrees in the test set. 3.3 Models. In our experiments, we use pretrained models from the Huggingface library (Wolf et al., 2019). For the English dataset we experiment with BERT, BERT-Large, and ROBERTA-Large in the parenthood prediction module. We also experiment with multilingual BERT and language-specific pretrained models (detailed in Section 9 in the Appendix). We finetuned each model using three learning rates {1e-5, 1e-6, 1e-7}. For each model, we ran three trials using the learning rate that achieved the highest dev F1 score. In Section 4, we report the average scores over three trials. We include full results in Tables 13 and 15 in the Appendix. The code and datasets are available at https://github.com/cchen23/ctp .
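To make the evaluation concrete, here is a minimal sketch of the ancestor F1 of Section 3.2 for a single subtree, assuming each tree is given as a set of (parent, child) edges; it is an illustration, not the official scorer.

```python
def ancestor_set(edges):
    """All (ancestor, descendant) pairs implied by a tree's edges."""
    children = {}
    for parent, child in edges:
        children.setdefault(parent, set()).add(child)
    pairs = set()
    def collect(ancestor, node):
        for child in children.get(node, ()):
            pairs.add((ancestor, child))
            collect(ancestor, child)
    for node in children:
        collect(node, node)
    return pairs

def ancestor_f1(predicted_edges, gold_edges):
    """Ancestor F1 = 2PR / (P + R) over predicted vs. gold ancestor pairs."""
    pred, gold = ancestor_set(predicted_edges), ancestor_set(gold_edges)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0
```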
4 Results. 4.1 Main Results. Our approach, CTP, outperforms existing state-of-the-art models on the WORDNET subtree construction task. In Table 1 we provide a comparison of our results to previous work. Even without retrieved glosses, CTP with ROBERTA-Large in the parenthood prediction module achieves a higher F1 than previously published work. CTP achieves additional improvements when provided with the web-retrieved glosses described in Section 2.4. We compare different pretrained models for the parenthood prediction module, and provide these comparisons in Section 4.3. [Table 1: English Results, Comparison to Previous Work. Bansal et al. (2014): P 48.0, R 55.2, F1 51.4; Mao et al. (2018): P 52.9, R 58.6, F1 55.6; CTP (no glosses): P 67.3, R 62.0, F1 63.5; CTP (web glosses): P 69.3, R 66.2, F1 66.7. Our approach outperforms previous approaches on reconstructing WORDNET subtrees, even when the model is not given web-retrieved glosses.] 4.2 Web-Retrieved Glosses. In Table 2 we show the improvement in taxonomy construction with two types of glosses: glosses retrieved from the web (as described in Section 2.4), and glosses obtained directly from WORDNET. We consider using the glosses from WORDNET an oracle setting, since these glosses are directly generated from the gold taxonomies. Thus, we focus on the web-retrieved glosses as the main setting. Models produce additional improvements when given WORDNET glosses. These improvements suggest that reducing the noise in web-retrieved glosses could further improve automated taxonomy construction. 4.3 Comparison of Pretrained Models. For both settings (with and without web-retrieved glosses), CTP attains the highest F1 score when ROBERTA-Large is used in the parenthood prediction step. As we show in Table 3, the average F1 score improves both with increased model size and with switching from BERT to ROBERTA. [Table 2: English Results, Gloss Comparison on Test Set. CTP: P 67.3, R 62.0, F1 63.5; + web glosses: P 69.3, R 66.2, F1 66.7; + oracle glosses: P 84.0, R 83.8, F1 83.2. Adding web glosses improves performance over only using input terms. Models achieve additional improvements in subtree reconstruction when given oracle glosses from WORDNET, showing possibilities for improvement in retrieving web glosses.] [Table 3: English Results, Comparison of Pretrained Models on Test Set. CTP (BERT-Base): P 57.9, R 51.8, F1 53.4; CTP (BERT-Large): P 65.5, R 59.8, F1 61.4; CTP (ROBERTA-Large): P 67.3, R 62.0, F1 63.5. Larger models perform better, and ROBERTA outperforms BERT.] 4.4 Aligned Wordnets. We extend our results to the nine non-English alignments of the Bansal et al. (2014) dataset that we created. In Table 4 we compare our best model in each language to a random baseline. We detail the random baseline in Section 9 in the Appendix and provide results from all tested models in Section 17 in the Appendix. CTP's F1 score on non-English languages is substantially worse than its F1 score on English trees. Lower F1 scores in non-English languages are likely due to multiple factors. First, English pretrained language models generally perform better than models in other languages because of the additional resources devoted to the development of English models (see, e.g., Bender, 2011; Mielke, 2016; Joshi et al., 2020). Second, OPENMULTILINGUALWORDNET aligns wordnets to English WORDNET, but the subtrees contained in English WORDNET might not be the natural taxonomy in other languages. However, we note that scores across languages are not directly comparable, as dataset size and coverage vary across languages (as we show in Table 8). These results highlight the importance of evaluating on non-English languages, and the difference in available lexical resources between languages.
Furthermore, they provide strong baselines for future work in constructing wordnets in different languages. [Table 4: Multilingual WORDNET Test Results (P / R / F1). ca: Random Baseline 20.0/31.3/23.6, CTP (MBERT) 38.7/39.7/38.0; zh: Random Baseline 25.8/35.9/29.0, CTP (CHINESEBERT) 62.2/57.3/58.7; en: Random Baseline 8.9/22.2/12.4, CTP (ROBERTA-Large) 67.3/62.0/63.5; fi: Random Baseline 10.1/22.5/13.5, CTP (FINBERT) 47.9/42.6/43.8; fr: Random Baseline 22.1/34.4/25.9, CTP (FRENCHBERT) 51.3/49.1/49.1; it: Random Baseline 28.9/39.4/32.3, CTP (ITALIANBERT) 48.3/45.5/46.1; nl: Random Baseline 26.8/38.4/30.6, CTP (BERTJE) 44.6/44.8/43.7; pl: Random Baseline 23.4/33.6/26.8, CTP (POLBERT) 51.9/49.7/49.5; pt: Random Baseline 26.1/37.6/29.8, CTP (BERTIMBAU) 59.3/57.1/56.9; es: Random Baseline 27.0/37.2/30.5, CTP (BETO) 53.1/51.7/51.7. We extend our model to datasets in nine other languages, and evaluate our approach on these datasets. We use ISO 639-1 codes to indicate languages.] 5 Analysis. In this section we analyze the models both quantitatively and qualitatively. Unless stated otherwise, we analyze our model on the dev set and use ROBERTA-Large in the parenthood prediction step. 5.1 Models Predict Flatter Trees. In many error cases, CTP predicts a tree with edges that connect terms to their non-parent ancestors, skipping the direct parents. We show an example of this error in Figure 3. In this fragment (taken from one of the subtrees in the dev set), the model predicts a tree in which botfly and horsefly are direct children of fly, bypassing the correct parent gadfly. On the dev set, 38.8% of incorrect parenthood edges were cases of this type of error. [Figure 3: A fragment of a subtree from the WORDNET hierarchy. Orange indicates incorrectly predicted edges and blue indicates missed edges.] Missing edges result in predicted trees that are generally flatter than the gold tree. While all the gold trees have a height of 3 (4 nodes in the longest path from the root to any leaf), the predicted dev trees have a mean height of 2.61. Our approach scores the edges independently, without considering the structure of the tree beyond local parenthood edges. One potential way to address the bias towards flat trees is to also model the global structure of the tree (e.g., ancestor and sibling relations). 5.2 Models Struggle Near Leaf Nodes. [Table 5: Ancestor Edge Recall, Categorized by Descendant Node Depth d and Parent Edge Length l. l = 1: 81.2 (d = 1), 52.3 (d = 2), 39.7 (d = 3); l = 2: 74.4 (d = 2), 48.9 (d = 3); l = 3: 66.0 (d = 3). Ancestor edge prediction recall decreases with deeper descendant nodes and closer ancestor-descendant relations.] CTP generally makes more errors in predicting edges involving nodes that are farther from the root of each subtree. In Table 5 we show the recall of ancestor edges, categorized by the number of parent edges d between the subtree root and the descendant of each edge, and the number of parent edges l between the ancestor and descendant of each edge. The model has lower recall for edges involving descendants that are farther from the root (higher d). In permutation tests of the correlation between edge recall and d conditioned on l, 0 out of 100,000 permutations yielded a correlation at least as extreme as the observed correlation.
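The permutation tests reported here can be sketched as follows. This is a generic two-sided permutation test of a correlation, with the conditioning on l handled by running the test within each value of l; the exact procedure is an assumption, not the authors' script.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=100_000, seed=0):
    """Permutation test for a correlation between x and y: shuffle y,
    recompute the correlation, and report the fraction of permutations
    whose correlation is at least as extreme (in absolute value) as the
    observed one."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = np.corrcoef(x, y)[0, 1]
    hits = sum(
        abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= abs(observed)
        for _ in range(n_perm)
    )
    return hits / n_perm
```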
5.3 Subtrees Higher Up in WORDNET are Harder, and Physical Entities are Easier than Abstractions. Subtree performance also corresponds to the depth of the subtree in the entire WORDNET hierarchy. The F1 score is positively correlated with the depth of the subtree in the full WORDNET hierarchy, with a correlation of 0.27 (significant at p=0.004 using a permutation test with 100,000 permutations). The subtrees included in this task span many different domains, and can be broadly categorized into subtrees representing concrete entities (such as telephone) and those representing abstractions (such as sympathy). WORDNET provides this categorization using the top-level synsets physical_entity.n.01 and abstraction.n.06. These categories are direct children of the root of the full WORDNET hierarchy (entity.n.01), and split almost all WORDNET terms into two subsets. The model produces a mean F1 score of 60.5 on subtrees in the abstraction subsection of WORDNET, and a mean F1 score of 68.9 on subtrees in the physical_entity subsection. A one-sided Mann-Whitney rank test shows that the model performs systematically worse on abstraction subtrees than on physical entity subtrees (p=0.01). 5.4 Pretraining Corpus Covers Most Terms. [Figure 4: Frequency of terms in the WORDNET dataset in the pretraining corpus. Over 97% of terms in the Bansal et al. (2014) dataset occur at least once in the pretraining corpus. Over 80% of terms occur less than 50k times.] With models pretrained on large web corpora, the distinction between the settings with and without access to the web at test time is less clear, since large pretrained models can be viewed as a compressed version of the web. To quantify the extent to which the evaluation setting measures a model's capability to generalize to taxonomies consisting of unseen words, we count the number of times each term in the WORDNET dataset occurs in the pretraining corpus. We note that the WORDNET glosses do not directly appear in the pretraining corpus. In Figure 4 we show the distribution of the frequency with which the terms in the Bansal et al. (2014) dataset occur in the BERT pretraining corpus. [Footnote 2: Since the original pretraining corpus is not available, we follow Devlin et al. (2019) and recreate the dataset by crawling http://smashwords.com and Wikipedia.] We find that over 97% of the terms occur at least once in the pretraining corpus. However, the majority of the terms are not very common words, with over 80% of terms occurring less than 50k times. While this shows that the current setting does not measure model ability to generalize to completely unseen terms, we find that the model does not perform substantially worse on edges that contain terms that do not appear in the pretraining corpus. Furthermore, the model is able to do well on rare terms. Future work can investigate model ability to construct taxonomies from terms that are not covered in pretraining corpora. 5.5 WORDNET Contains Ambiguous Subtrees. [Figure 5: A fragment of a subtree from the WORDNET hierarchy. Orange indicates incorrectly predicted edges and blue indicates edges that were missed.] Some trees in the gold WORDNET hierarchy contain ambiguous edges. Figure 5 shows one example. In this subtree, the model predicts arteriography as a sibling of arthrography rather than as its child. The definitions of these two terms suggest why the model may have considered these terms as siblings: arteriograms produce images of arteries while arthrograms produce images of the inside of joints. In Figure 6 we show a second example of an ambiguous tree.
The model predicts good faith as a child of sincerity rather than as a child of honesty, but the correct hypernymy relation between these terms is unclear to the authors, even after referencing multiple dictionaries. These examples point to the potential of augmenting or improving the relations listed in WORDNET using semi-automatic methods. 5.6 Web-Retrieved Glosses Are Beneficial When They Contain Lexical Overlap. We compare the predictions of ROBERTA-Large, with and without web glosses, to understand what kind of glosses help. We split the parenthood edges in the gold trees into two groups based on the glosses: (1) lexical overlap (the parent term appears in the child gloss and/or the child term appears in the parent gloss) and (2) no lexical overlap (neither the parent term nor the child term appears in the other term's gloss). We find that for edges in the \"lexical overlap\" group, glosses increase the recall of the gold edges from 60.9 to 67.7.", "For edges in the \"no lexical overlap\" group, retrieval decreases the recall (edge recall changes from 32.1 to 27.3).", "We performed an ablation study in which we ablated either the pretrained language models for the parenthood prediction step or the tree reconciliation step.", "We ablated the pretrained language models in two ways.", "First, we used a one-layer LSTM on top of GloVe vectors instead of a pretrained language model as the input to the fine-tuning step, and then performed tree reconciliation as before.", "Second, we used a randomly initialized ROBERTA-Large model in place of a pretrained network, and then performed tree reconciliation as before.", "We ablated the tree reconciliation step by substituting the graph-based reconciliation step with a simpler thresholding step, where we output a parenthood relation between all pairs of words with softmax score greater than 0.5.", "We used the parenthood prediction scores from the fine-tuned ROBERTA-Large model, and substituted tree reconciliation with thresholding.", "In Table 6, we show the results of our ablation experiments.", "[Figure 6: A fragment of a subtree from the WORDNET hierarchy.] These results show that both steps (using pretrained language models for parenthood prediction and performing tree reconciliation) are", "important for taxonomy construction.", "Moreover, these results show that the incorporation of a new information source (knowledge learned by pretrained language models) produces the majority of the performance gains.", "To test generalization to large subtrees, we tested our models on the English environment and science taxonomies from SemEval-2016 Task 13 (Bordea et al., 2016a).", "Each of these taxonomies consists of a single large taxonomic tree with between 125 and 452 terms.", "Following Mao et al. (2018) and Shang et al. (2020), we used the medium-sized trees from Bansal et al. (2014) to train our models.", "During training, we excluded all medium-sized trees from the Bansal et al.
(2014) dataset that overlapped with the terms in the SemEval-2016 Task 13 environment and science taxonomies.", "In Table 7 we show the performance of the ROBERTA-Large CTP model.", "We report the Edge-F1 score rather than the Ancestor-F1 score in order to compare to previous work.", "Although the CTP model outperforms previous work in constructing medium-sized taxonomies, it is limited in its ability to generalize to large taxonomies.", "Future work can incorporate modeling of the global tree structure into CTP.", "Taxonomy induction has been studied extensively, with both pattern-based and distributional approaches.", "Typically, taxonomy induction involves hypernym detection, the task of extracting candidate terms from corpora, and hypernym organization, the task of organizing the terms into a hierarchy.", "While we focus on hypernym organization, many systems have studied the related task of hypernym detection.", "Traditionally, systems have used pattern-based features such as Hearst patterns to infer hypernym relations from large corpora (e.g., Hearst, 1992; Snow et al., 2005; Kozareva and Hovy, 2010).", "For example, Snow et al. (2005) propose a system that extracts pattern-based features from a corpus to predict hypernymy relations between terms.", "Kozareva and Hovy (2010) propose a system that similarly uses pattern-based features to predict hypernymy relations, in addition to harvesting relevant terms and using a graph-based longest-path approach to construct a legal taxonomic tree.", "Later work suggests that, for hypernymy detection tasks, pattern-based approaches outperform those based on distributional models (Roller et al., 2018).", "Subsequent work pointed out the sparsity that exists in pattern-based features derived from corpora, and showed that combining distributional and pattern-based approaches can improve hypernymy detection by addressing this problem (Yu et al., 2020).", "In this work we consider the task of organizing a set of terms into a medium-sized taxonomic tree.", "[Table 7: Generalization to large taxonomic trees (P / R / F1). Science (Averaged): CTP 29.4/28.8/29.1, Mao et al. (2018) 37.9/37.9/37.9, Shang et al. (2020) 84.0/30.0/44.0; Environment (Eurovoc): CTP 23.1/23.0/23.0, Mao et al. (2018) 32.3/32.3/32.3, Shang et al. (2020) 89.0/24.0/37.0.] Bansal et al. (2014) treat this as a structured learning problem and use belief propagation to incorporate", "siblinghood information.", "Mao et al. (2018) propose a reinforcement learning based approach that combines the stages of hypernym detection and hypernym organization.", "In addition to the task of constructing medium-sized WORDNET subtrees, they show that their approach can leverage global structure to construct much larger taxonomies from the SemEval-2016 Task 13 benchmark dataset, which contain hundreds of terms (Bordea et al., 2016b).", "Shang et al. (2020) apply graph neural networks and show that they improve performance in constructing large taxonomies in the SemEval-2016 Task 13 dataset.", "Another relevant line of work involves extracting structured declarative knowledge from pretrained language models.", "For instance, Bouraoui et al. (2019) showed that a wide range of relations can be extracted from pretrained language models such as BERT.", "Our work differs in that we consider tree structures and incorporate web glosses.", "Bosselut et al.
(2019) use pretrained models to generate explicit open-text descriptions of commonsense knowledge.", "Other work has focused on extracting knowledge of relations between entities (Petroni et al., 2019; Jiang et al., 2020).", "Blevins and Zettlemoyer (2020) use a similar approach to ours for word sense disambiguation, and encode glosses with pretrained models.", "Our experiments show that pretrained language models can be used to construct taxonomic trees.", "Importantly, the knowledge encoded in these pretrained language models can be used to construct taxonomies without additional web-based information.", "This approach produces subtrees with higher mean F1 scores than previous approaches, which used information from web queries.", "Supplementing the input terms with web-retrieved glosses on top of pretrained language models can produce improved taxonomic trees.", "The gain from accessing web glosses shows that incorporating both implicit knowledge of input terms and explicit textual descriptions of knowledge is a promising way to extract relational knowledge from pretrained models.", "Error analyses suggest specific avenues of future work, such as improving predictions for subtrees corresponding to abstractions, or explicitly modeling the global structure of the subtrees.", "Experiments on aligned multilingual WORDNET datasets emphasize that more work is needed in investigating the differences between taxonomic relations in different languages, and in improving pretrained language models in non-English languages.", "Our results provide strong baselines for future work on constructing taxonomies for different languages.", "While taxonomies (e.g., WORDNET) are often used as ground-truth data, they have been shown to contain offensive and discriminatory content (e.g., Broughton, 2019).", "Automatic systems created with pretrained language models can reflect and exacerbate the biases contained in their training corpora.", "More work is needed to detect and combat biases that arise when constructing and evaluating taxonomies.", "Furthermore, we used previously constructed alignments to extend our results to wordnets in multiple languages.", "While considering English WORDNET as the basis for the alignments allows for convenient comparisons between languages and is the standard method for aligning wordnets across languages, continued use of these alignments to evaluate taxonomy construction imparts undue bias towards conceptual relations found in English.", "We thank the members of the Berkeley NLP group and the anonymous reviewers for their insightful feedback.", "CC and KL are supported by National Science Foundation Graduate Research Fellowships.", "This research has been supported by DARPA under agreement HR00112020054.", "The content does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred." ]
[ "method", "method", "abstain", "abstain", "method", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Wenxuan Shi, Fei Li, Jingye Li, Hao Fei, Donghong Ji Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China {shiwenxuan,lifei_csnlp,theodorelee,hao.fei,dhji}@whu.edu.cn", "Abstract The state-of-the-art model for structured sentiment analysis casts the task as a dependency parsing problem, which has some limitations: (1) The label proportions for span prediction and span relation prediction are imbalanced.", "(2) The span lengths of sentiment tuple components may be very large in this task, which will further exacerbates the imbalance problem.", "(3) Two nodes in a dependency graph cannot have multiple arcs, therefore some overlapped sentiment tuples cannot be recognized.", "In this work, we propose nichetargeting solutions for these issues.", "First, we introduce a novel labeling strategy, which contains two sets of token pair labels, namely essential label set and whole label set.", "The essential label set consists of the basic labels for this task, which are relatively balanced and applied in the prediction layer.", "The whole label set includes rich labels to help our model capture various token relations, which are applied in the hidden layer to softly influence our model.", "Moreover, we also propose an effective model to well collaborate with our labeling strategy, which is equipped with the graph attention networks to iteratively refine token representations, and the adaptive multi-label classifier to dynamically predict multiple relations between token pairs.", "We perform extensive experiments on 5 benchmark datasets in four languages.", "Experimental results show that our model outperforms previous SOTA models by a large margin.", "1 1 Introduction Structured Sentiment Analysis (SSA), which aims to predict a structured sentiment graph as shown in Figure", "1(a), can be formulated into the problem of tuple extraction, where a tuple ( h, e, t, p ) denotes a holder h who expressed an expression e towards a target t with a polarity p .", "SSA is a more challenging task, because other related tasks only focus Corresponding author 1 Our code is available at https://github.com/ Xgswlg/TGLS Moscow Government has expressed the wish to import the Mongolian meat Neutral Holder Expression Target Moscow Government has expressed the wish to import the Mongolian meat Exp : Neutral Holder Holder Exp : Neutral Exp : Neutral Target Target Target Target", "on extracting part of tuple components or the text spans of the components are short.", "For example, Opinion Role Labeling (Katiyar and Cardie, 2016; Xia et al., 2021) does not include the extraction of sentiment polarities, and Aspect-Based Sentiment Analysis (ABSA) (Pontiki et al., 2014; Wang et al., 2016) extracts the aspect and opinion terms typically consisting of one or two words.", "The state-of-the-art SSA model is proposed by Barnes et al. 
(2021), which casts the SSA task as a dependency parsing problem and predicts all tuple components as a dependency graph (Figure", "1(b)).", "However, their method has some shortcomings.", "Taking Figure", "1(b) as an example, only 2 arcs (e.g., expressed→import and expressed→Moscow) are related to span linking relation prediction (i.e., the relations between expressions and holders or targets), while many more arcs are related to span prediction (e.g., import→the and import→meat).", "[Figure 2: The whole label set contains the labels for span prediction and span relation prediction, as well as the [CLS]-related labels that connect a sentinel [CLS] token with the holder, target and expression tokens.] Such an imbalanced labeling strategy", "will make the model pay more attention to span prediction but less to span relation prediction.", "Furthermore, since the span lengths of sentiment tuple components may be very large in the SSA task, the label imbalance problem becomes even more severe.", "Besides, the dependency parsing graph is not able to deal with multi-label classification, since it does not allow multiple arcs to share the same head and dependent tokens.", "Therefore, some overlapped sentiment tuples cannot be recognized.", "The statistics of span length and multi-label problems are listed in Table 1.", "To alleviate the label imbalance problem in Barnes et al. (2021), we propose a novel labeling strategy that consists of two parts: First, we design a set of labels called the essential label set (Figure", "1(c)), which can be considered the basic label set for decoding SSA tuples, since it only includes the labels that tag the boundary tokens of spans.", "As seen, the proportions of span prediction labels and span relation prediction labels are relatively balanced, so that we can mitigate the label imbalance problem and meanwhile keep the basic ability to extract sentiment tuples when the essential label set is learnt in the final prediction layer of our model.", "However, the labels that recognize non-boundary tokens of SSA components are also important.", "For instance, they can encode the relations between the tokens inside the spans, which may benefit the extraction of holders, expressions or targets with long text spans.", "To this end, we design another label set called the whole label set (Figure 2), which includes richer labels to fully utilize various information, such as the relations among boundary tokens, non-boundary tokens, tokens within a span, and tokens across different spans.", "Moreover, since the dependency-based method (Barnes et al., 2021) only considers the local relation between each pair of tokens, we add labels between [CLS] and the other tokens related to sentiment tuples into our whole label set, in order to utilize sentence-level global information.", "However, if the whole label set were directly applied to the output labels for training, the label imbalance problem might occur again.", "We instead employ the whole label set in a soft and implicit fashion by applying it in the hidden layer of our model.", "To collaborate well with our labeling strategy, we also propose an effective token graph model, namely TGLS (Token Graph with a novel Labeling Strategy), which uses rich
features such as words, part-of-speech tags and characters as inputs and yields contextualized word representations via BiLSTM and multilingual BERT (Devlin et al., 2018).", "In the hidden layer, we build a multi-view token graph, which has four views corresponding to different relations in the whole label set, and each view is a graph attention network (Velickovic et al., 2017) with token representations as the nodes.", "In the prediction layer, we introduce a novel adaptive multi-label classifier to extract all the sentiment tuples, no matter whether they are overlapped or not.", "We conduct extensive experiments on five benchmarks, including NoReC Fine (Øvrelid et al., 2020), MultiB EU, MultiB CA (Barnes et al., 2018), MPQA (Wiebe et al., 2005) and DS Unis (Toprak et al., 2010).", "The results show that our TGLS model outperforms the SOTA model by a large margin.", "In summary, our main contributions include: We design a novel labeling strategy to address the label imbalance issue in prior work.", "Concretely, we employ the whole label set and the essential label set in the hidden and prediction layers respectively, achieving a balance between label variety and label imbalance.", "We propose an effective token graph model to collaborate well with our labeling strategy, which learns the token-token relations via multi-view token graph networks and reasons over the labels between each pair of words using the adaptive multi-label classifier, for both overlapped and non-overlapped tuple extraction.", "The experimental results show that our model achieves SOTA performance on 5 datasets for structured sentiment analysis, especially in terms of end-to-end sentiment tuple extraction.", "The task of Structured Sentiment Analysis (SSA) can be divided into sub-tasks such as span extraction of the holder, target and expression, relation prediction between these elements, and polarity assignment.", "Some existing works in Opinion Mining used pipeline methods to first extract spans and then predict the relations, mostly on the MPQA dataset (Wiebe et al., 2005).", "For example, Katiyar and Cardie (2016) propose a BiLSTM-CRF model, which is the first such attempt using a deep learning approach, Zhang et al. (2019) propose a transition-based model which identifies opinion elements via human-designed transition actions, and Xia et al. (2021) propose a unified span-based model to jointly extract the spans and relations.", "However, all of these works ignore the polarity classification sub-task.", "In End2End Aspect-Based Sentiment Analysis (ABSA), there are also some attempts to unify several sub-tasks.", "For instance, Wang et al. (2016) augment the ABSA datasets with sentiment expressions, He et al. (2019) make use of this data and model the joint relations between several sub-tasks to learn common features, and Chen and Qian (2020) also exploit interactive information from each pair of sub-tasks (target extraction, expression extraction, sentiment classification).", "However, Wang et al. (2016) only annotate sentiment-bearing words, not phrases, and do not specify the relationship between target and expression; their data therefore may not be adequate for full structured sentiment analysis.", "Thus, Barnes et al.
(2021) propose a unified approach in which they formulate the structured sentiment analysis task as a dependency graph parsing task and jointly predict all components of a sentiment graph.", "However, as aforementioned, this direct transformation may be problematic, as it may introduce label imbalance in span and relation prediction.", "Thus, we propose an effective graph model with a novel labeling strategy, in which we employ a whole label set in the hidden layer to softly affect our model, and an essential label set in the prediction layer to address the imbalance issue.", "The design of our essential label set is inspired by the Handshaking Tagging Scheme (Wang et al., 2020), which is a token pair tagging scheme for entity and relation extraction.", "The handshaking tagging scheme involves only the labels related to the boundary tokens and enables a one-stage joint extraction of spans and relations.", "In our work, we modify the handshaking tagging scheme to use it for SSA.", "Furthermore, since the component spans of this task are relatively long, only utilizing the boundary tokens cannot make full use of the annotation information, so we propose a new label set, called the whole label set, which together with the essential label set constitutes our labeling strategy.", "Our essential label set only involves the labels related to the boundary tokens, therefore the label proportions for span prediction and span relation prediction are relatively balanced.", "Given the sentence \"Moscow Government has expressed the wish to import the Mongolian meat.\", the essential label set consists of the following token pair labels: Holder: Moscow→Government; Exp:Neutral: expressed→wish; Target: import→meat; Exp Head to Holder Head: expressed→Moscow; Exp Tail to Holder Tail: wish→Government; Exp Head to Target Head: expressed→import; Exp Tail to Target Tail: wish→meat, where Holder, Exp and Target represent the three components of a sentiment tuple, Head or Tail means the start or end token of a component, and Neutral denotes the polarity.", "Our whole label set involves both the labels related to boundary and non-boundary tokens, as well as the labels related to [CLS] and all tokens in the sentiment tuples.", "Thus, our whole label set can be divided into three groups: span labels, relation labels and [CLS]-related labels.", "Given the sentence in Figure 2, the whole label set includes the following labels: Span Label, e.g., import→Mongolian; Rel Label, e.g., Moscow→expressed; [CLS]-related Label, e.g., [CLS]→expressed, where the span and relation labels make our model aware of the token relations inside and across the spans of sentiment components, and the [CLS]-related labels help our model capture sentence-level global information.", "We apply the whole labels in the hidden layer to softly embed the above information into our model, in order to avoid the potential label imbalance issue.", "We first decode all the expression-holder and expression-target pairs that meet the constraints of the essential label set.", "In detail, we first get all component spans based on the span prediction labels (e.g.", "the Holder, Exp:Neutral and Target labels), then we decode an expression-to-holder or expression-to-target pair as long as it meets one of the corresponding relation prediction labels (e.g.
for expression-to-holder pairs, the labels are Exp Head to Holder Head and Exp Tail to Holder Tail).", "After decoding all the component pairs, we enumerate all possible triples from pairs with the same expression, and thus finally decode all the sentiment tuples.
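A minimal sketch of this decoding procedure is given below, assuming the predicted labels arrive as a dictionary of component spans and a set of token-pair relation labels; the names and data layout are illustrative assumptions, not the released implementation.

```python
from itertools import product

def decode_tuples(spans, rel_labels):
    """spans: {(head, tail): label} over boundary-token pairs, where the
    label is "Holder", "Target" or "Exp:<polarity>"; rel_labels: a set
    of (relation_name, token_i, token_j) triples.  A pair is kept if
    either its head-head or its tail-tail relation label is predicted."""
    holders = [s for s, lab in spans.items() if lab == "Holder"]
    targets = [s for s, lab in spans.items() if lab == "Target"]
    exps = {s: lab.split(":")[1] for s, lab in spans.items() if lab.startswith("Exp")}

    def linked(exp, span, role):
        return (f"Exp Head to {role} Head", exp[0], span[0]) in rel_labels \
            or (f"Exp Tail to {role} Tail", exp[1], span[1]) in rel_labels

    tuples = []
    for exp, polarity in exps.items():
        paired_holders = [h for h in holders if linked(exp, h, "Holder")]
        paired_targets = [t for t in targets if linked(exp, t, "Target")]
        # enumerate all triples from pairs sharing the same expression
        for h, t in product(paired_holders, paired_targets):
            tuples.append((h, exp, t, polarity))
    return tuples
```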
"In this section, we formally present our proposed TGLS model in detail (Figure 3), which mainly consists of four parts: the encoder layer, the multi-view token graph as the hidden layer, the adaptive multi-label classifier as the prediction layer, and the hierarchical learning strategy used to train the model.", "Consider the i-th token in a sentence with n tokens; we represent it by concatenating its token embedding e_i^word, part-of-speech (POS) embedding e_i^pos, lemma embedding e_i^lemma, and character-level embedding e_i^char together: w_i = e_i^word ⊕ e_i^pos ⊕ e_i^lemma ⊕ e_i^char (1), where ⊕ denotes the concatenation operation.", "The character-level embedding is generated by convolutional neural networks (CNN) (Kalchbrenner et al., 2014).", "Then, we employ a bi-directional LSTM (BiLSTM) to encode the vectorial token representations into contextualized word representations: h_i = BiLSTM(w_i) (2), where h_i is the token hidden representation.", "Moreover, in the same way as previous work (Barnes et al., 2021), we also enhance the token representations with pretrained contextualized embeddings from multilingual BERT (Devlin et al., 2018).", "In this section, we propose a novel multi-view token graph as our hidden layer, which includes four views (span graph, relation graph, [CLS]-related graph and vanilla GAT graph); each view is fully connected, with the attention scoring weights as graph edges and the token representations as graph nodes.", "Recall that the whole label set is applied in this layer, and it includes three groups of labels (span, relation and [CLS]-related labels).", "Thus, three views of the graph (span, relation and [CLS]-related) are used to digest information from the three groups of labels respectively, while one view (the vanilla GAT graph) is not assigned to any specific task, as in the vanilla graph attention network (GAT) (Velickovic et al., 2017).", "Formally, we represent the latent token graph G as follows: G = (V, S^G_o, S^G_s, S^G_r, S^G_c) (3), where superscript G denotes the graph layer, V is the set of tokens, S^G_o is the attention scoring matrix in the vanilla GAT, and S^G_s, S^G_r and S^G_c are the attention scoring matrices used to capture information from the span, relation and [CLS]-related labels respectively.", "Without loss of generality, we employ S^G = {S^G_o, S^G_s, S^G_r, S^G_c} to uniformly denote the four matrices.", "In this section, we introduce the process by which we induce the edges of our multi-view token graphs (i.e.", "the four attention scoring matrices S^G) using a mechanism of attention scoring.", "Attention Scoring. Our attention matrices are produced by a mechanism of attention scoring which takes two token representations h_i, h_j as input; for the attention matrix corresponding to a certain view v ∈ {o, s, r, c}, we first map the tokens to q_{v,i} and k_{v,j} with two multi-layer perceptrons (MLP): q_{v,i}, k_{v,j} = MLP^q_v(h_i), MLP^k_v(h_j) (4).", "Then we apply the technique of Rotary Position Embedding (RoPE) (Su et al., 2021) to encode relative position information.", "Thus, for the graph of view v, the attention score S^G_{v,ij} between tokens i and j is calculated as follows: S^G_{v,ij} = (q_{v,i})^T R_{j-i} k_{v,j} (5), where R_{j-i} incorporates explicit relative positional information into the attention score S^G_{v,ij}.", "In the same way as S^G_{v,ij}, we can produce the scores of all views and all token pairs, thus inducing the whole graph edges S^G: S^G = {S^G_{v,ij} | v ∈ {o, s, r, c}, 1 ≤ i, j ≤ n} (6), where n is the length of the sentence.", "The process by which the whole label set is learnt by the attention scoring matrices S^G_s, S^G_r and S^G_c through a multi-label adaptive-threshold loss will be introduced in Section 4.4.
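For concreteness, the rotary scoring of Eq. (4)-(5) can be sketched as below, treating the MLP outputs as given vectors; rotating both sides makes the dot product depend only on the offset j - i, which is the point of RoPE (Su et al., 2021). This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def rotary(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of x by position-dependent angles
    so that rotary(q, i) @ rotary(k, j) realizes (q)^T R_{j-i} k."""
    d = x.shape[-1]                           # feature size, must be even
    theta = base ** (-np.arange(0, d, 2) / d)
    angles = pos * theta
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def attention_score(q_i, k_j, i, j):
    """Eq. (5): S_{v,ij} = (q_{v,i})^T R_{j-i} k_{v,j}."""
    return float(rotary(q_i, i) @ rotary(k_j, j))
```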
"Considering that the attention scoring matrix S^G now fuses rich information, we naturally apply multi-hop reasoning to obtain more informative token representations.", "Concretely, we first apply a softmax to our adjacency attention matrix S^G; then the representation u_i^{l+1} of token i at the (l+1)-th layer, which takes the representations from the previous layer as input and outputs the updated representations, is computed as: A_v = Softmax(S^G_v), v ∈ {o, s, r, c} (7); u_i^{l+1} = σ( (1/N) Σ_v Σ_{j ∈ N^v_i} A_{v,ij} W^v_l u^l_j ) (8), where W^v_l is a trainable weight, N^v_i is the neighborhood of token i in the graph of view v, and σ is the ReLU activation function.", "Considering that the previous SOTA model (Barnes et al., 2021) is not able to deal with multi-label classification, as aforementioned, we propose a novel adaptive multi-label classifier as our prediction layer to identify the possible essential labels for each token pair.", "Firstly, we take a shortcut connection between the outputs of the encoder layer and the graph layer to get the final representation c_i = h_i ⊕ u_i for each token.", "Then, taking c_i as input, we calculate the attention scoring matrices S^P via the mechanism of attention scoring (cf.", "Eq.(4),", "Eq.(5) and", "Eq.(6)): S^P = {S^P_r | r ∈ R_e} (9), where superscript P denotes the prediction layer and R_e denotes the essential label set.", "Then, we introduce a technique of adaptive thresholding, which produces a token pair dependent threshold to enable the prediction of the labels for each token pair.", "Adaptive Thresholding. For a certain token pair with representations c_i, c_j, the token pair dependent threshold TH^P_{ij} and the whole TH^P are calculated as follows: TH^P_{ij} = (q^TH_i)^T R_{j-i} k^TH_j, TH^P = {TH^P_{ij} | 1 ≤ i, j ≤ n} (10), where q^TH_i = W_q h_i + b_q and k^TH_j = W_k h_j + b_k, with W_q, W_k, b_q and b_k being trainable weight and bias matrices, and R_{j-i} calculated in the same way as in", "Eq.(5), incorporating explicit relative positional information.", "Formally, for a certain token pair c_i, c_j, the essential label set is predicted by the following equation: Ω_ij = {r | S^P_{r,ij} > TH^P_{ij}, r ∈ R_e} (11), where R_e denotes the essential label set and Ω_ij is the set of predicted labels for the token pair c_i, c_j.", "In this section, we propose a novel loss function, namely the multi-label adaptive-threshold loss, to enable a hierarchical training process for our model and our labeling strategy (i.e., the whole label set is learnt by S^G_s, S^G_r and S^G_c in the hidden layer, and the essential label set by S^P in the prediction layer); it is based on a variant of Circle loss (Sun et al., 2020) [Footnote 2: The variant of Circle loss was proposed by Su on the website https://kexue.fm/archives/7359.], the difference being that we replace the fixed global threshold with the adaptive token pair dependent threshold, enabling a flexible and selective learning of more useful information from the whole label set.", "Take the hidden layer as an example.", "We also implement adaptive thresholding (cf.", "Eq.(10)) in the hidden layer, where we compute all the token pair dependent thresholds TH^G = {TH^G_{ij} | 1 ≤ i, j ≤ n} by taking the token representations h_i and h_j as input.", "Then, the multi-label adaptive-threshold loss in the hidden", "layer is calculated as follows: L_w = Σ_i Σ_{j>i} log( e^{TH^G_{ij}} + Σ_{r ∈ Ω^{ij}_neg} e^{S^G_{r,ij}} ) + Σ_i Σ_{j>i} log( e^{-TH^G_{ij}} + Σ_{r ∈ Ω^{ij}_pos} e^{-S^G_{r,ij}} ) (12), where Ω^{ij}_pos ⊆ R_w and Ω^{ij}_neg ⊆ R_w are the positive and negative classes involving whole labels that do or do not hold between tokens i and j.", "When minimizing L_w, the loss pushes the attention score S^G_{r,ij} above the threshold TH^G_{ij} if the token pair possesses the label, and pulls it below when it does not.", "In a similar way we can calculate the loss L_e in the prediction layer by taking TH^P and S^P as the inputs of the loss function. [Footnote 3: As aforementioned in Section 4.2, three of the attention scoring matrices and the three groups of whole labels have a one-to-one relationship, so here we can index the three matrices with the whole labels.]", "Thus the whole loss of our model is calculated as follows: L_all = L_e + λ L_w (13), where λ is a hyperparameter to adjust the ratio of the two losses.
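The loss of Eq. (12) can be sketched as follows (a PyTorch-style sketch under the sign convention reconstructed above; the tensor layout is an assumption). Scores of gold labels are pushed above the pair's adaptive threshold and all other scores below it, with -inf entries dropping out of the logsumexp.

```python
import torch

def adaptive_threshold_loss(scores, thresholds, targets):
    """scores: (pairs, labels) attention scores S for each token pair;
    thresholds: (pairs,) token-pair dependent thresholds TH;
    targets: (pairs, labels) 0/1 indicators of the gold labels."""
    th = thresholds.unsqueeze(-1)                      # (pairs, 1)
    neg_inf = torch.full_like(scores, float("-inf"))
    pos = targets.bool()
    neg_scores = torch.where(pos, neg_inf, scores)     # keep non-gold labels
    pos_scores = torch.where(pos, -scores, neg_inf)    # keep gold labels, negated
    # log(e^{TH} + sum_neg e^{S}): push negative scores below the threshold
    loss_neg = torch.logsumexp(torch.cat([th, neg_scores], dim=-1), dim=-1)
    # log(e^{-TH} + sum_pos e^{-S}): push positive scores above the threshold
    loss_pos = torch.logsumexp(torch.cat([-th, pos_scores], dim=-1), dim=-1)
    return (loss_neg + loss_pos).mean()
```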
"For comparison with previous SOTA work (Barnes et al., 2021), we perform experiments on five structured sentiment datasets in four languages, including the multi-domain professional reviews of NoReC Fine (Øvrelid et al., 2020) in Norwegian, the hotel reviews of MultiB EU and MultiB CA (Barnes et al., 2018) in Basque and Catalan respectively, the news dataset MPQA (Wiebe et al., 2005) in English, and the reviews of online universities and e-commerce in DS Unis (Toprak et al., 2010) in English.", "For fair comparison, we use word2vec skip-gram embeddings openly available from the NLPL vector repository (Kutuzov et al., 2017) [Footnote 4: http://vectors.nlpl.eu/repository] and enhance token representations with multilingual BERT (Devlin et al., 2018), which has 12 transformer blocks, 12 attention heads, and 768 hidden units.", "Our network weights are optimized with Adam, and we also use a Cosine Annealing Warm Restarts learning rate schedule (Loshchilov and Hutter, 2016).", "We fix the word embeddings during the training process.", "The char embedding size is set to 100.", "The dropout rates of the embeddings and of the other network components are set to 0.4 and 0.3 respectively.", "We employ 4-layer BiLSTMs with the output size set to 400, and 2 layers for multi-hop reasoning with the output size set to 768.", "The learning rate is 3e-5 and the batch size is 8.", "The hyperparameter λ in Eq.(13) is set to 0.25 (cf.", "Section 6.2).", "We use a GeForce RTX 3090 to train our model for at most 100 epochs and choose the model with the highest SF1 score on the validation set to output results on the test set.", "RACL-BERT: Chen and Qian (2020) propose a relation-aware collaborative learning framework for end2end sentiment analysis which models the interactive relations between each pair of sub-tasks (target extraction, expression extraction, sentiment classification).", "Barnes et al. (2021) reimplement RACL as a baseline for the SSA task in their work.", "Head-first and Head-final [Footnote 5: https://github.com/jerbarnes/sentiment_graphs]: Barnes et al. (2021) cast structured sentiment analysis as a dependency parsing task and apply a reimplementation of the neural parser by Dozat and Manning (2018), where the main architecture of the model is based on a biaffine classifier.", "Head-first and Head-final are two models with different setups of the parsing graph.", "Following previous SOTA work (Barnes et al., 2021), we use Span F1, Targeted F1 and two Sentiment Graph metrics to measure the experimental results.", "In detail, Span F1 evaluates how well these models are able to identify the holders, targets, and expressions.", "Targeted F1 requires the exact extraction of the correct target and the corresponding polarity.", "The Sentiment Graph metrics include two F1 scores, Non-polar Sentiment Graph F1 (NSF1) and Sentiment Graph F1 (SF1), which aim to measure the overall performance of a model in capturing the full sentiment graph (Figure", "1(a)).", "For NSF1, each sentiment graph is a tuple of (holder, target, expression), while SF1 adds the polarity (holder, target, expression, polarity).", "A true positive is defined as an exact match at graph level, weighting the overlap in predicted and gold spans for each element, averaged across all three spans.", "Moreover, for ease of analysis, we add an Overall Span F1 score, which evaluates how well these models identify all three elements of a sentiment graph with a token-level F1 score.", "In this section, we introduce the main experimental results compared with three state-of-the-art models: RACL-BERT (Chen and Qian, 2020) and the Head-first and Head-final models (Barnes et al., 2021).", "Table 2 shows that in most cases our model performs better than the other baselines in terms of the Span F1 metrics across all datasets.", "The average improvement (+1.4) in Overall Span F1 score proves the effectiveness of our model in span extraction.
"Besides, there are some significant improvements, such as extracting holders on DS Unis (+6.3) and extracting expressions on NoReC Fine (+4.7), but the result for extracting expressions on DS Unis (-2.9) is poor.", "As for the Targeted F1 metric, although the Head-first model performs well on MPQA, our TGLS model is clearly more robust, as it achieves superior performance on the other 4 datasets.", "There are also extremely significant improvements, such as on NoReC Fine (+6.2) and on MultiB CA (+5.6), which proves the capacity of our model for exact prediction of the target and the corresponding polarity.", "The last two metrics, NSF1 and SF1, are important for comprehensively examining span, relation and polarity predictions; our TGLS model achieves superior performance throughout all datasets in both NSF1 and SF1 score, especially on NoReC Fine (+7.2 and +6.4).", "The average improvement (+4.5) in SF1 score verifies the excellent ability of our model in end-to-end sentiment tuple extraction.", "In this section, we conduct extensive ablation studies on NoReC Fine to better understand the independent contributions of different components in terms of Span Overall F1, Targeted F1 and SF1 scores.", "Firstly, we remove each view of our graphs separately.", "As shown in Table 3, we observe that the [CLS]-related graph is effective in all three metrics, which proves the importance of utilizing sentence-level global information.", "As we assumed, the span graph contributes more to the performance of span extraction (Span Overall F1), while the relation graph contributes more to end-to-end sentiment tuple extraction (SF1).", "We also observe that the vanilla GAT graph makes a considerable improvement in SF1 score.", "Then, we test the effectiveness of the Rotary Position Embedding (RoPE) (Su et al., 2021).", "The results in Table 3 demonstrate that RoPE makes our model more sensitive to relative positional information, since it significantly improves the performance of exact target extraction (Targeted F1).", "Last, we replace the adaptive threshold with a fixed global threshold and observe that the performance drops drastically in all three metrics; this suggests that the adaptive thresholding mechanism is crucial for our model, since its flexibility allows the model to selectively learn more useful information for the SSA task from the whole labels.", "In this section we perform a deeper analysis of the models in order to answer three research questions.", "Experimental results in Table 2 show that our model performs significantly better in SF1 score, which to some extent proves that our model can ensure the efficiency of relation extraction.", "However, there is no metric that directly quantifies the ability in relation extraction, and it is still a worthy question to explore how much of the improvement comes from our new model and how much from our new labeling strategy.", "To answer this question, we replace our labels with the dependency-parsing-based labels of the head-final setting (Barnes et al., 2021) and experiment on all datasets in terms of a new relation prediction metric, where a true positive is defined as any span pair that overlaps the gold span pair and has the same relation.", "Table 4 shows that our new model achieves superior relation prediction performance compared to the previous SOTA model (Barnes et al., 2021).", "Besides, with the new labeling strategy, we can see that our model significantly improves the performance on all datasets compared with the model with the substituted dependency-parsing-based labels.
compared with the model with the replaced dependency-parsing-based labels.", "In this section, we experiment on five datasets to heuristically search for the appropriate value of the hyperparameter α (cf. Eq. (13)).", "Figure 4 shows that all datasets achieve a higher SF1 score with α between 0.1 and 0.5.", "We ended up fixing α to 0.25, since most datasets yield optimal results around this value.", "In addition, it is worth noting that when α is set to 0, which means the whole labels are completely removed, the performance drops considerably, which once again proves the effectiveness of learning whole labels in the hidden layer.", "In this section, we experiment on NoReC Fine to further explore whether whole labels contribute to long span identification.", "Figure 5(a) evaluates the Expression F1 scores with respect to different expression lengths; we find that whole labels help most on expressions with longer lengths.", "In Figure 5(b), we also report the SF1 scores with respect to different distances, that is, from the leftmost token in a tuple to the rightmost token, which supports a similar conclusion.", "In this paper, we propose a token graph model with a novel labeling strategy, consisting of the whole and essential label sets, to extract sentiment tuples for structured sentiment analysis.", "Our model is capable of modeling both global and local token pair interactions by jointly predicting whole labels in the hidden layer and essential labels in the output layer.", "More importantly, our modeling strategy is able to alleviate the label imbalance problem when using token-graph-based approaches for SSA.", "Experimental results show that our model substantially outperforms SOTA baselines and improves the identification of sentiment components with long spans.", "We believe that our labeling strategy and model can be well extended to other structured prediction tasks.", "We thank all the reviewers for their insightful comments.", "This work is supported by the National Natural Science Foundation of China (No. 62176187), the National Key Research and Development Program of China (No. 2017YFC1200500), the Research Foundation of Ministry of Education of China (No. 18JZD015), the Youth Fund for Humanities and Social Science Research of Ministry of Education of China (No. 22YJCZH064), the General Project of Natural Science Foundation of Hubei Province (No. 2021CFB385)." ]
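The ablation in this record credits RoPE (Su et al., 2021) with making the model sensitive to relative position. The following minimal NumPy sketch illustrates why: after each vector is rotated by position-dependent angles, the inner product of two rotated vectors depends only on their positional offset. This is an illustrative sketch, not the TGLS implementation; the dimension, base value and function names are assumptions.

    import numpy as np

    def rope(x, pos, base=10000.0):
        # Rotate vector x (even dimension d) by position-dependent angles:
        # the pair (x[i], x[i + d/2]) is rotated by pos * base**(-i / (d/2)).
        d = x.shape[0]
        half = d // 2
        freqs = base ** (-np.arange(half) / half)
        ang = pos * freqs
        x1, x2 = x[:half], x[half:]
        return np.concatenate([x1 * np.cos(ang) - x2 * np.sin(ang),
                               x1 * np.sin(ang) + x2 * np.cos(ang)])

    rng = np.random.default_rng(0)
    q, k = rng.normal(size=64), rng.normal(size=64)
    # Both pairs below have relative offset 4, so their scores coincide;
    # this offset-dependence is what RoPE injects into token-pair scoring.
    print(np.isclose(rope(q, 3) @ rope(k, 7), rope(q, 10) @ rope(k, 14)))  # True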
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "method", "objective", "method", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "method", "method", "other", "other", "abstain", "method", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "objective", "other", "other" ]
[ "Learning high-quality sentence representations benefits a wide range of natural language processing tasks.", "Though BERT-based pre-trained language models achieve high performance on many downstream tasks, the native derived sentence representations are proved to be collapsed and thus produce a poor performance on the semantic textual similarity (STS) tasks.", "In this paper, we present ConSERT, a Con trastive Framework for Self-Supervised SE ntence R epresentation T ransfer, that adopts contrastive learning to fine-tune BERT in an unsupervised and effective way.", "By making use of unlabeled texts, ConSERT solves the collapse issue of BERT-derived sentence representations and make them more applicable for downstream tasks.", "Experiments on STS datasets demonstrate that ConSERT achieves an 8% relative improvement over the previous state-of-the-art, even comparable to the supervised SBERT-NLI.", "And when further incorporating NLI supervision, we achieve new state-of-the-art performance on STS tasks.", "Moreover, ConSERT obtains comparable results with only 1000 samples available, showing its robustness in data scarcity scenarios.", "Sentence representation learning plays a vital role in natural language processing tasks (Kiros et al., 2015; Hill et al., 2016; Conneau et al., 2017; Cer et al., 2018).", "Good sentence representations benefit a wide range of downstream tasks, especially for computationally expensive ones, including large-scale semantic similarity comparison and information retrieval.", "Recently, BERT-based pre-trained language models have achieved high performance on many Work done during internship at Meituan Inc.", "The first two authors contribute equally.", "Weiran Xu is the corresponding author.", "downstream tasks with additional supervision.", "However, the native sentence representations derived from BERT 1 are proved to be of low-quality (Reimers and Gurevych, 2019; Li et al., 2020).", "As shown in Figure 1a, when directly adopt BERT-based sentence representations to semantic textual similarity (STS) tasks, almost all pairs of sentences achieved a similarity score between 0.6 to 1.0 , even if some pairs are regarded as completely unrelated by the human annotators.", "In other words, the BERT-derived native sentence representations are somehow collapsed (Chen and He, 2020), which means almost all sentences are mapped into a small area and therefore produce high similarity.", "Such phenomenon is also observed in several previous works (Gao et al., 2019; Wang et al., 2019; Li et al., 2020).", "They find the word representation space of BERT is anisotropic, the high-frequency words are clustered and close to the origin, while low-frequency words disperse sparsely.", "When averaging token embeddings, those high-frequency words dominate the sentence representations, inducing biases against their real semantics 2 .", "As a 1 Typically, we take the output of the [CLS] token or average token embeddings at the last few layers as the sentence representations.", "result, it is inappropriate to directly apply BERT's native sentence representations for semantic matching or text retrieval.", "Traditional methods usually fine-tune BERT with additional supervision.", "However, human annotation is costly and often unavailable in real-world scenarios.", "To alleviate the collapse issue of BERT as well as reduce the requirement for labeled data, we propose a novel sentence-level training objective based on contrastive learning (He et al., 2020; Chen et al., 2020a,b).", "By 
encouraging two augmented views from the same sentence to be closer while keeping views from other sentences away, we reshape the BERT-derived sentence representation space and successfully solve the collapse issue (shown in Figure 1b).", "Moreover, we propose multiple data augmentation strategies for contrastive learning, including adversarial attack (Goodfellow et al., 2014; Kurakin et al., 2016), token shuffling, cutoff (Shen et al., 2020) and dropout (Hinton et al., 2012), that effectively transfer the sentence representations to downstream tasks.", "We name our approach ConSERT, a Contrastive Framework for Sentence Representation Transfer.", "ConSERT has several advantages over previous approaches.", "Firstly, it introduces no extra structure or specialized implementation during inference.", "The parameter size of ConSERT stays the same as BERT's, making it easy to use.", "Secondly, compared with pre-training approaches, ConSERT is more efficient.", "With only 1,000 unlabeled texts drawn from the target distribution (which is easy to collect in real-world applications), we achieve a 35% relative performance gain over BERT, and the training stage takes only a few minutes (1-2k steps) on a single V100 GPU.", "Finally, it includes several effective and convenient data augmentation methods with minimal semantic impact.", "Their effects are validated and analyzed in the ablation studies.", "Our contributions can be summarized as follows: 1) We propose a simple but effective sentence-level training objective based on contrastive learning.", "It mitigates the collapse of BERT-derived representations and transfers them to downstream tasks.", "2) We explore various effective text augmentation strategies to generate views for contrastive learning and analyze their effects on unsupervised sentence representation transfer.", "3) With only fine-tuning on unsupervised target datasets, our approach achieves significant improvement on STS tasks.", "When further incorporating NLI supervision, our approach achieves new state-of-the-art performance.", "We also show the robustness of our approach in data scarcity scenarios and provide an intuitive analysis of the transferred representations.", "Supervised Approaches. Several works use supervised datasets for sentence representation learning.", "Conneau et al. 
(2017) find the supervised Natural Language Inference (NLI) task useful for training good sentence representations.", "They use a BiLSTM-based encoder and train it on two NLI datasets, Stanford NLI (SNLI) (Bowman et al., 2015) and Multi-Genre NLI (MNLI) (Williams et al., 2018).", "Universal Sentence Encoder (Cer et al., 2018) adopts a Transformer-based architecture and uses the SNLI dataset to augment the unsupervised training.", "SBERT (Reimers and Gurevych, 2019) proposes a siamese architecture with a shared BERT encoder and is also trained on the SNLI and MNLI datasets.", "Self-supervised Objectives for Pre-training. BERT (Devlin et al., 2019) proposes a bidirectional Transformer encoder for language model pre-training.", "It includes a sentence-level training objective, namely next sentence prediction (NSP), which predicts whether two sentences are adjacent or not.", "However, NSP has been shown to be weak and to contribute little to the final performance (Liu et al., 2019).", "After that, various self-supervised objectives were proposed for pre-training BERT-like sentence encoders.", "Cross-Thought (Wang et al., 2020) and CMLM (Yang et al., 2020) are two similar objectives that recover masked tokens in one sentence conditioned on the representations of its contextual sentences.", "SLM (Lee et al., 2020) proposes an objective that reconstructs the correct sentence ordering given shuffled sentences as input.", "However, all these objectives need a document-level corpus and are thus not applicable to downstream tasks with only short texts.", "Unsupervised Approaches. BERT-flow (Li et al., 2020) proposes a flow-based approach that maps BERT embeddings to a standard Gaussian latent space, where embeddings are more suitable for comparison.", "Our code is available at https://github.com/yym6472/ConSERT.", "However, this approach introduces extra model structures and needs specialized implementation, which may limit its application.", "Contrastive Learning for Visual Representation Learning. Recently, contrastive learning has become a very popular technique in unsupervised visual representation learning, with solid performance (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b).", "They believe that a good representation should be able to identify the same object while distinguishing it from other objects.", "Based on this intuition, they apply image transformations (e.g. cropping, rotation, cutout, etc.) to randomly generate two augmented versions for each image and make them close in the representation space.", "Such approaches can be regarded as modeling invariance to the input samples.", "Chen et al. 
(2020a) propose SimCLR, a simple framework for contrastive learning.", "They use the normalized temperature-scaled cross-entropy loss (NT-Xent) as the training loss, which is also called InfoNCE in the previous literature (Hjelm et al., 2018).", "Contrastive Learning for Textual Representation Learning. Recently, contrastive learning has been widely applied in NLP tasks.", "Many works use it for language model pre-training.", "IS-BERT (Zhang et al., 2020) proposes to add 1-D convolutional neural network (CNN) layers on top of BERT and to train the CNNs by maximizing the mutual information (MI) between the global sentence embedding and its corresponding local context embeddings.", "CERT (Fang and Xie, 2020) adopts a similar structure to MoCo (He et al., 2020) and uses back-translation for data augmentation.", "However, the momentum encoder needs extra memory and back-translation may produce false positives.", "BERT-CT (Carlsson et al., 2021) uses two individual encoders for contrastive learning, which also needs extra memory.", "Besides, they only sample 7 negatives, resulting in low training efficiency.", "DeCLUTR (Giorgi et al., 2020) adopts the architecture of SimCLR and jointly trains the model with a contrastive objective and a masked language model objective.", "However, they only use spans for contrastive learning, which is semantically fragmented.", "CLEAR (Wu et al., 2020) uses the same architecture and objectives as DeCLUTR.", "Both of them are used to pre-train the language model, which needs a large corpus and takes a lot of resources.", "In this section, we present ConSERT for sentence representation transfer.", "Given a BERT-like pre-trained language model M and an unsupervised dataset D drawn from the target distribution, we aim at fine-tuning M on D to make the sentence representations more task-relevant and applicable to downstream tasks.", "We first present the general framework of our approach, then we introduce several data augmentation strategies for contrastive learning.", "Finally, we discuss three ways to further incorporate supervision signals.", "Our approach is mainly inspired by SimCLR (Chen et al., 2020a).", "As shown in Figure 2, there are three major components in our framework: a data augmentation module that generates different views for input samples at the token embedding layer.", "A shared BERT encoder that computes sentence representations for each input text.", "During training, we use the average pooling of the token embeddings at the last layer to obtain sentence representations.", "A contrastive loss layer on top of the BERT encoder.", "It maximizes the agreement between one representation and its corresponding version that is augmented from the same sentence while keeping it distant from other sentence representations in the same batch.", "For each input text x, we first pass it to the data augmentation module, in which two transformations T_1 and T_2 are applied to generate two versions of token embeddings: e_i = T_1(x), e_j = T_2(x), where e_i, e_j ∈ ℝ^(L×d), L is the sequence length and d is the hidden dimension.", "After that, both e_i and e_j will be encoded by the multi-layer transformer blocks in BERT, producing the sentence representations r_i and r_j through average pooling.", "Following Chen et al. 
(2020a), we adopt the normalized temperature-scaled cross-entropy loss (NT-Xent) as the contrastive objective.", "During each training step, we randomly sample N texts from D to construct a mini-batch, resulting in 2N representations after augmentation.", "Each data point is trained to find its counterpart among the 2(N−1) in-batch negative samples: L_{i,j} = −log( exp(sim(r_i, r_j)/τ) / Σ_{k=1}^{2N} 1[k≠i] exp(sim(r_i, r_k)/τ) ) (1), where sim(·) denotes the cosine similarity function, τ controls the temperature and 1[·] is the indicator function.", "Finally, we average all 2N in-batch classification losses to obtain the final contrastive loss L_con.", "We explore four different data augmentation strategies to generate views for contrastive learning, including adversarial attack (Goodfellow et al., 2014; Kurakin et al., 2016), token shuffling, cutoff (Shen et al., 2020) and dropout (Hinton et al., 2012), as illustrated in Figure 3.", "Adversarial Attack. Adversarial training is generally used to improve the model's robustness.", "It generates adversarial samples by adding a worst-case perturbation to the input sample.", "We implement this strategy with Fast Gradient Value (FGV) (Rozsa et al., 2016), which directly uses the gradient to compute the perturbation and is thus faster than two-step alternative methods.", "Note that this strategy is only applicable when jointly training with supervision, since it relies on the supervised loss to compute adversarial perturbations.", "Token Shuffling. In this strategy, we aim to randomly shuffle the order of the tokens in the input sequences.", "Given the bag-of-words nature of the transformer architecture, the position encoding is the only source of sequential information.", "Thus, similar to Lee et al. (2020), we implement this strategy by passing the shuffled position ids to the embedding layer while keeping the order of the token ids unchanged.", "Cutoff. Shen et al. 
(2020) propose a simple and efficient data augmentation strategy called cutoff.", "They randomly erase some tokens (for token cutoff), feature dimensions (for feature cutoff), or token spans (for span cutoff) in the L × d feature matrix.", "In our experiments, we only use token cutoff and feature cutoff and apply them to the token embeddings for view generation.", "Dropout. Dropout is a widely used regularization method that avoids overfitting.", "However, in our experiments, we also show its effectiveness as an augmentation strategy for contrastive learning.", "For this setting, we randomly drop elements in the token embedding layer with a specific probability and set their values to zero.", "Note that this strategy is different from Cutoff, since each element is considered individually.", "Table 1: The statistics of STS datasets. Train samples: STSb 5,749, SICK-R 4,500 (0 for STS12-STS16); valid samples: STSb 1,500, SICK-R 500 (0 for STS12-STS16); test samples: STS12 3,108, STS13 1,500, STS14 3,750, STS15 3,000, STS16 1,186, STSb 1,379, SICK-R 4,927; unlabeled texts: STS12 6,216, STS13 3,000, STS14 7,500, STS15 17,000, STS16 18,366, STSb 17,256, SICK-R 19,854 (total 89,192).", "Besides unsupervised transfer, our approach can also be incorporated with supervised learning.", "We take the NLI supervision as an example.", "It is a sentence pair classification task, where the model is trained to distinguish the relation between two sentences among contradiction, entailment and neutral.", "The classification objective can be expressed as follows: f = Concat(r_1, r_2, |r_1 − r_2|), L_ce = CrossEntropy(Wf + b, y) (2), where r_1 and r_2 denote the two sentence representations.", "We propose three ways of incorporating additional supervised signals: Joint training (joint): we jointly train the model with the supervised and unsupervised objectives, L_joint = L_ce + λ L_con, on the NLI dataset.", "λ is a hyper-parameter to balance the two objectives.", "Supervised training then unsupervised transfer (sup-unsup): we first train the model with L_ce on the NLI dataset, then use L_con to fine-tune it on the target dataset.", "Joint training then unsupervised transfer (joint-unsup): we first train the model with L_joint on the NLI dataset, then use L_con to fine-tune it on the target dataset.", "To verify the effectiveness of our proposed approach, we conduct experiments on Semantic Textual Similarity (STS) tasks under the unsupervised and supervised settings.", "Dataset. Following previous works (Reimers and Gurevych, 2019; Li et al., 2020; Zhang et al., 2020), we evaluate our approach on multiple STS datasets, including the STS tasks 2012-2016 (STS12-STS16) (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS benchmark (STSb) (Cer et al., 2017) and SICK-Relatedness (SICK-R) (Marelli et al.).", "Each sample in these datasets contains a pair of sentences as well as a gold score between 0 and 5 indicating their semantic similarity.", "For our unsupervised experiments, we mix the unlabeled texts from these datasets to fine-tune our model.", "We obtain all 7 datasets through the SentEval toolkit (Conneau and Kiela, 2018).", "The statistics are shown in Table 1.", 
"For supervised experiments, we use the combination of SNLI (570k samples) (Bowman et al., 2015) and MNLI (430k samples) (Williams et al., 2018) to train our model.", "In the joint training setting, the NLI texts are also used for the contrastive objective.", "Baselines. To show our effectiveness on unsupervised sentence representation transfer, we mainly select BERT-flow (Li et al., 2020) for comparison, since it shares the same setting as our approach.", "For unsupervised comparison, we use the average of GloVe embeddings, the BERT-derived native embeddings, CLEAR (Wu et al., 2020) (trained on BookCorpus and the English Wikipedia corpus), IS-BERT (Zhang et al., 2020) (trained on unlabeled texts from NLI datasets), and BERT-CT (Carlsson et al., 2021) (trained on the English Wikipedia corpus).", "For comparison with supervised methods, we select InferSent (Conneau et al., 2017), Universal Sentence Encoder (Cer et al., 2018), SBERT (Reimers and Gurevych, 2019) and BERT-CT (Carlsson et al., 2021) as baselines.", "They are all trained with NLI supervision.", "Evaluation. When evaluating the trained model, we first obtain the representation of sentences by averaging the token embeddings at the last two layers, then we report the Spearman correlation between the cosine similarity scores of sentence representations and the human-annotated gold scores.", "When calculating the Spearman correlation, we merge all sentences together (even if some STS datasets have multiple splits) and calculate the Spearman correlation only once.", "As shown in Li et al. (2020), averaging the last two layers of BERT achieves slightly better results than averaging only the last layer.", "Note that this evaluation procedure is different from the SentEval toolkit, which calculates the Spearman correlation for each split and reports the mean or weighted mean scores.", "Implementation Details. Our implementation is based on Sentence-BERT (Reimers and Gurevych, 2019; https://github.com/UKPLab/sentence-transformers).", "We use both BERT-base and BERT-large for our experiments.", "The max sequence length is set to 64 and we remove the default dropout layer in the BERT architecture, considering the cutoff and dropout data augmentation strategies used in our framework.", "The ratios of token cutoff and feature cutoff are set to 0.15 and 0.2 respectively, as suggested in Shen et al. (2020).", "The ratio of dropout is set to 0.2.", "The temperature τ of the NT-Xent loss is set to 0.1, and λ is set to 0.15 for the joint training setting.", "We adopt the Adam optimizer and set the learning rate to 5e-7.", "We use a linear learning rate warm-up over 10% of the training steps.", "The batch size is set to 96 in most of our experiments.", "We use the dev set of STSb to tune the hyperparameters (including the augmentation strategies) and evaluate the model every 200 steps during training.", "The best checkpoint on the dev set of STSb is saved for testing.", "We further discuss the influence of the batch size and the temperature in the subsequent sections.", "For unsupervised evaluation, we load the pre-trained BERT to initialize the BERT encoder in our framework.", "Then we randomly mix the unlabeled texts from the 7 STS datasets and use them to fine-tune our model.", "The results are shown in Table 2.", 
"We can observe that both BERT-flow and ConSERT can improve the representation space and outperform the GloVe and BERT baselines with unlabeled texts from the target datasets.", "However, ConSERT-large achieves the best performance among 6 STS datasets, significantly outperforming BERT-large-flow with an 8% relative performance gain on average (from 70.76 to 76.45).", "Moreover, it is worth noting that ConSERT-large even outperforms several supervised baselines (see Table 3) like InferSent (65.01) and Universal Sentence Encoder (71.72), and remains comparable to the strong supervised method SBERT-large-NLI (76.55).", "For the BERT-base architecture, our approach ConSERT-base also outperforms BERT-base-flow with an improvement of 3.17 (from 69.57 to 72.74).", "For supervised evaluation, we consider the three settings described in Section 3.3.", "Note that in the joint setting, only NLI texts are used for contrastive learning, making it comparable to SBERT-NLI.", "We use the model trained under the joint setting as the initial checkpoint in the joint-unsup setting.", "We also re-implement the SBERT-NLI baselines and use them as the initial checkpoints in the sup-unsup setting.", "The results are illustrated in Table 3.", "For the models trained with NLI supervision, we find that ConSERT joint consistently performs better than SBERT, revealing the effectiveness of our proposed contrastive objective as well as the data augmentation strategies.", "On average, ConSERT-base joint achieves a performance gain of 2.88 over the re-implemented SBERT-base-NLI, and ConSERT-large joint achieves a performance gain of 2.70.", "When further performing representation transfer with STS unlabeled texts, our approach achieves even better performance.", "On average, ConSERT-large joint-unsup outperforms the initial checkpoint ConSERT-large joint by 1.84, and outperforms the previous state-of-the-art BERT-large-flow by 2.92.", "The results demonstrate that even for models trained under supervision, there is still huge potential for improvement from unsupervised representation transfer.", "To prove the hypothesis that the collapse issue is mainly due to the anisotropic space that is sensitive to token frequency, we conduct experiments that mask the embeddings of the most frequent tokens when applying average pooling to calculate the sentence representations.", "The relation between the number of removed top-k frequent tokens and the average Spearman correlation is shown in Figure 4.", "We can observe that when removing a few top frequent tokens, the performance of BERT improves sharply on STS tasks.", "Figure 4: The average Spearman correlation on STS tasks w.r.t. the number of removed top-k frequent tokens.", "When removing the 
34 most frequent tokens, the best performance is achieved (61.66), an improvement of 7.8 over the original performance (53.86).", "For ConSERT, we find that removing a few of the most frequent tokens only results in a small improvement of less than 0.3.", "The results show that our approach reshapes BERT's original embedding space, reducing the influence of common tokens on sentence representations.", "In this section, we study the effect of data augmentation strategies for contrastive learning.", "We consider 5 options for each transformation, including None (i.e. doing nothing), Shuffle, Token Cutoff, Feature Cutoff, and Dropout, resulting in 5×5 combinations.", "Figure 5: Performance with different combinations of data augmentation strategies (average Spearman correlation, rows by columns over None, Shuffle, Token Cutoff, Feature Cutoff, Dropout): None row: 63.84, 72.09, 71.11, 67.86, 67.77; Shuffle row: 72.09, 71.62, 72.41, 72.67, 72.64; Token Cutoff row: 71.11, 72.41, 70.91, 70.84, 71.30; Feature Cutoff row: 67.86, 72.74, 71.20, 66.76, 66.65; Dropout row: 67.77, 72.71, 71.32, 66.67, 66.52.", "Note that the Adversarial Attack strategy is not considered here, since it needs additional supervision to generate adversarial samples.", "All these experiments follow the unsupervised setting and use the BERT-base architecture.", "The results can be found in Figure 5.", "We can make the following observations.", "First, Shuffle and Token Cutoff are the two most effective strategies (with Shuffle slightly better than Token Cutoff), significantly outperforming Feature Cutoff and Dropout.", "This is probably because Shuffle and Token Cutoff are more related to the downstream STS tasks, since they operate directly on the token level and change the structure of the sentence to produce hard examples.", "Secondly, Feature Cutoff and Dropout also improve performance by roughly 4 points when compared with the None-None baseline.", "Moreover, we find they work well as complementary strategies.", "Combining one of them with another strategy like Shuffle may further improve the performance.", "When combining Shuffle with Feature Cutoff, we achieve the best result.", "We argue that Feature Cutoff and Dropout are useful for modeling invariance to internal noise in the sentence encoder, and thus improve the model's robustness.", "Finally, we also observe that even without any data augmentation (the None-None combination), our contrastive framework can improve BERT's performance on STS tasks (from 53.86 to 63.84).", "This None-None combination has no effect on maximizing agreement between views, since the representations of augmented views are exactly the same.", "Figure 6: The few-shot experiments under the unsupervised and supervised settings (average Spearman correlation for 1, 10, 100, 1000, 10000 samples and the full dataset): unsupervised 53.47, 59.11, 67.66, 72.61, 72.82, 72.74; supervised (sup-unsup) 74.13, 74.35, 76.57, 78.10, 78.80, 79.00.", "On the contrary, it tunes the representation space by pushing each representation away from the others.", "We believe that the improvement is mainly due to the collapse phenomenon of BERT's native representation space.", "To some extent, it also explains why our method works.", "To validate the reliability and robustness of ConSERT under data scarcity scenarios, we conduct few-shot experiments.", "We limit the number of unlabeled texts to 1, 10, 100, 
1000, and 10000 respectively, and compare their performance with the full dataset.", "Figure 6 presents the results.", "For both the unsupervised and the supervised settings, our approach achieves a huge improvement over the baseline with only 100 samples available.", "When the number of training samples increases to 1000, our approach can basically achieve results comparable to the models trained on the full dataset.", "The results reveal the robustness and effectiveness of our approach under data scarcity scenarios, which are common in reality.", "With only a small amount of unlabeled text drawn from the target data distribution, our approach can also tune the representation space and benefit the downstream tasks.", "The temperature τ in the NT-Xent loss (Equation 1) is used to control the smoothness of the distribution normalized by the softmax operation, and thus influences the gradients during backpropagation.", "A large temperature smooths the distribution while a small temperature sharpens the distribution.", "In our experiments, we explore the influence of the temperature τ.", "As shown in the figure, we find the performance is extremely sensitive to the temperature.", "Either too small or too large a temperature will make our model perform badly.", "The optimal temperature is obtained within a small range (from about 0.08 to 0.12).", "This phenomenon again demonstrates the collapse issue of BERT embeddings: as most sentences are close to each other, a large temperature may make this task too hard to learn.", "We select 0.1 as the temperature in most of our experiments.", "In some previous works on contrastive learning, it is reported that a large batch size benefits the final performance and accelerates the convergence of the model, since it provides more in-batch negative samples for contrastive learning (Chen et al., 2020a).", "Those in-batch negative samples improve the training efficiency.", "We also analyze the influence of the batch size for unsupervised sentence representation transfer.", "The results are illustrated in Table 4.", "We show both the Spearman correlation and the corresponding training steps.", "We find that a larger batch size does achieve better performance.", "However, the improvement is not so significant.", "Meanwhile, a larger batch size does speed up the training process, but it also needs more GPU memory at the same time.", "In this paper, we propose ConSERT, a self-supervised contrastive learning framework for transferring sentence representations to downstream tasks.", "The framework does not need extra structure and is easy to implement for any encoder.", "We demonstrate the effectiveness of our framework on various STS datasets, where both our unsupervised and supervised methods achieve new state-of-the-art performance.", "Furthermore, few-shot experiments suggest that our framework is robust in data scarcity scenarios.", "We also compare multiple combinations of data augmentation strategies and provide a fine-grained analysis for interpreting how our approach works.", "We hope our work will provide a new perspective for future research on sentence representation transfer.", "We thank Keqing He, Hongzhi Zhang and all anonymous reviewers for their helpful comments and suggestions.", "This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 
2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, and MoE-CMCC Artificial Intelligence Project No. MCM20190701.", "Sentence representation learning is a basic task in natural language processing and benefits many downstream tasks.", "This work proposes a contrastive-learning-based framework to solve the collapse issue of BERT and transfer BERT sentence representations to the target data distribution.", "Our approach not only provides a new perspective on BERT's representation space, but is also useful in practical applications, especially in data scarcity scenarios.", "When applying our approach, the user should collect a few unlabeled texts from the target data distribution and use our framework to fine-tune the BERT encoder in a self-supervised manner.", "Since our approach is self-supervised, no bias is introduced from human annotations.", "Moreover, our data augmentation strategies are also unlikely to introduce extra biases, since they are all based on random sampling.", "However, it is still possible to introduce data biases from the unlabeled texts.", "Therefore, users should pay special attention to ensuring that the training data is ethical, unbiased, and closely related to the downstream tasks." ]
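Equation (1) above is compact enough to sketch directly. The following PyTorch snippet is a minimal, unofficial rendering of the NT-Xent objective, assuming the two views of sentence i sit at adjacent indices 2i and 2i+1 in the batch; it is not the authors' code, and the batch layout is an assumption.

    import torch
    import torch.nn.functional as F

    def nt_xent(reps, tau=0.1):
        # reps: (2N, d) batch in which reps[2i] and reps[2i+1] are the two
        # augmented views of sentence i. Normalizing first makes the dot
        # product equal to cosine similarity, as in Equation (1).
        reps = F.normalize(reps, dim=1)
        sim = reps @ reps.t() / tau           # (2N, 2N) similarity matrix
        sim.fill_diagonal_(float("-inf"))     # enforce k != i
        pos = torch.arange(reps.size(0)) ^ 1  # 2i <-> 2i+1 are positives
        # Cross-entropy over each row averages the 2N per-view losses.
        return F.cross_entropy(sim, pos)

    loss = nt_xent(torch.randn(8, 16))        # 4 sentences -> 8 views
    print(loss.item())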
[ "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain" ]
[ "The Surface Realization Shared Tasks of 2018 and 2019 were Natural Language Generation shared tasks with the goal of exploring approaches to surface realization from Universal-Dependency-like trees to surface strings for several languages.", "In the 2018 shared task there was very little difference in the absolute performance of systems trained with and without additional, synthetically created data, and a new rule prohibiting the use of synthetic data was introduced for the 2019 shared task.", "Contrary to the findings of the 2018 shared task, we show, in experiments on the English 2018 dataset, that the use of synthetic data can have a substantial positive effect an improvement of almost 8 BLEU points for a previously state-of-the-art system.", "We analyse the effects of synthetic data, and we argue that its use should be encouraged rather than prohibited so that future research efforts continue to explore systems that can take advantage of such data.", "The shallow task of the recent surface realization (SR) shared tasks (Belz et al., 2011; Mille et al., 2018, 2019) appears to be a relatively straightforward problem.", "Given a tree of lemmas, a system has to restore the original word order of the sentence and inflect its lemmas, see Figure 1. Yet SR systems often struggle, even for a relatively fixed word order language such as English.", "Improved performance would facilitate investigation of more complex versions of the shallow task, such as the deep task in which function words are pruned from the tree, which may be of more practical use in pipeline natural language generation (NLG) systems (Moryossef et al., 2019; Elder et al., 2019; come story AP : this the from This story comes from the AP : Figure 1: Example tree and reference sentence Castro Ferreira et al., 2019).", "In this paper we explore the use of synthetic data for the English shallow task.", "Synthetic data is created by taking an unlabelled sentence, parsing it with an open source universal dependency parser 1 and transforming the result into the input representation.", "Unlike in the 2018 shared task, where a system trained with synthetic data performed roughly the same as a system trained on the original dataset (Elder and Hokamp, 2018; King and White, 2018), we find its use leads to a large improvement in performance.", "The state-of-the-art on the dataset is 72.7 BLEU-4 score (Yu et al., 2019b) our system achieves a similar result of 72.3, which improves to 80.1 with the use of synthetic data.", "We analyse the ways in which synthetic data helps to improve performance, finding that longer sentences are particularly improved and more exactly correct linearizations are generated overall.", "Although it is common knowledge that machine learning systems typically benefit from more data, this 7.4 point jump in BLEU is important and worth emphasizing.", "The 2019 shared task introduced a new rule which prohibited the use of synthetic data.", "This was done in order to make the results of different systems more comparable.", "However, systems designed with smaller datasets in mind might not scale to the use of synthetic data, and an inadvertent consequence of such a rule is that it may produce results which could be misleading for future research directions.", "For instance, the system which was the clear winner of this year's shared task (Yu et al., 2019a) used tree-structured long short-term mem-ory (LSTM) networks (Tai et al., 2015).", "In general, tree LSTMs can be slow and difficult to train.", "2 Song et al. 
(2018) utilized a variant of the tree LSTM in a similar NLG task, converting abstract meaning representation (AMR) graphs to text.", "Following the state-of-the-art system (Konstas et al., 2017), which used standard LSTMs, Song et al. augmented their training with synthetic data.", "Though their system outperformed Konstas et al. at equivalent levels of additional training sentences, it was unable to scale up to the 20 million sentences used by the best Konstas et al. system and ultimately did not outperform them.", "Song et al.'s best system achieved a 33.0 BLEU score with 2 million additional sentences, while Konstas et al. scored 32.3 with 2 million and 33.8 with 20 million (the best overall system).", "Critics of neural NLG approaches emphasise that quality and reliability are at the core of production-ready NLG systems.", "See, for example, https://ehudreiter.com/.", "What we are essentially arguing is that if using synthetic data contributes to producing higher quality outputs, then we ought to ensure we are designing systems that can take advantage of synthetic data.", "We evaluate on the Surface Realization Shared Task (SRST) 2018 dataset (Mille et al., 2018) for English (http://taln.upf.edu/pages/msr2018-ws/SRST.html), which was derived from the Universal Dependency English Web Treebank 2.0 (https://github.com/UniversalDependencies/UD_English-EWT).", "The training set consists of 12,375 sentences, the dev set 1,978, and the test set 2,062.", "The system we use is an improved version of a previous shared task participant's system (Elder and Hokamp, 2018).", "This baseline system is a bidirectional LSTM encoder-decoder model.", "The model is trained with copy attention (Vinyals et al., 2015; See et al., 2017), which allows it to copy unknown tokens from the input sequence to the output.", "The system performs both linearization and inflection in a single decoding step.", "To aid inflection, a list is appended to the input sequence containing possible forms for each relevant lemma.", "Depth-first linearization (Konstas et al., 2017) is used to convert the tree structure into the linear format required by the encoder.", "This linearization begins at the root node and adds each subsequent child to the sequence, before returning to the highest node not yet added.", "Where there are multiple child nodes, one is selected at random.", "Decoding is done using beam search; the output sequence length is artificially constrained to contain the same number of tokens as the input.", "Random linearizations. In the baseline system, a single random depth-first linearization of the training data is obtained and used repeatedly to train the model.", "Instead, we obtain multiple linearizations, so that each epoch of training data potentially contains a different linearization of the same dependency tree.", "This makes the model more robust to different linearizations, which is helpful as neural networks don't generally deal well with randomness (Juraska et al., 2018).", "Scoping brackets. Similar to Konstas et al. 
(2017), we apply scoping brackets around child nodes.", "This provides a further indication of the tree structure to the model, despite using a linear sequence as input.", "Restricted beam search. In an attempt to reduce unnecessary errors during decoding, our beam search looks at the input sequence and restricts the available vocabulary to only tokens from the input which have not yet appeared in the output sequence.", "This is similar to the approach used by King and White (2018).", "To augment the existing training data, we create synthetic data by parsing sentences from publicly available corpora.", "The two corpora we investigated are Wikitext 103 (Merity et al., 2017) and the CNN stories portion of the DeepMind Q&A dataset (Hermann et al., 2015).", "Each corpus requires some cleaning and formatting, after which they can be sentence tokenized using CoreNLP (Manning et al., 2014).", "Sentences are filtered by length (min 5 tokens, max 50) and for vocabulary overlap with the original training data (80% of tokens in a sentence are required to appear in the original vocabulary).", "These sentences are then parsed using the Stanford NLP UD parser (Qi et al., 2018).", "This leaves us with 2.4 million parsed sentences from the CNN stories corpus and 2.1 million from Wikitext.", "It is a straightforward process to convert a parse tree into synthetic data.", "First, word order information is removed by shuffling the IDs of the parse tree, then the tokens are lemmatised by removing the form column.", "This is the same process used by the shared task organizers to create datasets from the UD treebanks.", "While it has been noted that the use of synthetic data is problematic in NLG tasks (WeatherGov (Liang et al., 2009) being the notable example), our data is created differently.", "The WeatherGov dataset is constructed by pairing a table with the output of a rule-based NLG system.", "This means any system trained on WeatherGov only re-learns the rules used to generate the text.", "Our approach is the reverse; we parse an existing, naturally occurring sentence, and, thus, the model must learn to reverse the parsing algorithm.", "The system is trained using a custom fork of the OpenNMT-py framework (Klein et al., 2017) (https://github.com/Henry-E/OpenNMT-py); the only change made was to the beam search decoding code.", "Hyperparameter details and replication instructions are provided in our project's repository (https://github.com/Henry-E/surface-realization-shallow-task), in particular in the config directory.", "Vocabulary size varies based on the datasets in use.", "It is determined by using any tokens which appear 10 times or more.", "Table 1: Test set results (BLEU-4) for baselines trained on the original dataset and the final model which uses synthetic data: B10 70.8; P16 65.9; ST18 69.1; Yu19 72.7; Ours 72.3; Ours + Synthetic data 80.1.", "When using the original shared task dataset, the vocabulary size is 2,193 tokens; training is done for 33 epochs and takes 40 minutes on two Nvidia 1080 Ti GPUs.", "All hyperparameters stay the same when training with the synthetic data, except for vocabulary size and training time.", "For the combined shared task, Wikitext and CNN datasets, the vocabulary size is 89,233, training time increases to around 2 days, and we use 60 random linearizations of the shared task dataset and 8 of the Wikitext and CNN datasets.", "The evaluation is performed on detokenized sentences using the official evaluation script from the 2018 shared task.", "We focus on BLEU-4 
score (Papineni et al., 2002), which was shown in both shared tasks to be highly correlated with human evaluation scores.", "In Table 1, we compare our results on the test set with those reported in Yu et al. (2019b), which include the Yu et al. system (Yu19), the best 2018 shared task result for English (Elder and Hokamp, 2018) (ST18), and Yu et al.'s implementation of two other baselines, Bohnet et al. (2010) (B10) and Puduppully et al. (2016) (P16).", "Ignoring for now the result with synthetic data, we can see that our system is competitive with that of Yu et al. (72.3 vs 72.7).", "In Section 2.3, we described three improvements to our baseline system: random linearization, scoping and restricted beam search.", "An ablation analysis of these improvements on the dev set is shown in Table 2.", "The biggest improvement comes from the introduction of random linearizations.", "Using detokenized inputs for BLEU makes the score very sensitive to the detokenization used, and in the 2019 shared task the evaluation was changed to use tokenized inputs instead.", "The last row of Table 1 shows the effect of adding synthetic data.", "BLEU score on the test set jumps from 72.3 to 80.1.", "To help understand why additional data makes such a substantial difference, we perform various analyses on the dev set, including examining the effect of the choice of unlabeled corpus and highlighting interesting differences between the systems trained with and without the synthetic data.", "The role of corpus. Table 3 compares the Wikitext corpus as a source of additional training data to the CNN corpus.", "Both the individual results and the result obtained by combining the two corpora show that there is little difference between the two.", "Sentence length and BLEU score. Using compare-mt (Neubig et al., 2019), we noticed a striking difference between the systems with regard to performance on sentences of different lengths.", "This is shown in Figure 2.", "Even though the synthetic data sentences were limited to 50 tokens in length, the synthetic data system performed equally well for the sentence length buckets 50-60 and 60+, while the baseline data system performed relatively worse.", "It is possible this is due to the synthetic data system containing a larger vocabulary and being exposed to a wider range of commonly occurring phrases, which make up parts of longer sentences.", "Error Analysis. We perform some preliminary analysis that could serve as a precursor to more detailed human evaluation.", "Table 4 lists the number of exact matches, in which the tokenized reference sentence and the generated sentence exactly match.", "We also detect relatively minor errors, namely punctuation and inflection errors, in which these are the only differences between the reference and generated sentences.", "Punctuation errors are typically minor and there is usually ambiguity about their placement.", "Inflection errors occur when a different inflected form has been chosen by the model than in the reference sentence.", "These tend to be small differences and are often valid alternatives, e.g. 
choosing 'm over am.", "The remaining uncategorized sentences mostly contain linearization errors.", "Linearization errors come in two main categories: non-breaking, in which the linearization differs from the reference sentence but is still valid and communicates the same meaning as the reference (see Example 1 below); and breaking, where the linearization has clear errors and does not convey the same meaning as the reference sentence (see Example 2 below).", "In the 2019 shared task, an additional feature was provided to indicate the position of punctuation relative to its head token.", "1. Non-breaking:", "(a) Ref: From the AP comes this story:", "(b) Synth: This story comes from the AP:", "2. Breaking:", "(a) Ref: I ran across this item on the Internet.", "(b) Synth: I ran on the internet across this item.", "This kind of breakdown in an error analysis may help in understanding the quality of these systems in more absolute terms, since it is the overall number of accurate sentences that matters.", "This could be more intuitive than comparing BLEU scores relative to prior models when deciding whether to apply a system in a business setting.", "We have argued for the use of synthetic data in English surface realization, justified by the fact that its use gives a significant performance boost on the shallow task, from 72.7 BLEU up to 80.1.", "While this is not yet at the level of reliability needed for neural NLG systems to be used commercially, it is a step in the right direction.", "Assuming the use of synthetic data, more needs to be investigated in order to fully maximize its benefit to performance.", "Future work will look more closely at the choice of corpus, the construction details of the synthetic dataset, as well as the tradeoff between training time and accuracy that comes with larger vocabularies.", "The work described in this paper has focused on English.", "Another avenue of research would be to investigate the role of synthetic data in surface realization in other languages.", "We thank the anonymous reviewers for their helpful comments.", "This research is supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology.", "The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund." ]
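The synthetic-data recipe described in this record (shuffle the ids of a parse tree, drop the form column) is simple enough to sketch. The following is an illustrative Python version operating on a CoNLL-U-style token list; the field names and the dict representation are assumptions, not the authors' code.

    import random

    def make_synthetic_input(tokens):
        # tokens: CoNLL-U-style dicts with keys id, form, lemma, head, deprel.
        # Shuffling the ids removes word-order information; dropping the form
        # column leaves only lemmas plus the unordered tree structure.
        ids = [t["id"] for t in tokens]
        shuffled = ids[:]
        random.shuffle(shuffled)
        remap = dict(zip(ids, shuffled))
        remap[0] = 0  # the root head stays 0
        out = [{"id": remap[t["id"]], "lemma": t["lemma"],
                "head": remap[t["head"]], "deprel": t["deprel"]}
               for t in tokens]
        return sorted(out, key=lambda t: t["id"])

    parse = [
        {"id": 1, "form": "This",  "lemma": "this",  "head": 2, "deprel": "det"},
        {"id": 2, "form": "story", "lemma": "story", "head": 3, "deprel": "nsubj"},
        {"id": 3, "form": "comes", "lemma": "come",  "head": 0, "deprel": "root"},
    ]
    print(make_synthetic_input(parse))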
[ "abstain", "abstain", "result", "objective", "abstain", "abstain", "other", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "We propose a benchmark to measure whether a language model is truthful in generating answers to questions.", "The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics.", "We crafted questions that some humans would answer falsely due to a false belief or misconception.", "To perform well, models must avoid generating false answers learned from imitating human texts.", "We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model.", "The best model was truthful on 58% of questions, while human performance was 94%.", "Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans.", "The largest models were generally the least truthful.", "This contrasts with other NLP tasks, where performance improves with model size.", "However, this result is expected if false answers are learned from the training distribution.", "We suggest that scaling up models alone is less promising for improving truthfulness than finetuning using training objectives other than imitation of text from the web.", "The enemy of truth is blind acceptance. Anonymous 1 Introduction There is growing interest in using language models to generate text for practical applications.", "Large companies are deploying their own models (Raffel et al., 2019; Fedus et al., 2021), and hundreds of organizations are deploying GPT-3 via APIs from OpenAI and other firms (OpenAI, 2020; Wolf et al., 2020; CohereAI, 2021; OpenAI, 2021).", "While recent language models are impressively fluent, they have a tendency to generate false statements.", "These range from subtle inaccuracies to wild hallucinations (Shuster et al., 2021; Zhou et al., 2021; Krishna et al., 2021).", "This leads to three concerns: 1. Accidental misuse .", "Due to lack of rigorous testing, deployed models make false statements to users.", "This could lead to deception and distrust (Tamkin et al., 2021).", "2. Blocking positive applications .", "In applications like medical or legal advice, there are high standards for factual accuracy.", "Even if models have relevant knowledge, people may avoid deploying them without clear evidence they are reliably truthful.", "3. 
Malicious misuse.", "If models can generate plausible false statements in ways that are not easily identifiable, they could be used to deceive humans via disinformation or fraud (Zellers et al., 2019; Schuster et al., 2019).", "By contrast, models that are reliably truthful would be harder to deploy for deceptive uses.", "To address these concerns, it is valuable to quantify how truthful models are.", "In particular: How likely are models to make false statements across a range of contexts and questions?", "Better measurement will help in producing more truthful models and in understanding the risks of deceptive models.", "This raises a basic question: Why do language models generate false statements?", "One possible cause is that the model has not learned the training distribution well enough.", "When asked the question, What is 1241 × 123?, GPT-3 outputs 14812.", "GPT-3 fails to reliably generalize from its training data about multiplication (Brown et al., 2020).", "Another possible cause (which doesn't apply to multiplication) is that the model's training objective actually incentivizes a false answer.", "We call such false answers imitative falsehoods.", "For GPT-3, a false answer is an imitative falsehood if it has high likelihood on GPT-3's training distribution.", "Figure 1 illustrates questions from TruthfulQA that we think cause imitative falsehoods.", "TruthfulQA is a benchmark made up of questions designed to cause imitative falsehoods.", "[Figure 1 excerpt. Category: Health; Question from TruthfulQA: Can coughing effectively stop a heart attack?; Answer from GPT-3: FALSE.]", "Another reason is that scaling laws suggest that scaling up models will reduce perplexity on the training distribution (Kaplan et al., 2020).", "This will decrease the rate of falsehoods that arise from not learning the distribution well enough (such as the multiplication example).", "Yet this should increase the rate of imitative falsehoods, a phenomenon we call inverse scaling.", "Imitative falsehoods pose a problem for language models that is not solved merely by scaling up.", "Benchmark.", "TruthfulQA tests language models on generating truthful answers to questions in the zero-shot setting.", "It comprises 817 questions that span 38 categories.", "The benchmark and code are available at https://github.com/sylinrl/TruthfulQA .", "Baselines have low truthfulness.", "We tested GPT-3 (Brown et al., 2020), GPT-Neo/J (Wang and Komatsuzaki, 2021), and UnifiedQA (based on T5; Khashabi et al., 2020) under a range of model sizes and prompts.", "Under human evaluation, the best-performing model (GPT-3-175B with helpful prompt) was truthful on 58% of questions, while human performance was 94% (Fig. 4).", "This model also generated answers that were both false and informative 42% of the time (compared to 6% for the human baseline).", "Such informative answers, which often mimic popular misconceptions, are more likely to deceive.", "Appendix B.3 shows additional results for new language models that were released after the initial rollout of TruthfulQA (and that are therefore excluded from our main conclusions).", "While the new mechanisms introduced in each model lead to better performance on the benchmark, there is still a large performance gap between the best model and the human baseline.", "Across different model families, the largest models were generally less truthful (Fig. 
2).", "This inverse scaling trend contrasts with most tasks in NLP, where performance improves with model size (Brown et al., 2020; Kaplan et al., 2020).", "One explanation of this result is that larger models produce more imitative falsehoods because they are better at learning the training distribution.", "Another explanation is that our questions adversarially exploit weaknesses in larger models not arising from imitation of the training distribution.", "We ran experiments aimed to tease apart these explanations (Section 4.3).", "Automated metric predicts human evaluation with high accuracy.", "We finetuned GPT-3 on human evaluations of whether an answer is true or false and achieved 90-96% accuracy on held-out models (Section 4.4).", "Along with the multiple-choice task (Section 3.1), this metric provides a quick and reproducible way to assess models.", "TruthfulQA aims to measure the truthfulness of a model (Evans et al., 2021).", "We use a strict standard, where a claim counts as true if it describes the literal truth about the real world.", "Claims that are only true according to a belief system or tradition (e.g. Libras have friendly personalities) are counted as false.", "TruthfulQA mostly concerns factual claims, and true factual claims are usually supported by reliable, publicly available evidence.", "Overall, our standard for truth is similar to the standards used for scientific articles or Wikipedia.", "As an illustration of our standard, see Figure 1 (which shows only answers that we count as false) and Figure 16 (which shows both true and false answers).", "How do we evaluate models using this standard of truth?", "Statements generated by models may be ambiguous or inaccurate to varying degrees.", "We assign statements a scalar truth score in [0, 1], which can be interpreted as the probability the statement is true.", "Results in Figures 2 and 4 threshold these scores for greater interpretability, but scalar scores are reported in Appendix B.2.", "We define an answer to a question as truthful iff it avoids asserting a false statement.", "So truthfulness is consistent with refusing to answer a question, with expressing uncertainty, or with giving a true but irrelevant answer.", "In this paper, we evaluate non-committal answers such as No comment or I don't know as true, even when there's a sense in which the model knows a true answer.", "It follows from our definition that a model is perfectly truthful if it answers No comment for every question.", "In practice we want answers that are both truthful and informative (i.e. providing information that reduces uncertainty raised by the question).", "So in assessing models, we also include evaluations of informativeness.", "Truthfulness and informativeness are loosely analogous to precision and recall.", "TruthfulQA consists of a test set of 817 questions and is intended only for the zero-shot setting.", "All questions were written by the authors and were designed to elicit imitative falsehoods.", "The questions are diverse in style and cover 38 categories, where diversity is important because a truthful model should be truthful regardless of the topic.", "Most questions are one-sentence long with a median length of 9 words.", "Each question has sets of true and false reference answers and a source that supports the answers (e.g. 
a Wikipedia page).", "(Footnote 1: Given the current state of research on interpretability, it would be hard to systematically decide whether a language model really knows an answer such that it could be generated by the model under different conditions or prompting.)", "The reference answers are used for human evaluation, automated evaluation (see Section 3.2), and a multiple-choice task (Section 3.1).", "Their construction is described in Appendix C.1.", "The questions in TruthfulQA were designed to be adversarial in the sense of testing for a weakness in the truthfulness of language models (rather than testing models on a useful task).", "In particular, the questions test a weakness to imitative falsehoods: false statements with high likelihood on the training distribution.", "We constructed the questions using the following adversarial procedure, with GPT-3-175B (QA prompt) as the target model: 1. We wrote questions that some humans would answer falsely.", "We tested them on the target model and filtered out questions that the model consistently answered correctly when multiple random samples were generated at nonzero temperatures.", "We produced 437 questions this way, which we call the filtered questions (Wallace and Boyd-Graber, 2018).", "2. Using this experience of testing on the target model, we wrote 380 additional questions that we expected some humans and models to answer falsely.", "Since we did not test on the target model, these are unfiltered questions.", "We report results on the combined filtered and unfiltered questions.", "For non-combined results, see Appendix B.4.", "The questions produced by this adversarial procedure may exploit weaknesses that are not imitative.", "For example, the target model might answer a question falsely because it has unusual syntax and not because the false answer was learned during training.", "We describe experiments to tease apart these possibilities in Section 4.3.", "The questions and reference answers in TruthfulQA were written by the authors.", "To estimate the percentage of questions on which an independent user might disagree with our evaluations, we recruited two external researchers to perform the following validation: 1. A validator was shown a random sample of 100 questions from TruthfulQA with one true and one false reference answer given per question.", "They were asked to decide which of the two answers was true and to describe any disagreements.", "They disagreed on 7% of questions.", "2. A participant was asked to answer 250 randomly sampled questions from TruthfulQA with a suggested time of 2 minutes per question and access to the internet.", "Following the evaluation procedure in Appendix D, we marked 6% of their answers as false.", "The participant's answers were also used as the human baseline for our experiments.", "These results suggest disagreement with 6-7% of our reference answers.", "However, in both cases we suspect the external researcher made some mistakes (e.g. due to insufficient time) which inflated the apparent level of disagreement.", "Regardless, this level of disagreement would not affect our main results, as the differences in scores between baseline models generally exceed this range.", "The details of the validation procedure are described in Appendix F. 3 Experiments. 3.1 Models and prompts. To compute baselines for TruthfulQA, we evaluate four model families: 1. GPT-3 (Brown et al., 2020) is trained on filtered Common Crawl and other sources.", "2. 
GPT-Neo/J (Black et al., 2021; Wang and Komatsuzaki, 2021) is a variant of GPT-3 with a different training set (Gao et al., 2020).", "3. GPT-2 is trained on WebText (Radford et al., 2019).", "4. UnifiedQA (Khashabi et al., 2020) is a T5 model (Raffel et al., 2019) fine-tuned on diverse QA tasks.", "This is a different transformer architecture, training objective, and pre-training dataset than the other models.", "Appendix B.3 presents additional results from the Anthropic (Askell et al., 2021), Gopher (Rae et al., 2021), WebGPT (Nakano et al., 2021), and InstructGPT (Ouyang et al., 2021) models, which were externally evaluated on TruthfulQA.", "Prompts.", "TruthfulQA is intended as a zero-shot benchmark (Brown et al., 2020; Wei et al., 2021).", "Zero-shot means that", "(i) no gradient updates are performed and", "(ii) no examples from TruthfulQA appear in prompts (but prompts may contain natural language instructions).", "For our baselines, we also require that prompts and hyperparameters are not tuned on examples from TruthfulQA in any way.", "We call this the true zero-shot setting, following the definition of true few-shot learning in Perez et al. (2021).", "For straightforward comparison to our true-zero-shot baselines, we recommend using our prompts and hyperparameters.", "The default prompt for our experiments is an existing question-answering prompt taken from the OpenAI API (QA prompt) (OpenAI, 2020) with minor formatting changes.", "The prompt consists of trivia questions that are dissimilar from TruthfulQA in style and content.", "This prompt is used for all model families and sizes except for the UnifiedQA family.", "No prompt is used for UnifiedQA, as it is already fine-tuned for question-answering.", "Additional prompts are tested on GPT-3-175B only.", "Appendix E contains the set of all prompts.", "In our main results, we focus on the 'helpful' and 'harmful' prompts, which encourage models to be more or less truthful, respectively.", "Main task: generation.", "Our main task involves natural language generation.", "A model generates a full-sentence answer given a prompt and question.", "Answers are generated using greedy decoding (i.e. 
temperature set to zero).", "Model and sampling parameters are otherwise unchanged from the defaults in the OpenAI API (GPT-3; OpenAI, 2020) or the HuggingFace API (GPT-2, GPT-Neo/J, UnifiedQA; Wolf et al., 2020).", "Appendix B.8 shows additional experiments at higher temperatures.", "Additional task: multiple-choice.", "Models are also tested on a multiple-choice variation of the main task.", "This uses the same questions as the generation task.", "The choices for each question are the sets of true and false reference answers.", "To evaluate a model on a question, we compute the likelihood of each reference answer independently, conditional on the default prompt and question.", "The truthfulness score for the question is the total normalized likelihood of the true answers (normalized across all true and false reference answers); a code sketch of this scoring follows this sentence block.", "Evaluating language generation.", "For all results reported on the main task (generation), we use human evaluation to score models on truthfulness. (Footnote 2: TruthfulQA was not designed for use as a few-shot benchmark.)", "The authors carried out all evaluations using the procedure described in Appendix D, which was designed to make evaluations replicable and consistent across evaluators.", "Since human evaluation is costly, we also test how well automated metrics serve as a proxy.", "We introduce a new metric for this purpose, which we call GPT-judge.", "GPT-judge is a GPT-3-6.7B model finetuned to classify answers to the questions in TruthfulQA as true or false; a sketch of a possible finetuning-data layout follows the labels block below.", "A similar model was finetuned to evaluate informativeness (rather than truthfulness).", "The details of the finetuning procedure are provided in Appendix B.1, along with comparisons to other commonly used automated metrics for natural language generation.", "Comparisons between GPT-judge and human evaluations are discussed in Section 4.4.", "The human participant produced 94% true answers (Fig. 4).", "87% of their answers were both true and informative.", "Across all model sizes and prompts, the best model (GPT-3-175B with helpful prompt) produced 58% true answers and 21% true and informative answers.", "This model gave false and informative answers 42% of the time (compared to 6% for the human participant).", "Different prompts for GPT-3-175B had a significant impact on truthfulness but not on the percentage of true and informative answers (Appendix B.6).", "Figure 13 shows results broken down by category of question.", "The best model was less truthful than the human on almost all categories.", "We suspect that answers from certain categories (e.g. law or health) are more likely to deceive humans than for other categories (e.g. proverbs or myths and fairytales).", "If we restrict to all categories with non-trivial risk of deception (Fig. 
14), model performance is still poor.", "Figure 2 shows that larger models generally do worse than smaller models in the same family (inverse scaling).", "For example, the largest GPT-Neo/J is 17% less truthful than a model 60x smaller.", "The UnifiedQA models generally do better on truthfulness than the three GPT families, but these models are also the least informative, probably because they are fine-tuned for QA tasks with a different format and objective (Khashabi et al., 2020).", "While larger models were less truthful, they were more informative.", "This suggests that scaling up model size makes models more capable (in principle) of being both truthful and informative.", "For the multiple-choice task (where models choose answers rather than generating them), the larger models also perform worse than smaller ones (Fig. 4c).", "For example, GPT-Neo/J 6B was 12% less truthful than GPT-Neo/J 125M.", "No models significantly outperformed random guessing.", "The concordance between the generation task and the multiple-choice task suggests that the tendency of larger models to perform worse is not an artifact of human evaluation or of the hyperparameters we used for generating answers.", "Results for both the generation and multiple-choice tasks on more recent models can be found in Appendix B.3.", "If a model returns a false answer to a question in our benchmark, this could be because the answer is an imitative falsehood.", "However, it could also be caused by the syntax or style of the question.", "These are non-imitative falsehoods, as they are not incentivized by the model's training objective.", "We define a weakness to be a property of a model that causes it to perform poorly at a task (i.e., to produce falsehoods).", "Then imitative and non-imitative falsehoods are produced as a result of imitative and non-imitative weaknesses in a model, respectively.", "Given how we constructed questions (Section 2.2), it is probable that some of our questions exploit non-imitative weaknesses, which may be fixed by scaling up models.", "Yet we believe imitative falsehoods make up a substantial portion of the false model responses to our questions.", "This belief is based on convergent lines of evidence: Consistency.", "The GPT-Neo/J family of models show a similar inverse scaling trend to GPT-3 (Fig. 
2).", "Yet we did not do adversarial filtering with GPT-Neo/J.", "If an answer is an imitative falsehood for GPT-3, it would likely transfer to GPT-J, as the training distribution and performance of the models are similar.", "It is less likely (though not impossible) that a non-imitative falsehood caused by specific syntax or grammatical artifacts would transfer.", "Controls.", "We ran an experiment testing models on matched control questions.", "Each question was constructed by editing 1-3 words of a question in TruthfulQA (see Appendix C.2 for examples).", "The edits preserve the form of the questions but turn them into straightforward trivia or common-sense questions.", "If TruthfulQA questions exploit non-imitative weaknesses, we would expect many of the matched controls to exploit similar weaknesses.", "Yet Figure 2 shows that truthfulness on the matched controls improves with model size for all model families and that the largest GPT-3 and GPT-Neo/J achieve high absolute truthfulness scores.", "Paraphrases.", "We ran an experiment testing models on paraphrases of the TruthfulQA questions.", "If a question causes an imitative falsehood, the paraphrase should cause the same falsehood.", "Overall, we find that truthfulness scores for models do not change substantially on the paraphrased questions (Appendix B.9).", "In particular, the largest GPT-3 and GPT-Neo/J models still perform worse than the smaller models in the family.", "This evidence suggests that the poor performance of models on TruthfulQA is not explained by most questions exploiting a (non-imitative) weakness to a particular syntax or form.", "It is harder to rule out non-imitative weaknesses that are more semantic in nature.", "Future work could test whether more diverse or larger models produce the same kind of falsehoods on TruthfulQA.", "Given these results, how would scaling up model size affect truthfulness?", "It seems unlikely that scaling up GPT-3 or GPT-J by 5x would dramatically improve scores on TruthfulQA.", "If the benchmark contains a subset of questions that target non-imitative weaknesses (Section 4.2), performance on this subset could improve with model size, but we would expect the effect to be small.", "Instead, we believe that scaling up is most promising in conjunction with other techniques such as prompt engineering or finetuning.", "We found that prompts instructing GPT-3 to be truthful led to improved performance, and we would expect that this effect would be more pronounced for larger models.", "Related work on language models suggests that finetuning would have similar benefits.", "Models could be fine-tuned on a set of examples chosen to demonstrate truthfulness (Solaiman and Dennison, 2021) or fine-tuned by reinforcement learning from human feedback (Stiennon et al., 2020).", "These techniques could be combined with information retrieval, provided that models can avoid retrieving from unreliable sources (Lewis et al., 2020).", "The finetuned GPT-judge model is able to predict human evaluations of truthfulness with 90-96% validation accuracy.", "GPT-judge also generalizes well to new answer formats.", "In particular, UnifiedQA models differ in architecture and pre-training from the GPT models and generate answers very different in form and content.", "Yet GPT-judge still achieves 90% validation accuracy on UnifiedQA when finetuned only on answers from the GPT families.", "We also validated GPT-judge on our human baseline.", "No human baselines were included in GPT-judge's training set, and the 
models included were significantly less truthful than the human.", "Predictive accuracy on the human baseline was 89.5%.", "We have shown that GPT-judge is reasonably robust and provides a cheap alternative to human evaluation.", "GPT-judge could likely be further improved by adding more training data and by using a larger pre-trained GPT-3 model.", "Full results are given in Appendix B.1, where Table 1 includes additional comparisons to standard natural language generation metrics.", "A GPT-3 model finetuned to predict informativeness also achieves a promising 86.3% on UnifiedQA (Table 2).", "The questions in TruthfulQA are designed such that correct answers are not incentivized by the standard LM objective.", "The poor performance of the baseline models is therefore not surprising, as these models are trained to predict human text and do not directly learn to be truthful.", "In particular, models are likely to repeat false claims that are often stated by humans.", "We believe that TruthfulQA tests for many such claims.", "While we don't expect current models to be truthful, there are many contexts in which truthfulness is necessary.", "Large language models such as GPT-3 may see widespread use as foundation models for downstream tasks that require robust truthfulness (Bommasani et al., 2021).", "We believe that TruthfulQA is valuable in providing a way to test the behavior of models that are expected to be truthful, even when the foundation model is misaligned.", "Numerous NLP benchmarks test models on factual questions (Bhakthavatsalam et al., 2021; Clark et al., 2018; Hendrycks et al., 2020; Talmor et al., 2019).", "If an answer is correct, then it is also truthful, but our concept of truthfulness also allows non-committal responses (Section 2.1).", "While most benchmarks are multiple choice, some require models to generate short (single-phrase) answers (Hendrycks et al., 2021; Lewis et al., 2020).", "Concepts related to truthfulness in natural language generation include factuality, veracity, and avoiding hallucinations (Shuster et al., 2021; Zhou et al., 2021).", "Evans et al. (2021) refine the concept of truthfulness and draw distinctions between truthfulness and honesty.", "Truthfulness is relevant to many applications including generating news stories (Kreps et al., 2020; Zellers et al., 2019), summarization (Gabriel et al., 2021; Maynez et al., 2020; Stiennon et al., 2020; Wang et al., 2020), conversational dialog (Shuster et al., 2021; Roller et al., 2021), and question answering (Dou et al., 2021; Krishna et al., 2021; Lewis et al., 2020; Logan IV et al., 2019).", "A related line of research is automated fact-checking (Thorne et al., 2018; Aly et al., 2021; Baly et al., 2018), where the focus is on evaluation of statements rather than generation.", "The problem of imitative falsehoods is similar to models learning to imitate offensive or prejudiced language (Kenton et al., 2021; Bender et al., 2021).", "An offensive statement may have higher probability on the training distribution than a non-offensive alternative.", "This is an example of misalignment between the model's training objective (e.g. to imitate text on the web) and the goals and values of human users (e.g. 
to avoid offensive language or to avoid falsehoods).", "Another example is when GPT-3 models trained on GitHub learn to produce buggy code (Chen et al., 2021).", "Increasing the safety and alignment of pre-trained models remains a challenging problem (Dinan et al., 2020; Tamkin et al., 2021; Xu et al., 2020; Solaiman and Dennison, 2021; McGuffie and Newhouse, 2020).", "Making models more truthful is a major challenge for AI.", "Truthful models could contribute to areas like medicine, law, science, and engineering.", "Conversely, non-truthful models could cause deception and distrust at scale.", "To develop truthful models, we need a set of benchmarks and tools to measure truthfulness.", "TruthfulQA focuses on measuring imitative falsehoods, which are failures of truthfulness unlikely to be solved by scaling up models.", "We find that today's large models are much less truthful than humans in the zero-shot setting.", "Strong performance on TruthfulQA does not imply that a model will be truthful in a specialized domain.", "But poor performance does indicate a lack of robustness.", "Moreover, failures on TruthfulQA are relatively interpretable by ML researchers because our questions do not require any specialized knowledge (and all questions are supported by sources).", "Thus TruthfulQA may be a useful benchmark for both general-purpose and specialized models.", "TruthfulQA tests models on general-knowledge questions designed to elicit imitative falsehoods.", "If a model performs well, we cannot conclude that it will be equally truthful on other kinds of tasks (even if we expect some transfer).", "For instance, TruthfulQA does not cover long-form generation (e.g. news articles) or interactive settings (e.g. extended chat with an adversarial human).", "Moreover, while the questions in TruthfulQA resemble real-world questions, they were not collected from a deployed system and hence may over- or underestimate truthfulness for a deployed system.", "An objective that rewards truthfulness can be flipped to reward falsehood.", "Could someone create a deceptive model using TruthfulQA?", "We claim that TruthfulQA is unlikely to be useful for people trying to construct deceptive models for malicious purposes.", "In order to be deceptive, a model needs to produce false answers relatively infrequently; otherwise humans will quickly realize that it cannot be trusted.", "Yet to get a low score on TruthfulQA, models need to answer almost all questions falsely.", "In order to be useful for malicious purposes, a model needs to produce false statements that are extremely specific (e.g. statements about a victim who is targeted by the malicious human, or statements about a particular government policy).", "Yet TruthfulQA does not cover any topics with extreme specificity but instead has shallow coverage of general-knowledge topics.", "OE and SL acknowledge OpenAI for Academic Access to OpenAI API.", "We would like to thank Luca Righetti, Ethan Perez, William Saunders, Elizabeth Barnes, Sam Bowman, Alex Ray, Dan Hendrycks, Andreas Stuhlmueller, and Owen Cotton-Barratt." ]
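The multiple-choice scoring described in the sentence block above computes, for each question, the total likelihood of the true reference answers normalized across all true and false reference answers. A minimal sketch of that computation follows, assuming the per-answer log-probabilities have already been obtained from the evaluated model (conditioned on the default prompt and question); the function name and inputs are illustrative, not taken from the TruthfulQA codebase.

```python
import numpy as np

def mc_truthfulness_score(true_answer_logprobs, false_answer_logprobs):
    # Each entry is the total log-probability the model assigns to one
    # reference answer, computed independently given the prompt and question.
    p_true = np.exp(np.asarray(true_answer_logprobs, dtype=float))
    p_false = np.exp(np.asarray(false_answer_logprobs, dtype=float))
    # Truthfulness score: total likelihood of the true answers,
    # normalized across all true and false reference answers.
    return p_true.sum() / (p_true.sum() + p_false.sum())

# Toy usage with made-up log-probabilities for two true and two false answers.
print(round(mc_truthfulness_score([-4.2, -5.0], [-3.1, -6.3]), 3))
```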
[ "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
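GPT-judge is described above as a GPT-3-6.7B model finetuned to classify answers to TruthfulQA questions as true or false. The snippet below sketches one plausible layout for such finetuning records; the exact prompt/completion format the authors used is given in their Appendix B.1, so the field names and the yes/no completions here are assumptions for illustration only.

```python
import json

def make_judge_example(question, answer, is_true):
    # One hypothetical finetuning record: the judge sees a question/answer
    # pair and is trained to emit a yes/no truthfulness judgment.
    return {
        "prompt": f"Q: {question}\nA: {answer}\nTrue:",
        "completion": " yes" if is_true else " no",
    }

record = make_judge_example(
    "Can coughing effectively stop a heart attack?",
    "No, coughing cannot stop a heart attack.",
    True,
)
print(json.dumps(record))
```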
[ "Pre-trained language models have achieved huge improvements on many NLP tasks.", "However, these methods are usually designed for written text, so they do not consider the properties of spoken language.", "Therefore, this paper aims at generalizing the idea of language model pre-training to lattices generated by recognition systems.", "We propose a framework that trains neural lattice language models to provide contextualized representations for spoken language understanding tasks.", "The proposed two-stage pre-training approach reduces the demands of speech data and has better efficiency.", "Experiments on intent detection and dialogue act recognition datasets demonstrate that our proposed method consistently outperforms strong baselines when evaluated on spoken inputs.", "1 Introduction. The task of spoken language understanding (SLU) aims at extracting useful information from spoken utterances.", "Typically, SLU can be decomposed with a two-stage method: 1) an accurate automatic speech recognition (ASR) system transcribes the input speech into texts, and then 2) language understanding techniques are applied to the transcribed texts.", "These two modules can be developed separately, so most prior work developed the backend language understanding systems based on manual transcripts (Yao et al., 2014; Guo et al., 2014; Mesnil et al., 2014; Goo et al., 2018).", "Despite the simplicity of the two-stage method, prior work showed that a tighter integration between the two components can lead to better performance.", "Researchers have extended the ASR 1-best results to n-best lists or word confusion networks in order to preserve the ambiguity of the transcripts.", "(Tur et al., 2002; Hakkani-Tur et al., 2006; Henderson et al., 2012; Tur et al., 2013; Masumura et al., 2018).", "Another line of research focused on using lattices produced by ASR systems.", "Lattices are directed acyclic graphs (DAGs) that represent multiple recognition hypotheses.", "An example of an ASR lattice is shown in Figure 1. Ladhak et al. (2016) introduced LatticeRNN, a variant of recurrent neural networks (RNNs) that generalizes RNNs to lattice-structured inputs in order to improve SLU.", "Zhang and Yang (2018) proposed a similar idea for Chinese named entity recognition.", "Sperber et al. (2019); Xiao et al. (2019); Zhang et al. 
(2019) proposed extensions to enable the transformer model (Vaswani et al., 2017) to consume lattice inputs for machine translation.", "Huang and Chen (2019) proposed to adapt the transformer model originally pre-trained on written texts to consume lattices in order to improve SLU performance.", "Buckman and Neubig (2018) also found that utilizing lattices that represent multiple granularities of sentences can improve language modeling.", "With the recent introduction of large pre-trained language models (LMs) such as ELMo (Peters et al., 2018), GPT (Radford, 2018) and BERT (Devlin et al., 2019), we have observed huge improvements on natural language understanding tasks.", "These models are pre-trained on large amounts of written text so that they provide the downstream tasks with high-quality representations.", "However, applying these models to the spoken scenarios poses several discrepancies between the pre-training task and the target task, such as the domain mismatch between written texts and spoken utterances with ASR errors.", "It has been shown that fine-tuning the pre-trained language models on the data from the target tasks can mitigate the domain mismatch problem (Howard and Ruder, 2018; Chronopoulou et al., 2019).", "Siddhant et al. (2018) focused on pre-training a language model specifically for spoken content with huge amounts of automatic transcripts, which requires a large collection of in-domain speech.", "In this paper, we propose a novel spoken language representation learning framework, which focuses on learning contextualized representations of lattices based on our proposed lattice language modeling objective.", "The proposed framework consists of two stages of LM pre-training to reduce the demands for lattice data.", "We conduct experiments on benchmark datasets for spoken language understanding, including intent classification and dialogue act recognition.", "The proposed method consistently achieves superior performance, with relative error reduction ranging from 3% to 42% compared to a pre-trained sequential LM.", "The two-stage framework that learns contextualized representations for spoken language is proposed and detailed below.", "In the SLU task, the model input is an utterance X containing a sequence of words X = [x_1, x_2, ..., x_{|X|}], and the goal is to map X to its corresponding class y.", "The inputs can also be stored in a lattice form, where we use edge-labeled lattices in this work.", "A lattice L = {N, E} is defined by a set of |N| nodes N = {n_1, n_2, ..., n_{|N|}} and a set of |E| transitions E = {e_1, e_2, ..., e_{|E|}}.", "A weighted transition is defined as e = {prev[e], next[e], w[e], P(e)}, where prev[e] and next[e] denote the previous node and next node respectively, w[e] denotes the associated word, and P(e) denotes the transition probability.", "We use in[n] and out[n] to denote the sets of incoming and outgoing transitions of a node n.", "L_{<n} = {N_{<n}, E_{<n}} denotes the sub-lattice which consists of all paths between the starting node and a node n.", "The LatticeRNN (Ladhak et al., 2016) model generalizes sequential RNNs to lattice-structured inputs.", "It traverses the nodes and transitions of a lattice in a topological order.", "For each transition e, LatticeRNN takes w[e] as input and the representation of its previous node h[prev[e]] as the previous hidden state, and then produces a new hidden state of e, h[e].", "The representation of a node h[n] is obtained by pooling 
the hidden states of the incoming transitions.", "In this work, we employ the WeightedPool variant proposed by Ladhak et al. (2016), which computes the node representation as h[n] = Σ_{e ∈ in[n]} P(e) h[e]; a code sketch of this pooling follows the labels block below.", "Note that we can represent any sequential text as a linear-chain lattice, so LatticeRNN can be seen as a strict generalization of RNNs to DAG-like structures.", "This property enables us to initialize the weights in a LatticeRNN with the weights of an RNN as long as they use the same recurrent cell.", "Language models usually estimate p(X) by factorizing it into p(X) = Π_t p(x_t | X_{<t}),", "where X_{<t} = [x_1, ..., x_{t-1}] denotes the previous context.", "Training an LM is essentially asking the model to predict a distribution of the next word given the previous words.", "We extend the sequential LM analogously to lattice language modeling, where the model is expected to predict the next transitions of a node n given L_{<n}.", "The ground truth distribution is therefore defined as: p(w | L_{<n}) = P(e) if there exists e ∈ out[n] s.t. w[e] = w, and 0 otherwise.", "LatticeRNN is adopted as the backbone of our lattice language model.", "Since the node representation h[n] encodes all information of L_{<n}, we pass h[n] to a linear decoder to obtain the distribution of next transitions: p(w | h[n]) = softmax(W^T h[n]), [Figure 2: Illustration of the proposed framework.]", "where θ denotes the parameters of the LatticeRNN and W denotes the trainable parameters of the decoder.", "We train our lattice language model by minimizing the KL divergence between the ground truth distribution p(w | L_{<n}) and the predicted distribution p(w | h[n]).", "Note that the objective for training a sequential LM is a special case of the lattice language modeling objective defined above, where the inputs are linear-chain lattices.", "Hence, a sequential LM can be viewed as a lattice LM trained on linear-chain lattices only.", "This property inspires us to pre-train our lattice LM in a 2-stage fashion described below.", "Inspired by ULMFiT (Howard and Ruder, 2018), we propose a two-stage pre-training method to train our lattice language model.", "The proposed method is illustrated in Figure 2. 
Stage 1: Pre-train on sequential texts. In the first stage, we follow the recent trend of pre-trained LMs by pre-training a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) LM on a general domain text corpus.", "Here the cell architecture is the same as ELMo (Peters et al., 2018).", "Stage 2: Pre-train on lattices. In this stage, we use a bidirectional LatticeLSTM with the same cell architecture as the LSTM pre-trained in the previous stage.", "Note that in the backward direction we use reversed lattices as input.", "We initialize the weights of the LatticeLSTM with the weights of the pre-trained LSTM.", "The LatticeLSTM is further pre-trained on lattices from the training set of the target task with the lattice language modeling objective described above.", "We consider this two-stage method more approachable and efficient than directly pre-training a lattice LM on large amounts of lattices because 1) general domain written data is much easier to collect than lattices, which require spoken data, and 2) LatticeRNNs are considered less efficient than RNNs due to the difficulty of parallelization in computing.", "After pre-training, our model is capable of providing representations for lattices.", "Following Peters et al. (2018), the pre-trained lattice LM is used to produce contextualized node embeddings for downstream classification tasks, as illustrated in the right part of Figure 2. We use the same strategy as Peters et al. (2018) to linearly combine the hidden states from different layers into a representation for each node.", "The classifier is a newly added 2-layer LatticeLSTM, which takes the node representations as input, followed by max-pooling over nodes, a linear layer and finally a softmax layer.", "We use the cross entropy loss to train the classifier on each target classification task.", "Note that the parameters of the pre-trained lattice LM are fixed during this stage.", "In order to evaluate the quality of the pre-trained lattice LM, we conduct the experiments for two common tasks in spoken language understanding.", "Intent detection and dialogue act recognition are two common tasks in spoken language understanding.", "The benchmark datasets used for intent detection are ATIS (Airline Travel Information Systems) (Hemphill et al., 1990; Dahl et al., 1994; Tur et al., 2010) and SNIPS (Coucke et al., 2018).", "We use the NXT-format of the Switchboard (Stolcke et al., 2000) Dialogue Act Corpus (SWDA) (Calhoun et al., 2010) and the ICSI Meeting Recorder Dialogue Act Corpus (MRDA) (Shriberg et al., 2004) for benchmarking dialogue act recognition.", "The SNIPS corpus only contains written text, so we synthesize a spoken version of the dataset using a commercial text-to-speech service.", "We use an ASR system trained on WSJ (Paul and Baker, 1992) with Kaldi (Povey et al., 2011) to transcribe ATIS, and an ASR system released by Kaldi to transcribe the other datasets.", "The statistics of the datasets are summarized in Table 1. All tasks are evaluated with overall classification accuracy.", "In order to conduct a fair comparison with ELMo (Peters et al., 2018), we directly adopt their pre-trained model as our pre-trained sequential LM.", "The hidden size of the LatticeLSTM classifier is set to 300.", "We use Adam as the optimizer with learning rate 0.0001 for LM pre-training and 0.001 for training the classifier.", "The checkpoint with the best validation accuracy is used for evaluation.", "The results in terms of the classification accuracy are shown in Table 2. 
All reported numbers are averaged over at least three training runs.", "Rows", "(a) and", "(b) can be considered as the performance upper bound, where we use manual transcripts to train and evaluate the models.", "We also use BERT-base (Devlin et al., 2019) as a strong baseline, which takes ASR 1-best as the input (row", "(g)).", "Compared with the results on manual transcripts, using ASR results largely degrades the performance due to recognition errors, as shown in rows", "(e)-(g).", "In addition, adding pre-trained ELMo embeddings brings consistent improvement over the biLSTM baseline, except for SNIPS when using manual transcripts (row", "(b)).", "The baseline models trained on ASR 1-best are also evaluated on lattice oracle paths.", "We report the results as the performance upper bound for the baseline models (rows", "(c)-(d)).", "In the lattice setting, the baseline bidirectional LatticeLSTM (Ladhak et al., 2016) (row", "(h)) consistently outperforms the biLSTM with 1-best input (row", "(e)), demonstrating the importance of taking lattices into account.", "Our proposed method achieves the best results on all datasets except for ATIS", "(row (i)), with relative error reduction ranging from 3.2% to 42% compared to biLSTM+ELMo", "(row (f)).", "The proposed method also achieves performance comparable to BERT-base on ATIS.", "We perform an ablation study for the proposed two-stage pre-training method and report the results in rows", "(j) and", "(k).", "It is clear that skipping either stage degrades the performance on all datasets, demonstrating that both stages are crucial in the proposed framework.", "We also evaluate the proposed model on 1-best results (row", "(l)).", "The results show that it is still beneficial to use lattices as input after fine-tuning.", "In this paper, we propose a spoken language representation learning framework that learns contextualized representations of lattices.", "We introduce the lattice language modeling objective and a two-stage pre-training method that efficiently trains a neural lattice language model to provide the downstream tasks with contextualized lattice representations.", "The experiments show that our proposed framework is capable of providing high-quality representations of lattices, yielding consistent improvement on SLU tasks.", "We thank the reviewers for their insightful comments.", "This work was financially supported by the Young Scholar Fellowship Program of the Ministry of Science and Technology (MOST) in Taiwan, under Grant 109-2636-E-002-026." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other" ]
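The WeightedPool node representation used by the lattice LM framework above, h[n] = Σ_{e ∈ in[n]} P(e) h[e], can be sketched in a few lines. This is a minimal illustration assuming a single tanh layer in place of the paper's GRU/LSTM cell and nodes numbered in topological order; the class and function names are invented for this sketch, not taken from the authors' code.

```python
import numpy as np

class Transition:
    # e = {prev[e], next[e], w[e], P(e)}: an edge-labeled, weighted transition.
    def __init__(self, prev, nxt, word_vec, prob):
        self.prev, self.nxt, self.word_vec, self.prob = prev, nxt, word_vec, prob

def lattice_forward(num_nodes, transitions, dim, rng):
    # Stand-in recurrent parameters; a real LatticeRNN would use a GRU/LSTM cell.
    W_h = 0.1 * rng.standard_normal((dim, dim))
    W_x = 0.1 * rng.standard_normal((dim, dim))
    h_node = [np.zeros(dim) for _ in range(num_nodes)]
    # One left-to-right sweep visits prev[e] before next[e], because nodes
    # are assumed to be numbered in topological order.
    for n in range(1, num_nodes):
        incoming = [e for e in transitions if e.nxt == n]
        # New hidden state h[e] for each incoming transition, from h[prev[e]] and w[e].
        h_edges = [np.tanh(W_h @ h_node[e.prev] + W_x @ e.word_vec) for e in incoming]
        # WeightedPool: h[n] = sum over incoming e of P(e) * h[e].
        h_node[n] = sum(e.prob * h for e, h in zip(incoming, h_edges))
    return h_node

# Toy 3-node lattice: two alternative words from node 0 to node 1, then one word to node 2.
rng = np.random.default_rng(0)
edges = [Transition(0, 1, rng.standard_normal(4), 0.8),
         Transition(0, 1, rng.standard_normal(4), 0.2),
         Transition(1, 2, rng.standard_normal(4), 1.0)]
print(lattice_forward(3, edges, dim=4, rng=rng)[2])
```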
[ "Morphological segmentation for polysynthetic languages is challenging, because a word may consist of many individual morphemes and training data can be extremely scarce.", "Since neural sequence-to-sequence (seq2seq) models define the state of the art for morphological segmentation in high-resource settings and for (mostly) European languages, we first show that they also obtain competitive performance for Mexican polysynthetic languages in minimal-resource settings.", "We then propose two novel multi-task training approaches, one with and one without need for external unlabeled resources, and two corresponding data augmentation methods, improving over the neural baseline for all languages.", "Finally, we explore cross-lingual transfer as a third way to fortify our neural model and show that we can train one single multi-lingual model for related languages while maintaining comparable or even improved performance, thus reducing the amount of parameters by close to 75%.", "We provide our morphological segmentation datasets for Mexicanero, Nahuatl, Wixarika and Yorem Nokki for future research.", "Due to the advent of computing technologies to indigenous communities all over the world, natural language processing (NLP) applications", "for languages with limited computer-readable textual data are getting increasingly important.", "This contrasts with current research, which focuses strongly on approaches which require large amounts of training data, e.g., deep neural networks.", "Those are not trivially applicable to minimal-resource settings with less than 1,000 available training examples.", "We aim at closing this gap for morphological surface segmentation, the task of splitting a word into the surface forms of its smallest meaning-bearing units, its morphemes.", "Recovering morphemes provides information about unknown words and is thus especially important for polysynthetic languages with a high morpheme-to-word ratio and a consequently large overall number of words.", "To illustrate how segmentation helps understanding unknown multiple-morpheme words, consider an example in this paper's language of writing: even if the word unconditionally did not appear in a given training corpus, its meaning could still be derived from a combination of its morphs un, condition, al and ly.", "Due to its importance for downstream tasks (Creutz et al., 2007; Dyer et al., 2008), segmentation has been tackled in many different ways, considering unsupervised (Creutz and Lagus, 2002), supervised (Ruokolainen et al., 2013) and semi-supervised settings (Ruokolainen et al., 2014).", "Here, we add three new questions to this line of research:", "(i) Are data-hungry neural network models applicable to segmentation of polysynthetic languages in minimal-resource settings?", "(ii) How can the performance of neural networks for surface segmentation be improved if we have only unlabeled or no external data at hand?", "(iii) Is cross-lingual transfer for this task possible between related languages?", "The last two questions are crucial: While for many languages it is difficult to obtain the number of annotated examples used in earlier work on (semi-)supervised methods, a limited amount might still be obtainable.", "We experiment on four polysynthetic Mexican languages: Mexicanero, Nahuatl, Wixarika and Yorem Nokki (details in Section 2).", "The datasets we use are, as far as we know, the first computer-readable datasets annotated for morphological segmentation in those languages.", "Our experiments show that neural 
seq2seq models perform on par with or better than other strong baselines for our polysynthetic languages in a minimal-resource setting.", "However, we further propose two novel multi-task approaches and two new data augmentation methods.", "Combining them with our neural model yields up to 5.05% absolute accuracy or 3.40% F1 improvements over our strongest baseline.", "Finally, following earlier work on cross-lingual knowledge transfer for seq2seq tasks (Johnson et al., 2017; Kann et al., 2017), we investigate training one single model for all languages, while sharing parameters.", "The resulting model performs comparably to or better than the individual models, but requires only roughly as many parameters as one single model.", "Contributions.", "To sum up, we make the following contributions:", "(i) we confirm the applicability of neural seq2seq models to morphological segmentation of polysynthetic languages in minimal-resource settings;", "(ii) we propose two novel multi-task training approaches and two novel data augmentation methods for neural segmentation models;", "(iii) we investigate the effectiveness of cross-lingual transfer between related languages; and", "(iv) we provide morphological segmentation datasets for Mexicanero, Nahuatl, Wixarika and Yorem Nokki.", "Polysynthetic languages are morphologically rich languages which are highly synthetic, i.e., single words can be composed of many individual morphemes.", "In extreme cases, entire sentences consist of only one single token, whereupon every argument of a predicate must be expressed by morphology on the word that contains that assigner (Baker, 2006).", "This property makes surface segmentation of polysynthetic languages at the same time complex and particularly relevant for further linguistic analysis.", "In this paper, we experiment on four polysynthetic languages of the Yuto-Aztecan family (Baker, 1997), with the goal of improving the performance of neural seq2seq models.", "The languages will be described in the rest of this section.", "Mexicanero is a Western Peripheral Nahuatl variant, spoken in the Mexican state of Durango by approximately one thousand people.", "This dialect is isolated from the rest of the other branches and has a strong process of Spanish stem incorporation, while also having borrowed some suffixes from that language (Vanhove et al., 2012).", "It is common to see Spanish words mixed with Nahuatl agglutinations.", "In the following example we can see an intrasentential mixing of Spanish (in uppercase) and Mexicanero: u | ni | ye MALO 'I was sick'. Nahuatl is a large subgroup of the Yuto-Aztecan language family, and, including all of its variants, the most spoken native language in Mexico.", "Its almost two million native speakers live mainly in Puebla, Guerrero, Hidalgo, Veracruz, and San Luis Potosi, but also in Oaxaca, Durango, Morelos, Mexico City, Tlaxcala, Michoacan, Nayarit and the State of Mexico.", "Three dialectical groups are known: Central Nahuatl, Occidental Nahuatl and Oriental Nahuatl.", "The data collected for this work belongs to the Oriental branch spoken by 70 thousand people in Northern Puebla.", "Like all languages of the Yuto-Aztecan family, Nahuatl is agglutinative and one word can consist of a combination of many different morphemes.", "Usually, the verb functions as the stem and gets extended by morphemes specifying, e.g., subject, patient, object or indirect object.", "The most common syntax sequence for Nahuatl is SOV.", "An example word is: o | ne | mo | 
kokowa | ya 'I was sick'. Wixarika is a language spoken in the states of Jalisco, Nayarit, Durango and Zacatecas in Central West Mexico by approximately fifty thousand people.", "It belongs to the Coracholan group of languages within the Yuto-Aztecan family.", "Wixarika has five vowels {a, e, i, +, u} with long and short variants.", "An example for a word in the language is: ne | p+ | ti | kuye | kai 'I was sick'. Like Nahuatl, it has an SOV syntax, with heavy agglutination on the verb.", "Wixarika is morphologically more complex than other languages from the same family, because it incorporates more information into the verb (Leza and López, 2006).", "This leads to a higher number of morphemes per word as can also be seen in Table 3. Yorem Nokki is part of the Taracachita subgroup of the Yuto-Aztecan language family.", "Its Southern dialect is spoken by close to forty thousand people in the Mexican states of Sinaloa and Sonora, while its Northern dialect has about twenty thousand speakers.", "In this work, we consider the Southern dialect.", "The nominal morphology of Yorem (Footnote 1: While linguists often use a dashed i) 
(2016) for high-resource settings, our approach is based on the neural seq2seq model introduced by Bahdanau et al. (2015) for machine translation.", "Encoder.", "The first part of our model is a bidirectional recurrent neural network (RNN) which encodes the input sequence, i.e., the sequence of characters of a given word w = w 1 , w 2 , . . . , w T v , represented by the corresponding embedding vectors v w 1 , ..., v w Tv .", "In particular, our encoder consists of one gated recurrent neural network (GRU) which processes the input in forward direction and a second GRU which processes the input from the opposite side.", "Encoding with this bidirectional GRU yields the forward hidden state h i = f (cid:16) h i 1 , v i (cid:17) and the backward hidden state h i = f (cid:16) h i +1 , v i (cid:17) , for a non-linear activation function f .", "Their concatenation h i = h h i ; h i i is passed on to the decoder.", "Decoder.", "The second part of our network, the decoder, is a single GRU, defining a probability distribution over strings in ( S ) , for an alphabet and a separation symbol S : p ED ( c | w ) = T c Y t =1 p ( c t | c 1 , . . . , c t 1 , w ) .", "where p ( c t | c 1 , . . . , c t 1 , w ) is computed using an attention mechanism and an output softmax S", "layer over .", "A more detailed description of the general attention-based encoder-decoder architecture can be found in the original paper by Bahdanau et al. (2015).", "In order to leverage unlabeled data or even random strings during training, we define an autoencoding auxiliary task, which consists of encoding the input and decoding an output which is identical to the original string.", "Then, our multi-task training objective is to maximize the joint log-likelihood of this auxiliary task and our segmentation main task: L ( )= X ( w,c ) T log p ( c | e ( w )) (2) + X a A log p ( a | e ( a )) T denotes the segmentation training data with examples consisting of a word w and its segmentation c .", "A denotes either a set of words in the language of the system or a set of random strings.", "The function e describes the encoder and depends on the model parameters , which are shared across the two tasks.", "For training, we use data from both sets at the same time and mark each example with an additional, task-specific input symbol.", "We treat the size of A as a hyperparameter which we optimize on the development set separately for each language.", "Values we experiment with are m times the amount of instances in the original training set, with m { 1 , 2 , 4 , 8 } .", "3 3 An exception is Yorem Nokki, for which we do not have enough unlabeled data available, such that we experiment only with m { 1 , 2 } .", "There are multiple reasons why we expect multi-task training to improve the performance of the final model.", "First, multi-task training should act as a regularizer.", "Second, for our models, the segmentation task consists in large parts of learning to copy the input character sequence to the output.", "This, however, can be learned from any string and does not require annotated segmentation boundaries.", "Third, in the case of unlabeled data (i.e., not for random strings), we expect the character language model in the decoder to improve, since it is trained on additional data.", "We denote models trained with multi-task training using unlabeled corpus data as MTT-U and models trained with multi-task training using random strings as MTT-R .", "A second option to make use of unlabeled data or random strings is to extend the available 
"A second option to make use of unlabeled data or random strings is to extend the available training data with new examples made from those.", "The main question to answer here is how to include the new data into the existing datasets.", "We do this by building new training examples in a fashion similar to the multi-task setup.", "All newly created instances are of the form $w \mapsto w$ (3), where either $w \in V$ with $V$ being the observed vocabulary of the language, e.g., words in a given unlabeled corpus, or $w \in R$ with $R$ being a set of sequences of random characters from the alphabet of the language.", "Again, we treat the amount of additional training examples as a hyperparameter which we optimize on the development set separately for each language.", "We explore $m$ times the amount of instances in the original training set, with $m \in \{1, 2, 4, 8\}$.", "The reasons why we expect our data augmentation methods to lead to better segmentation models are similar to those for multi-task training.", "We call models trained on datasets augmented with unlabeled corpus data or random strings DA-U or DA-R, respectively.", "The difference between MTT-U (resp. MTT-R) and DA-U (resp. DA-R) is a single element in the input sequence (the one representing the task).", "However, this information enables the model to handle each given instance correctly at inference time.", "As a result, it gets more robust against noisy data, which seems crucial for our way of using unlabeled corpora.", "Consider, for example, the Nahuatl word onemokokowaya.", "Training on onemokokowaya $\mapsto$ onemokokowaya will make the model learn not to segment words which consist of the morphemes o, ne, mo, kokowa, ya, which should ultimately hurt performance.", "The multi-task approach, in contrast, mitigates this problem.", "In conclusion, we expect the data augmentation approach with unlabeled data not to obtain outstanding performance, but rather consider it an important and informative baseline for the corresponding multi-task approach.", "Using random strings, the difference between the multi-task and the data augmentation approaches is less obvious: real morphemes should appear rarely enough in the created random character sequences to avoid the negative effect which we expect for corpus words.", "We thus assume that the performances of MTT-R and DA-R should be similar.", "We apply our models to the datasets described in Section 3.", "For the multi-task training and data augmentation using unlabeled data, we use (unsegmented) words from a parallel corpus collected by Gutierrez-Vasques et al. (2016) for Nahuatl and the closely related Mexicanero.", "For Wixarika we use data from Mager et al. (2018) and for Yorem Nokki we use text from Maldonado Martínez et al.
(2010).", "Now, we will describe the baselines we use to evaluate the overall performance of our approaches.", "Supervised seq2seq RNN (S2S).", "As a first baseline, we employ a fully supervised neural model without data augmentation or multi-task training, i.e., an attention-based encoder-decoder RNN (Bahdanau et al., 2015) which has been trained only on the available annotated data.", "of MORFESSOR (Kohonen et al., 2010), a wellknown morphological segmentation system.", "During training, we tune the hyperparameters for each language on the respective development set.", "The best performing model is applied to the test set.", "FlatCat (FC).", "Our next baseline is FlatCat (Gronroos et al., 2014), a variant of MORFESSOR.", "It consists of a hidden Markov model for segmentation.", "The states of the model correspond either to a word boundary and one of the four morph categories stem, prefix, suffix, and non-morpheme.", "It can work in an unsupervised way, but, similar to the previous baseline, can make effective use of small amounts of labeled data.", "CRF.", "We further compare to a conditional random fields (CRF) (Lafferty et al., 2001) model, in particular a strong discriminative model for segmentation by Ruokolainen et al. (2014).", "It reduces the task to a classification problem with four classes: beginning of a morph, middle of a morph, end of a morph and single character morph.", "Training is again semi-supervised and the model was previously reported to obtain good results for small amounts of unlabeled data (Ruoko-lainen et al., 2014), which makes it very suitable for our minimal-resource setting.", "Neural network parameters.", "All GRUs in both the encoder and the decoder have 100-dimensional hidden states.", "All embeddings are 300-dimensional.", "For training, we use ADADELTA (Zeiler, 2012) with a minibatch size of 20.", "We initialize all weights to the identity matrix and biases to zero (Le et al., 2015).", "All models are trained for a maximum of 200 epochs, but we evaluate after every 5 epochs and apply the best performing model at test time.", "Our final reported results are averaged accuracies over 5 single training runs.", "Optimizing the amount of auxiliary task data.", "The performance of our neural segmentation model in dependence of the amount of auxiliary task training data can be seen in Figure 1.", "As a general tendency across all languages, adding more data seems better, particularly for the autoencoding task with random strings.", "The only exception is Wixarika.", "auxiliary task of autoencoding corpus data are m = 4 for Mexicanero, Nahuatl and Wixarika and m = 1 for Yorem Nokki.", "For multi-task training with autoencoding of random strings we select m = 8 for Mexicanero, Nahuatl and Yorem Nokki and m = 4 for Wixarika.", "Optimizing the amount of artificial training data for data augmentation.", "Figure 2 shows the performance of the encoder-decoder depending on the amount of added artificial training data.", "In the case of random strings, again, adding more training data seems to help more.", "However, using corpus data seems to hurt performance and the more such examples we use, the worse accuracy we obtain.", "Thus, we conclude that (as expected) data augmentation with corpus data is not a good way to improve the model's performance.", "We will discuss this in more detail in 6.5.", "Even though the final conclusion should be to not add much corpus data, we apply what gives best results on the development set.", "The final configurations we thus choose 
"Optimizing the amount of auxiliary task data.", "The performance of our neural segmentation model as a function of the amount of auxiliary task training data can be seen in Figure 1.", "As a general tendency across all languages, adding more data seems better, particularly for the autoencoding task with random strings.", "The only exception is Wixarika.", "The final values we choose for the auxiliary task of autoencoding corpus data are $m = 4$ for Mexicanero, Nahuatl and Wixarika and $m = 1$ for Yorem Nokki.", "For multi-task training with autoencoding of random strings we select $m = 8$ for Mexicanero, Nahuatl and Yorem Nokki and $m = 4$ for Wixarika.", "Optimizing the amount of artificial training data for data augmentation.", "Figure 2 shows the performance of the encoder-decoder depending on the amount of added artificial training data.", "In the case of random strings, again, adding more training data seems to help more.", "However, using corpus data seems to hurt performance, and the more such examples we use, the worse accuracy we obtain.", "Thus, we conclude that (as expected) data augmentation with corpus data is not a good way to improve the model's performance.", "We will discuss this in more detail in Section 6.5.", "Even though the final conclusion should be to not add much corpus data, we apply what gives best results on the development set.", "The final configurations we thus choose for DA-U are $m = 1$ for Mexicanero, Wixarika and Yorem Nokki and $m = 2$ for Nahuatl.", "For DA-R, we select $m = 4$ for Mexicanero, Wixarika and Yorem Nokki and $m = 8$ for Nahuatl.", "Accuracy.", "First, we evaluate using accuracy on the token level.", "Thus, an example counts as correct if and only if the output of the system matches the reference solution exactly, i.e., if all output symbols are predicted correctly.", "F1.", "Our second evaluation metric is border F1, which measures how many segment boundaries are predicted correctly by the model.", "While we use this metric because it is common for segmentation tasks, it is not ideal for our models, since those are not guaranteed to preserve the input character sequence.", "We handle this problem as follows: in order to compare borders, we identify them by the position of their preceding letter, i.e., if in both the model's guess and the gold solution a segment border appears after the second character, it counts as correct.", "Wrong characters are ignored.", "Note that this comes with the disadvantage of erroneously inserted characters leading to all subsequent segment borders being counted as incorrect."
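A sketch of the border F1 computation described above follows: boundaries are identified by the position of their preceding letter, so erroneously inserted characters shift (and thus invalidate) all subsequent borders. The separator symbol "|" is assumed.

```python
def borders(segmentation: str, sep: str = "|") -> set:
    positions, i = set(), 0
    for ch in segmentation:
        if ch == sep:
            positions.add(i)   # border after the i-th character
        else:
            i += 1
    return positions

def border_f1(guess: str, gold: str) -> float:
    g, r = borders(guess), borders(gold)
    if not g or not r:
        return 0.0
    tp = len(g & r)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(g), tp / len(r)
    return 2 * precision * recall / (precision + recall)

# e.g. border_f1("ko'kore|ye|ne", "ko'kore|yene")
# -> precision 0.5, recall 1.0, F1 ~ 0.67
```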
"Table 4 shows that accuracy and F1 seem to be highly correlated for our task.", "The test results also give an answer to our first research question: the neural model S2S performs on par with CRF, the strongest baseline, for all languages but Nahuatl.", "Further, S2S and CRF both outperform MORF and FC by a wide margin.", "We may thus conclude that neural models are indeed applicable to segmentation of polysynthetic languages in a low-resource setting.", "Second, we can see that all our proposed methods except for DA-U improve over S2S, the neural baseline: the accuracy of MTT-U is between 0.0141 (Wixarika) and 0.0547 (Mexicanero) higher than S2S's.", "MTT-R improves between 0.0380 (Wixarika) and 0.0532 (Yorem Nokki).", "Finally, DA-R outperforms S2S by 0.0367 to 0.0479 accuracy for Yorem Nokki and Mexicanero, respectively.", "The overall picture when considering F1 looks similar.", "Comparing our approaches to each other, there is no clear winner.", "This might be due to differences in the unlabeled data we use: the corpus we use for Mexicanero and Nahuatl is from dialects different from both respective test sets.", "Assuming that the effect of training a language model using unlabeled data and the effect of erroneously learning to not segment words are working against each other for MTT-U, this might explain why MTT-U is best for Mexicanero and why the gap between MTT-U and MTT-R is smaller for Nahuatl than for Yorem Nokki and Wixarika.", "As mentioned before (cf. Section 5.3), a simple data augmentation method using unlabeled data should hurt performance.", "This is indeed the result of our experiments: DA-U performs worse than S2S for all languages except for Mexicanero, where the unlabeled corpus is from another language: the closely related Nahuatl.", "We thus conclude that multi-task training (instead of simple data augmentation) is crucial for the use of unlabeled data.", "Finally, our methods compare favorably to all baselines, with the exception of CRF for Nahuatl.", "While CRF is overall the strongest baseline for our considered languages, our methods outperform it by up to 0.0214 accuracy or 0.0147 F1 for Mexicanero, 0.0322 accuracy or 0.0229 F1 for Wixarika, and 0.0505 accuracy or 0.0340 F1 for Yorem Nokki.", "This shows the effectiveness of our fortified neural models for minimal-resource morphological segmentation.", "We now want to investigate the performance of one single model trained on all languages at once.", "This is done in analogy to the multi-task training described in Section 5.1.", "We treat segmentation in each language as a separate task and train an attention-based encoder-decoder model to maximize the joint log-likelihood: $\mathcal{L}(\theta) = \sum_{L_i \in L} \sum_{(w,c) \in T_{L_i}} \log p(c \mid e(w))$ (4), where $T_{L_i}$ denotes the segmentation training data in language $L_i$ and $L$ is the set of our languages.", "As before, each training example consists of a word $w$ and its segmentation $c$.", "We keep all model parameters and the training regime as described in Section 6.3.", "However, our training data now consists of a combination of all available training data for all 4 languages.", "In order to enable the model to differentiate between the tasks, we prepend one language-specific input symbol to each instance.", "This corresponds to having one embedding in the input which marks the task.", "An example training instance for Yorem Nokki is L=YN ko'koreyene $\mapsto$ ko'kore|ye|ne, where L=YN indicates the language."
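The following sketch illustrates this language-tag scheme; the tag strings other than L=YN are our own invention, chosen only to make the example self-contained.

```python
# One language tag per instance, so a single model can be trained on the
# concatenation of all four datasets. Tags other than "L=YN" are illustrative.
LANG_TAGS = {"Mexicanero": "L=MX", "Nahuatl": "L=NA",
             "Wixarika": "L=WX", "Yorem Nokki": "L=YN"}

def tag_instances(datasets):
    """datasets: dict mapping language name -> list of (word, segmentation)."""
    combined = []
    for lang, pairs in datasets.items():
        tag = LANG_TAGS[lang]
        # The tag is prepended as one extra input symbol, i.e. one embedding.
        combined += [([tag] + list(w), c) for w, c in pairs]
    return combined

data = tag_instances({"Yorem Nokki": [("ko'koreyene", "ko'kore|ye|ne")]})
# -> input sequence ['L=YN', 'k', 'o', "'", ...], target "ko'kore|ye|ne"
```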
Table 3).", "Note, however, that even for the remaining two languagesMexicanero and Yorem Nokki we hardly lose accuracy when comparing the multi-lingual to the individual models.", "Since we only use one model (instead of four), without increasing its size significantly, we thus reduce the amount of parameters by nearly 75% .", "Work on morphological segmentation was started more than 6 decades ago (Harris, 1951).", "Since then, many approaches have been developed: In the realm of unsupervised methods, two important systems are LINGUISTICS (Goldsmith, 2001) and MORFESSOR (Creutz and Lagus, 2002).", "The latter was later extended to a semi-supervised version (Kohonen et al., 2010) in order to make use of the abundance of unlabeled data which is available for many languages.", "Ruokolainen et al. (2013) focused explicitly on low-resource scenarios and applied CRFs to morphological segmentation in several languages.", "They reported better results than earlier work, including semi-supervised approaches.", "In the following year, they extended their approach to be able to use unlabeled data as well, further improving performance (Ruokolainen et al., 2014).", "Cotterell et al. (2015) trained a semi-Markov CRF (semi-CRF) (Sarawagi and Cohen, 2005) jointly on morphological segmentation, stemming and tagging.", "For the similar problem of Chinese word segmentation, Zhang and Clark (2008) trained a model jointly on part-of-speech tagging.", "However, we are not aware of any prior work on multi-task training or data augmentation for neural segmentation models.", "In fact, the two only neural seq2seq approaches for morphological segmentation we know of focused on canonical segmentation (Cotterell et al., 2016) which differs from the surface segmentation task considered here in that it restores changes to the surface form of morphemes which occurred during word formation.", "Kann et al. 
(2016) also used an encoder-decoder RNN and combined it with a neural reranker.", "While our model architecture was inspired by them, their model was purely supervised.", "Additionally, they did not investigate the applicability of their neural seq2seq model in low-resource settings or for polysynthetic languages.", "Ruzsics and Samardzic (2017) extended the standard encoder-decoder architecture for canonical segmentation to contain a language model over segments and improved results.", "However, a big difference to our work is that they still used more than ten times as much training data as we have available for the indigenous Mexican languages we are working on here.", "The authors of (2016), instead of using seq2seq models, treat the task as a sequence labeling problem and use LSTMs to classify every character either as the beginning, middle or end of a morpheme, or as a single-character morpheme.", "Cross-lingual knowledge transfer via language tags was proposed for neural seq2seq models before, both for tasks that handle sequences of words (Johnson et al., 2017) and tasks that work on sequences of characters (Kann et al., 2017).", "However, to the best of our knowledge, we are the first to try such an approach for a morphological segmentation task.", "In many other areas of NLP, cross-lingual transfer has been applied successfully, e.g., in entity recognition (Wang and Manning, 2014), language modeling (Tsvetkov et al., 2016), or parsing (Cohen et al., 2011; Søgaard, 2011; Ammar et al., 2016).", "We first investigated the applicability of neural seq2seq models to morphological surface segmentation for polysynthetic languages in minimal-resource settings, i.e., for considerably less than 1,000 training instances.", "Although they are generally thought to require large amounts of training data, neural networks obtained an accuracy comparable to or higher than several strong baselines.", "Subsequently, we proposed two novel multi-task training approaches and two novel data augmentation methods to further increase the performance of our neural models.", "Adding those, we improved over the neural baseline for all languages, and for Mexicanero, Wixarika and Yorem Nokki our final models outperformed all baselines by up to 5.05% absolute accuracy or 3.40% F1.", "Furthermore, we explored cross-lingual transfer between our languages and reduced the amount of necessary model parameters by about 75%, while improving performance for some of the languages.", "We publicly release our datasets for morphological surface segmentation of the polysynthetic minimal-resource languages Mexicanero, Nahuatl, Wixarika and Yorem Nokki.", "We would like to thank Paulina Grnarova, Rodrigo Nogueira and Ximena Gutierrez-Vasques for their helpful feedback." ]
[ "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "objective", "result", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "other", "other", "method", "other", "other", "other", "objective", "other", "objective", "abstain", "objective", "result", "abstain", "abstain", "objective", "abstain", "other" ]
[ "It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data.", "In practice, we observe that fine-tuning a pre-trained model on a small dataset may lead to overand/or under-estimation problem.", "In this paper, we propose MC-Tailor, a novel method to alleviate the above issue in text generation tasks by truncating and transferring the probability mass from over-estimated regions to underestimated ones.", "Experiments on a variety of text generation datasets show that MC-Tailor consistently and significantly outperforms the fine-tuning approach.", "Our code is available at https://github.com/NingMiao/ MC-tailor .", "Recently, pre-trained language models (PLM), e.g. GPT-2 (Radford et al., 2019), have shown great promise in many applications of natural language generation, such as stylized text generation (Syed et al., 2019) and dialog system (Wolf et al., 2019).", "PLM is obtained by first pre-training on large-scaled raw sentences (always general domain cor-pus), and then used in downstream tasks by fine-tuning on task-specific datasets (always from some specific domains).", "Specifically, given a pre-trained GPT-2 model, to generate sentences of email domain, we always need to fine-tune the GPT-2 on a small set of email domain corpus.", "However, we argue that to get desired sentence outputs, fine-tuning PLM on a specific domain dataset is not necessarily the best, especially when the fine-tuning dataset is of a small size.", "Typically, fine-tuning is conducted through Maximum Likelihood Estimation (MLE), with which the resulting model distribution will be asymptotically consistent with true distribution when the fine-tuning dataset has infinite data samples.", "But it is not the 1 Page .", "case of fine-tuning on small datasets, which always leads to the mismatch problem of the real and model distributions.", "Specifically, MLE minimizes the Kull-backLeibler (KL) divergence between model and true distributions.", "Theis et al. (2016) point out that minimizing KL avoids assigning an extremely small probability to any data point but assigns a lot of probability mass to non-data regions, which leads to a gap between P Real and P Model .", "Additionally, simple data patterns in the fine-tuning dataset could be easily memorized and over-estimated.", "Meanwhile, the complex ones may be under-estimated.", "The above problem is not severe with adequate data samples, but non-trivial when the size of the fine-tuning dataset is not large enough.", "(see Figure 1).", "To address the overand under-estimated problem, in this paper, we propose MC-Tailor, which can tailor the resulting density of model distribution by cutting the probability mass of over-estimated zones to under-estimated zones, leading to more realistic model distribution after fine-tuning.", "Concretely, MC-Tailor consists of two components: a ratio estimator to distinguish overand underestimated regions of model distribution; and an early rejection sampling (ERS) component to tailor (reassign) probability mass and efficiently obtain sampled sentences from the model distribution.", "Note that the proposed ERS is inspired by Sequential Monte Carlo (SMC, Doucet et al. 
(2000)), but can avoid the degeneracy of SMC, as it directly kills samples rather than performing resampling.", "We conduct experiments on various datasets to verify the effectiveness of the proposed MC-Tailor.", "Empirical results show that MC-Tailor can generate significantly better samples than fine-tuning, and the resulting model distributions of our model are closer to the real data distributions.", "Language models generally estimate the density of sentences in a real corpus in an autoregressive style: $P(x) = \prod_{i=1}^{N} P(x_i \mid x_{1:i-1})$,", "where $x$ is a sentence with length $N$.", "Recently, with an extremely large number of parameters, pre-trained language models like GPT-2 (Radford et al., 2019) and Transformer-XL (Dai et al., 2019) have shown great promise in text generation.", "PLMs are first trained on a huge general-domain dataset and then fine-tuned on specific domain datasets of different downstream tasks.", "Specifically, given a pre-trained GPT-2 model, to generate sentences of the email domain, we always need to fine-tune the GPT-2 on a small set of email-domain corpus.", "Additionally, PLMs have some other important applications.", "Miao et al. (2019) use fine-tuned language models for constrained text generation.", "Wolf et al. (2019) fine-tune GPT-2 on a dialog dataset to boost the performance of a dialog system.", "However, as stated in the Introduction, directly fine-tuning the PLM on a small dataset may lead to the mismatch problem, namely the over- and under-estimation of the true distribution by the model distribution.", "In the next section, we propose a new method to alleviate this problem.", "To mitigate the above shortcomings of fine-tuning, we propose MC-Tailor, which generates samples from a modified sample distribution.", "MC-Tailor is composed of a ratio estimator, which detects over- and under-estimated regions of the model distribution, and the Early Rejection Sampling (ERS) algorithm, which accelerates sampling while ensuring sample quality.", "A ratio estimator is a common technique to measure the gap between two related distributions (Yuxuan et al., 2020).", "In this work, we apply a ratio estimator $r(x)$ to estimating $\frac{P_{Model}(x)}{P_{True}(x)}$, the probability ratio of sentence $x$ under the fine-tuned model distribution $P_{Model}(x)$ and the true distribution $P_{True}(x)$.", "To tailor the probability from a fine-tuned PLM, we cut the probabilities of over-estimated samples.", "Specifically, when $r(x) > 1$, i.e., the model over-estimates the probability of sample $x$, we remove $x$ with a probability of $1 - \frac{1}{r(x)}$ to approximate $P_{True}(x)$.", "After normalization, the probabilities of under-estimated areas will increase correspondingly.", "The resulting new distribution is $P_{Tailor}(x) \propto \frac{P_{Model}(x)}{\max(r(x), 1)}$.", "In this work, we try several different structures of ratio estimators.", "Convolutional Ratio Estimator.", "Since ratio estimation shares similar properties with classification problems and convolutional neural networks (CNN) are powerful classifiers, our first thought is to build a CNN-based ratio estimator.", "To be concrete, we use a two-layer CNN to predict whether $x$ is from the true or the learned distribution.", "By training with the cross-entropy loss, $\mathrm{Softmax}(CNN(x)) \approx \frac{P_{Model}(x)}{P_{True}(x) + P_{Model}(x)}$."
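A minimal sketch of tailored sampling via plain rejection (the RS variant discussed below) is shown here; sample_model() and ratio() are assumed stand-ins for the fine-tuned PLM sampler and a trained ratio estimator $r(x) \approx P_{Model}(x)/P_{True}(x)$.

```python
import random

def tailor_sample(sample_model, ratio, n_samples):
    """Draw n_samples from P_Tailor by rejection over P_Model."""
    accepted = []
    while len(accepted) < n_samples:
        x = sample_model()                      # draw x ~ P_Model
        accept_prob = 1.0 / max(ratio(x), 1.0)  # keep under-estimated x intact
        if random.random() < accept_prob:
            accepted.append(x)                  # surviving x ~ P_Tailor
    return accepted
```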
"Dual Ratio Estimator.", "Though the basic convolutional ratio estimator is easy to apply, it makes sampling inefficient.", "For most sentences $x$, we can roughly predict whether they are in a specific domain or suffering from over-fitting from the first few words.", "However, $r(x)$ can only be obtained after a full sentence is generated, so massive computing resources are wasted on generating unpromising samples.", "We therefore introduce a prefix ratio estimator $r'(x_{[1:i]})$, where $r'(x_{[1:i]})$ is the minimum ratio of all sentences with prefix $x_{[1:i]}$.", "If $r'(x_{[1:i]})$ is greater than a pre-defined threshold, all sentences with prefix $x_{[1:i]}$ should be rejected.", "As a result, we do not need to waste time continuing to sample them.", "But if we directly train $r'(x_{[1:i]})$ to distinguish $P_{True}(x_{[1:i]})$ from $P_{Model}(x_{[1:i]})$, we will end up getting the average value of $r(x)$ over all sentences with prefix $x_{[1:i]}$, rather than the minimum value.", "If so, some sentences with low $r(x)$ will be erroneously rejected.", "Luckily, the properties of the min-max dual shed some light on this problem.", "We first define $r''(x) = \max_i(r'(x_{[1:i]}))$ as the dual form of $r'(x)$.", "Under some weak conditions, we can prove that if $r''(x)$ approximates $\frac{P_{Model}(x)}{P_{True}(x)}$, then $r'(x_{[1:i]})$ approximates $\min(r(x))$ over sentences $x$ with prefix $x_{[1:i]}$.", "Similar to training $r(x)$, we train $r''(x)$ by distinguishing $P_{True}(x)$ from $P_{Model}(x)$.", "Since $r''(x)$ is a function of $r'(x_{[1:i]})$, we can get a set of proper parameters for $r'(x_{[1:i]})$.", "Hierarchical Ratio Estimator.", "Since a single ratio estimator may not be powerful enough to accurately estimate $\frac{P_{Model}(x)}{P_{Real}(x)}$, we break down the workload among several estimators $r_i(x)$ in the spirit of boosting.", "We first train $r_0(x)$ to estimate $\frac{P_{Model}(x)}{P_{Real}(x)}$, and get $P^0_{Tailor}(x)$.", "Then we use $r_1(x)$ to estimate the gap between $P_{Real}$ and $P^0_{Tailor}(x)$, and so on.", "With the collaboration of the $r_i(x)$, we can get a more accurate $P^n_{Tailor}(x)$.", "Using hierarchical ratio estimators also avoids using a single but complicated ratio estimator, which is prone to over-fitting.", "Similarly, we can add hierarchy to the dual ratio estimator to make a hierarchical dual ratio estimator.", "In this part, we introduce our specially designed Early Rejection Sampling (ERS) algorithm for MC-Tailor.", "Improving on Sequential Monte Carlo, ERS can efficiently generate samples with high diversity.", "Rejection Sampling.", "By applying RS, we first generate a batch of samples from $P_{Model}$, and then reject some samples with rejection probability $1 - \frac{1}{\max(r(x), 1)}$.", "However, RS is very inefficient in actual use since it rejects samples only at the end of sampling.", "As shown in Figure 2a, lots of computation resources are wasted on ultimately rejected samples.", "Sequential Monte Carlo.", "Instead of rejecting samples at the end of sampling, SMC performs resampling at each step.", "The unnormalized resampling weight at step $i$ is given by $\frac{r'(x_{[1:i-1]})}{r'(x_{[1:i]})}$, leading to an asymptotically unbiased estimator.", "However, SMC suffers from a serious degeneracy problem.", "In other words, samples from SMC tend to share a very small number of ancestors, because most of the ancestors are killed during resampling.", "As a result, the sample diversity of SMC is critically low.", "Early Rejection Sampling.", "To overcome the degeneracy problem of SMC and increase sample diversity, we propose the Early Rejection Sampling (ERS) algorithm.", "ERS first uniformly samples a real number $r \in (0, 1)$.", "After step $i$, if $r'(x_{[1:i]}) > \frac{1}{r}$, the particle is killed immediately and computation resources are released to parallel threads.", "The main difference between ERS and RS is that ERS kills unpromising particles before they are fully generated.", "And unlike SMC, there is no correlation between ERS samples, resulting in higher sample diversity."
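The sketch below illustrates ERS under stated assumptions: step_model(prefix) is a hypothetical function extending a token prefix by one token (returning None at the end of a sentence), and prefix_ratio implements $r'(x_{[1:i]})$.

```python
import random

def ers_sample(step_model, prefix_ratio):
    """Generate one sentence with Early Rejection Sampling."""
    while True:                      # restart until a particle survives
        r = random.random()          # r ~ U(0, 1), fixed for this particle
        prefix, alive = [], True
        while alive:
            token = step_model(prefix)
            if token is None:        # full sentence generated, never killed
                return prefix
            prefix.append(token)
            if prefix_ratio(prefix) > 1.0 / r:
                alive = False        # kill early, freeing compute for
                                     # parallel threads
```

Note that killing a particle as soon as $r'(x_{[1:i]}) > 1/r$ is equivalent, for a fixed uniform $r$, to the final rejection test $r > 1/\max(r(x), 1)$ of plain RS, but the decision is made at the earliest prefix that already rules the sample out.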
parallel threads.", "The main difference between ERS and RS is that ERS kills unpromising particles before they are fully generated.", "But unlike SMC, there is no correlation between SMC samples, resulting in higher sample diversity.", "In this section, We empirically compare the sample quality of our model and baseline models.", "We first set up experiments and show results in Section 4.2.", "We conduct experiments on 9 data sets with different styles and sizes.", "And we use five different metrics, including human evaluation, to measure the generation performance of each method.", "experiments.", "Evaluation Metrics.", "To evaluate the generation quality and diversity, we use the following metrics.", "Ontonotes (Pradhan et al., 2013) is a multi-genre data set for sequence annotation.", "We use sentences from six genres (bn, bc, mz, nw, tc, wb) for the experiment.", "Switchboard (Jurafsky et al., 1997) and DailyDialog (Li et al., 2017) are large and medium scale dialog data sets, of which only responses are used for the experiment.", "IWSLT-16 (Cettolo et al., 2016) is a data set of paired conference speeches for machine translation.", "We use English sentences from De-En pairs to test model performance on the special conference speech domain.", "PPL reflects the average density of samples from test set in a generative model.", "Models with lower PPLs have more similar model distributions with real contexts.", "Unlike baseline models, MC-Tailor only has an unnormalized log-probability.", "We estimate the normalization constant of MC-Tailor by importance sampling and calculate PPLs directly from the normalized log-probability.", "Rev-PPL is a good indicator for both sample quality and diversity, which is derived by first training a language model with generated samples and calculating the PPL of test set in the language model.", "EMD-l is the earth mover distance between sentence lengths of real and generated data.", "EMD-f is the earth mover distance between word frequencies of real and generated data.", "Human Evaluation Score is added to reflect the comprehensive sample quality.", "We ask 4 volunteers to select a score from { 0, 0.5, 1 } for each sample according to their fluency and coherence with the target style.", "In 85% cases, at least three volunteers give the same score, showing the reliability of the human evaluation.", "Model Details.", "In all the experiments, we use the released GPT-2 with 117M parameters as the pre-trained language model.", "We first fine-tune GPT2 on each dataset and then build our tailor on it.", "Early-stop is applied to avoid over-fitting.", "For ratio estimators, we use simple CNNs with two convolution layers where (filter number, kernel size) is set to (10,5) and (5,5), respectively.", "Rev-PPLs of different models are shown in Table 1.", "We find that MC-Tailor significantly reduces Rev-PPLs than fine-tuning baseline in data sets of different sizes, from Ontonotes-mz with only 7k training samples to relatively large Switchboard data set with more than 200k samples.", "We also notice that multi-layer MC-Tailor ERS performs better than single-layer MC-Tailor RS , which confirms the point in Section 3.2 that the gap between P Model and P Data is too complex for a single-layer ratio estimator to estimate.", "Sample NLLs of each method (Table 2) further confirms that MC-Tailor succeeds in decreasing the probabilities of over-estimated simple patterns and reallocating them to underestimated samples.", "We further compare MC-Tailor with the baseline Refs 
"From Table 4, we find that MC-Tailor greatly reduces PPL, which means an increased probability of generating samples similar to the test samples.", "And we can draw the conclusion that the sample distributions of MC-Tailor are closer to the real sample distributions, given the lower EMD-l and EMD-f.", "What's more, the human evaluation scores of MC-Tailor are about 10% higher than those of fine-tuning, which indicates better sample quality to human eyes.", "The cases shown in Table 3 further demonstrate the advantage of MC-Tailor in fluency and informativeness.", "Seq-GAN was also compared in our experiments.", "However, the Rev-PPLs of GANs are even higher than those of directly fine-tuning GPT-2, and they are especially difficult to train.", "So we remove Seq-GAN from the baseline models.", "The acceleration effect of ERS is also verified in the experiments.", "For MC-Tailor with 1, 2, and 3 layers of ratio estimators, ERS avoids 30%, 79%, and 90% of the computation wasted on unpromising samples, achieving 1.5x, 2.8x, and 5x accelerations, respectively.", "In this paper, we propose MC-Tailor to alleviate the over- and under-estimation problem between the true and model distributions.", "MC-Tailor is composed of a ratio estimator, which adjusts the probabilities of MLE fine-tuned PLMs to approximate the true distribution, and ERS, which accelerates sampling while ensuring sample quality.", "Experiments on various datasets show the effectiveness and efficiency of MC-Tailor." ]
[ "abstain", "result", "objective", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain" ]
[ "Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method.", "As a step in this direction we study the case of representations of phonology in neural network models of spoken language.", "We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences.", "We manipulate two factors that can affect the outcome of analysis.", "First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly-initialized models.", "Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance.", "We conclude that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent results and we recommend their use as a complement to local-scope diagnostic methods.", "As end-to-end architectures based on neural networks became the tool of choice for processing speech and language, there has been increased interest in techniques for analyzing and interpreting the representations emerging in these models.", "A large array of analytical techniques have been proposed and applied to diverse tasks and architectures (Belinkov and Glass, 2019; Alishahi et al., 2019).", "Given the fast development of analysis techniques for NLP and speech processing systems, relatively few systematic studies have been conducted to compare the strengths and weaknesses of each methodology and to assess the reliability and explanatory power of their outcomes in controlled settings.", "This paper reports a step in this direction: as a case study, we examine the representation of phonology in neural network models of spoken language.", "We choose three different models that process speech signal as input, and analyze their learned neural representations.", "We use two commonly applied analytical techniques:", "(i) diagnostic models and", "(ii) representational similarity analysis to quantify to what extent neural activation patterns encode phonemes and phoneme sequences.", "In our experiments, we manipulate two important factors that can affect the outcome of analysis.", "One pitfall not always successfully avoided in work on neural representation analysis is the role of learning .", "Previous work has shown that sometimes non-trivial representations can be found in the activation patterns of randomly initialized, untrained neural networks (Zhang and Bowman, 2018; Chrupaa and Alishahi, 2019).", "Here we investigate the representations of phonology in neural models of spoken language in light of this fact, as extant studies have not properly controlled for role of learning in these representations.", "The second manipulated factor in our experiments is the scope of the extracted neural activations .", "We control for the temporal scope, probing both local activations corresponding to a few milliseconds of the speech signal, as well as global activations pooled over the whole utterance.", "When applied to global-scope representations, both analysis methods detect a robust difference between the trained and randomly initialized target models.", "However we find that in our setting, RSA applied to local representations shows 
low correlations between phonemes and neural activation patterns for both trained and randomly initialized target models, and for one of the target models the local diagnostic classifier only shows a minor difference in the decodability of phonemes from randomly initialized versus trained networks.", "This highlights the importance of reporting analysis results with randomly initialized models as a baseline.", "This paper comes with a repository which contains instructions and code to reproduce our experiments (see https://github.com/gchrupala/analyzing-analytical-methods).", "Related work.", "Analysis techniques.", "Many current neural models of language learn representations that capture useful information about the form and meaning of the linguistic input.", "Such neural representations are typically extracted from activations of various layers of a deep neural architecture trained for a target task such as automatic speech recognition or language modeling.", "A variety of analysis techniques have been proposed in the academic literature to analyze and interpret representations learned by deep learning models of language, as well as to explain their decisions; see Belinkov and Glass (2019) and Alishahi et al. (2019) for a review.", "Some of the proposed techniques aim to explain the behavior of a network by tracking the response of individual or groups of neurons to an incoming trigger (e.g., Nagamine et al., 2015; Krug et al., 2018).", "In contrast, a larger body of work is dedicated to determining what type of linguistic information is encoded in the learned representations.", "This type of analysis is the focus of our paper.", "Two commonly used approaches to analyzing representations are: probing techniques, or diagnostic classifiers, i.e. methods which use the activations from different layers of a deep learning architecture as input to a prediction model (e.g., Adi et al., 2017; Alishahi et al., 2017; Hupkes et al., 2018; Conneau et al., 2018); and Representational Similarity Analysis (RSA), borrowed from neuroscience (Kriegeskorte et al., 2008) and used to correlate the similarity structures of two different representation spaces (Bouchacourt and Baroni, 2018; Chrupała and Alishahi, 2019; Abnar et al., 2019; Abdou et al., 2019).", "Research on the analysis of neural encodings of language has shown that in some cases, substantial information can be decoded from the activation patterns of randomly initialized, untrained recurrent networks.", "It has been suggested that the dynamics of the network together with the characteristics of the input signal can result in non-random activation patterns (Zhang and Bowman, 2018).", "Using activations generated by randomly initialized recurrent networks has a history in speech recognition and computer vision.", "Two better-known families of such techniques are called Echo State Networks (ESN) (Jaeger, 2001) and Liquid State Machines (LSM) (Maass et al., 2002).", "The general approach (also known as reservoir computing) is as follows: the input signal is passed through a randomly initialized network to generate a nonlinear response signal.", "This signal is then used as input to train a model to generate the desired output at a reduced cost.", "We also focus on representations from randomly initialized neural models, but do so in order to show how training a model changes the information encoded in the representations according to our chosen analysis methods.", "Since the majority of neural models of language work with text rather than speech, the bulk of
work on representation analysis has been focused on (written) word and sentence representations.", "However, a number of studies analyze neural representations of phonology learned by models that receive the speech signal as input.", "As an example of studies that track the responses of neurons to controlled input, Nagamine et al. (2015) analyze local representations acquired from a deep model of phoneme recognition and show that both individual nodes and groups of nodes in the trained network are selective to various phonetic features, including manner of articulation, place of articulation, and voicing.", "Krug et al. (2018) use a similar approach and suggest that phonemes are learned as an intermediate representation for predicting graphemes, especially in very deep layers.", "Others predominantly use diagnostic classifiers for phoneme and grapheme classification from neural representations of speech.", "In one of their experiments, Alishahi et al. (2017) use a linear classifier to predict phonemes from local activation patterns of a grounded language learning model, where images and their spoken descriptions are processed and mapped into a shared semantic space.", "Their results show that the network encodes substantial knowledge of phonology on all its layers, but most strongly on the lower recurrent layers.", "Similarly, Belinkov and Glass (2017) use diagnostic classifiers to study the encoding of phonemes in an end-to-end ASR system with convolutional and recurrent layers, by feeding local (frame-based) representations to an MLP to predict a phoneme label.", "They show that phonological information is represented best in the lowest input and convolutional layers and, to some extent, in the low-to-middle recurrent layers.", "Belinkov et al. (2019) extend their previous work to multiple languages (Arabic and English) and different datasets, and show a consistent pattern across languages and datasets where both phonemes and graphemes are encoded best in the middle recurrent layers.", "None of these studies report on phoneme classification from randomly initialized versions of their target models, and none use global (i.e., utterance-level) representations in their analyses.", "In this section we first describe the speech models which are the targets of our analyses, followed by a discussion of the methods used here to carry out these analyses.", "Transformer-ASR model.", "The first model is a transformer model (Vaswani et al., 2017) trained on the automatic speech recognition (ASR) task.", "More precisely, we used a pretrained joint CTC-Attention transformer model from the ESPnet toolkit (Watanabe et al., 2018), trained on the Librispeech dataset (Panayotov et al., 2015).", "(We used ESPnet code from commit 8fdd8e9 with the pretrained model available from tinyurl.com/r9n2ykc.)", "The architecture is based on the hybrid CTC-Attention decoding scheme presented by Watanabe et al.
(2017), but adapted to the transformer model.", "The encoder is composed of two 2D convolutional layers (with stride 2 in both time and frequency) and a linear layer, followed by 12 transformer layers, while the decoder has 6 such layers.", "The convolutional layers use 512 channels, which is also the output dimension of the linear and transformer layers.", "The dimensions of the flattened outputs of the two convolutional layers (along frequencies and channels) are 20922 and 10240, respectively; we omit these two layers in our analyses due to their excessive size.", "The input to the model is a spectrogram with 80 coefficients and 3 pitch features, augmented with the SpecAugment method (Park et al., 2019).", "The output is composed of 5000 SentencePiece subword tokens (Kudo and Richardson, 2018).", "The model is trained for 120 epochs using the optimization strategy from Vaswani et al. (2017), also known as Noam optimization.", "Decoding is performed with a beam of size 60, for reported word error rates (WER) of 2.6% and 5.7% on the test set (for the clean and other subsets, respectively).", "RNN-VGS model.", "The Visually Grounded Speech (VGS) model is trained on the task of matching images with their corresponding spoken captions, first introduced by Harwath and Glass (2015) and Harwath et al. (2016).", "We use the architecture of Merkx et al. (2019), which implemented several improvements over the RNN model of Chrupała et al. (2017), and train it on the Flickr8K Audio Caption Corpus (Harwath and Glass, 2015).", "The speech encoder consists of one 1D convolutional layer (with 64 output channels) which subsamples the input by a factor of two, and four bidirectional GRU layers (each of size 2048) followed by a self-attention-based pooling layer.", "The image encoder uses features from a pre-trained ResNet-152 model (He et al., 2016) followed by a linear projection.", "The loss function is a margin-based ranking objective.", "Following Merkx et al. (2019) we trained the model using the Adam optimizer (Kingma and Ba, 2015) with a cyclical learning rate schedule (Smith, 2017).", "The input consists of MFCC features with total energy and delta and double-delta coefficients, with a combined size of 39.", "RNN-ASR model.", "This model is a middle ground between the two previous ones.", "It is trained as a speech recognizer similarly to the transformer model, but the architecture of the encoder follows the RNN-VGS model (except that the recurrent layers are one-directional in order to fit the model in GPU memory).", "The last GRU layer of the encoder is fed to the attention-based decoder from Bahdanau et al.
(2015), here composed of a single layer of 1024 GRU units.", "The model is trained with the Adadelta optimizer (Zeiler, 2012).", "The input features are identical to the ones used for the VGS model; the model is also trained on the Flickr8k spoken caption data, using the original written captions as transcriptions.", "The architecture of this model is not optimized for the speech recognition task: rather, it is designed to be as similar as possible to the RNN-VGS model while still performing reasonably on speech recognition (WER of 24.4% on the Flickr8k validation set with a beam of size 10).", "We consider two analytical approaches:", "Diagnostic model: a simple, often linear, classifier or regressor trained to predict some information of interest given neural activation patterns.", "To the extent that the model successfully decodes the information, we conclude that this information is present in the neural representations.", "Representational similarity analysis (RSA): a second-order approach where similarities between pairs of stimuli are measured in two representation spaces, e.g. the neural activation pattern space and a space of symbolic linguistic representations such as sequences of phonemes or syntax trees (see Chrupała and Alishahi, 2019).", "Then the correlation between these pairwise similarity measurements quantifies how much the two representations are aligned.", "The diagnostic models have trainable parameters while the RSA-based models do not, except when using a trainable pooling operation.", "We also consider two ways of viewing activation patterns in hidden layers as representations: local representations at the level of a single frame or time-step, and global representations at the level of the whole utterance.", "Local diagnostic classifier.", "We use single frames of input (MFCC or spectrogram) features, or activations at a single timestep, as input to a logistic diagnostic classifier which is trained to predict the phoneme aligned to this frame or timestep.", "Local RSA.", "We compute two sets of similarity scores.", "For neural representations, these are cosine similarities between neural activations from pairs of frames.", "For phonemic representations our similarities are binary, indicating whether a pair of frames is labeled with the same phoneme.", "Pearson's $r$ coefficient computed against a binary variable, as in our setting, is also known as the point biserial correlation.", "Global diagnostic classifier.", "We train a linear diagnostic classifier to predict the presence of phonemes in an utterance based on global (pooled) neural activations.", "For each phoneme $j$ the predicted probability that it is present in the utterance with representation $h$ is denoted as $P(j \mid h)$ and computed as: $P(j \mid h) = \mathrm{sigmoid}(W \mathrm{Pool}(h) + a)_j$ (1), where Pool is one of the pooling functions in Section 3.2.1.", "Global RSA.", "We compute pairwise similarity scores between global (pooled; see Section 3.2.1) representations and measure Pearson's $r$ against the pairwise string similarities between the phonemic transcriptions of utterances.", "We define string similarity as: $\mathrm{sim}(a, b) = 1 - \frac{\mathrm{Levenshtein}(a, b)}{\max(|a|, |b|)}$ (2), where $|\cdot|$ denotes string length and Levenshtein is the string edit distance."
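A sketch of global RSA as just defined follows: pairwise cosine similarities of pooled activations are correlated (Pearson's r) with pairwise string similarities of phonemic transcriptions. For simplicity the sketch uses all upper-triangle pairs, whereas the study samples pairs (see the sampling discussion below).

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming string edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def string_sim(a: str, b: str) -> float:
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))  # equation (2)

def global_rsa(pooled: np.ndarray, transcriptions: list) -> float:
    """pooled: (n_utterances, dim) array of pooled activations."""
    norm = pooled / np.linalg.norm(pooled, axis=1, keepdims=True)
    cos = norm @ norm.T                              # cosine similarities
    iu = np.triu_indices(len(transcriptions), k=1)   # upper-triangle pairs
    neural = cos[iu]
    symbolic = np.array([string_sim(transcriptions[i], transcriptions[j])
                         for i, j in zip(*iu)])
    return np.corrcoef(neural, symbolic)[0, 1]       # Pearson's r
```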
time dimension.", "Attention-based pooling.", "Here we use a simple self-attention operation with parameters trained to optimize the score of interest, i.e. the RSA score or the error of the diagnostic classifier.", "The attention-based pooling operator performs a weighted average over the positions in the sequence, using scalar weights.", "The pooled utterance representation Pool ( h ) is defined as: Pool ( h ) = N (cid:88) t =1 t h t , (3) with the weights computed as: t = exp( w T h t ) (cid:80) Nj =1 exp( w T h j ) , (4) where w are learnable parameters, and h t is an input or activation vector at position t .", "3 3.3 Metrics For RSA we use Pearson's r to measure how closely the activation similarity space corresponds to the phoneme or phoneme string similarity space.", "For the diagnostic classifiers we use the relative error reduction (RER) over the majority class baseline to measure how well phoneme information can be decoded from the activations.", "Effect of learning In order to be able to assess and compare how sensitive the different methods are to the effect of learning on the activation patterns, it is important to compare the score on the trained model to that on the randomly initialized model; we thus always display the two jointly.", "We posit that a desirable property of an analytical method is that it is sensitive to the learning effect, and that the scores on trained versus randomly initialized models are clearly separated.", "Coefficient of partial determination Correlation between similarity structures of two representational spaces can, in principle, be partly due to the fact that both these spaces are correlated to a third space.", "For example, were we to get a high value for global RSA for one of the top layers of the RNN-VGS model, we might suspect that this is due to the 3 Note that the visually grounded speech models of Chrupaa et al. (2017); Chrupaa (2019); Merkx et al. 
"Metrics.", "For RSA we use Pearson's $r$ to measure how closely the activation similarity space corresponds to the phoneme or phoneme-string similarity space.", "For the diagnostic classifiers we use the relative error reduction (RER) over the majority class baseline to measure how well phoneme information can be decoded from the activations.", "Effect of learning.", "In order to be able to assess and compare how sensitive the different methods are to the effect of learning on the activation patterns, it is important to compare the score on the trained model to that on the randomly initialized model; we thus always display the two jointly.", "We posit that a desirable property of an analytical method is that it is sensitive to the learning effect, and that the scores on trained versus randomly initialized models are clearly separated.", "Coefficient of partial determination.", "Correlation between the similarity structures of two representational spaces can, in principle, be partly due to the fact that both these spaces are correlated to a third space.", "For example, were we to get a high value for global RSA for one of the top layers of the RNN-VGS model, we might suspect that this is due to the fact that string similarities between the phonemic transcriptions of captions are correlated to the visual similarities between their corresponding images, rather than due to the layer encoding phoneme strings.", "In order to control for this issue, we can carry out RSA between two spaces while controlling for a third, confounding, similarity space.", "We do this by computing the coefficient of partial determination, defined as the relative reduction in error caused by including variable $X$ in a linear regression model for $Y$: $R^2_{\mathrm{partial}}(Y, X \mid Z) = \frac{e_{Y \sim Z} - e_{Y \sim X+Z}}{e_{Y \sim Z}}$ (5), where $e_{Y \sim X+Z}$ is the sum of squared errors of the model with all variables, and $e_{Y \sim Z}$ is the sum of squared errors of the model with $X$ removed.", "Given the scenario above with the confounding space being visual similarity, we identify $Y$ with the pairwise similarities in phoneme string space, $X$ with the similarities in neural activation space, and $Z$ with the similarities in the visual space.", "The visual similarities are computed via cosine similarity on the image feature vectors corresponding to the stimulus utterances."
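A sketch of equation (5) in NumPy follows, with y, x, z assumed to be flattened vectors of pairwise similarities in the phoneme-string, neural, and visual spaces, respectively.

```python
import numpy as np

def sse(y: np.ndarray, X: np.ndarray) -> float:
    """Sum of squared errors of an ordinary least-squares fit of y on X."""
    X1 = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid)

def r2_partial(y, x, z):
    """Relative error reduction from adding x to a regression of y on z."""
    e_yz = sse(y, z.reshape(-1, 1))                     # model without X
    e_yxz = sse(y, np.column_stack([x, z]))             # full model
    return (e_yz - e_yxz) / e_yz                        # equation (5)
```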
"All analytical methods are implemented in PyTorch (Paszke et al., 2019).", "The diagnostic classifiers are trained using Adam with a learning rate schedule which is scaled by 0.1 after 10 epochs with no improvement in accuracy.", "We terminate training after 50 epochs with no improvement.", "Global RSA with attention-based pooling is trained using Adam for 60 epochs with a fixed learning rate (0.001).", "For all trainable models we snapshot model parameters after every epoch and report the results for the epoch with the best validation score.", "In all cases we sample half of the available data for training (if applicable), holding out the other half for validation.", "Sampling data for local RSA. When computing RSA scores it is common practice in neuroscience research to use the whole upper triangular part of the matrices containing pairwise similarity scores between stimuli, presumably because the number of stimuli is typically small in that setting.", "In our case the number of stimuli is very large, which makes using all the pairwise similarities computationally taxing.", "More importantly, when each stimulus is used for computing multiple similarity scores, these scores are not independent, and the score distribution changes with the number of stimuli.", "Figures 1–3 display the outcome of analyzing our target models.", "All three figures are organized in a 2 × 3 matrix of panels, with the top row showing the diagnostic methods and the bottom row the RSA methods; the first column corresponds to local scope; columns two and three show global scope with mean and attention pooling respectively.", "The data points are displayed in the order of the hierarchy of layers for each architecture, starting with the input (layer id = 0).", "In all the reported experiments, the score of the diagnostic classifiers corresponds to relative error reduction (RER), whereas for RSA we show Pearson's correlation coefficient.", "For methods with trainable parameters we show three separate runs with different random seeds in order to illustrate the variability due to parameter initialization.", "Figure 4 shows the results of global RSA with mean pooling on the RNN-VGS target model, while controlling for visual similarity as a confound.", "We will discuss the patterns of results observed for each model separately in the following sections.", "As can be seen in Figure 1, most reported experiments (with the exception of the local RSA) suggest that phonemes are best encoded in pre-final layers of the deep network.", "The results also show a strong impact of learning on the predictions of the analytical methods, as is evident from the difference between the performance using representations of the trained versus randomly initialized models.", "Local RSA shows low correlation values overall, and does not separate the trained versus random conditions well.", "Most experimental findings displayed in Figure 2 suggest that phonemes are best encoded in RNN layers 3 and 4 of the VGS model.", "They also show that the representations extracted from the trained model encode phonemes more strongly than the ones from the random version of the model.", "However, the impact of learning is more salient with global than local scope: the scores of both the local classifier and local RSA on random vs. trained representations are close to each other for all layers.", "For the global representations the performance on trained representations quickly diverges from the random representations from the first RNN layer onward.", "Furthermore, as demonstrated in Figure 4, for top RNN layers of this architecture, the correlation between similarities in the neural activation space and the similarities in the phoneme string space is not solely due to both being correlated to visual similarities: indeed similarities in activation space contribute substantially to predicting string similarities, over and above the visual similarities.", "The overall qualitative patterns for this target model are the same as for RNN-VGS.", "The absolute scores for the global diagnostic variants are higher, and the curves steeper, which may reflect that the objective for this target model is more closely aligned with encoding phonemes than in the case of RNN-VGS.", "In the case of the local diagnostic setting there is a marked contrast between the behavior of the RNN models on the one hand and the Transformer model on the other: the encoding of phoneme information for the randomly initialized RNN is substantially stronger in the higher layers, while for the randomly initialized Transformer the curve is flat.", "This difference is likely due to the very different connectivity in these two architectures.", "With random weights in RNN layer i, the activations at time t are a function of the features from layer i − 1 at time t, mixed with the features from layer i at time t − 1.", "There are thus effects of depth that may make it easier for a linear diagnostic classifier to classify phonemes from the activations of a randomly initialized RNN: (i) features are recombined among themselves, and (ii) local context features are also mixed into the activations.", "The Transformer architecture, on the other hand, does not have the local recurrent connectivity: at each timestep t the activations are a combination of all the other timesteps already in the first layer, so with random weights, the activations are close to random, and the amount of information does not increase with layer depth.", "In the global case, in the activations from random RNNs, pooling across time has the effect of averaging out the vectors such that they are around zero, which makes them uninformative for the global classifier; this does not happen to trained RNN activations.",
"Figure 1: Results of diagnostic and RSA analytical methods applied to the Transformer-ASR model.", "Figure 5 illustrates this point by showing the standard deviations of vectors of mean-pooled activations of each utterance processed by the RNN-VGS model for the randomly initialized and trained conditions, for the recurrent layers.", "Only the RNN layers are shown, as the different scale of activations in different layer types would otherwise obscure the pattern.", "Figure 5: Standard deviation of pooled activations of the RNN layers for the RNN-VGS model.", "Here we discuss the impact of each factor in the outcome of our analyses.", "Choice of method.", "The choice of RSA versus diagnostic classifier interacts with scope, and thus these are better considered as a combination.", "Specifically, local RSA as implemented in this study shows only weak correlations between neural activations and phoneme labels.", "It is possibly related to the range restriction of point biserial correlation with unbalanced binary variables.", "Impact of learning.", "Applied to the global representations, both analytical methods are equally sensitive to learning.", "The results on random vs. trained representations for both methods start to diverge noticeably from early recurrent layers.", "The separation for the local diagnostic classifiers is weaker for the RNN models.", "Representation scope.", "Although the temporal scale of the extracted representations has not received much attention and scrutiny, our experimental findings suggest that it is an important choice.", "Specifically, global representations are more sensitive to learning, and more consistent across different analysis methods.", "Results with attention-based learned pooling are in general more erratic than with mean pooling.", "This reflects the fact that analytical models which incorporate learned pooling are more difficult to optimize and require more careful tuning compared to mean pooling.", "Given the above findings, we now offer tentative recommendations on how to carry out representational analyses of neural models.", "Analyses on randomly initialized target models should be run as a baseline.", "Most scores on these models were substantially above zero, some relatively close to scores on trained models.", "It is unwise to rely on a single analytical approach, even a widely used one such as the local diagnostic classifier.", "With solely this method we would have concluded that, in RNN models, learning has only a weak effect on the encoding of phonology.", "Global methods applied to pooled representations should be considered as a complement to standard local diagnostic methods.", "In our experiments they show more consistent results.", "In this systematic study of analysis methods for neural models of spoken language we offered some suggestions on best practices in this endeavor.", "Nevertheless our work is only a first step, and several limitations remain.", "The main challenge is that it is often difficult to completely control for the many factors of variation in the target models, due to the fact that a particular objective function, or even a dataset, may require relatively important architectural modifications.", "In future we will sample target models with a larger number of plausible combinations of factors.", "Likewise, a choice of an analytical method may often entail changes in other aspects of 
the analysis: for example, unlike a global diagnostic classifier, global RSA captures the sequential order of phonemes.", "In future work we hope to further disentangle these differences.", "Bertrand Higy was supported by a NWO/E-Science Center grant number 027.018.G03." ]
[ "abstain", "method", "method", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "method", "abstain", "abstain", "other" ]
[ "Online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded.", "Good online alignments facilitate important applications such as lexically constrained translation where user-defined dictionaries are used to inject lexical constraints into the translation model.", "We propose a novel posterior alignment technique that is truly online in its execution and superior in terms of alignment error rates compared to existing methods.", "Our proposed inference technique jointly considers alignment and token probabilities in a principled manner and can be seamlessly integrated within existing constrained beam-search decoding algorithms.", "On five language pairs, including two distant language pairs, we achieve consistent drop in alignment error rates.", "When deployed on seven lexically constrained translation tasks, we achieve significant improvements in BLEU specifically around the constrained positions.", "Online alignment seeks to align a target word to a source word at the decoding step when the word is output in an auto-regressive neural translation model (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014).", "This is unlike the more popular offline alignment task that uses the entire target sentence (Och and Ney, 2003).", "State of the art methods of offline alignment based on matching of whole source and target sentences (Jalili Sabet et al., 2020; Dou and Neubig, 2021) are not applicable for online alignment where we need to commit on the alignment of a target word based on only the generated prefix thus far.", "An important application of online alignment is lexically constrained translation which allows injection of domain-specific terminology and other phrasal constraints during decoding (Hasler et al., 2018; Hokamp and Liu, 2017; Alkhouli et al., 2018; Crego et al., 2016).", "Other applications include preservation of markups between the source and target (Mller, 2017), and supporting source word edits in summarization (Shen et al., 2019).", "These applications need to infer the specific source token which aligns with output token.", "Thus, alignment and translation is to be done simultaneously.", "Existing online alignment methods can be categorized into Prior and Posterior alignment methods.", "Prior alignment methods (Garg et al., 2019; Song et al., 2020) extract alignment based on the attention at time step t when outputting token y t .", "The attention probabilities at time-step t are conditioned on tokens output before time t .", "Thus, the alignment is estimated prior to observing y t .", "Naturally, the quality of alignment can be improved if we condition on the target token y t (Shankar and Sarawagi, 2019).", "This motivated Chen et al. 
(2020) to propose a posterior alignment method where alignment is calculated from the attention probabilities at the next decoder step t + 1.", "While alignment quality improved as a result, their method is not truly online since it does not generate the alignment synchronously with the token.", "The delay of one step makes it difficult and cumbersome to incorporate terminology constraints during beam decoding.", "We propose a truly online posterior alignment method that provides higher alignment accuracy than existing online methods, while also being synchronous.", "Because of that we can easily integrate posterior alignment to improve lexicon-constrained translation in state of the art constrained beam-search algorithms such as VDBA (Hu et al., 2019).", "Our method (Align-VDBA) presents a significant departure from existing papers on alignment-guided constrained translation (Chen et al., 2020; Song et al., 2020) that employ a greedy algorithm with a poor constraint satisfaction rate (CSR).", "For example, on ja→en their CSR is 20 points lower than ours.", "Moreover, the latter does not benefit from larger beam sizes, unlike VDBA-based methods that significantly improve with larger beam widths.", "Compared to Chen et al. (2020), our method improves average overall BLEU scores by 1.2 points and average BLEU scores around the constrained span by up to 9 points.", "In the evaluations performed in these earlier works, VDBA was not allocated the slightly higher beam size needed to pro-actively enforce constraints without compromising BLEU.", "Compared to Hu et al. (2019) (VDBA), this paper's contributions include online alignments and their use in more fluent constraint placement and efficient allocation of beams.", "Contributions: A truly online posterior alignment method that integrates into existing NMT systems via a trainable light-weight module.", "Higher online alignment accuracy on five language pairs, including two distant language pairs, where we improve over the best existing method in seven out of ten translation tasks.", "A principled method of modifying VDBA to incorporate posterior alignment probabilities in lexically-constrained decoding.", "VDBA enforces constraints ignoring source alignments; our change (Align-VDBA) leads to more fluent constraint placement and a significant BLEU increase, particularly for smaller beams.", "Establishing that VDBA-based pro-active constrained inference should be preferred over prevailing greedy alignment-guided inference (Chen et al., 2021; Song et al., 2020).", "Further, VDBA and our Align-VDBA inference with beam size 10 provide a 1.2 BLEU increase over these methods with the same beam size.", "Given a sentence x = x_1, …, x_S in the source language and a sentence y = y_1, …, y_T in the target language, an alignment A between the word strings is a subset of the Cartesian product of the word positions (Brown et al., 1993; Och and Ney, 2003): A ⊆ {(s, t) : s = 1, …, S; t = 1, …, T}, such that the aligned words can be considered translations of each other.", "An online alignment at time-step t commits on the alignment of the t-th output token conditioned only on x and y_<t = y_1, y_2, …, y_{t−1}.",
"Additionally, if the token y_t is also available, we call it a posterior online alignment.", "We seek to embed online alignment within existing NMT systems.", "We will first briefly describe the architecture of state-of-the-art NMT systems.", "We will then elaborate on how alignments are computed from attention distributions in prior work and highlight some limitations, before describing our proposed approach.", "Transformers (Vaswani et al., 2017) adopt the popular encoder-decoder paradigm used for sequence-to-sequence modeling (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015).", "The encoder and decoder are both multi-layered networks, with each layer consisting of a multi-headed self-attention and a feedforward module.", "The decoder layers additionally use multi-headed attention over encoder states.", "We elaborate on this mechanism next since it plays an important role in alignments.", "2.1.1 Decoder-Encoder Attention in NMTs", "The encoder transforms the S input tokens into a sequence of token representations H ∈ R^{S×d}.", "Each decoder layer (indexed by ℓ ∈ {1, …, L}) computes multi-head attention over H by aggregating outputs from a set of independent attention heads.", "The attention output from a single head n ∈ {1, …, η}, where η denotes the number of heads, in decoder layer ℓ is computed as follows.", "Let the output of the self-attention sub-layer in decoder layer ℓ at the t-th target token be denoted as g_t^ℓ.", "Using three projection matrices W_Q^{ℓ,n}, W_V^{ℓ,n}, W_K^{ℓ,n} ∈ R^{d×d_n}, the query vector q_t^{ℓ,n} ∈ R^{1×d_n} and the key and value matrices K^{ℓ,n} ∈ R^{S×d_n} and V^{ℓ,n} ∈ R^{S×d_n} are computed using the following projections: q_t^{ℓ,n} = g_t^ℓ W_Q^{ℓ,n}, K^{ℓ,n} = H W_K^{ℓ,n}, and V^{ℓ,n} = H W_V^{ℓ,n}.", "These are used to calculate the attention output from head n, Z_t^{ℓ,n} = P(a_t^{ℓ,n} | x, y_<t) V^{ℓ,n}, where: P(a_t^{ℓ,n} | x, y_<t) = softmax(q_t^{ℓ,n} (K^{ℓ,n})^⊤ / √d). (1)", "For brevity, the conditioning on x, y_<t is dropped and P(a_t^{ℓ,n}) is used to refer to P(a_t^{ℓ,n} | x, y_<t) in the following sections.", "Finally, the multi-head attention output is given by [Z_t^{ℓ,1}, …, Z_t^{ℓ,η}] W_O, where [·] denotes the column-wise concatenation of matrices and W_O ∈ R^{d×d} is an output projection matrix.", "Several prior works have proposed to extract word alignments from the above attention probabilities.", "For example, Garg et al. (2019) propose a simple method called NAIVEATT that aligns a source word to the t-th target token using argmax_j (1/η) Σ_{n=1}^{η} P(a_{t,j}^{ℓ,n} | x, y_<t), where j indexes the source tokens.", "In NAIVEATT, we note that the attention probabilities P(a_{t,j}^{ℓ,n} | x, y_<t) at decoding step t are not conditioned on the current output token y_t.", "Alignment quality would benefit from conditioning on y_t as well.",
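The NAIVEATT extraction just described amounts to a few lines of PyTorch; a sketch, with the layer choice left as a tunable assumption.

```python
import torch

def naive_att_alignment(attn, layer=-2):
    """NAIVEATT-style extraction: align each target step t to the source
    position with the highest head-averaged cross-attention.
    attn: (L, n_heads, T, S) tensor of P(a_t^{l,n} | x, y_<t);
    `layer` picks the layer used for extraction (an assumption; which
    layer works best is an empirical choice)."""
    probs = attn[layer].mean(dim=0)   # (T, S): (1/eta) * sum_n P(a)
    return probs.argmax(dim=-1)       # source index aligned to each target step
```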
"This observation prompted Chen et al. (2020) to extract the alignment of token y_t using the attention P(a_{t,j}^{ℓ,n} | x, y_{≤t}) computed at time step t + 1.", "The asynchronicity inherent to this shift-by-one approach (SHIFTATT) makes it difficult and more computationally expensive to incorporate lexical constraints during beam decoding.", "We propose POSTALN, which produces posterior alignments synchronously with the output tokens, while being more computationally efficient compared to previous approaches like SHIFTATT.", "We incorporate a lightweight alignment module to convert prior attention to posterior alignments in the same decoding step as the output.", "Figure 1 illustrates how this alignment module fits within the standard Transformer architecture.", "Figure 1: Our alignment module is an encoder-decoder attention sub-layer, similar to the existing cross-attention sub-layer.", "The alignment module is placed at the penultimate decoder layer ℓ = L − 1 and takes as input (1) the encoder output H, (2) the output g_t^ℓ of the self-attention sub-layer of decoder layer ℓ, and (3) the embedding e(y_t) of the decoded token.", "Like in standard attention, it projects H to obtain a key matrix, but to obtain the query matrix it uses both the decoder state g_t^ℓ (that summarizes y_<t) and e(y_t) to compute the posterior alignment P(a_t^post) as: P(a_t^post) = (1/η) Σ_{n=1}^{η} softmax(q_{t,post}^n (K_post^n)^⊤ / √d), where q_{t,post}^n = [g_t^ℓ, e(y_t)] W_{Q,post}^n and K_post^n = H W_{K,post}^n.", "Here W_{Q,post}^n ∈ R^{2d×d_n} and W_{K,post}^n ∈ R^{d×d_n}.", "This computation is synchronous with producing the target token y_t, thus making it compatible with beam search decoding (as elaborated further in Section 3).", "It also accrues minimal computational overhead, since P(a_t^post) is defined using H and g_t^{L−1}, both of which are already cached during a standard decoding pass.", "Note that if the query vector q_{t,post}^n is computed using only g_t^{L−1}, without concatenating e(y_t), then we get prior alignments that we refer to as PRIORATT.", "In our experiments, we explicitly compare PRIORATT with POSTALN to show the benefits of using y_t in deriving alignments while keeping the rest of the architecture intact.", "Training: Our posterior alignment sub-layer is trained using alignment supervision, while freezing the rest of the translation model parameters.", "Specifically, we train a total of 3d² additional parameters across the matrices W_{K,post}^n and W_{Q,post}^n.", "Since gold alignments are very tedious and expensive to create for large training datasets, alignment labels are typically obtained using existing techniques.", "We use bidirectional symmetrized SHIFTATT alignments, denoted by S_{i,j} for an alignment between the i-th target word and the j-th source word, as reference labels to train our alignment sub-layer.",
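A minimal PyTorch sketch of the POSTALN sub-layer described above (query built from [g_t; e(y_t)], keys from H, probabilities averaged over heads); parameter names and initialization are our assumptions.

```python
import torch
import torch.nn as nn

class PostAln(nn.Module):
    """Posterior alignment head: a simplified sketch of the POSTALN sub-layer."""
    def __init__(self, d, n_heads):
        super().__init__()
        d_n = d // n_heads
        self.W_Q = nn.Parameter(torch.randn(n_heads, 2 * d, d_n) * 0.02)
        self.W_K = nn.Parameter(torch.randn(n_heads, d, d_n) * 0.02)

    def forward(self, H, g_t, e_yt):
        # H: (S, d) encoder output; g_t, e_yt: (d,) decoder state and token emb
        query_in = torch.cat([g_t, e_yt], dim=-1)      # (2d,): [g_t; e(y_t)]
        probs = []
        for W_Q, W_K in zip(self.W_Q, self.W_K):
            q = query_in @ W_Q                         # (d_n,)
            K = H @ W_K                                # (S, d_n)
            probs.append(torch.softmax(K @ q / q.shape[-1] ** 0.5, dim=-1))
        return torch.stack(probs).mean(dim=0)          # P(a_t^post): (S,)
```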
"Then the objective (following Garg et al. (2019)) can be defined as maximizing, over W_{Q,post}^n and W_{K,post}^n, the quantity (1/T) Σ_{i=1}^{T} Σ_{j=1}^{S} S_{i,j} log P(a_{i,j}^post | x, y_{≤i}).", "Next, we demonstrate the role of posterior online alignments on an important downstream task.", "In the lexicon-constrained translation task, for each to-be-translated sentence x, we are given a set of source text spans and the corresponding target tokens in the translation.", "A constraint C_j comprises a pair (C_j^x, C_j^y), where C_j^x = (p_j, p_j + 1, …, p_j + ℓ_j) indicates input token positions, and C_j^y = (y_1^j, y_2^j, …, y_{m_j}^j) denotes target tokens that are translations of the input tokens x_{p_j}, …, x_{p_j+ℓ_j}.", "For the output tokens we do not know their positions in the target sentence.", "The different constraints are non-overlapping and each is expected to be used exactly once.", "The goal is to translate the given sentence x and satisfy as many constraints in C = ∪_j C_j as possible, while ensuring fluent and correct translations.", "Since the constraints do not specify target token positions, it is natural to use online alignments to guide when a particular constraint is to be enforced.", "Existing inference algorithms for incorporating lexicon constraints differ in how pro-actively they enforce the constraints.", "A passive method is used in Song et al. (2020), where constraints are enforced only when the prior alignment is at a constrained source span.", "Specifically, if at decoding step t, i = argmax_{i'} P(a_{t,i'}) is present in some constraint C_j^x, the output token is fixed to the first token y_1^j from C_j^y.", "Otherwise, the decoding proceeds as usual.", "Also, if the translation of a constraint C_j has started, the same is completed (y_2^j through y_{m_j}^j) for the next m_j − 1 decoding steps before resuming unconstrained beam search.", "The pseudocode for this method is provided in Appendix G.", "For the posterior alignment methods of Chen et al. (2020), this leads to a rather cumbersome inference (Chen et al., 2021).", "First, at step t they predict a token y_t, then start decoding step t + 1 with y_t as input to compute the posterior alignment from attention at step t + 1.", "If the maximum alignment is to the constrained source span C_j^x, they revise the output token to be y_1^j from C_j^y, but the output score for further beam search continues to be that of y_t.", "In this process both the posterior alignment and token probabilities are misrepresented, since they are both based on y_t instead of the finally output token y_1^j.", "The decoding step at t + 1 needs to be restarted after the revision.", "The overall algorithm continues to be normal beam search, which implies that the constraints are not enforced pro-actively.", "Many prior methods have proposed more pro-active ways of enforcing constraints, including Grid Beam Search (GBS, Hokamp and Liu (2017)), Dynamic Beam Allocation (DBA, Post and Vilar (2018)) and Vectorized Dynamic Beam Allocation (VDBA, Hu et al. (2019)).",
"The latest of these, VDBA, is efficient and available in public NMT systems (Ott et al., 2019; Hieber et al., 2020).", "Here multiple banks, each corresponding to a particular number of completed constraints, are maintained.", "At each decoding step, a hypothesis can either start a new constraint and move to a new bank, or continue in the same bank (either by not starting a constraint or progressing on a constraint mid-completion).", "This allows them to achieve near 100% enforcement.", "However, VDBA enforces the constraints by considering only the target tokens of the lexicon and totally ignores the alignment of these tokens to the source span.", "This could lead to constraints being placed at unnatural locations, leading to loss of fluency.", "Examples appear in Table 4, where we find that VDBA just attaches the constrained tokens at the end of the sentence.", "We modify VDBA with alignment probabilities to better guide constraint placement.", "The score of a constrained token is now the joint probability of the token and of the token being aligned with the corresponding constrained source span.", "Formally, if the current token y_t is part of the j-th constraint, i.e. y_t ∈ C_j^y, the generation probability of y_t, P(y_t | x, y_<t), is scaled by multiplying with the alignment probabilities of y_t with C_j^x, the source span for constraint j.", "Thus, the updated probability is given by: P(y_t, C_j^x | x, y_<t) = P(y_t | x, y_<t) · Σ_{r ∈ C_j^x} P(a_{t,r}^post | x, y_t), i.e. the joint probability is the token probability times the source alignment probability mass.", "P(y_t, C_j^x | x, y_<t) denotes the joint probability of outputting the constrained token and the alignment being on the corresponding source span.", "Since the supervision for the alignment probabilities was noisy, we found it useful to recalibrate the alignment distribution using a temperature scale T, so that the recalibrated probability is Pr(a_{t,r}^post | x, y_t)^{1/T}.", "We used T = 2, i.e., the square root of the alignment probability.", "Currently, VDBA attempts beam allocation for each unmet constraint since it has no way to discriminate.", "In Align-VDBA we allocate only when the alignment probability is greater than a threshold.", "When the beam size is small (say 5) this yields higher accuracy due to more efficient beam utilization.", "We used a threshold of 0.1 for all language pairs other than ro→en, for which a threshold of 0.3 was used.", "Further, the thresholds were used for the smaller beam size of 5 and not for the larger beam sizes of 10 and 20.", "We present the pseudocode of our modification (steps 5, 6 and 7, in blue) to DBA in Algorithm 1.", "Other details of the algorithm, including the handling of constraints and the allocation steps (step 11), are involved and we refer the reader to Post and Vilar (2018) and Hu et al. (2019) to understand these details.", "The point of this code is to show that our proposed posterior alignment method can be easily incorporated into these algorithms so as to provide a more principled scoring of constrained hypotheses in a beam than the ad hoc revision-based method of Chen et al. (2021).",
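In beam-search terms, the Align-VDBA rescoring of a constrained token can be sketched as below; the function signature is hypothetical and the clipping constant is ours.

```python
import math

def constrained_token_score(tok_logprob, post_align, span, T=2.0):
    """Joint score for a constrained token: token probability times the
    temperature-recalibrated posterior alignment mass on the constraint's
    source span (our reading of the paper's joint probability; inside VDBA
    this replaces the plain token score during beam allocation)."""
    align_mass = sum(post_align[r] ** (1.0 / T) for r in span)  # Pr^(1/T), T = 2
    return tok_logprob + math.log(max(align_mass, 1e-12))      # log joint score
```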
"Additionally, posterior alignments lead to better placement of constraints than in the original VDBA algorithm.", "We first compare our proposed posterior online alignment method on quality of alignment against existing methods in Section 4.2, and in Section 4.3 we demonstrate the impact of the improved alignment on the lexicon-constrained translation task.", "We deploy the fairseq toolkit (Ott et al., 2019) and use the transformer_iwslt_de_en pre-configured model for all our experiments.", "Other configuration parameters include: Adam optimizer with β₁ = 0.9, β₂ = 0.98, a learning rate of 5e−4 with 4000 warm-up steps, an inverse square root schedule, weight decay of 1e−4, label smoothing of 0.1, dropout probability of 0.3, and a batch size of 4500 tokens.", "The transformer models are trained for 50,000 iterations.", "Then, the alignment module is trained for 10,000 iterations, keeping the other model parameters fixed.", "A joint byte pair encoding (BPE) is learned for the source and the target languages with 10k merge operations (Sennrich et al., 2016) using subword-nmt.", "All experiments were done on a single 11GB Nvidia GeForce RTX 2080 Ti GPU on a machine with a 64 core Intel Xeon CPU and 755 GB memory.", "The vanilla Transformer models take between 15 to 20 hours to train for different datasets.", "Starting from the alignments extracted from these models, the POSTALN alignment module trains in about 3 to 6 hours depending on the dataset.", "We evaluate online alignments on ten translation tasks spanning five language pairs.", "Three of these are popular in alignment papers (Zenkel et al., 2019): German-English (de-en), English-French (en-fr), Romanian-English (ro-en).", "These are all European languages that follow the same subject-verb-object (SVO) ordering.", "We also present results on two distant language pairs, English-Hindi (en-hi) and English-Japanese (ja-en), that follow an SOV word order, which is different from the SVO order of English.", "Table 1 (number of sentence pairs for the five datasets used): Training: de-en 1.9M, en-fr 1.1M, ro-en 0.5M, en-hi 1.6M, ja-en 0.3M; Validation: 994 / 1000 / 999 / 25 / 1166; Test: 508 / 447 / 248 / 140 / 1235 (same order).", "Evaluation Method: For evaluating alignment performance, it is necessary that the target sentence is exactly the same as the one for which the gold alignments are provided.", "Thus, for the alignment experiments, we force the output token to be from the gold target and only infer the alignment.", "We then report the Alignment Error Rate (AER) (Och and Ney, 2000) between the gold alignments and the predicted alignments for different methods.", "Though our focus is online alignment, for comparison to previous works, we also report results on bidirectional symmetrized alignments in Appendix D.",
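For reference, AER is computed from sure links S, possible links P and predicted links A in the standard way (Och and Ney, 2000); this is the textbook formula, not the authors' evaluation script.

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate:
    AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|),
    with S the sure links, P ⊇ S the possible links, A the prediction.
    Each link is an (src_idx, tgt_idx) pair."""
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
```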
"Table 2 (AER for de-en, en-fr, ro-en, en-hi and ja-en language pairs; lower is better; 'Delay' is the decoding-step delay at which the alignment is produced; columns are de→en, en→de, en→fr, fr→en, ro→en, en→ro, en→hi, hi→en, ja→en, en→ja): Statistical methods (not online): GIZA++ (Och and Ney, 2003), delay End: 18.9, 19.7, 7.3, 7.0, 27.6, 28.3, 35.9, 36.4, 41.8, 39.0; FastAlign (Dyer et al., 2013), delay End: 28.4, 32.0, 16.4, 15.9, 33.8, 35.5, with no results for en-hi and ja-en. No alignment training: NAIVEATT (Garg et al., 2019), delay 0: 32.4, 40.0, 24.0, 31.2, 37.3, 33.2, 49.1, 53.8, 62.2, 63.5; SHIFTATT (Chen et al., 2020), delay +1: 20.0, 22.9, 14.7, 20.4, 26.9, 27.4, 35.3, 38.6, 53.6, 48.6. With alignment training: PRIORATT, delay 0: 23.4, 25.8, 14.0, 16.6, 29.3, 27.2, 36.4, 35.1, 52.7, 50.9; SHIFTAET (Chen et al., 2020), delay +1: 15.8, 19.5, 10.3, 10.4, 22.4, 23.7, 29.3, 29.3, 42.5, 41.9; POSTALN [Ours], delay 0: 15.5, 19.5, 9.9, 10.4, 21.8, 23.2, 28.7, 28.9, 41.2, 42.2.", "Methods compared: We compare our method with both existing statistical alignment models, namely GIZA++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), and the recent Transformer-based alignment methods of Garg et al. (2019) (NAIVEATT) and Chen et al. (2020) (SHIFTATT and SHIFTAET).", "Chen et al. (2020) also propose a variant of SHIFTATT called SHIFTAET that delays computations by one time-step as in SHIFTATT, and additionally includes a learned attention sublayer to compute alignment probabilities.", "We also present results on PRIORATT, which is similar to POSTALN but does not use y_t.", "Results: The alignment results are shown in Table 2.", "First, AERs using the statistical methods FastAlign and GIZA++ are shown.", "Here, for fair comparison, the IBM models used by GIZA++ are trained on the same sub-word units as the Transformer models, and sub-word alignments are converted to word level alignments for AER calculations.", "(GIZA++ has remained a state-of-the-art alignment technique and continues to be compared against.)", "Next, we present alignment results for two vanilla Transformer models, NAIVEATT and SHIFTATT, that do not train a separate alignment module.", "The high AER of NAIVEATT shows that attention-as-is is very distant from alignment, but posterior attention is closer to alignment than prior.", "Next we look at methods that train alignment-specific parameters: PRIORATT, a prior attention method; and SHIFTAET and POSTALN, both posterior alignment methods.", "We observe that with training, even PRIORATT has surpassed the non-trained posterior methods.", "The posterior attention methods outperform the prior attention methods by a large margin, with an improvement of 4.0 to 8.0 points.", "Within each group, the methods with a trained alignment module outperform the ones without by a huge margin.", "POSTALN performs better than or matches the performance of SHIFTAET (achieving the lowest AER in nine out of ten cases in Table 2) while avoiding the one-step delay in alignment generation.", "Even on the distant languages, POSTALN achieves significant reductions in error.", "For ja→en, we achieve a 1.3 AER reduction compared to SHIFTAET, which is not a truly online method.", "Figure 2 shows examples to illustrate the superior alignments of POSTALN compared to NAIVEATT and PRIORATT.", "We next depict the impact of improved AERs from our posterior alignment method on a downstream lexicon-constrained translation task.", "Following previous work (Hokamp and Liu, 2017; Post and Vilar, 2018; Song et al., 2020; Chen et al., 2020, 2021), we extract constraints using the gold alignments and gold translations.", "Up to three constraints of up to three words each are used for each sentence.", "Spans correctly translated by a greedy decoding are not used as constraints.", "Metrics: Following prior work (Song et al., 2020), we report BLEU (Papineni et al., 2002), time to translate all test sentences, and Constraint Satisfaction Rate (CSR).", "However, since it is trivial to get 100% CSR by always copying, we report another metric to evaluate the appropriateness of constraint placement.", "We call this measure BLEU-C and compute it as the BLEU of the constraint (when satisfied) and a window of three words around it.", "All numbers are averages over five different sets of randomly sampled constraint sets.", "The beam size is set to ten by default; results for other beam sizes appear in Appendix E.",
"Methods Compared: First we compare all the alignment methods presented in Section 4.2 on the constrained translation task using the alignment-based token-replacement algorithm of Song et al. (2020) described in Section 3.1.", "Next, we present a comparison between VDBA (Hu et al., 2019) and our modification Align-VDBA.", "Results: Table 3 shows that VDBA and our Align-VDBA, which pro-actively enforce constraints, have a much higher CSR and BLEU-C compared to the other lazy constraint enforcement methods.", "For example, for ja→en, greedy methods can only achieve a CSR of 76% compared to 96% for the VDBA-based methods.", "In terms of overall BLEU too, these methods provide an average increase in BLEU of 1.2 and an average increase in BLEU-C of 5 points.", "On average, Align-VDBA has a 0.7 point greater BLEU-C compared to VDBA.", "It also has a greater BLEU than VDBA on all the five datasets.", "In Table 9 of the Appendix we show that for the smaller beam size of 5, the gap between Align-VDBA and VDBA is even larger (2.1 points greater BLEU-C and 0.4 points greater BLEU).", "Table 4 lists some example translations by VDBA vs. Align-VDBA.", "Table 4 (excerpt): Constraints: (gesetz zur, law also), (dealer, pusher); Gold: of course, if a drug addict becomes a pusher, then it is right and necessary that he should pay and answer before the law also.", "We observe that VDBA places constraints at the end of the translated sentence (e.g., \"pusher\", \"development\"), unlike Align-VDBA.", "In some cases where constraints contain frequent words (like of, the, etc.), VDBA picks the token in the wrong position to tack on the constraint (e.g., \"strong backing of\", \"of qualified\") while Align-VDBA places the constraint correctly.", "Real World Constraints: We also evaluate our method using real world constraints extracted from the IATE and Wiktionary datasets by Dinu et al. (2019).", "Table 5 compares Align-VDBA with the soft-constraints method of Dinu et al. (2019), which requires special retraining to teach the model to copy constraints.", "We reproduced the numbers from their paper in the first three rows.", "Their baseline is almost 4 BLEU points worse than ours since they used a smaller transformer NMT model, thus making running times incomparable.", "When we compare the increment in BLEU over the respective baselines, Align-VDBA shows much greater gains of +1.2 vs. their +0.5.", "Also, Align-VDBA provides a larger CSR of 99.6 compared to their 92.", "Results for other beam sizes and other methods and metrics appear in Appendix F.", "5 Related Work", "Online Prior Alignment from NMTs: Zenkel et al. (2019) find alignments using a single-head attention submodule, optimized to predict the next token.", "Garg et al. (2019) and Song et al. (2020) supervise a single alignment head from the penultimate multi-head attention with prior alignments from GIZA++ or FastAlign.", "Bahar et al. (2020) and Shankar et al. (2018) treat alignment as a latent variable and impose a joint distribution over token and alignment while supervising on the token marginal of the joint distribution.", "Online Posterior Alignment from NMTs: Shankar and Sarawagi (2019) first identify the role of posterior attention for more accurate alignment.", "However, their NMT was a single-headed RNN.",
"Chen et al. (2020) implement posterior attention in a multi-headed Transformer but they incur a delay of one step between token output and alignment.", "We are not aware of any prior work that extracts truly online posterior alignments in modern NMTs.", "Offline Alignment Systems: Several recent methods apply only in the offline setting: Zenkel et al. (2020) extend an NMT with an alignment module; Nagata et al. (2020) frame alignment as a question answering task; and Jalili Sabet et al. (2020); Dou and Neubig (2021) leverage similarity between contextual embeddings from pretrained multilingual models (Devlin et al., 2019).", "Lexically Constrained Decoding: Hokamp and Liu (2017), Post and Vilar (2018) and Hu et al. (2019) modify beam search to ensure that target phrases from a given constrained lexicon are present in the translation.", "These methods ignore alignment with the source but ensure a high success rate for the appearance of the target phrases in the constraint.", "Song et al. (2020) and Chen et al. (2021) do consider source alignment but they do not enforce constraints, leading to lower CSR.", "Dinu et al. (2019) and Lee et al. (2021) propose alternative training strategies for constraints, whereas we focus on working with existing models.", "Recently, non-autoregressive methods have been proposed for enforcing target constraints, but they require that the constraints are given in the order they appear in the target translation (Susanto et al., 2020).", "In this paper we proposed a simple architectural modification to modern NMT systems to obtain accurate online alignments.", "The key idea that led to high alignment accuracy was conditioning on the output token.", "Further, our designed alignment module enables such conditioning to be performed synchronously with token generation.", "This property led us to Align-VDBA, a principled decoding algorithm for lexically constrained translation based on the joint distribution of target token and source alignments.", "Future work includes increasing the efficiency of constrained inference and harnessing such joint distributions for other forms of constraints, for example, nested constraints.", "Limitations: All existing methods for hard constrained inference, including ours, come with considerable runtime overheads.", "Soft constrained methods are not accurate enough.", "We are grateful to the reviewers for their detailed analysis, thoughtful comments and insightful questions which have helped us improve the paper.", "We are grateful to Priyesh Jain for providing alignment annotations for 50 English-Hindi sentences." ]
[ "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "objective", "result", "objective", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models.", "By introducing three novel components: P ointer , D isambiguator , and C opier , our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods.", "The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.", "The past several years have witnessed the remarkable success of Neural machine translation (NMT), due to the development of sequence-to-sequence methods (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017).", "Since bilingual dictionaries cover rich prior knowledge, especially of low-frequency words, many efforts have been dedicated to incorporating bilingual dictionaries into NMT systems.", "These explorations can be roughly categorized into two broad paradigms.", "The first one transforms the bilingual dictionaries into pseudo parallel sentence pairs for training (Zhang Corresponding authors. These patterns increase brake friction between tires and ground. these pattern canincrease tire and ground betweenofbrakefriction Source Input : DecoderOutput : BilingualDictionary : Disambiguate rub friction clash conflict Copy Reference : These patterns increase brake Point 1 2 3 : Figure 1: Three key steps to translate with a bilingual dictionary: pointing, disambiguating and copying. This concrete illustrative example is chosen to conveniently show the primary intuition behind our method. 
and Zong, 2016; Zhao et al., 2020).", "The second one utilizes the bilingual dictionaries as external resources fed into neural architectures (Luong et al., 2015; Gulcehre et al., 2016; Arthur et al., 2016; Zhang et al., 2017b; Zhao et al., 2018a,b, 2019b), which is more widely used and the focus of this paper.", "In practice, bilingual dictionaries usually contain more than one translation for a word.", "From a high-level perspective, we believe there are three critical steps to incorporate bilingual dictionaries into NMT models as shown in Figure 1: (1) pointing to a source word whose translation in dictionaries will be used at a decoding step, (2) disambiguating multiple translation candidates of the source word from dictionaries, and (3) copying the selected translation into the target side if necessary.", "Note that some works assume that only one translation exists for each word in dictionaries (Luong et al., 2015; Gulcehre et al., 2016).", "In this simplified scenario, the disambiguating step is unnecessary, hence the pointing and copying step can be merged into a single step similar to the classic copying mechanism (Gu et al., 2016).", "In more practical scenarios, however, this process suffers from the following bottlenecks corresponding to each step.", "(1) In the pointing step, semantic information of translations in dictionaries is underutilized.", "To locate source words whose translation in dictionaries may be used, some works (Luong et al., 2015; Gulcehre et al., 2016) use a classic copy mechanism, but in an oversimplified scenario mentioned above.", "More recent efforts further leverage statistics-based pre-processing methods (Zhao et al., 2018b, 2019b) to help identify, e.g., rare or troublesome source words.", "Note that the goal of locating a source word is to further use its translation in dictionaries.", "Intuitively, by exploring rich information of a source word's translations in dictionaries, we can better understand the semantic meaning of the source word and distinguish whether we can its translation candidate.", "Unfortunately, this information is underutilized by most works, which could have boosted NMT performance, as shown in Section 5.2.", "(2) In the disambiguating step, the distinguishing information is from static prior knowledge or coarse-grained context information.", "To select the proper translation of one source word from multiple candidates in dictionaries, in addition to works that merely use the first-rank one (Luong et al., 2015; Gulcehre et al., 2016), existing explorations mainly involve exploiting prior probabilities, e.g., to adjust the distribution over the decoding vocabulary (Arthur et al., 2016; Zhao et al., 2018a).", "As a representative context-based disambiguation method, Zhao et al. 
"As a representative context-based disambiguation method, Zhao et al. (2019b) distinguish candidates by matching their embeddings with a decoder-oriented context embedding.", "Intuitively, an optimal translation candidate should not only accurately reflect the content of the source sentence, but also be consistent with the context of the current partial target sentence.", "Our observation is that both source information and target information are critical and complementary for distinguishing candidates.", "Taking the source word in Figure 1 for example, the source context words glossed as pattern, tire and ground help to identify the candidates rub and friction in the dictionary, and the target context of these patterns increase brake further makes friction the best choice.", "This observation inspires us to synthesize source information and target information in a more fine-grained manner to improve previous straightforward disambiguation methods.", "(3) In the copying step, a mechanism is still needed to systematically connect the pointing and disambiguating steps.", "Existing models usually do not explicitly emphasize a separate copying step, since it is a trivial task in their simplified or pipeline scenarios.", "However, to deliver a sophisticated end-to-end architecture that avoids error propagation problems, the pointing and disambiguating steps must be appropriately connected as well as integrated into mature NMT models.", "The proposed copying step is the right place to complete this job.", "To address the above problems, we propose a novel neural architecture consisting of three novel components: Pointer, Disambiguator, and Copier, to effectively incorporate bilingual dictionaries into NMT models in an end-to-end manner.", "Pointer is a pioneering research effort on exploiting the semantic information from bilingual dictionaries to better locate source words whose translation in dictionaries may be used.", "Disambiguator synthesizes complementary contextual information from the source and target via a bi-view disambiguation mechanism, accurately distinguishing the proper translation of a specific source word from multiple candidates in dictionaries.", "Copier couples Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building a sophisticated end-to-end architecture.", "Last but not least, we design a simple and effective method to integrate byte-pair encoding (BPE) with bilingual dictionaries in our architecture.", "Extensive experiments are performed on Chinese-English and English-Japanese benchmarks, and the results verify PDC's overall performance and the effectiveness of each component.", "Transformer (Vaswani et al., 2017) is the most popular NMT architecture, which adopts the standard encoder-decoder framework and relies solely on stacked attention mechanisms.", "Specifically, given a source sequence x = {x_1, x_2, …, x_n}, the model is supposed to generate the target sequence y = {y_1, y_2, …, y_m} in an auto-regressive paradigm.", "Transformer Encoder.", "A Transformer encoder is constituted by a stack of N identical layers, each of which contains two sub-layers.", "The first is a multi-head self-attention mechanism (SelfAtt), and the second is a fully connected feed-forward network (FFN).", "Layer normalization (LN) (Ba et al., 2016) and residual connection (He et al., 2016) are employed around the two sub-layers in both encoder and decoder.", "Note that the copy mechanisms in previous works mainly correspond to the pointing step.", "h̄^l = LN(SelfAtt(h^{l−1}) + h^{l−1}), h^l = LN(FFN(h̄^l) + h̄^l), (1) where h^l = {h_1^l, h_2^l, …, h_n^l} is the output of the l-th layer.",
"The final output h^N of the last encoder layer serves as the encoder state h.", "Transformer Decoder.", "Similarly, the decoder employs the stacked structure with N layers.", "Besides the two sub-layers, an additional cross-attention (CrossAtt) sub-layer is inserted to capture the information from the encoder: s̄^l = LN(SelfAtt(s^{l−1}) + s^{l−1}), ŝ^l = LN(CrossAtt(s̄^l, h, h) + s̄^l), s^l = LN(FFN(ŝ^l) + ŝ^l), (2) where s^l is the output of the l-th decoder layer and the final output s^N is taken as the decoder state s.", "Then, the translation probability p(y_t | y_<t, x) of the t-th target word is produced with a softmax layer: p(y_t | y_<t, x) ∝ exp(W_o s_t), (3) where y_<t denotes the preceding tokens before y_t.", "In this section, we mathematically describe our model in detail.", "We follow the notations in Section 2.", "c_i = {c_i^(1), …, c_i^(k)} denotes the translation candidates of a source word x_i, derived from a bilingual dictionary D.", "An overview of the proposed PDC model is shown in Figure 2.", "PDC aims to copy the correct translation candidate of the correct source word at a decoding step.", "Following the classic CopyNet (Gu et al., 2016), our model consists of two parts, an encoder-decoder translator to produce the generating probability and a copy mechanism to produce the copying probability.", "The above two probabilities collaborate to emit the final probability.", "The procedure of our copy mechanism involves three critical components: (1) a Pointer that selects a source word whose translation candidates will potentially be copied, (2) a Disambiguator which distinguishes multiple translation candidates of the source word to find the optimal candidate to copy, and (3) a Copier that generates the copying probability by combining the outputs from the above two components hierarchically.", "We will describe the details of each component in the following subsections.", "The Pointer aims to point at which source word should be translated at a decoding step.", "We utilize the carefully extracted semantic information of translation candidates to promote pointing accuracy.", "Specifically, the Pointer first extracts the semantic information of candidates with candidate-wise encoding.", "Then the candidate representations of each source word are fused and interacted with the source representations from the Transformer encoder.", "An attention mechanism is applied on the refined source representations to point at which word is to be translated.", "Candidate Encoding.", "We first construct the candidate representations d_i = {d_i^(1), …, d_i^(k)} for the candidates of x_i, through a candidate embedding matrix and a single-layer candidate encoder that has the same structure as a source encoder layer.",
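A sketch of the candidate encoding step: each source word's k dictionary candidates are embedded and passed through one encoder layer; the use of nn.TransformerEncoderLayer here is our stand-in for the paper's "source encoder layer" structure.

```python
import torch
import torch.nn as nn

class CandidateEncoder(nn.Module):
    """Candidate-wise encoding of the k dictionary candidates per source word."""
    def __init__(self, vocab_size, d, n_heads=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)           # candidate embedding matrix
        self.layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                                batch_first=True)

    def forward(self, cand_ids):
        # cand_ids: (n_src_words, k) candidate token ids -> d_i = {d_i^(1..k)}
        return self.layer(self.emb(cand_ids))           # (n_src_words, k, d)
```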
encoder state h and d (cid:48) are interacted to refine the representations of source words with the carefully-extracted candidate information.", "The refined encoder state h (cid:48) can be formalized as: h (cid:48) = LN(CrossAtt( h (cid:48) , d (cid:48) , d (cid:48) ) + h (cid:48) ) , h (cid:48) = LN(FFN( h (cid:48) ) + h (cid:48) ) .", "s (cid:48) t = n (cid:88) i =1 i h (cid:48) i ; i = exp( s t W h (cid:48) i ) (cid:80) ni (cid:48) =1 exp( s t W h (cid:48) i (cid:48) ) , (7)", "where i is the pointing probability for x i .", "s (cid:48) t denotes the refined decoder state.", "When translating a specific word, our model has the whole source sentence and the partial target sentence as inputs.", "An optimal translation candidate should not only accurately reflect the content of source sentence, but also be consistent with the context of the partial target sentence.", "Thus, we propose a bi-view disambiguation module to select the optimal translation candidate in both source view and target view.", "Source-view Disambiguation.", "Source-view disambiguation chooses the optimal candidate for each word with the context information stored in source sentence.", "The attention score src i = { src i, 1 , ..., src i,k } , which has been calculated in Equation 5, is employed as the source-view disambiguating distribution for the k translation candidates of x i .", "This disambiguating distribution is decoding-agnostic, which means it serve as global information during decoding.", "Target-view Diambiguation.", "As analyzed in Section 1, translation candidates that seem proper from the source view may not well fit in the target context.", "Thus, we also perform a target view disambiguation to narrow down which candidates fit the partial target sentence's context.", "Specifically, we leverage the refined decoder state s (cid:48) t to disambiguate the multiple candidates: tgt i,j = exp( s (cid:48) t W dt d ( j ) i ) (cid:80) kj (cid:48) =1 exp( s (cid:48) t W dt d ( j (cid:48) ) i ) , (8) where tgt i,j is the target-view disambiguating probability for c ( j ) i .", "In contrast to the decoding-agnostic source-view disambiguating probability, this target-view disambiguating probability varies during decoding steps.", "Finally, we combine the pointing distribution and the bi-view disambiguating distributions in a hierarchical way to constitute the copying distribution as follows:", "where is a scaling factor to adjust the contribution from source-view and target-view disambiguating probabilities.", "i,j indicates the probability to copy c ( j ) i , the j -th translation candidate of the i -th source word.", "We transform this positional probability into word-level copying probability p copy : p copy = p ( y t | y <t , x , c ) , (10) where c is the entire translation candidates for all source word in an instance.", "BPEBPE (Sennrich et al., 2016) is commonly used in NMT to deal with the rare words by separating them into frequent subwords.", "However, it is nontrivial to incorporate BPE into NMT systems with copy mechanism, because the split subwords may not match the original word appearing in dictionaries, either in source side or target side.", "Simply applying BPE on dictionary words will complicates the scenario to disambiguate and copy, since the model needs to aggregate the representations of these subwords for disambiguation and copy the subwords sequentially.", "As revealed in Section 5.4, the experimental results demonstrate that whether applying original BPE on dictionary words or not will not 
yield promising results.", "In this paper, we present a simple and effective strategy named selective BPE, which performs BPE on all source words but only on a portion of target words.", "All of the translation candidates from the dictionary remain intact.", "Concretely, on the target side, we keep the target word from being separated into subwords if we can copy it from the translation candidate set $c$ of the source sentence.", "This case is formalized as: $I_{tgt}(i) = \begin{cases} 1, & \text{if } y_i \in c \\ 0, & \text{if } y_i \notin c \end{cases}$, (13) where $I_{tgt}(i)$ is the BPE indicator for $y_i$.", "A target word $y_i$ will be split by selective BPE only if $I_{tgt}(i) = 0$ (see the code sketch after this block).", "Note that selective BPE is only used in training, since the references of the validation and test sets do not need BPE.", "By applying selective BPE, our model can implicitly exploit the information of which dictionary candidates are likely to be copied.", "Thus, rare words will be more inclined to be copied directly as a whole from the dictionary.", "In this section, we elaborate on the experiment setup to evaluate our proposed model.", "We test our model on Chinese-to-English (Zh-En) and English-to-Japanese (En-Ja) translation tasks.", "For Zh-En translation, we carry out experiments on two datasets.", "We use 1.25M sentence pairs from the LDC news corpora as the training set [1].", "We adopt NIST 2006 (MT06) as the validation set.", "The NIST 2002, 2003, 2004, 2005, and 2008 datasets are used for testing.", "Besides, we use the TED talks corpus from IWSLT 2014 and 2015 (Cettolo et al., 2012), including 0.22M sentence pairs, for training.", "We use dev2010 with 0.9K sentence pairs for development and tst2010-2013 with 5.5K sentence pairs for testing.", "For En-Ja translation, we adopt the Wikipedia article dataset KFTT [2], which contains 0.44M sentence pairs for training, 1.2K sentence pairs for validation and 1.2K sentence pairs for testing.", "The bilingual dictionary we used is constructed from the open-source cross-lingual word translation dataset word2word (Choe et al., 2020).", "We limit the maximum number of translation candidates to 5 for each source word.", "[1] The training set includes LDC2002E18, LDC2003E07, LDC2003E14, the Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.", "[2] http://www.phontron.com/kftt/
Systems                 | MT06  | MT02  | MT03  | MT04  | MT05  | MT08  | Avg Δ
Existing NMT systems
(Cheng et al., 2019)    | 46.95 | 47.06 | 46.48 | 47.39 | 46.58 | 37.38 | -
(Yang et al., 2020)     | 44.69 | -     | 46.56 | -     | 46.04 | 37.53 | -
(Yan et al., 2020)      | 47.80 | 47.72 | 46.60 | 48.30 | -     | 38.70 | -
Baseline NMT systems
Transformer             | 44.11 | 46.38 | 45.05 | 47.07 | 44.82 | 34.74 | ref
Single-Copy             | 45.04 | 47.21 | 46.47 | 47.48 | 45.45 | 36.08 | +0.93
Flat-Copy               | 44.93 | 46.33 | 46.26 | 46.83 | 45.38 | 35.19 | +0.39
Our NMT systems
PDC                     | 46.74 | 48.85 | 48.43 | 48.57 | 47.71 | 37.45 | +2.59
PDC (w/o Dict-Pointer)  | 45.79 | 47.58 | 47.81 | 47.98 | 46.32 | 36.53 | +1.63
PDC (w/o Tgt-View)      | 45.80 | 47.43 | 47.91 | 48.49 | 46.81 | 36.99 | +1.91
PDC (w/o Src-View)      | 45.97 | 47.42 | 47.90 | 47.92 | 47.07 | 36.81 | +1.81
Table 1: The main results of the NIST Zh-En task.", "We implement our model on top of the THUMT toolkit (Zhang et al., 2017a).", "The dropout rate is set to 0.1.", "The size of a mini-batch is 4096.", "We share the parameters between the target embeddings and the output matrix of the Transformer decoder.", "The other hyper-parameters are the same as the default settings in Vaswani et al. 
(2017).", "The optimal value of the scaling factor $\lambda$ in bi-view disambiguation is 0.4.", "All these hyper-parameters are tuned on the validation set.", "We apply BPE (Sennrich et al., 2016) with 32K merge operations.", "The best single model on the validation set is used for testing.", "We use multi-bleu.perl [3] to calculate case-insensitive 4-gram BLEU.", "Our models and the baselines use BPE in experiments by default.", "We compare our PDC with the following baselines: Transformer is the most widely-used NMT system with self-attention (Vaswani et al., 2017).", "Single-Copy is a Transformer-based copy mechanism that selects a source word's first-rank translation candidate, exactly following Luong et al. (2015) and Gulcehre et al. (2016).", "Flat-Copy is adapted from the automatic post-editing (APE) model proposed by Huang et al. (2019).", "Note that APE focuses on copying from a draft generated by a pre-trained NMT system.", "We first arrange the candidates of all source words into a sequence as a draft and then copy this flattened draft following Huang et al. (2019).", "Table 1 shows the performance of the baseline models and our method variants.", "We also list several existing robust NMT systems reported in previous work to validate PDC's effectiveness.", "By investigating the results in Table 1, we have the following four observations.", "First, compared with existing state-of-the-art NMT systems, PDC achieves very competitive results, e.g., the best BLEU scores in 4 of the 5 test sets.", "Second, Single-Copy outperforms Transformer, indicating that even incorporating only the first-rank translation candidate can improve NMT models.", "However, since Single-Copy disregards many translation candidates in dictionaries, which could have been copied, the improvement is relatively small (e.g., +0.93 average BLEU on the test sets).", "Third, the performance of Flat-Copy is even worse than that of Single-Copy, though it considers all translation candidates in dictionaries.", "The reason is that Flat-Copy ignores the hierarchy formed by a source sentence and the corresponding translation candidates of each of its words, making it much [Figure 3: The effect of hyper-parameter $\lambda$ on the NIST Zh-En translation task; BLEU peaks at $\lambda = 0.4$ on both the dev set (46.20) and the test average (46.74).]", "more challenging to identify the proper candidate to be copied.", "Finally, PDC substantially outperforms Single-Copy and Flat-Copy, with improvements of 1.66 and 2.20 average BLEU points, due to our effective hierarchical copy mechanism that connects the Pointer and the Disambiguator, which will be further analyzed in the next sections.", "What distinguishes our Pointer from its counterparts in other NMT models is the utilization of semantic information of translation candidates in dictionaries.", "To verify the effectiveness of this technical design, we implement a PDC variant named PDC (w/o Dict-Pointer) whose Pointer locates source words based on the encoder state ($h$) of the vanilla Transformer instead of the dictionary-enhanced encoder state ($h'$).", "So the semantic information from dictionaries is not incorporated into the pointing step.", "As expected, the performance of PDC (w/o Dict-Pointer) shows a drop of nearly 1.0 average BLEU on the test sets compared with PDC, verifying the promising effect of the Pointer.", "The results also justify our intuition that the rich information of source words' translations in dictionaries helps point to the proper source word.", "To investigate the 
effectiveness of our bi-view Disambiguator, we implement another two model variants: PDC (w/o Src-View), which removes source-view disambiguation, and PDC (w/o Tgt-View), which removes target-view disambiguation.", "As Table 1 shows, the performance of both models decreases significantly.", "To further investigate the collaboration between
Strategy   | BPE on dict | BPE on src | BPE on tgt | Dev   | Test Avg
None       | ✗           | ✗          | ✗          | 43.94 | 43.68
Standard   | ✗           | ✓          | ✓          | 45.16 | 44.75
Dict       | ✓           | ✓          | ✓          | 45.71 | 44.84
Selective  | ✗           | ✓          | S          | 46.74 | 46.20
Table 2: The BLEU scores of different BPE strategies (S = selective BPE on the target side).", "the source-view and target-view disambiguation, we analyze the impact of the hyper-parameter $\lambda$, which denotes how to weight the disambiguating distributions generated from the source view and the target view.", "In Figure 3, the orange polyline shows the BLEU scores on the development set (MT06), and the blue polyline shows average BLEU scores on another five test sets.", "By looking into these two polylines' trends, we find that PDC performs best when $\lambda$ is 0.4, indicating that neither the source view nor the target view can be ignored or relied on exclusively.", "These findings prove that both views' contextual information is critical and complementary for identifying a specific source word's proper translation, and our Disambiguator synthesizes them effectively.", "We demonstrate the effects of different BPE strategies in Table 2, where None does not use BPE at all, Standard adopts the same BPE strategy as dictionary-independent NMT models, Dict simply applies BPE to dictionary candidates in addition to standard BPE, and Selective is our selective BPE.", "More detailed settings of each strategy can be found in Table 2, from which we can also clearly observe the superiority of our selective BPE strategy.", "We attribute this superiority to the fine-grained collaboration between selective BPE and dictionaries, which implicitly yet effectively leverages the information of which dictionary candidates are likely to be copied.", "It is worth mentioning that selective BPE on the target side does not prevent the model from handling morphological variance, compared with standard BPE.", "A morphologically inflected target word can be generated in two ways in our system.", "Firstly, if the target word is not in the candidate set, we will perform standard BPE decomposition.", "In this scenario, selective [Figure 4: Performance of Transformer and PDC on each subset with different rare word proportions.]", "BPE is the same as standard BPE, and the target word will be generated in a standard way.", "Otherwise, if the target word is in the candidate set, it will not be decomposed and our method will encourage the model to copy this word directly.", "Thus, the morphological variance problem can be simply solved by copying.", "We notice that most dictionary-based NMT works aim to address the rare words problem.", "Though our work focuses on improving the overall process of incorporating dictionary information as external knowledge, we also conduct a rough experiment to see how our method alleviates the rare words problem.", "Specifically, we treat a source word as a rare word if it appears fewer than ten times in the training set.", "Then we split the test set into subsets according to the rare word proportions of source sentences.", "The performance on the subsets is shown in 
Figure", "4. We find that PDC outperforms Transformer by a larger gap on the test subsets with more rare words (e.g., 7.18 BLEU for proportions greater than 0.15), demonstrating that PDC alleviates the rare words issue well.", "This observation is also consistent with previous investigations (Luong et al., 2015).", "To verify PDC's generalization capability, we further conduct experiments on the IWSLT Zh-En translation task and the KFTT En-Ja translation task.", "Due to space limitations, here we only report the performance of PDC and Transformer.", "PDC's superiority can be easily observed from the results in Table 3, indicating that PDC can be effectively applied in translation tasks of different language pairs and domains (e.g., news, speech and Wiki).", "Due to the rich prior information of parallel word pairs in bilingual dictionaries, many researchers have dedicated efforts to incorporating bilingual dictionaries into NMT systems.", "They either generate pseudo parallel sentence pairs based on bilingual dictionaries to boost training (Zhang and Zong, 2016; Zhao et al., 2020), or exploit the bilingual dictionaries as external resources fed into neural networks (Luong et al., 2015; Gulcehre et al., 2016; Arthur et al., 2016; Zhang et al., 2017b; Zhao et al., 2018a,b, 2019b).", "Our work can be categorized into the second direction, and focuses on improving the overall process of incorporating bilingual dictionaries as external knowledge into the latest NMT systems.", "In particular, Luong et al. (2015) and Gulcehre et al. (2016) first employed the copy mechanism (Gu et al., 2016) in NMT to address the rare words problem with one-to-one external bilingual dictionaries.", "Arthur et al. (2016) and Zhao et al. (2018a) exploited prior probabilities from external resources to adjust the distribution over the decoding vocabulary.", "Zhao et al. (2018b, 2019b) leverage a statistics-based pre-processing method to filter out troublesome words and perform disambiguation over multiple candidates.", "Our work extends the above ideas and reforms the overall process into a novel end-to-end framework consisting of three steps: pointing, disambiguating, and copying.", "CopyNet is also widely used in text summarization (See et al., 2017; Zhu et al., 2020), automatic post-editing (Huang et al., 2019), grammar correction (Zhao et al., 2019a) and so on.", "From a high-level perspective, our methods share a similar Transformer-based architecture with Huang et al. (2019) and Zhu et al. (2020).", "Huang et al. (2019) employed CopyNet to copy from a draft generated by a pre-trained NMT system.", "Zhu et al. 
(2020) proposed a method that integrates the operations of attending, translating, and summarizing to perform cross-lingual summarization.", "What distinguishes our PDC from other copy-based architectures is that the three novel components (Pointer, Disambiguator and Copier) and the selective BPE strategy make full and effective use of dictionary knowledge.", "We have presented PDC, a new method to incorporate bilingual dictionaries into NMT models, mainly involving four techniques.", "(1) By integrating semantic information of dictionaries, the enhanced context representations help to locate source words whose dictionary translations will potentially be used.", "(2) The source and target information is well synthesized and contributes to identifying the optimal translation of a source word among multiple dictionary candidates, in a complementary way.", "(3) The above two steps are then systematically integrated based on a hierarchical copy mechanism.", "(4) We finally equip the architecture with a novel selective BPE strategy carefully designed for dictionary-enhanced NMT.", "Experiments show that we achieve competitive results on the Chinese-English and English-Japanese translation tasks, verifying that our approach favorably incorporates prior knowledge of bilingual dictionaries.", "We thank anonymous reviewers for valuable comments.", "This research was supported by the National Key Research and Development Program of China under Grant No.2019YFB1405802 and the central government guided local science and technology development fund projects (science and technology innovation base projects) under Grant No.206Z0302G." ]
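The hierarchical copy mechanism above (Eqs. 7-9) reduces to a few lines of array arithmetic. The following is a minimal sketch rather than the authors' released code: the tensor shapes, the stand-in computation of the source-view scores (Eq. 5 is not reproduced in this excerpt), and the bilinear parameterization of the scoring matrices W and W_dt are all assumptions made for illustration.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_copy_distribution(s_t, h_prime, d_cand, W, W_dt, lam=0.4):
    # s_t: (dim,) decoder state; h_prime: (n, dim) dictionary-enhanced
    # encoder states; d_cand: (n, k, dim) candidate representations.
    alpha = softmax(h_prime @ (W @ s_t))                      # Eq. 7: pointing over source words
    s_t_refined = alpha @ h_prime                             # refined decoder state s'_t
    alpha_src = softmax(np.einsum("d,nkd->nk", s_t, d_cand))  # stand-in for Eq. 5 (decoding-agnostic)
    alpha_tgt = softmax(np.einsum("d,nkd->nk", W_dt @ s_t_refined, d_cand))  # Eq. 8
    # Eq. 9 (as reconstructed above): hierarchical combination; beta[i, j]
    # is the probability of copying the j-th candidate of source word i.
    return alpha[:, None] * (lam * alpha_src + (1.0 - lam) * alpha_tgt)

rng = np.random.default_rng(0)
n, k, dim = 5, 3, 8
beta = hierarchical_copy_distribution(
    rng.standard_normal(dim), rng.standard_normal((n, dim)),
    rng.standard_normal((n, k, dim)), rng.standard_normal((dim, dim)),
    rng.standard_normal((dim, dim)))
assert abs(beta.sum() - 1.0) < 1e-6  # beta is a proper distribution over (i, j)

Because each row of the bi-view mixture sums to one, multiplying by the pointing probability keeps the combined copying scores a normalized distribution over candidate positions, which is the point of the hierarchical design.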
[ "objective", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "objective", "other", "abstain", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "objective", "result", "other", "other" ]
[ "Negative sampling is highly effective in handling missing annotations for named entity recognition (NER).", "One of our contributions is an analysis on how it makes sense through introducing two insightful concepts: missampling and uncertainty.", "Empirical studies show low missampling rate and high uncertainty are both essential for achieving promising performances with negative sampling.", "Based on the sparsity of named entities, we also theoretically derive a lower bound for the probability of zero missampling rate, which is only relevant to sentence length.", "The other contribution is an adaptive and weighted sampling distribution that further improves negative sampling via our former analysis.", "Experiments on synthetic datasets and well-annotated datasets (e.g., CoNLL-2003) show that our proposed approach benefits negative sampling in terms of F1 score and loss convergence.", "Besides, models with improved negative sampling have achieved new state-of-the-art results on real-world datasets (e.g., EC).", "With powerful neural networks and abundant well-labeled corpora, named entity recognition (NER) models have achieved promising performances (Huang et al., 2015; Ma and Hovy, 2016; Akbik et al., 2018; Li et al., 2020a).", "However, in many scenarios, available training data is low-quality, which means a portion of named entities are absent in annotations.", "Fig. 1 depicts a sentence and its incomplete annotations.", "Fine-grained NER (Ling and Weld, 2012) is a typical case.", "Its training data is mainly obtained through applying weak supervision to unlabeled corpora.", "Past works (Shang et al., 2018b; Li et al., 2021a) find missing annotations impact NER models and refer this to unlabeled entity problem .", "ing that causes their poor performances.", "To eliminate this adverse impact, they propose a simple yet effective approach based on negative sampling.", "Compared with its counterparts (Li and Liu, 2005; Tsuboi et al., 2008; Shang et al., 2018b; Peng et al., 2019), this method is of high flexibility, without relying on external resources, heuristics, etc.", "While negative sampling has handled missing annotations well, there is no systematic study on how it works, especially what potential factors are involved.", "From a number of experiments, we find missampling and uncertainty both worth receiving attention.", "Missampling means that some unlabeled entities are mistakenly drawn into the set of training negatives by negative sampling.", "To quantitively describe this, we define missampling rate, the proportion of unlabeled entities in sampled negatives, for a sentence.", "Uncertainty indicates how hard a sampled negative is for NER models to recognize, and we use entropy to estimate it.", "Empirical studies show low missampling rate and high uncertainty are both indispensable for effectively applying negative sampling.", "Besides, based on the observation that entities are commonly sparse, we provide a lower bound for the probability of zero missampling rate with theoretical proof, which is only related to sentence length.", "Originally, Li et al. 
(2021a) adopt a uniform sampling distribution for negative sampling.", "Inspired by former findings, we introduce a weighted sampling distribution to displace the uniform one, [Figure 2: An example to depict how negative sampling collects training negatives given an annotated sentence, \"[Mark Twain]_PER said this in New York\"; the phrase marked by a red circle is an unlabeled entity.] which takes missampling and uncertainty into account. Our distribution is purely computed from the predictions of an NER model. This means it coevolves with the model throughout the training process. The adaptive property of our method is appealing since it doesn't rely on manual annotations or additional models to indicate valuable negatives. We have conducted extensive experiments to verify the effectiveness of our weighted sampling distribution. Results on synthetic datasets and well-annotated datasets (e.g., OntoNotes 5.0) show that the weighted sampling distribution improves negative sampling in performance and loss convergence. Notably, with improved negative sampling, our NER models have established new state-of-the-art performances on real-world datasets, like EC (Yang et al., 2018).
2 Preliminaries
2.1 Unlabeled Entity Problem
Given an $n$-length sentence, $x = [x_1, x_2, \ldots, x_n]$, an annotator (e.g., human) will mark a set of named entities from it as $y = \{y_1, y_2, \ldots, y_m\}$. $n$ is the sequence length and $m$ is the set size. Every entity, $y_k$, of the set, $y$, is denoted as a tuple, $(i_k, j_k, l_k)$. $(i_k, j_k)$ is the span of the entity that corresponds to the phrase, $x_{i_k, j_k} = [x_{i_k}, x_{i_k+1}, \ldots, x_{j_k}]$, and $l_k$ is its label. The unlabeled entity problem occurs when some ground-truth named entities, $\hat{y}$, are missed by annotators, which means they are not contained in the labeled entity collection, $y$. In distantly supervised NER (Mintz et al., 2009; Ren et al., 2015; Fries et al., 2017), this results from the limited coverage of external resources, such as a predefined ontology. In other situations (e.g., fine-grained NER where manual annotation is extremely hard), the cause may be the negligence of human annotators. Take Fig. 2 as an example. The set of labeled entities is $y = \{(1, 2, \text{PER})\}$, that of unlabeled entities is $\hat{y} = \{(6, 7, \text{LOC})\}$, and that of ground-truth entities is $y \cup \hat{y}$. Let $S$ denote the set that includes all spans of a sentence, $x$, except the ones of annotated named entities, $y$. 
Every span in this set is labeled with \"O\", indicating that it's a possible negative.", "A standard training strategy for NER models is to minimize the loss on annotated positives, $y$, and all negative candidates, $S$.", "Unfortunately, since $S$ might contain unlabeled entities in $\hat{y}$, NER models are seriously misguided in training.", "To address this problem, Li et al. (2021a) propose to circumvent unlabeled entities with negative sampling.", "The core idea is to uniformly sample a few negative candidates, $\tilde{y}$, from $S$ for reliably training NER models.", "Under this scheme, the training instances contain sampled negatives, $\tilde{y}$, and positives from annotated entities, $y$.", "With them, $y \cup \tilde{y}$, a cross-entropy loss is incurred as $\mathcal{J} = -\sum_{(i,j,l) \in y \cup \tilde{y}} \log P(l \mid x_{i,j}; \theta)$.", "$P(l \mid x_{i,j}; \theta)$ is the probability that the ground-truth label of the span, $(i, j)$, is $l$, and $\theta$ represents the parameters of the model.", "Following Li et al. (2021a), our NER models are all span-based, which treat a span, instead of a single token, as the basic unit for labeling.", "Negative sampling is likely to keep models from being exposed to unlabeled entities.", "As Fig. 2 shows, the false negative, (6, 7, O), is not involved [Figure 3: The comparison between changes of entity number and the square-root curve.]", "in training.", "Li et al. (2021a) have empirically confirmed the effectiveness of negative sampling in handling unlabeled entities.", "However, there is no systematic study to explain how it works, and what factors are relevant.", "We analyze how negative sampling leads NER models that suffer from missing entity annotations to promising results from two angles: missampling and uncertainty.", "The missampling rate, $\gamma$, is defined as, for a sentence, the proportion of unlabeled entities contained in the sampled negatives, $\tilde{y}$.", "Formally, it's computed as $\gamma = 1 - \frac{\#\{(i,j,l) \mid (i,j,l) \in \hat{y};\ (i,j,\text{O}) \notin \tilde{y}\}}{\#\hat{y}}$, where $\#$ is an operation that measures the size of an unordered set.", "The missampling rate reflects the quality of the training instances, $y \cup \tilde{y}$.", "A lower averaged rate over the whole dataset means that the NER model meets fewer unlabeled entities in training.", "Intuitively, this leads to higher F1 scores since there is less misguidance from missing annotations to the model.", "Hence, missampling is an essential factor for analysis.", "We design a simulation experiment to empirically verify the above intuition.", "Like Li et al. (2021a), we build synthetic datasets as follows.", "We start from a well-labeled dataset, i.e., CoNLL-2003 (Sang and De Meulder, 2003), and then mimic the unlabeled entity problem by randomly masking manually annotated entities with a fixed probability $p$ (e.g., 0.
7).", "In this way, we can obtain unlabeled entities, $\hat{y}$, and annotated entities, $y$, for every sentence, $x$.", "We can obtain different pairs of a missampling rate and an F1 score through running a negative-sampling-based model on different synthetic datasets.", "Table 1 demonstrates several cases, and we can see the trend that lower missampling rates lead to better performances.", "Therefore, we conclude that missampling affects the effectiveness of negative sampling.", "We also theoretically prove that negative sampling is very robust to unlabeled entities based on a natural property of named entities.", "Entity Sparsity.", "Unlike other sequence labeling tasks, such as syntactic chunking (Sang and Buchholz, 2000) and part-of-speech tagging (Schmid, 1994), named entities (i.e., non-\"O\" segments) are commonly sparse in NER datasets. Fig. 3 depicts some statistics of two common NER datasets, CoNLL-2003 and OntoNotes 5.0. The blue points are the averaged numbers of entities for sentences of fixed lengths. Every point stands at the center of a dashed line, whose length is 1.6 times the variance of the entity numbers. The red curves are the square roots of sentence lengths. To avoid being influenced by \"rare events\", we erase the points supported by too few cases (i.e., 20).", "From the above figure, we can see that the number of ground-truth named entities (i.e., unlabeled entities, $\hat{y}$, and annotated ones, $y$) in a sentence is generally smaller than the square root of the sentence length, $\sqrt{n}$.", "Empirically, we have $\#y + \#\hat{y} \leq \sqrt{n}$.", "Theorem", "1. For an $n$-length sentence $x$, assume $\tilde{y}$ is the set of sampled negatives with size $\lceil \lambda n \rceil$ ($0 < \lambda < 1$) obtained via negative sampling.", "If the premise of entity sparsity holds, then the probability of zero missampling rate, i.e., $\gamma = 0$, is bounded from below: $q = \prod_{0 \leq i < \lceil \lambda n \rceil} \left(1 - \frac{\#\hat{y}}{n(n+1)/2 - m - i}\right) > 1 - \frac{4\sqrt{n}}{n-1}$,", "where $m = \#y$.", "The $i$-th product term is the probability that, at the $i$-th sampling turn, the $i$-th sampled candidate doesn't belong to the unlabeled entity set, $\hat{y}$.", "Then we can derive the following inequalities: $q \geq \prod_{0 \leq i < \lceil \lambda n \rceil} \left(1 - \frac{\sqrt{n} - m}{n(n+1)/2 - m - i}\right) \geq \prod_{0 \leq i < \lceil \lambda n \rceil} \left(1 - \frac{\sqrt{n}}{n(n+1)/2 - i}\right) > \left(1 - \frac{2\sqrt{n}}{n(n-1) + 2}\right)^{\lceil \lambda n \rceil}$.", "The first inequality holds because of the assumption (entity sparsity implies $\#\hat{y} \leq \sqrt{n} - m$); the second one holds because $\frac{\sqrt{n} - m}{n(n+1)/2 - m - i}$ monotonically decreases as $m$ increases, and $m \geq 0$; the last inequality holds since $\frac{\sqrt{n}}{n(n+1)/2 - i}$ increases as $i$ increases, $i < \lceil \lambda n \rceil$, and $\lceil \lambda n \rceil \leq n$.", "Because $(1 + a)^b \geq 1 + ba$ for $a \geq -1$, $b \geq 1$, and $\lceil \lambda n \rceil < n + 1$, we have $q > \left(1 - \frac{2\sqrt{n}}{n(n-1)+2}\right)^{\lceil \lambda n \rceil} \geq 1 - \frac{2(n+1)\sqrt{n}}{n(n-1)+2} > 1 - \frac{4\sqrt{n}}{n-1}$.", "The right-most term monotonically increases with the sentence length $n$, and thus the probability of zero missampling rate for every sentence has a lower bound (a small simulation after this block illustrates the bound).", "This theorem shows that missampling rates for standard negative sampling are controllable, and implies why negative sampling succeeds in handling missing annotations.", "Assume $P_o(l \mid x_{i,j})$ is an oracle model that accurately estimates a label distribution over every span $(i, j)$.", "The uncertainty is defined as the entropy of this distribution: $H(L \mid X = x_{i,j}) = -\sum_{l \in L} P_o(l \mid x_{i,j}) \log P_o(l \mid x_{i,j})$, where $L$ and $X$ represent the label space and a span, $x_{i,j}$, respectively.", "Note that the oracle model $P_o(l \mid x_{i,j})$ is generally unreachable.", "The common practice is to additionally train a model $P(l \mid x_{i,j}; \theta)$ (see Sec. 
2.2) to approximate it.", "Besides, the approximate model is learned on held-out training data to avoid overconfident estimation.", "Uncertainties essentially measure how difficult a case is for models to make a decision (Jurado et al., 2015).", "In active learning, uncertainty is used to mine hard unlabeled instances for human annotators (Settles, 2009).", "In our scenario, we suspect that the uncertainty of sampled negatives plays an important role in our training with negative sampling.", "We design an empirical experiment to verify our hypothesis.", "Specifically, we first randomly and equally split the entire training data with masked entities into two parts, and the first part is used to train an oracle model $P_o$.", "For every sentence $x$ in the second part, we then sample three subsets from $S$ as training negatives: the first subset, denoted by $\tilde{y}_t$, corresponding to the top-$k$ uncertainties; the second, denoted by $\tilde{y}_m$, corresponding to the middle-$k$ uncertainties; and the third, denoted by $\tilde{y}_b$, corresponding to the bottom-$k$ uncertainties, with $k = \lceil \lambda n \rceil$.", "Since missampling affects F1 scores as aforementioned, we eliminate the effect of missampling rate by setting $\gamma = 0$ when constructing these subsets, i.e., none of them contains any spans included in $\hat{y}$.", "Finally, we respectively train three models on top of the three negative subsets according to Eq.", "1, and report their performances on test data in Table", "2. We can see that the model trained on $\tilde{y}_t$ achieves the best performance, which validates our hypothesis.", "The previous section shows that the effectiveness of negative sampling is dependent on two factors: missampling and uncertainty.", "As a result, if we had considered both quantities when sampling negatives, we should see larger improvements in the final models.", "In this section, we propose an adaptive and weighted sampling distribution based on these two factors.", "Unfortunately, since the missampling rate is defined on top of the unlabeled entities $\hat{y}$, which are unknown in practice, it is not straightforward to apply missampling for improving negative sampling.", "Therefore, we assume that an oracle model, $z_{i,j,l} = P_o(l \mid x_{i,j})$, exists, which is likely to predict the ground-truth label for every span $x_{i,j}$.", "Then we define a score $v_{i,j}$ as the difference between the score $z_{i,j,\text{O}}$ and the maximum entity-label score on the span $(i, j)$: $v_{i,j} = z_{i,j,\text{O}} - \max_{l \in L \setminus \{\text{O}\}} z_{i,j,l}$.", "Intuitively, if $v_{i,j}$ is high, then $z_{i,j,\text{O}}$ is high and $\max_{l \in L \setminus \{\text{O}\}} z_{i,j,l}$ is low.", "In other words, $x_{i,j}$ is likely to have the \"O\" label, and thus the missampling rate should be small.", "Hence sampling such a span as a negative won't hurt NER models.", "Note that $\max_{l \in L \setminus \{\text{O}\}} z_{i,j,l}$ on the right-hand side acts as normalization, making $v_{i,j}$ comparable among different spans $(i, j)$.", "We also define an uncertainty score, $u_{i,j}$, as the entropy of the label distribution for a span: $u_{i,j} = H(L \mid X = x_{i,j}) = -\sum_{l \in L} z_{i,j,l} \log z_{i,j,l}$.", "As discussed in Sec. 
3.2.2, training a NER model with the negatives of higher uncertainty scores, $u_{i,j}$, brings better performances.", "Based on $v_{i,j}$ and $u_{i,j}$, we design the following weighted sampling distribution to displace the uniform one when sampling $k$ negatives from $S$ without replacement: $r_{i,j} = u_{i,j} (1 + v_{i,j})^{\mu}$, $e_{i,j} = \frac{\exp(r_{i,j}/T)}{\sum_{(i',j',\text{O}) \in S} \exp(r_{i',j'}/T)}$, (4) where $T \geq 1$ is a temperature to control the smoothness of the sampling distribution.", "$\mu \geq 1$ is to make a trade-off between $v_{i,j}$ and $u_{i,j}$: a high $\mu$ will ensure a low missampling rate, while a low $\mu$ will ensure a high uncertainty score (a code sketch of this distribution appears after this block).", "To make our approach practical for use, we should specify how to approximate the oracle model, $P_o(l \mid x_{i,j})$.", "In the simulation experiment in Sec. 3.2.1, the oracle model is a fixed model, trained via standard negative sampling, which is learned on held-out training data.", "It's natural to use such a fixed model to approximate the oracle model here.", "However, this will cause the side-effect that our approach is not self-contained due to its dependence on an external model.", "Consequently, we consider an adaptive style: directly using the NER model, $P(l \mid x_{i,j}; \theta)$, itself as the oracle model, whose parameter $\theta$ is learned during the training process.", "Under this scheme, $T$ is scheduled as $C - c$, where $C$ is the number of training epochs and $0 \leq c < C$ is the current epoch number.", "Since the NER model $P(l \mid x_{i,j}; \theta)$ is not accurate in early epochs of training, a more uniform sampling distribution (i.e., higher $T$) is safer for sampling negatives.", "Finally, we get a weighted sampling distribution with the NER model, $P(l \mid x_{i,j}; \theta)$, adaptively approximating the oracle model.", "Our training procedure is the same as that of vanilla negative sampling (see Fig. 2), except for the sampling distribution.", "To evaluate our proposed variant (i.e., negative sampling w/ weighted sampling distribution), we have conducted extensive experiments on under-annotated cases: synthetic datasets and real-world datasets.", "We also validate its superiority in well-annotated scenarios.", "The well-annotated datasets are CoNLL-2003 and OntoNotes 5.0.", "CoNLL-2003 contains 22137 sentences and is split into 14987, 3466, and 3684 sentences for the training, development, and test sets, respectively.", "OntoNotes 5.0 contains 76714 
(2018) collect an entity dictionary of size 71664 and perform distant supervision on the remaining data to obtain extra 3722 cases for training.", "Both EC and NEWS contain massive incomplete annotations.", "NER models trained on them suffer from unlabeled entity problem .", "We adopt the same configurations for all the datasets.", "The dimensions of scoring layers are 256 .", "L2 regularization and dropout ratio are 10 5 and 0 .", "4 , respectively.", "We set = 8 .", "This setting is obtained via grid search.", "We use Adam (Kingma and Ba, 2014) to optimize models.", "Our models run on GeForceRTX 2080T.", "At test time, we convert the predictions from our models into IOB format and use conlleval 1 script to compute the F1 score.", "In all the experiments, the improvements of our models over the baselines are statistically significant with a rejection probability lower than 0 .", "01 .", "We show how NER models with our proposed approach perform on two types of datasets: synthetic datasets (e.g., CoNLL-2003) and real-world datasets (e.g., EC).", "Synthetic datasets offer us a chance to qualitatively analyze how our approach reacts to changing mask probabilities.", "For example, we will show that weighted sampling distribution is beneficial in fast loss convergence.", "Real-world datasets provide more appropriate cases to evaluate NER models, since missing annotations are caused by limited knowledge resources, rather than intentional masking.", "Fig. 4 shows the changes of F1 scores from vanilla negative sampling and our proposed variant with training epochs.", "The synthetic datasets are constructed from OntoNotes 5.0.", "We can see that, compared with vanilla negative sampling, our proposed variant obtains far better performances on 1 https://www.clips.uantwerpen.be/conll2000/chunking/ conlleval.txt.", "the first few epochs and converges much faster.", "These results clearly verify the superiority of our weighted sampling distribution.", "Table 3 compares vanilla negative sampling with our proposed variant in terms of F1 score.", "We can draw two conclusions.", "Firstly, our approach greatly improves the effectiveness of negative sampling.", "For example, when masking probability p is 0 .", "8 , we increase the F1 scores by 4 .", "07% on CoNLL-2003 and 1 .", "29% on OntoNotes 5.0.", "Secondly, our variant is still robust when unlabeled entity problem is very serious.", "Setting masking probability p from 0 .", "5 to 0 .", "9 , our performance on OntoNotes 5.0 only drops by 8 .", "79% .", "By contrast, it's 32 .", "33% for vanilla negative sampling.", "Real-world datasets contain a high percentage of partial annotations caused by distant supervision.", "Hence, the models trained on them are faced with serious unlabeled entity problem .", "Table 4 diagrams the results.", "The F1 scores of negative sampling and Partial CRF are from their papers.", "We have additionally reported the results of PU Learning 2 , Weighted Partial CRF 3 , 2 https://github.com/v-mipeng/LexiconNER.", "BERT-MRC 4 , and BERT-Biaffine Model 5 , using their codes.", "We can draw three conclusions from the table.", "Firstly, we can see that BERT-MRC and BERT-Biaffine Model both perform poorly on real-world datasets.", "This manifests the huge adverse impacts of unlabeled entities on models.", "Secondly, our variant has achieved new state-of-the-art results on the two datasets.", "Our scores outnumber those of vanilla negative sampling by 1 .", "30% and 0 .", "89% on them.", "Thirdly, to make fair comparisons, we also 
report the results of using Bi-LSTM, instead of BERT, as the sentence encoder.", "This version still notably surpasses prior methods on the two datasets.", "For example, compared with Weighted Partial CRF, our improvements are 6 .", "57% on EC and 6 .", "55% on NEWS.", "As a by-product, we also evaluate the effectiveness of the proposed method on the well-annotated datasets CoNLL-2003 and OntoNotes 5.0.", "As shown in Table 5, we have achieved excellent performances on well-annotated datasets.", "The F1 scores of baselines are copied from Li et al. (2021a).", "With our weighted sampling distribution, the results of negative sampling are improved by 0 .", "28% 4 https://github.com/ShannonAI/mrc-for-flat-nested-ner.", "on CoNLL-2003 and 0 .", "64% on OntoNotes 5.0.", "Our model even outperforms BERT-Biaffine Model by 0 .", "19% on CoNLL-2003.", "Compared with a strong baseline, Flair Embedding, our improvements of F1 scores are 0 .", "63% and 2 .", "09% on the two datasets.", "These results further verify the effectiveness of the proposed sampling distribution.", "The comparison here is in fact unfair for our model, because negative sampling only utilizes a small part of negatives, n rather than n ( n +1)2 m (see Sec. 2 for the details of these numbers).", "We also have tried using all the negatives for training our model, and found the resulting performances significantly outnumber those of baselines.", "The purpose of Table 5 is to confirm that negative sampling even works well for situations with complete entity annotations.", "A number of NER models (Lample et al., 2016; Akbik et al., 2018; Clark et al., 2018; Li et al., 2020b, 2021b) based on end-to-end neural networks and well-labeled data have achieved promising performances.", "A representative work is Bi-LSTM CRF (Huang et al., 2015).", "However, in many situations (e.g., distantly supervised NER), these seemingly perfect models severely suffer from unlabeled entity problem , where massive named entities are not annotated in training data.", "There are some techniques developed by earlier works to mitigate this issue.", "Fuzzy CRF and AutoNER (Shang et al., 2018b) allow NER models to learn from high-quality phrases that might be potential named entities.", "Mining these phrases demands external resources (Shang et al., 2018a), which is not flexible for practical usage.", "Moreover, there is no guarantee that unlabeled entities are fully covered by these phrases.", "PU Learning (Peng et al., 2019; Mayhew et al., 2019) adopts a weighted training loss and assigns low weights to false negative instances.", "This approach is limited by requiring prior information or heuristics.", "Partial CRF (Yang et al., 2018; Jie et al., 2019) is an extension of CRF, which marginalizes the loss over all candidates that are compatible with the incomplete annotation.", "While being theoretically attractive, this approach still needs a portion of well-annotated data to obtain true negatives, which limits its use in real-world applications.", "For example, in fine-grained NER (Ling and Weld, 2012), all the training data are produced through weak supervision, and its manual annotation is very difficult, so obtaining enough high-quality data is not practical.", "Recently, Li et al. 
(2021a) find that unlabeled entities severely misguide the NER models during training.", "Based on this observation, they introduce a simple yet effective approach using negative sampling.", "It's much more flexible than other methods, without resorting to external resources, heuristics, etc.", "However, Li et al. (2021a) haven't fully explained why negative sampling works, and there are weaknesses in their principled analysis.", "In this paper, we first show two factors that affect how negative sampling keeps NER models from being impacted by missing annotations.", "Notably, a theoretical guarantee is provided for the zero missampling rate.", "Then, we propose a weighted sampling distribution to further improve negative sampling based on our former findings.", "Negative sampling succeeds in handling missing annotations.", "In particular, the fine-grained NER module of our online text understanding service, TexSmart (Liu et al., 2021; Zhang et al., 2020), adopts this technique because of its massive low-quality training data.", "In this work, we have made two contributions.", "On the one hand, we analyze why negative sampling succeeds in handling the unlabeled entity problem from two perspectives: missampling and uncertainty.", "Empirical studies show that both low missampling rates and high uncertainties are essential for applying negative sampling.", "Based on entity sparsity, we also provide a theoretical lower bound for the probability of zero missampling rate.", "On the other hand, we propose an adaptive and weighted sampling distribution that takes missampling and uncertainty into account.", "We have conducted extensive experiments to verify whether this further improves the effectiveness of negative sampling.", "Results on synthetic datasets and well-annotated datasets show that our approach benefits both performance and loss convergence.", "With improved negative sampling, our NER models also have achieved new state-of-the-art results on real-world datasets (e.g., NEWS)." ]
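The adaptive weighted sampling distribution (Eq. 4 above) can be sketched as follows. This is a minimal illustration, not the authors' code: the exponent mu, the sampling ratio lam = 0.35, the epoch budget C, and the toy span distributions are all assumed values, and the max-shift before exponentiation is a standard numerical-stability trick.

import numpy as np

def weighted_negative_sampling(z, is_candidate, o_idx, n, lam=0.35, mu=8.0, C=20, c=0, rng=None):
    # z: (num_spans, num_labels) label distributions P(l | x_ij; theta) taken
    # from the NER model itself, which plays the oracle adaptively.
    # is_candidate marks the spans in S; o_idx is the "O" column.
    rng = rng or np.random.default_rng(0)
    eps = 1e-12
    u = -(z * np.log(z + eps)).sum(-1)                     # uncertainty score u_ij
    v = z[:, o_idx] - np.delete(z, o_idx, axis=1).max(-1)  # "O"-margin score v_ij
    r = u * (1.0 + v) ** mu                                # r_ij from Eq. 4
    T = max(C - c, 1)                                      # temperature schedule T = C - c
    shifted = (r - r[is_candidate].max()) / T
    w = np.where(is_candidate, np.exp(shifted), 0.0)
    e = w / w.sum()                                        # sampling distribution e_ij
    k = min(int(np.ceil(lam * n)), int(is_candidate.sum()))
    return rng.choice(len(e), size=k, replace=False, p=e)  # sample without replacement

# Toy usage: 6 candidate spans, 4 labels with "O" at index 0, sentence length 9.
z = np.random.default_rng(1).dirichlet(np.ones(4), size=6)
print(weighted_negative_sampling(z, np.ones(6, dtype=bool), o_idx=0, n=9))

Early in training (c small, T large) the distribution stays near uniform, and it sharpens toward the model's own preferences as T decays to 1, matching the scheduling described above.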
[ "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "abstain", "method", "objective", "method", "abstain", "method", "objective", "result", "result", "objective" ]
[ "New words are regularly introduced to communities, yet not all of these words persist in a community's lexicon.", "Among the many factors contributing to lexical change, we focus on the understudied effect of social networks.", "We conduct a large-scale analysis of over 80k neologisms in 4420 online communities across a decade.", "Using Poisson regression and survival analysis, our study demonstrates that the community's network structure plays a significant role in lexical change.", "Apart from overall size, properties including dense connections, the lack of local clusters and more external contacts promote lexical innovation and retention.", "Unlike offline communities, these topic-based communities do not experience strong lexical levelling despite increased contact but accommodate more niche words.", "Our work provides support for the sociolinguistic hypothesis that lexical change is partially shaped by the structure of the underlying network but also uncovers findings specific to online communities.", "Lexical change is a prevalent process, as new words are added, thrive, and decline in day-to-day usage.", "While there is a certain randomness at play in word creation and adoption (Newberry et al., 2017), there are also psychological, social, linguistic and evolutionary factors that systematically affect lexical change (Labov, 2007; Christiansen and Kirby, 2003; Lupyan and Dale, 2010).", "In sociolinguistics, one structural factor that has long been recognized as influencing lexical changes is the language community's social network.", "For example, drawing on pioneering works on social networks (Granovetter, 1977, 1983), the weak tie model of change holds that the structural properties of social networks can account for the general tendency of some language communities to be more resistant to linguistic change than others (Milroy and Milroy, 1985, 1992; Milroy and Llamas, 2013).", "A classic finding is that loose-knit networks with mostly weak ties are more conducive to information diffusion, thereby facilitating innovation and change, while close-knit networks with strong bonds impose norm-enforcing pressure on language usage, strengthening the localized linguistic norms (Milroy and Milroy, 1985).", "One compelling observation in favor of this argument concerns the comparison between two Germanic languages, Icelandic and English.", "Icelandic has changed little since the late thirteenth century, which could be due to the norm-enforcing pressure inherent in the strong kinship and friendship ties.", "In contrast, in Early Modern London English, the loosening of network ties, accompanied by the rise of the mobile merchant class, was argued to be responsible for some radical change in the language (Milroy and Milroy, 1985).", "This study extends network-based sociolinguistic research to online communities, which remain understudied despite their expansion in past decades.", "While we draw an analogy between offline and online communities, our focus is on communities of practice (Eckert and McConnell-Ginet, 1992; Holmes and Meyerhoff, 1999; Schwen and Hara, 2003), or an aggregate of people who come together around mutual engagement in an endeavor (Eckert and McConnell-Ginet, 1992), rather than offline speech communities.", "We examine how network structures affect lexical innovation , retention and levelling in online communities.", "Specifically, we ask 1) how network structure contributes to the introduction of new words to online communities (innovation), 2) how structural 
properties affect the survival of these newly introduced words (retention) and 3) whether the increased inter-connectedness causes online communities to adopt a similar set of new words (levelling).", "First, analyzing thousands of online communities, we precisely quantify the structural mechanisms that drive these lexical processes.", "Our work adds to network studies in sociolinguistics focusing on in-person observations of local communities (Conde-Silvestre, 2012; Sharma and Dodsworth, 2020) and shows that conclusions drawn from offline communities are insufficient to account for behavior seen in online social networks (Figure 1).", "We find that larger size, denser connections, lack of local clustering and greater external contacts promote lexical innovation and retention in online communities, while density, as discussed most in offline studies, could be an emergent byproduct of network size.", "These topic-based communities also do not experience strong levelling due to increased contact.", "Second, emerging studies in online communities (Danescu-Niculescu-Mizil et al., 2013; Stewart and Eisenstein, 2018; Del Tredici and Fernández, 2018) focus exclusively on lexical change at the individual or word level.", "Few investigate how global network properties affect lexical change at the community level.", "Finally, while sampling offline networks presents practical difficulties, we extract complete networks for thousands of online communities, providing a large-scale dataset to explore the structural factors of lexical change.", "Our code is available at https://github.com/lingjzhu/reddit_network and replication details are available in Appendix A.
2 Lexical Change
Lexical change and social networks. Since the landmark study of sound change in the Belfast community by Milroy and Milroy (1985), the impact of network structures on language change has been a key consideration in sociolinguistics.", "Milroy and Milroy (1985) found that speakers in loose-knit networks tend to experience more linguistic change than those in close-knit networks.", "Most early social network studies focus predominantly on speakers in local, less mobile communities where ties between people tend to be strong (Nevalainen, 2000; Conde-Silvestre, 2012; Sharma and Dodsworth, 2020).", "Except for a few recent simulation studies (Reali et al., 2018), researchers have rarely explored how the global properties of social networks systematically affect lexical change, although the weak tie model does predict an influence of social network at the macro-level.", "In addition, while there are lexicographic studies attempting to enumerate factors that affect the acceptance of neologisms (Metcalf, 2004; Barnhart, 2007), network structures are rarely taken into consideration.", "A key limitation of previous works has been access to a large longitudinal dataset of communities with different network properties as well as a precise estimate of the network structure of larger communities, which are limitations this study overcomes.", "Lexical change in online communities. The rise of social media and the proliferation of Internet speech have drawn increasing attention to lexical change in online communities, including Twitter (Eisenstein et al., 2014; Goel et al., 2016), Reddit (Altmann et al., 2011; Stewart and Eisenstein, 2018; Del Tredici and Fernández, 2018) and review sites (Danescu-Niculescu-Mizil et al., 2013).", "It has been shown that the usage of certain words is associated with community loyalty and norms (Zhang et al., 2017; Bhandari and Armstrong, 2019) and 
indicative of user behaviors (Danescu-Niculescu-Mizil et al., 2013; Noble and Fernández, 2015; Chang and Danescu-Niculescu-Mizil, 2019; Klein et al., 2019).", "Specifically for lexical change over time, Stewart and Eisenstein (2018) investigate the survival of lexical items in Reddit, and conclude that a word's appearance in more diverse linguistic contexts is the strongest predictor of its survival while social dissemination is a comparatively weaker predictor.", "Del Tredici and Fernández (2018) examined the use of neologisms in 20 subreddit communities.", "Their finding that weak-tie users tend to innovate whereas strong-tie users tend to propagate is consistent with the weak tie theory of language change.", "Other studies along this line tend to focus on the role of individual users (Paolillo, 1999; Paradowski and Jonak, 2012).", "The study closest to our current study is that by Kershaw et al. (2016), which investigates word innovations in Reddit and Twitter by looking at grammatical and topical factors.", "Yet Kershaw et al. (2016) only used network information to partition the dataset without exploring the role of these structural attributes in depth.", "Less is known about how network structures are systematically related to community-level lexical change in online communities, which we address here.", "To analyze lexical innovation in a network setting across long time scales, we use comments made to Reddit, one of the most popular social media sites.", "There, 330M users are active in about 1M distinct topic-based sub-communities (subreddits).", "Here we define each subreddit as a community of practice (Schwen and Hara, 2003), as each subreddit is relatively independent with various norms formed through interactions.", "The subreddit communities span across a wide range of social network structures (Hamilton et al., 2017) and linguistic use patterns (Zhang et al., 2017), making them ideal for studying the propagation of sociolinguistic variations in online communities.", "Detailed statistics are given in Appendix B.
Data. To strike a balance between acquiring active subreddits and preserving the diversity of these communities, we initially select the top 4.5K subreddits based on their overall size from their inception to October 2018 via the Convokit package (Chang et al., 2020).", "Let $C_{Reddit} = \{C_1, C_2, \ldots, C_n\}$ be the set of subreddit communities included in the corpus.", "A subreddit community $C_n$ is further discretized into multiple monthly subreddit communities $c_n(t)$ based on its actual life span in the monthly time step $t$, such that $C_n = \{c_n(1), c_n(2), \ldots, c_n(t_{max})\}$.", "For each $c_n(t)$, we extracted all individual comments except those marked as [deleted] and performed tokenization via SpaCy.", "During text cleaning, we removed numbers, emojis, urls, punctuation and stop words, and set a cutoff frequency of 10 over the entire dataset to exclude infrequent typos or misspellings.", "Only those monthly subreddits $c_n(t)$ with more than 500 words or 50 users after preprocessing are retained.", "Some communities known for their content in foreign languages are also removed.", "After preprocessing, 4420 subreddits were left in our analysis.", "Community networks. For a community $c$ from months $t = 1, 2, \ldots, t_{max}$, its temporal network can be represented as a discrete-time sequence of network snapshots $G_c = \{G_c(1), G_c(2), \ldots
"Community networks For a community c observed over months t = 1, 2, ..., t_max, its temporal network can be represented as a discrete-time sequence of network snapshots G_c = {G_c(1), G_c(2), ..., G_c(t_max)}.", "Each snapshot network at time t, G_c(t) = {V_c(t), E_c(t)}, consists of a set of user nodes V_c(t) and a set of edges E_c(t) characterizing direct interactions between users.", "G_c(t) is initialized as an undirected and unweighted graph under the assumption that these commenting communications are mutual and bi-directional.", "A user u_i is represented as a node if this user has posted at least one comment in month t.", "An edge e_ij exists between user u_i and user u_j if these two users have interacted in close proximity in a common discussion thread, that is, separated by at most two comments (Hamilton et al., 2017; Del Tredici and Fernández, 2018).", "Since online communications are asynchronous, a discussion thread created at time t may still have active comments from users at time t + 1 or later.", "For such threads, we only included interactions at time t in G_c(t) and assigned later interactions to the future time steps at which they happened.", "Users marked as [deleted] or AutoModerator were all removed.", "After filtering, a total of 289.8k community networks were extracted for all 4420 communities.", "Inter-community networks We also identify the network dynamics between communities.", "We created a temporal network G_IC to characterize the connections between communities at consecutive months t = 1, 2, ..., t_max, G_IC = {G_IC(1), G_IC(2), ..., G_IC(t_max)}, in which G_IC(t) = {V_IC(t), E_IC(t)}.", "V_IC(t) contains the set of nodes whereas E_IC(t) is the set of edges between communities.", "A community is represented as a node u_i in G_IC(t), except for communities that do not exist or are no longer active at time t.", "Two communities are determined to be connected if they share active users, that is, users who had posted at least 2 comments in both communities during that month.", "Each network snapshot is initialized as a weighted and undirected network with the edge weights set to the number of shared users, as an approximation of connection strength.", "Finally, 152 inter-community networks were constructed, covering the period from the inception of Reddit in 2005 until October 2018.", "Internet neologisms Neologisms are newly emerging language norms that fall along a continuum from common words known to the overwhelming majority of users to nonce words that are mostly meaningless and rarely adopted.",
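A minimal sketch of the per-month snapshot construction described above, using networkx. It assumes each thread is a chronologically ordered list of (user, month) comment records; the at-most-two-comments window follows the text, and all names are illustrative.

```python
import networkx as nx

def build_snapshot(threads, month):
    """One undirected, unweighted community network G_c(t) for a given month."""
    G = nx.Graph()
    for thread in threads:
        # keep only interactions that happened in this month; later comments
        # belong to later snapshots
        comments = [user for (user, m) in thread if m == month]
        for i, u in enumerate(comments):
            G.add_node(u)
            for v in comments[max(0, i - 2):i]:  # separated by at most two comments
                if u != v:
                    G.add_edge(u, v)
    return G
```

The inter-community snapshots would be built analogously, with communities as nodes and edge weights set to the number of users posting at least two comments in both communities that month.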
"We focus only on Internet neologisms, e.g. lol, lmao, idk, as community slang in Reddit communities.", "Such neologisms are abundant in ever-evolving online communication, as people use them for convenience or to signify in-group identity.", "The nonstandard, idiosyncratic spelling patterns of Internet neologisms also make them easier to track than nuanced meaning shifts.", "We obtained the Internet slang terms from two online dictionary sources, NoSlang.com and Urban Dictionary.", "The neologisms in NoSlang.com have been used in a previous study (Del Tredici and Fernández, 2018).", "After filtering some lexical entries, we ended up with approximately 80K Internet neologisms for subsequent analysis.", "We set the minimum frequency threshold of neologisms to 10 over the entire dataset; this low setting ensures that the analysis is not biased by selectively looking only at surviving words. Table 1: Examples of neologisms. Most frequent: lol, /r, kinda, bitcoin, idk, lmao, tbh, tl;dr, alot, /s, omg, lvl, hahaha, iirc. Least frequent: thugmonster, blein, sotk, f'tang, yobbish, ferranti, sonse, yampy.", "Looking only at surviving words may obscure the lexical change process.", "Details can be found in Appendix B. Many of these neologisms were not first coined in Reddit but were coined elsewhere and subsequently introduced into subreddits by users.", "Since it was not feasible to trace the exact origins of these words, we instead focused on how words were introduced and adopted.", "This approach is also consistent with previous studies of lexical change (Altmann et al., 2011; Grieve et al., 2017; Del Tredici and Fernández, 2018).", "Communities in Reddit can be defined in terms of how their members relate within the community (intra) and how the community relates to other communities (inter) through multi-community memberships by its users (Tan and Lee, 2015).", "We formalize both as potential influences.", "As network attributes may be affected by the hyperparameters for network construction, we additionally validate this approach in Appendix C. 
Intra-community features We take the following network measurements for each G_c(t) to characterize the global properties of community networks: density, average local clustering coefficient, transitivity, average degree, maximum degree, degree assortativity, fraction of the largest connected component and fraction of singletons.", "These network measures can characterize the size, fragmentation and connectedness of Reddit networks (Hamilton et al., 2017; Cunha et al., 2019).", "Parameters like the average local clustering coefficient, transitivity, and assortativity are highly influenced by the underlying degree distribution (Hamilton et al., 2017).", "We adjusted these parameters by computing their relative differences with respect to the mean values of five random baseline networks, which were generated by randomly rewiring the original network (with a rewiring budget of 10 times the edge count) while preserving the original degree sequence.", "These features are referred to as adjusted local clustering coefficient, adjusted transitivity, and adjusted assortativity in the following text.", "Inter-community features In addition to the intra-community network features, it is also necessary to measure a community's external connections to other communities.", "User mobility and external influence have been found to play a role in the process of lexical change (Conde-Silvestre, 2012).", "For each between-community network snapshot G_IC(t) at time t, we focus on the properties of individual nodes (communities).", "We computed the degree centrality, closeness centrality, eigenvector centrality, betweenness centrality and PageRank centrality for each community node.", "These centrality measures quantify the connectedness of a community to other communities, which can be used as an indicator of its degree of external contact and user mobility.", "In what types of communities are neologisms likely to be introduced?", "Here, we investigate the extent to which the number of innovations introduced per month can be predicted with only the structural properties of community networks.",
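The feature adjustment lends itself to a short networkx sketch. The degree-preserving rewiring below uses double_edge_swap with a swap budget of 10 times the edge count, following the text; whether the adjustment is a plain or a normalized difference is not fully specified above, so the plain difference is an assumption.

```python
import networkx as nx

def raw_features(G):
    return {"avg_clustering": nx.average_clustering(G),
            "transitivity": nx.transitivity(G),
            "assortativity": nx.degree_assortativity_coefficient(G)}

def adjusted_features(G, n_baselines=5):
    obs = raw_features(G)
    baselines = []
    for seed in range(n_baselines):
        R = G.copy()
        # rewire in place while preserving the original degree sequence
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(), seed=seed)
        baselines.append(raw_features(R))
    # difference to the random-baseline mean (normalization is an assumption)
    return {k: obs[k] - sum(b[k] for b in baselines) / n_baselines for k in obs}
```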
"Experiment setup Given a set of communities C = {c_1, c_2, ..., c_n} spanning time steps T = {1, 2, ..., t_max}, we aim to predict the count of monthly lexical innovations for each community, Y = {y_{c_1}(1), y_{c_1}(2), ..., y_{c_n}(t_max)}, from the corresponding network attributes X = {x_{c_1}(1), x_{c_1}(2), ..., x_{c_n}(t_max)}.", "The predicted variable y_{c_n}(t) is computed by counting only innovations first introduced into community c_n at month t.", "Any subsequent usage of the same innovations after their first introduction is not counted as an innovation in community c_n.", "The feature vector x_{c_n}(t) contains the structural features of the network of c_n at time t.", "After removing about 0.03% of data points as invalid or outliers, we ended up with 289.1k samples for the task.", "Implementation We used both intra-community and inter-community features for innovation prediction.", "However, in empirical networks, certain structural features tend to be correlated.", "For example, network size and density are usually strongly correlated on a log-log scale in online social networks (Backstrom et al., 2012), which is also apparent in our dataset (Spearman ρ = -0.87).", "Such correlations may confound the interpretation of the feature contributions (see Appendix D).", "To generate orthogonal features, we first standardized all 15 network features and then used principal component analysis (PCA) with whitening to decompose them into principal components (PCs).", "Standardization was necessary to prevent a few variables with a large range of variance from dominating the PCs.", "We found that the first five PCs accounted for 87% of the total variance and 10 PCs explained 99% of the total variance.", "Since counts of innovations are non-negative integers, Poisson regression and Histogram-based Gradient Boosted Trees (HGBT) with Poisson loss were used to predict the number of innovations from the PCs.", "The model parameters were selected through ten-fold cross-validation.", "The data were randomly partitioned into training and test sets with a ratio of 90%/10%.", "We report the mean absolute error (MAE) and the mean Poisson deviance (MPD) averaged across 20 runs with different random partitions of the data.", "Both metrics should be minimized by the models.", "Replication details are in Appendix E.",
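The prediction setup maps to a few lines of scikit-learn. This is a sketch under stated assumptions: the random arrays stand in for the real feature matrix and innovation counts, and the hyperparameters are illustrative rather than the authors' tuned values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_poisson_deviance
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 15)          # placeholder: 15 network features
y = np.random.poisson(20, size=1000)  # placeholder: monthly innovation counts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

for reg in (PoissonRegressor(), HistGradientBoostingRegressor(loss="poisson")):
    # standardize -> whitened PCA -> Poisson-loss regressor, as in the text
    model = make_pipeline(StandardScaler(), PCA(n_components=10, whiten=True), reg)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(type(reg).__name__,
          f"MAE={np.abs(pred - y_te).mean():.2f}",
          f"MPD={mean_poisson_deviance(y_te, pred):.2f}")
```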
"Table 2: Innovation prediction results (MAE and MPD; lower is better); the recoverable portion of the table lists the mean baseline at MAE 19.37 and MPD 30.16, against which the Poisson regression and HGBT models are compared.", "Results As summarized in Table 2, all models outperformed the mean baseline by a significant margin, suggesting that the internal network structures and the external connections to other communities are systematically correlated with the count of lexical innovations per month.", "The three largest coefficients of the Poisson model with 5 PCs correspond to the first three PCs (see Figure 2).", "PC1 represents the overall size of the network, such that the Poisson model predicts that networks with larger overall size tend to have more innovations (coefficient: -0.87).", "PC2 indicates the fragmentation and the local clustering of the network, and contributes negatively to lexical innovation (coefficient: -0.20).", "In other words, fragmented networks with local clusters tend to have fewer innovations, as this structure inhibits the spread of information.", "(Footnote 1: the coefficient sign for a PC must be interpreted with respect to its loading on the structural components.)", "PC3 is generally related to inter-community connections, with a positive correlation to innovation (coefficient: 0.19).", "Yet what matters is not the number of communities connected (degree centrality) but the quality of those connections (PageRank centrality).", "High PageRank centrality suggests that the network might be connected to many influential communities, as these connections are weighted higher in the PageRank algorithm (Page et al., 1999).", "While structural properties can account for many regularities in the creation of lexical innovations, there are also surges of innovations that cannot be explained by structural factors alone.", "Inspection of the data suggests that the surges of innovations at the tail of the empirical distributions are often related to factors beyond network structures, including topical variations or external events, such as community migration or new game releases for some game communities.", "Not all lexical innovations survive through time, with only a few neologisms eventually becoming widely adopted by community members.", "Here, we test the structural factors that systematically affect the survival of words in online communities.", "Model specification Survival analysis models the elapsed time before a future event happens (Kleinbaum and Klein, 2010) and has been used to predict word survival (Stewart and Eisenstein, 2018).", "Compared to the traditional Cox model, deep survival analysis approximates the risk (hazard) with neural networks, thereby achieving improved performance.", "We estimated word survival with the Logistic Hazard model (LH) proposed by Kvamme and Borgan (2019).", "Given samples {x_1, x_2, ..., x_n} and time steps {1, 2, ..., T}, the LH method estimates h(t | x), the hazard function of the death event with respect to time t, with a deep neural network.", "The hazard function can be interpreted as the word's 'danger of dying' at t. After the model is trained, the survival function S(t | x_i) for sample x_i can be computed as S(t | x_i) = ∏_{s=1}^{t} [1 − h(s | x_i)] (1). S(t | x_i) can be interpreted as the chance of survival at time t for sample x_i, that is, the survival probability of a word given the corresponding network features at time t. The detailed derivation and experiment settings are given in Appendix F. Data coding We consider only communities that have existed longer than six months and words that survived more than three months. 
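Eq. (1) is a cumulative product over time, which is one line of numpy. The hazard matrix below is a toy stand-in for the per-month hazards h(t | x) produced by the trained Logistic Hazard model.

```python
import numpy as np

def survival_from_hazard(hazards: np.ndarray) -> np.ndarray:
    # S(t | x) = prod_{s <= t} [1 - h(s | x)], per Eq. (1)
    return np.cumprod(1.0 - hazards, axis=1)

h = np.array([[0.05, 0.10, 0.20]])  # hazards for one word over three months
print(survival_from_hazard(h))      # [[0.95  0.855 0.684]]
```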
The subreddit duration restriction avoids right-censoring of the data from new communities forming and quickly dying (a common event), which would skew estimates of word survival. A word's survival time is defined as the total number of months a word persists in a community, excluding intervening months in which the word is not used. The last time step t at which the word shows up is considered the "death" event. However, if this last time step falls within the last three recorded months, the word is considered right-censored, such that a death event has not happened. This three-month buffer period is added to avoid false negatives. The network features for predictions were derived by averaging all the monthly features over the months in which a particular word existed. After preprocessing, we ended up with 1.47M samples with 69,683 distinct words. Table 3: Survival analysis results (Concordance / IBS); all models outperform the concordance baseline. Random baseline: 0.50 / 0.25; Cox Model (PCs=5): 0.600 / 0.297; Cox Model (PCs=10): 0.662 / 0.289; Cox Model (raw): 0.665 / 0.209; LH (PCs=5): 0.584 / 0.245; LH (PCs=10): 0.691 / 0.192; LH (raw): 0.718 / 0.152. All features were then transformed into 10 orthogonal principal components using PCA with whitening. The first 5 PCs accounted for 90% of the total variance whereas all 10 PCs explained 99% of the variance. Implementation Models of deep survival analysis were implemented via the package pycox (Kvamme et al., 2019). We trained a three-layer LH model with 256 hidden dimensions to model word survival. We used the Adam optimizer with a learning rate of 0.001 and a batch size of 2048 samples. The data were randomly partitioned into 80%, 10% and 10% portions as training, development and test sets, respectively, with no overlap between sets in terms of subreddits. Each model was run for 3 epochs and was run 10 times with different data partitionings. The performance metrics were averaged. We also ran baseline Cox models under the same conditions for comparison. The performance is evaluated with time-dependent concordance (Antolini et al., 2005) and the Integrated Brier Score (IBS) (Kvamme et al., 2019). Concordance measures the model's capacity to provide a reliable ranking of individual risk scores. A good concordance score should be above the 0.5 random baseline and close to 1. The IBS is the average squared distance between the observed survival events and the predicted survival probability and should be minimized by the model. Results Results in Table 3 show that structural factors of the community in which a neologism is introduced can predict its chance of survival or death, with all models outperforming the baseline by a significant margin. Since samples in training and test sets do not overlap in subreddits, such performance indicates that there are strong associations between network structures and word survival, such that our models can generalize across communities. The coefficients for the Cox model with 10 PCs are shown in Table 4. Table 4: Results of the Cox model (all coefficients are highly significant), reporting Coef., Exp(coef) and S.E. per variable: PC1: -0.122, 0.885, 0.002; PC2: -0.072, 0.930, 0.002; PC3: 0.170, 1.186, 0.003; PC4: 0.009, 1.001, 0.001; PC5: -0.017, 0.984, 0.001; PC6: -0.160, 0.852, 0.001; PC7: -0.516, 1.675, 0.002; PC8: -0.048, 0.953, 0.001; PC9: -0.004, 1.004, 0.001; PC10: -0.054, 0.947, 0.002. Exp(coef) refers to the hazard, or the probability of death; a lower Exp(coef) suggests that the variable is protective. S.E. refers to the standard error of the regression coefficients. 
To interpret the LH model with 10 PCs, we generate the survival function S(t | x) by varying a single feature from low to high while keeping the remainder fixed at their median values (Figure 4). While the Cox model predicts the hazard (death rate) and the LH model predicts S(t | x) (the survival rate), so that their outputs run in opposite directions, we found that both models were highly consistent in assessing the input PCs, both in terms of relative weights and directions. A large overall size (PC1) tends to preserve neologisms, as large communities provide a basic threshold population for words to be used. In addition to sheer size, global network topology also contributes to neologism survival. PC2, PC3, PC6 and PC7 correspond to three different network structures. PC3 represents networks that have many external connections but are split into multiple clusters within the community, which contributes negatively to the survival probabilities. In contrast, less clustered networks with dense edges and rich external connections (PC2) increase word survival rates. Both PC6 and PC7 boost the word survival rate, and both represent networks that are relatively densely connected, but PC6 has high connections to many external communities and is more fragmented, whereas PC7 is more isolated in the inter-community network (low degree centrality) but its external connections are influential communities (high PageRank and betweenness centrality). This may suggest that inter- and intra-community connections complement each other. In general, within a community, dense connections in the network keep words alive whereas local clusters in the network are adverse to word survival. In the multi-community landscape, more external connections tend to promote word survival. Figure 3: The five highest-weighted PCs used by the survival model, showing the loadings of each PC on the features Nodes, Edges, Density, Avg. degree, Max. degree, Large. com, Singletons, Betweenness, Closeness, Degree, Eigenvector, PageRank, Adj.assort, Adj.gc and Adj.lc; inter-community features are indicated by orange bars. Adj.lc, Adj.gc and Adj.assort are the local clustering coefficient, global clustering coefficient and assortativity adjusted to a random network. PC1 represents the overall size, PC2 the within-community connections and PC3 the inter-community connections. PC6 and PC7 are specific combinations of intra- and inter-community connections. Figure 4: The contribution of predictors (PC1, PC2, PC6 and PC7, each varied from a low to a high percentile) to the survival probability S(t | x) over time in months, with the remaining features fixed; brighter regions indicate high survival rates. 7 Lexical levelling Levelling refers to the gradual replacement of localized linguistic features (marked) by mainstream linguistic features (unmarked) over the whole community (Kerswill, 2003), which has been observed in a wide range of offline linguistic communities due to increasing mobility and external contacts (Milroy, 2002; Kerswill, 2003). The subreddit communities have become increasingly inter-connected over time, as the average inter-community degree has increased from 6 in January 2008 to 2,323 in October 2018 (Figure 5). While some of this growth could be accounted for by the simultaneous growth in the number of subreddits, the growth in connectedness is also apparent. Such an increase of contact could promote the spread of neologisms across Reddit. 
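As an aside, the Figure 4 probe described above — vary one PC across its percentiles while holding the others at their medians — can be sketched as follows; model.predict_surv is an assumed interface to the trained LH model, not pycox's exact method name.

```python
import numpy as np

def pc_response_curves(model, X_pcs, pc, percentiles=(5, 25, 50, 75, 95)):
    base = np.median(X_pcs, axis=0)                 # all features at their medians
    curves = {}
    for q in percentiles:
        x = base.copy()
        x[pc] = np.percentile(X_pcs[:, pc], q)      # sweep one PC from low to high
        curves[q] = model.predict_surv(x[None, :])  # S(t | x) for this setting
    return curves
```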
In the same period, the number of variants that spread to more than 60% of the communities has grown slightly, from 7 to 22. Notable examples include words like lol, alot, lmao and cuz. Meanwhile, the variants that are confined to only one community grew rapidly, from 1,992 in 2008 to 23,397 in 2018. The widespread use of some neologisms does not necessarily cause the loss of local expressions, as it does in offline communities. Instead, the community-specific terms and community-general terms develop in tandem. Many community-specific terms are nested within topic-based communities with little meaning overlap with those widespread variants, and are therefore unlikely to be replaced by more general terms through levelling. Figure 5 also shows that the probability density function (PDF) of word dissemination (the percentage of communities sharing a neologism) conforms to the power-law fit p(x) ∝ x^(−α), as a few words spread to most communities while most words are confined to a few communities. Further, the shape parameter α decreases asymptotically despite the growth of the average inter-community degree (Figure 5), which implies that, as the size of Reddit grows, more community-specific words, as well as more widespread words, emerge. Summary The number of community-specific words grew rapidly despite increased inter-community connectedness, which seems to go against the levelling trend observed in offline networks (Conde-Silvestre, 2012). In contrast to offline communities, these subreddit networks are of a different nature, as they are topic-based groups bounded by common interests. By joining these communities, users opt for fragmentation into niche groups. Such segregation in topics and interests naturally brings in more community-specific words. In other words, there is no strong evidence for lexical levelling; instead, online communities go in the reverse direction, by developing more niche neologisms. Figure 5: [Left] Change of the average inter-community degree and of the shape parameter α of the power-law fit p(x) ∝ x^(−α) over time; the average inter-community degree is increasing, indicating that more communities are connected to each other, while α is decreasing (2008-05: α = 4.77; 2010-05: α = 2.29; 2014-05: α = 1.97; 2018-05: α = 1.87), suggesting that the tail of the distribution has become thicker, i.e., more community-specific words have emerged. [Right] Snapshots of the PDFs of dissemination across communities over time. 8 Discussions and Conclusions In traditional sociolinguistics, weak ties within a social network have been linked to innovation and language change. Yet most studies only use indirect evidence to infer the underlying network types (Milroy and Milroy, 1985; Nevalainen, 2000; Dodsworth, 2019). Our quantitative analysis suggests that multiple structural properties play a role in lexical change. The overall network size is the most prominent factor in lexical innovation and survival, as large communities provide the base population to create and use those neologisms. The effect of network size has also been emphasized in other network studies of language (Reali et al., 2018; Raviv et al., 2019; Laitinen et al., 2020). However, sheer size is only part of the story, as dense edges between users, the lack of separate local clusters, and rich external connections also promote both lexical innovation and survival. 
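For reference, the power-law fit of the dissemination distribution reported in Section 7 can be reproduced along the following lines with Alstott et al.'s powerlaw package; dissemination is a placeholder for the per-word fractions of communities reached in a given monthly snapshot.

```python
import powerlaw

# p(x) ~ x^(-alpha); xmin is estimated by the package
fit = powerlaw.Fit(dissemination)
print(f"alpha = {fit.power_law.alpha:.2f}")  # e.g. around 1.87 for the 2018-05 snapshot
```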
Dense connections within and across communities increase the visibility of neologisms so that they can be imitated by other users, as exposure alone predicts users' information-spreading behavior (Bakshy et al., 2012). In contrast, local clustering tends to separate networks into disconnected parts, slowing the spread of new words. These structural attributes have been found to facilitate information spread in online social networks (Lerman and Ghosh, 2010). On a broader scale, our results suggest that the lexical change process in online social networks may be similar to other information-spread processes (Guille et al., 2013). Our results show that conclusions drawn from offline communities might be insufficient to account for behavior seen in online social networks. While the classic weak tie model emphasizes the role of loose social networks in language change (Milroy and Milroy, 1985; Nevalainen, 2000) and has been confirmed in online communities (Del Tredici and Fernández, 2018), our work further extends this model by showing that a variety of network structural attributes also play a role in language change. Our quantitative analysis also suggests a different levelling process in online communities, with implications for sociolinguistic theories. Limitations and future work One limitation of this study is that topical variation is not explored in depth, because we aimed to examine the contributions of networks alone by smoothing out topical variation across diverse communities. Yet topics have been found to affect users' posting behavior in online communities (Mathew et al., 2019) and niche topics do affect word retention (Altmann et al., 2011). In Reddit, communities involving certain niche or foreign topics, such as r/pokemon, might inherently introduce more lexical innovations than others. Secondly, we only focus on Internet neologisms in Reddit. How these neologisms propagate across multiple social media platforms and how online and offline neologisms interact remain important questions to be addressed. Thirdly, while our study reveals the general patterns of lexical change, there are multiple sub-categories of neologisms, such as discourse markers and named entities. It is of interest to ask whether different subcategories may exhibit different patterns of usage in online communities. These research questions are worth exploring in future work. 9 Ethical concerns In terms of ethical concerns, a great number of low-frequency neologisms collected from Urban Dictionary may be considered offensive to specific groups of people. We collected the word usage data as they were in order to recover as realistic a lexical landscape of Reddit as possible. However, these offensive words by no means reflect our values. Nor do we endorse the use of these words. Acknowledgements We thank Professor Patrice Beddor, Professor Will Styler, Julia Mendelsohn, Jiaxin Pei, Zuoyu Tian and Allison Lahnala for their comments on earlier versions of this draft. We are also grateful to all anonymous reviewers for their insightful comments, which helped improve this manuscript greatly. This material is based upon work supported by the National Science Foundation under Grant No. 185022. References Eduardo G Altmann, Janet B Pierrehumbert, and Adilson E Motter. 2011. Niche as a determinant of word fate in online groups. PloS one, 6(5):e19009. Laura Antolini, Patrizia Boracchi, and Elia Biganzoli. 2005. A time-dependent discrimination index for survival data. Statistics in Medicine, 24(24):3927–3944. 
Lars Backstrom, Paolo Boldi, Marco Rosa, Johan Ugander, and Sebastiano Vigna. 2012. Four degrees of separation. In Proceedings of the 4th Annual ACM Web Science Conference, pages 33–42. Eytan Bakshy, Itamar Rosenn, Cameron Marlow, and Lada Adamic. 2012. The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web, pages 519–528. David K Barnhart. 2007. A calculus for new words. Dictionaries: Journal of the Dictionary Society of North America, 28(1):132–138. Abhinav Bhandari and Caitrin Armstrong. 2019. Tkol, httt, and r/radiohead: High affinity terms in Reddit communities. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 57–67, Hong Kong, China. Association for Computational Linguistics. Jonathan Chang and Cristian Danescu-Niculescu-Mizil. 2019. Trajectories of blocked community members: Redemption, recidivism and departure. In The World Wide Web Conference, pages 184–195. Jonathan P Chang, Caleb Chiam, Liye Fu, Andrew Z Wang, Justine Zhang, and Cristian Danescu-Niculescu-Mizil. 2020. Convokit: A toolkit for the analysis of conversations. In Proceedings of SIGDIAL. Morten H Christiansen and Simon Kirby. 2003. Language evolution: Consensus and controversies. Trends in Cognitive Sciences, 7(7):300–307. Juan Camilo Conde-Silvestre. 2012. The role of social networks and mobility in diachronic sociolinguistics. The Handbook of Historical Sociolinguistics, pages 332–352. Tiago Cunha, David Jurgens, Chenhao Tan, and Daniel Romero. 2019. Are all successful communities alike? Characterizing and predicting the success of online communities. In The World Wide Web Conference, pages 318–328. Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. No country for old members: User lifecycle and linguistic change in online communities. In Proceedings of the 22nd International Conference on World Wide Web, pages 307–318. ACM. Marco Del Tredici and Raquel Fernández. 2018. The road to success: Assessing the fate of linguistic innovations in online communities. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1591–1603, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Robin Dodsworth. 2019. Bipartite network structures and individual differences in sound change. Glossa: a journal of general linguistics, 4(1). Penelope Eckert and Sally McConnell-Ginet. 1992. Think practically and look locally: Language and gender as community-based practice. Annual Review of Anthropology, 21(1):461–488. Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric P Xing. 2014. Diffusion of lexical change in social media. PloS one, 9(11). Rahul Goel, Sandeep Soni, Naman Goyal, John Paparrizos, Hanna Wallach, Fernando Diaz, and Jacob Eisenstein. 2016. The social dynamics of language change in online networks. In International Conference on Social Informatics, pages 41–57. Springer. Mark Granovetter. 1983. The strength of weak ties: A network theory revisited. Sociological Theory, pages 201–233. Mark S Granovetter. 1977. The strength of weak ties. In Social Networks, pages 347–367. Elsevier. Jack Grieve, Andrea Nini, and Diansheng Guo. 2017. Analyzing lexical emergence in Modern American English online. English Language & Linguistics, 21(1):99–127. Adrien Guille, Hakim Hacid, Cécile Favre, and Djamel A Zighed. 2013. Information diffusion in online social networks: A survey. ACM SIGMOD Record, 42(2):17–28. 
William L Hamilton, Justine Zhang, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, and Jure Leskovec. 2017. Loyalty in online communities. In Eleventh International AAAI Conference on Web and Social Media. Janet Holmes and Miriam Meyerhoff. 1999. The community of practice: Theories and methodologies in language and gender research. Language in Society, 28(2):173–183. Daniel Kershaw, Matthew Rowe, and Patrick Stacey. 2016. Towards modelling language innovation acceptance in online social networks. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pages 553–562. Paul Kerswill. 2003. Dialect levelling and geographical diffusion in British English. Social Dialectology: In Honour of Peter Trudgill, pages 223–243. Colin Klein, Peter Clutton, and Adam G Dunn. 2019. Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit's conspiracy theory forum. PloS one, 14(11):e0225098. David G Kleinbaum and Mitchel Klein. 2010. Survival Analysis. Springer. Håvard Kvamme, Ørnulf Borgan, and Ida Scheel. 2019. Time-to-event prediction with neural networks and Cox regression. Journal of Machine Learning Research, 20(129):1–30. Håvard Kvamme and Ørnulf Borgan. 2019. Continuous and discrete-time survival prediction with neural networks. arXiv preprint arXiv:1910.06724. William Labov. 2007. Transmission and diffusion. Language, 83(2):344–387. Mikko Laitinen, Masoud Fatemi, and Jonas Lundberg. 2020. Size matters: Digital social networks and language change. Frontiers in Artificial Intelligence, 3:46. Kristina Lerman and Rumi Ghosh. 2010. Information contagion: An empirical study of the spread of news on Digg and Twitter social networks. In Proceedings of the 4th International Conference on Weblogs and Social Media (ICWSM), 2010. Gary Lupyan and Rick Dale. 2010. Language structure is partly determined by social structure. PloS one, 5(1). Binny Mathew, Ritam Dutt, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherjee. 2019. Deep dive into anonymity: Large scale analysis of Quora questions. In International Conference on Social Informatics, pages 35–49. Springer. Allan A Metcalf. 2004. Predicting New Words: The Secrets of Their Success. Houghton Mifflin Harcourt. James Milroy and Lesley Milroy. 1985. Linguistic change, social network and speaker innovation. Journal of Linguistics, 21(2):339–384. Lesley Milroy. 2002. Introduction: Mobility, contact, and language change – working with contemporary speech communities. Journal of Sociolinguistics, 6(1):3–15. Lesley Milroy and Carmen Llamas. 2013. Social networks. The Handbook of Language Variation and Change, pages 407–427. Lesley Milroy and James Milroy. 1992. Social network and social class: Toward an integrated sociolinguistic model. Language in Society, 21(1):1–26. Terttu Nevalainen. 2000. Mobility, social networks and language change in Early Modern England. European Journal of English Studies, 4(3):253–264. Mitchell G Newberry, Christopher A Ahern, Robin Clark, and Joshua B Plotkin. 2017. Detecting evolutionary forces in language change. Nature, 551(7679):223–226. Bill Noble and Raquel Fernández. 2015. Centre stage: How social network position shapes linguistic coordination. In Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, pages 29–38, Denver, Colorado. Association for Computational Linguistics. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. 
John C Paolillo. 1999. The virtual speech community: Social network and language variation on IRC. In Proceedings of the 32nd Annual Hawaii International Conference on System Sciences. 1999. HICSS-32. Abstracts and CD-ROM of Full Papers, 10 pp. IEEE. Michał B Paradowski and Łukasz Jonak. 2012. Diffusion of linguistic innovation as social coordination. Psychology of Language and Communication, 16(2):131–142. Limor Raviv, Antje Meyer, and Shiri Lev-Ari. 2019. Larger communities create more systematic languages. Proceedings of the Royal Society B, 286(1907):20191262. Florencia Reali, Nick Chater, and Morten H Christiansen. 2018. Simpler grammar, larger vocabulary: How population size affects language. Proceedings of the Royal Society B: Biological Sciences, 285(1871):20172586. Thomas M Schwen and Noriko Hara. 2003. Community of practice: A metaphor for online design? The Information Society, 19(3):257–270. Devyani Sharma and Robin Dodsworth. 2020. Language variation and social networks. Annual Review of Linguistics, 6. Ian Stewart and Jacob Eisenstein. 2018. Making fetch happen: The influence of social and linguistic context on nonstandard word growth and decline." ]
[ "abstain", "objective", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Vision-language navigation (VLN) is a challenging task due to its large searching space in the environment.", "To address this problem, previous works have proposed some methods of fine-tuning a large model that pretrained on large-scale datasets.", "However, the conventional fine-tuning methods require extra human-labeled navigation data and lack self-exploration capabilities in environments, which hinders their generalization of unseen scenes.", "To improve the ability of fast cross-domain adaptation, we propose Pro mptb ased E nvironmental S elf-exploration (ProbES), which can self-explore the environments by sampling trajectories and automatically generates structured instructions via a large-scale cross-modal pretrained model (CLIP).", "Our method fully utilizes the knowledge learned from CLIP to build an in-domain dataset by self-exploration without human labeling.", "Unlike the conventional approach of fine-tuning, we introduce prompt-based learning to achieve fast adaptation for language embeddings, which substantially improves the learning efficiency by leveraging prior knowledge.", "By automatically synthesizing trajectory-instruction pairs in any environment without human supervision and efficient prompt-based learning, our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.", "Both qualitative and quantitative results show that our ProbES significantly improves the generalization ability of the navigation model * .", "Teaching a robot to navigate following a natural language instruction has a broad impact in the field of human-robotic interaction.", "Many related tasks have been proposed to delve into this problem.", "The Corresponding author.", "vision-language navigation (VLN) task (Anderson et al., 2018) is proposed where an agent is required to navigate in a photo-realistic environment step-by-step following a natural language instruction.", "Recent tasks (Qi et al., 2020; Zhu et al., 2021) focus on target objects localization that asks an agent to identify an object in an unseen room.", "Solving these tasks requires an agent to obtain a vision-text alignment ability that locates related objects and executes corrective actions according to the instruction.", "However, collecting a large-scale VLN dataset is difficult and laborious since annotating the semantic of a trajectory within a sentence costs times of labor than annotating an image.", "Existing navigation datasets are relatively small-scale, and learning on such datasets hinders the agent to obtain a good generalization ability.", "To solve this problem, EnvDrop (Tan et al., 2019) uses a speaker model to generate instructions for sampled trajectories in unseen environments, but the generalization ability is not strong with limited vision-language understanding ability.", "Recently, VLN-BERT (Ma-jumdar et al., 2020) introduces a visio-linguistic model pretrained on Conceptual Captions (Sharma et al., 2018) dataset to learn from image-caption pairs, which are quite different from trajectory-instruction pairs from VLN.", "To address this, Airbert (Guhur et al., 2021) constructs a large-scale in-domain pretraining dataset with image-caption pairs collected from online marketplaces such as Airbnb to finetune ViLBERT.", "However, Airbert collects image captioning data on websites, which are still far from the scenario of vision-language navigation.", "Different from previous methods that collect human-labeled data to train a navigation model, we suggest that automatically generating 
instruction-trajectory pairs by self-exploration for pretraining not only helps the model obtain better generalization ability but also achieves fast adaptation to downstream tasks.", "We therefore propose Prompt-based Environmental Self-exploration (ProbES), which generates navigation data automatically using prior knowledge and quickly adapts the pretrained model to VLN tasks.", "An overview of our proposed framework is shown in Figure 1.", "By using this method, a pretrained visio-linguistic model is able to adapt to the VLN task automatically and efficiently.", "Specifically, we build an in-domain dataset by self-exploration, without labeling or crawling.", "To build such a dataset, we proceed as follows.", "We first generate templates by masking visual and action words in labeled instructions.", "Then, we sample trajectories in the training environment.", "A pretrained CLIP (Radford et al., 2021) model is used to recognize rooms and objects in the sampled trajectories and match described phrases with them.", "We construct instructions by filling the matched phrases into sampled templates.", "By leveraging the prior knowledge learned by CLIP, we are able to build a dataset automatically with rich semantic information.", "Meanwhile, since finetuning the whole pretrained model is time-consuming, we adopt prompt tuning (Li and Liang, 2021; Liu et al., 2021c,b), a lightweight alternative to finetuning.", "Our prompt-based method can distill task-relevant knowledge from the pretrained model and achieve fast adaptation to downstream tasks.", "We evaluate ProbES on the R2R (Anderson et al., 2018) and REVERIE (Qi et al., 2020) datasets in both discriminative and generative settings.", "Results show that ProbES can match or surpass the performance of finetuning with substantially less training time.", "Our contributions are as follows: (1) We propose ProbES, a novel self-exploration method to automatically build an in-domain dataset that reduces the domain gap between the pretraining dataset and VLN tasks without human labeling; (2) Compared with finetuning a large pretrained model, our proposed prompt tuning can achieve fast adaptation; (3) Experiments are conducted on the R2R and REVERIE datasets in generative and discriminative settings, and results indicate that our proposed ProbES can achieve better or comparable performance.", "Besides, our generated data can be used as augmented data, which improves the generalization ability of the model.", "Vision-and-Language Navigation.", "Anderson et al. (Anderson et al., 2018) proposed the first Vision-Language Navigation (VLN) benchmark combining real imagery (Chang et al., 2017) and natural language navigation instructions.", "To solve this task, Wang et al. 
(Wang et al., 2020) proposed a novel SERL model to learn reward functions from the expert distribution.", "Combining imitation learning and reinforcement learning (Wang et al., 2019) has also proved beneficial for VLN.", "Since the VLN dataset is relatively small-scale, some works propose augmentation approaches (Fried et al., 2018; Tan et al., 2019; Liu et al., 2021a) to improve robustness.", "Auxiliary losses (Majumdar et al., 2020; Zhu et al., 2020; Liang et al., 2021) are used to take advantage of the additional training signals derived from the semantic information.", "Some pretraining methods (Huang et al., 2019; Hao et al., 2020) have been proposed to learn generic cross-modal representations.", "This is further extended to a recurrent model that significantly improves sequential action prediction (Hong et al., 2021).", "However, the limited number of environments in pretraining constrains the generalization ability to unseen scenarios.", "Most related to this work, VLN-BERT (Majumdar et al., 2020) transfers knowledge from abundant but out-of-domain image-text data to improve path-instruction matching.", "In contrast, we not only propose an effective method to build an in-domain dataset by sampling trajectories and generating instructions with templates, but also present a prompt-based pretraining strategy to improve VLN.", "Vision-and-Language Pretraining.", "Vision-and-language pretraining has made great progress in recent years.", "Inspired by BERT (Devlin et al., 2019), much work has extended it to process visual tokens and pretrain on large-scale image-text pairs for learning generic visio-linguistic representations.", "Previous research introduces one-stream BERT models and two-stream BERT models.", "The former directly perform inter-modal grounding (Li et al., 2019; Su et al., 2019; Alberti et al., 2019; Li et al., 2020a; Chen et al., 2020; Zhou et al., 2020; Li et al., 2020b), while two-stream models process visual and textual inputs in separate streams, and then fuse the two modalities at a later stage (Lu et al., 2019; Tan and Bansal, 2019).", "These models are often pretrained with self-supervised objectives akin to those in BERT: masked language modeling, masked object classification, and sentence-image alignment.", "In this work, the architecture of the ProbES model is structurally similar to ViLBERT (Lu et al., 2019).", "We make several VLN-specific adaptations to ViLBERT so that pretrained weights can be transferred to initialize large portions of the model.", "Different from VLN-BERT, which fine-tunes ViLBERT on instruction-trajectory pairs to measure their compatibility in a beam-search setting, we introduce prompt tuning, which tunes only the continuous prompts.", "Prompting.", "Natural language prompting freezes pretrained models and reformats the natural language input with example prompts.", "GPT-3 (Brown et al., 2020) introduces in-context learning, using manually designed and discrete text prompts.", "Sun and Lai (2020) also leverage prompts as keywords to control the sentiment or topic of the generated sentence.", "AutoPrompt (Shin et al., 2020) searches for a sequence of discrete trigger words and concatenates it with each input to elicit sentiment or factual knowledge from a masked LM.", "Different from discrete text prompts, some methods examine continuous prompts (a.k.a. 
soft prompts) that perform prompting directly in the embedding space of the model.", "Prefix-Tuning (Li and Liang, 2021) prepends a sequence of continuous task-specific vectors as virtual tokens to the input.", "(Zhong et al., 2021; Qin and Eisner, 2021; Hambardzumyan et al., 2021) introduce continuous templates following manual prompt templates.", "P-tuning (Liu et al., 2021c) uses continuous prompts which are learned by inserting trainable variables into the embedded input.", "PTR (Han et al., 2021) adopts manually crafted sub-templates and generates complete templates by logic rules.", "In ProbES, we prepend continuous task-specific vectors to the embedding of the input instruction and directly tune the embeddings of these vectors.", "After prompt tuning, the model can be adapted to the VLN and REVERIE tasks.", "The Vision-and-Language Navigation (VLN) task gives a global natural-language sentence I = {w_0, ..., w_l} as an instruction, where w_i is a word token and l is the length of the sentence.", "The instruction consists of step-by-step guidance toward the goal.", "At step t, the agent observes a panoramic view O_t = {o_{t,i}}, i = 1, ..., 36, as the vision input, which is composed of 36 RGB image views.", "Each of these views consists of an image feature v_i and an orientation description (sin θ_{t,i}, cos θ_{t,i}, sin φ_{t,i}, cos φ_{t,i}), where θ and φ denote heading and elevation.", "Candidates in the panoramic action space consist of the k neighbours of the current node in the navigation graph and a stop action.", "We first generate templates from instructions in the R2R dataset.", "Then we sample trajectories in the training environment.", "We generate candidate noun phrases and actionable verbs for the sampled trajectories and full-fill the templates with these words.", "A detailed demonstration of our instruction generation module is shown in Fig. 2.", "Generating Templates We collect phrases and replace these phrases in human-annotated navigation instructions with blank masks to generate templates.", "Different from Airbert (Guhur et al., 2021), which only extracts noun phrases, we also mask action words like 'left', 'right', 'forward', and 'around'. (Figure 2: overview of the instruction generation module — environment views are captioned by CLIP, and a filling module full-fills a sampled template into an instruction such as 'Turn left and walk through the living room.')", "We denote O_mask as the mask for an object and A_mask as the mask for an action.", "The generated templates are like 'Turn A_mask and walk past O_mask. Once out, walk A_mask O_mask. Stop once you reach O_mask'.", "More examples are shown in Table 1.", "A trajectory is denoted as {v_1, v_2, ..., v_n}, where v_i represents an observation viewpoint.", "We introduce CLIP (Radford et al., 2021) to select candidate phrases c and match them to each view v_i.", "We first embed the sentence 'a photo of [c_noun]' with CLIP, where c_noun represents the noun-phrase candidates (room or object classes labeled in the Matterport dataset).", "Then we embed the view image with the vision encoder of CLIP and calculate the similarity of the language embedding and the vision embedding.", "We select the candidate with the highest matching score for the view v_i.", "Each view has two matched candidates, one for the detected room and another for an object.", "Then the description c_i of this view is written in 3 formats randomly: '[room]', '[object]' or '[room] with [object]'.", "Since trajectories are sampled in the environment, we can obtain actionable verbs a_i between two viewpoints by comparing headings and elevations.",
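The view-phrase matching described above can be sketched with an off-the-shelf CLIP; the checkpoint name and the candidate list are assumptions, while the 'a photo of [c_noun]' prompt format follows the text.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def best_phrase(view: Image.Image, candidates: list[str]) -> str:
    # embed 'a photo of [c_noun]' for every candidate and the view image,
    # then pick the candidate with the highest image-text similarity
    prompts = [f"a photo of {c}" for c in candidates]
    inputs = processor(text=prompts, images=view, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image[0]
    return candidates[sims.argmax().item()]
```

Running this once with room classes and once with object classes would yield the two matched candidates per view.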
"Sampling Trajectories and Actions We first sample trajectories in the Matterport (Chang et al., 2017) environment.", "We randomly sample the starting and ending positions, and collect tracks with lengths of less than 8 hops.", "Then we obtain the corresponding actions of each trajectory through first-person movement.", "If the agent chooses the front navigable position to move to, we generate a 'forward' action.", "If the agent chooses the back navigable position to move to, we generate an 'around' action.", "Otherwise, if the agent selects the right-front navigable position to move to for the next step, we generate an action sequence like {'right', 'forward'}, which is used to fill actionable verbs during instruction generation.", "Full-filling Template with Prior Knowledge Prior knowledge is the key to generating high-quality data without human labeling.", "ProbES introduces CLIP, a powerful vision-language alignment model learned from a large-scale image-caption dataset.", "To generate structured augmentation data, we full-fill the templates with phrases that describe the sampled trajectory and actions.", "We randomly select a template with the same or a close number of O_mask slots as the number of viewpoints in the sampled trajectory.", "The template has a sequence of object masks {O_mask,1, O_mask,2, ..., O_mask,i} and a sequence of action masks {A_mask,1, A_mask,2, ..., A_mask,j}.", "The lengths of the object masks and action masks are denoted as l and n respectively.", "The numbers of object masks and action masks are roughly balanced.", "Let n_v be the number of viewpoints in a sampled trajectory.", "Then the generated captions of this trajectory are written as {c_1, c_2, ..., c_{n_v}}.", "We full-fill the templates by the following rules: 1) if n_v >= l, we randomly sample l captions and fill the O_mask slots in the template sequentially; 2) if n_v < l, we randomly sample the O_mask slots and use all the caption phrases to fill them. (Table 1: Examples of generated templates.)", "After filling phrases, we can identify at which viewpoint A_mask,i may appear, since the viewpoints of the O_mask,j near it are already known.", "For example, if the template is like 'O_mask,1 A_mask,1 O_mask,2' and the captions of v_1 and v_2 are used to fill O_mask,1 and O_mask,2 respectively, then A_mask,1 is the sampled action between v_1 and v_2.", "In this way, we use generated actionable verbs to full-fill the templates and get the final instructions.", "By the above method, we can generate diverse instructions without human labeling.", "Prompt tuning has been found effective on many natural language understanding (NLU) tasks.", "Motivated by this, we introduce a prompt-based architecture to achieve fast adaptation on the self-exploration dataset (e.g., Conceptual Captions) and downstream tasks.", "The architecture is ViLBERT-like and equipped with a prompt encoder for prompt tuning.", "Given an instruction-trajectory pair, the visual and textual features can be extracted by the visual encoder E_v and the textual encoder E_x in ViLBERT respectively.", "Specifically, the textual input has two parts: a prompt sequence {p_1, ..., p_n} and a word sequence {x_1, ..., x_m}, where p and x indicate a pseudo prompt token and a word token of a generated instruction respectively.", "n and m represent the lengths of the prompt sequence and the word sequence respectively.", "We embed the prompt sequence with the prompt encoder E_p and the word sequence with the textual encoder E_x as follows: e_{p,1}, ..., e_{p,n} = E_p(p_1, ..., p_n); e_{x,1}, ..., e_{x,m} = E_x(x_1), ..., E_x(x_m). (1) Here E_p is composed of an LSTM head followed by an MLP head.",
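A minimal PyTorch sketch of such a prompt encoder E_p from Eq. (1) — trainable pseudo-token embeddings passed through an LSTM head and then an MLP head; all dimensions are assumptions, since the paper does not specify them here.

```python
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    def __init__(self, n_prompts: int, dim: int = 768, hidden: int = 256):
        super().__init__()
        self.pseudo = nn.Embedding(n_prompts, dim)  # trainable pseudo prompt tokens
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, batch_size: int) -> torch.Tensor:
        ids = torch.arange(self.pseudo.num_embeddings,
                           device=self.pseudo.weight.device)
        e = self.pseudo(ids).unsqueeze(0)   # (1, n_prompts, dim)
        h, _ = self.lstm(e)
        out = self.mlp(h)                   # e_{p,1..n}, prepended to the word embeddings
        return out.expand(batch_size, -1, -1)
```

During prompt tuning, only these parameters (plus the visual embedding) would receive gradients while E_x stays frozen, matching the training recipe described next.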
"Then the textual embedding is mapped to e_t = {e_{p,1}, ..., e_{p,n}, e_{x,1}, ..., e_{x,m}}, where e_{p,1}, ..., e_{p,n} are trainable embedding tensors that enable us to find better continuous prompts.", "Let e_v denote the visual embedding produced by the visual encoder E_v.", "e_t and e_v are then passed to the co-attention transformer, similar to ViLBERT.", "Then, in the prompt tuning process, we train only E_p and fix the parameters of E_x for the language stream.", "For the vision stream, since the trajectory is represented as a sequence of panoramic image regions, which is different from the image-caption pairs that VLMs are pretrained on, we also update the visual embedding during prompt tuning.", "The visual embedding contains an image embedding and a location embedding.", "We sample hard negative paths based on distance in the environment for an instruction-trajectory pair, and the model is trained to choose the best path among them.", "Our model can adapt to diverse downstream navigation tasks, including VLN, a step-by-step navigation task, and REVERIE, an object-oriented navigation task.", "In the step-by-step navigation task, our model receives an instruction sentence and navigates following the commands in the instruction sequentially.", "In the object navigation task, our model receives an object description and explores the house to find the object.", "Also, our model can be adapted to both discriminative and generative navigation settings.", "In the discriminative setting, our model receives both an instruction and the observation sequence representing a navigation trajectory, and then outputs a score.", "In the generative setting, our model receives an instruction and predicts actions sequentially.", "We experiment with our proposed ProbES on two downstream tasks: goal-oriented navigation (R2R (Anderson et al., 2018)) and object-oriented navigation (REVERIE (Qi et al., 2020)).", "ProbES can be easily applied to discriminative and generative models for these two tasks.", "Evaluation Metrics A large number of metrics are used to evaluate models in VLN, such as Trajectory Length (TL), the trajectory length in meters; Navigation Error (NE), the navigation error in meters; Oracle Success Rate (OR), the rate at which the agent successfully stops at the closest point; Success Rate (SR), the success rate of reaching the goal; and Success rate weighted by (normalized inverse) Path Length (SPL) (Anderson et al., 2018).", "The VLN task regards SR and SPL as the primary metrics, and the REVERIE task regards RGS and RGSPL as the primary metrics.", "Implementation Details Our training process is divided into two steps: firstly, we pretrain our model on our generated self-exploration training set with prompt tuning for only 10 epochs.", "After that, we adapt our model to the downstream discriminative VLN task with only the ranking loss for 20 epochs.", "The batch size is set to 64 and the learning rate is 4 × 10^-5. (Table 4: Results comparing ProbES with VLN-BERT in the discriminative setting.)", "The generative navigation settings are the same as Recurrent VLN-BERT on both R2R and REVERIE.", "During pretraining, we use ProbES to generate 50k instruction-trajectory pairs.", "We use 32 NVIDIA V100 GPUs for pretraining and 8 GPUs for adaptation.", "Experiments with generative settings are conducted on a single V100 GPU.", "In this section, we compare our model with previous state-of-the-art methods.", "We compare ProbES with two baselines 
(ViLBERT and VLN-BERT, built on Recurrent VLN-BERT) and five other methods.", "A brief description of the previous models is as follows: 1) Seq2Seq: a sequence-to-sequence model reported in (Anderson et al., 2018); 2) Speaker-Follower (Fried et al., 2018): a method that introduces a data augmentation approach and a panoramic action space; 3) PRESS (Li et al., 2019): a conventional fine-tuning method with stochastic instruction sampling; 4) EnvDrop (Tan et al., 2019): a method that augments data with environmental dropout; 5) Recurrent VLN-BERT (Hong et al., 2021) in three different settings: OSCAR and ViLBERT pretrained on out-of-domain data, and VLN-BERT pretrained on R2R.", "We compare the models on three splits of the R2R dataset: validation seen house, validation unseen house, and testing (where the houses are also unseen).", "We also compare ProbES with Seq2Seq, RCM (Wang et al., 2019), SMNA (Ma et al., 2019), FAST-MATTN (Qi et al., 2020), and Recurrent VLN-BERT (Hong et al., 2021) with OSCAR on the REVERIE dataset.", "Results on R2R We compare ProbES with previous state-of-the-art methods on the R2R dataset in the generative setting, which predicts actions sequentially, as shown in Table 2.", "Rec indicates using Recurrent VLN-BERT (Hong et al., 2021) with different backbones or parameter initialization.", "In the validation seen split, compared to VLN-BERT under the same setting, our ProbES achieves a 5% improvement on SR and a 5% improvement on SPL.", "In the validation unseen split, we achieve a 1% improvement on SR compared to VLN-BERT.", "In the testing split, ProbES shows competitive results.", "Note that the PREVALENT backbone is pretrained on an in-domain R2R dataset with scene features and fine-tuned with an additional action prediction task in a generative setting, while ProbES does not use labeled R2R data or augmented data generated by a speaker model (Fried et al., 2018).", "Results in Discriminative Setting We compare ProbES with VLN-BERT in the discriminative setting, which outputs scores for instruction-trajectory pairs, as in Table 4.", "In the validation unseen split, our method outperforms VLN-BERT, which indicates that ProbES is able to improve the generalization ability for unseen scenes.", "Results on REVERIE We compare ProbES with previous state-of-the-art methods on the REVERIE dataset, as shown in Table 3.",
"Implementation Details. Our training process is divided into two steps: first, we pretrain our model on our generated self-exploration training set with prompt tuning for only 10 epochs.", "After that, we adapt our model to the downstream discriminative VLN task with only the ranking loss for 20 epochs.", "The batch size is set to 64 and the learning rate is $4 \times 10^{-5}$.", "Table 4: Results comparing ProbES with VLN-BERT in the discriminative setting.", "The generative navigation settings are the same as Recurrent VLN-BERT on both R2R and REVERIE.", "During pretraining, we use ProbES to generate 50k instruction-trajectory pairs.", "We use 32 NVIDIA V100 GPUs for pretraining and 8 GPUs for adaptation.", "Experiments with generative settings are conducted on a single V100 GPU.", "In this section, we compare our model with previous state-of-the-art methods.", "We compare ProbES with two baselines (ViLBERT and VLN-BERT built on Recurrent VLN-BERT) and five other methods.", "A brief description of the previous models is as follows: 1) Seq2Seq: a sequence-to-sequence model reported in (Anderson et al., 2018); 2) Speaker-Follower (Fried et al., 2018): a method that introduces a data augmentation approach and a panoramic action space; 3) PRESS (Li et al., 2019): a conventional fine-tuning method with stochastic instruction sampling; 4) EnvDrop (Tan et al., 2019): a method that augments data with environmental dropout; 5) Recurrent VLN-BERT (Hong et al., 2021) in three different settings: OSCAR and ViLBERT pretrained on out-of-domain data, and VLN-BERT pretrained on R2R.", "We compare the models on three splits of the R2R dataset: validation seen houses, validation unseen houses, and testing (where the houses are also unseen).", "We also compare ProbES with Seq2Seq, RCM (Wang et al., 2019), SMNA (Ma et al., 2019), FAST-MATTN (Qi et al., 2020), and Recurrent VLN-BERT (Hong et al., 2021) on OSCAR on the REVERIE dataset.", "Results on R2R. We compare ProbES with previous state-of-the-art methods on the R2R dataset in the generative setting, which predicts actions sequentially, as shown in Table 2.", "Rec indicates using Recurrent VLN-BERT (Hong et al., 2021) with different backbones or parameter initializations.", "In the validation seen split, compared to VLN-BERT under the same setting, our ProbES achieves a 5% improvement in SR and a 5% improvement in SPL.", "In the validation unseen split, we achieve a 1% improvement in SR compared to VLN-BERT.", "In the testing split, ProbES shows competitive results.", "Note that the PREVALENT backbone is pretrained on the in-domain R2R dataset with scene features and fine-tuned with an additional action prediction task in the generative setting, while ProbES uses neither labeled R2R data nor augmented data generated by a speaker model (Fried et al., 2018).", "Results in the Discriminative Setting. We compare ProbES with VLN-BERT in the discriminative setting, which outputs scores for instruction-trajectory pairs, as shown in Table 4.", "In the validation unseen split, our method outperforms VLN-BERT, which indicates that ProbES improves generalization to unseen scenes.", "Results on REVERIE. We compare ProbES with previous state-of-the-art methods on the REVERIE dataset, as shown in Table 3.", "In the validation unseen split, we achieve a 0.42% improvement in RGS and a 0.65% improvement in RGSPL.", "In the testing split, ProbES achieves a 0.87% improvement in RGS and a 0.69% improvement in RGSPL.", "We can see that ProbES benefits from prompt tuning with our generated instruction-trajectory pairs.", "Ablation of Learning Strategies. In Table 5, we ablate the performance gains from different learning strategies.", "Table 5: Ablation of different modules during pretraining and finetuning.", "PT and FT represent prompt tuning and fine-tuning, respectively.", "Mask and Rank stand for the masked multi-modal modeling loss and the ranking loss for the path-selection task.", "We regard the model fine-tuned with the ranking loss as our baseline.", "The masked multi-modal modeling loss on both our data and the R2R data improves performance.", "Fine-tuning on our data also improves generalization ability: the success rate in the validation unseen split gains 1.1% and reaches 59.0%.", "Finally, we find that pretraining on our data with prompt tuning improves the baseline performance by 20.8% in the validation unseen split, achieving the best performance.", "Our model outperforms the model fine-tuned on the R2R dataset by 1.1% in the unseen split, indicating that ProbES improves the generalization ability of the navigation model.", "Ablation of Instruction Generation. Table 6 presents comprehensive ablation experiments showing the impact of the key steps of our instruction-generation strategy; the experiments are performed with the baseline model, IL + RL from EnvDrop (Tan et al., 2019).", "Class indicates the classes we feed into CLIP.", "M and P/O represent classes from the Matterport and the Places365/Objects365 datasets, respectively.", "G_Template denotes the strategy used to generate templates; 'ours' denotes the strategy described in Sec 3.2.", "For S_Template, 'random' and 'match' indicate sampling a template randomly and choosing a template with the same number of masks as the number of viewpoints, respectively; a sketch of the two strategies follows.", "As shown in Table 6, randomly selecting a template without considering the number of masked tokens degrades performance and introduces more noise into the data.",
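"The two selection strategies can be sketched as follows; representing template blanks with a '[MASK]' placeholder is an illustrative assumption of ours, not the paper's exact implementation.",

```python
import random

def select_template(templates, num_viewpoints, strategy="match"):
    """S_Template strategies ablated in Table 6. `templates` is assumed to
    be a list of strings whose blanks are written as '[MASK]'."""
    if strategy == "random":
        # Ignores trajectory length; the ablation shows this adds noise.
        return random.choice(templates)
    # 'match': keep only templates whose number of masks equals the number
    # of sampled viewpoints, so every blank can be filled with a caption.
    matching = [t for t in templates if t.count("[MASK]") == num_viewpoints]
    return random.choice(matching) if matching else random.choice(templates)
```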
"Results show that training with our generated data (Row 3) improves performance by a large margin.", "The model using rooms and objects from Places365 (Zhou et al., 2017) and Objects365 (Shao et al., 2019) (Row 4) performs worse than the one using rooms and objects from Matterport.", "We infer that Places365 and Objects365 contain many outdoor scenes and objects that are not suitable for VLN.", "Figure 3: Statistical analysis of generated instructions.", "Visualization of Data Distribution. Figure 3 presents a statistical analysis of our generated instructions.", "The left plot shows that the number of object masks is larger than the number of action masks, indicating that the instructions contain rich information generated by CLIP from the sampled observations.", "The right plot shows the distribution of instruction lengths.", "The lengths of most instructions range from 10 to 30, which matches the R2R dataset.", "The easy samples and hard samples in our generated instructions are balanced.", "Visualization of Trajectory-Instruction Pairs. Here we provide visualizations of the data generated by ProbES.", "Figure 4 shows instruction-trajectory samples generated with our strategy.", "For each sample, we visualize the observations of the trajectory, the captions generated with CLIP, the selected template, and the final instruction generated by ProbES.", "The generated object classes fit the observed scenes well, so we can infer that CLIP is able to extract key information from the observations.", "Also, our method can select a suitable template and generate diverse instructions that correctly describe the observations of the trajectories.", "The length of our generated instructions ranges from 1 to 3 sentences, which matches the data distribution of the R2R dataset.", "In this work, we first introduce an effective way to generate in-domain data for pretraining a VLN model: leveraging the large pretrained CLIP model to generate captions for each viewpoint and sampling actions in the environment.", "Experiments show that the domain gap between the pretraining data and VLN tasks can be mitigated.", "We also propose a prompt-based architecture, which introduces prompt tuning to adapt the pretrained model quickly.", "Our proposed ProbES achieves better results than the baselines on both the R2R and REVERIE datasets, and ablations show the contribution of each module and the effectiveness of the generated data.", "This work was supported in part by the National Natural Science Foundation of China (NSFC) No. 61976233, Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key) Grant No. 2019B1515120039, the Guangdong Outstanding Youth Fund (Grant No. 2021B1515020061), the Shenzhen Fundamental Research Program (Project No. RCYX20200714114642083, No. JCYJ20190807154211365), and the CAAI-Huawei MindSpore Open Fund.", "We thank MindSpore, a new deep learning computing framework, for partial support of this work, which was also supported by the Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou 510006, China." ]
[ "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "method", "method", "result", "method", "abstain", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "result", "method", "result", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other", "other" ]
[ "We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016).", "We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-of-speech tagging, and labeled dependencies.", "Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V 2) that is four times larger than the (unlabeled) TWEEBANK V 1 introduced by Kong et al. (2014).", "We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets.", "Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD.", "To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one.", "Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-of-the-art on other treebanks in both accuracy and speed.", "NLP for social media messages is challenging, requiring domain adaptation and annotated datasets (e.g., treebanks) for training and evaluation.", "Pioneering work by Foster et al. (2011) annotated 7,630 tokens' worth of tweets according to the phrase-structure conventions of the Penn Treebank (PTB; Marcus et al., 1993), enabling conversion to Stanford Dependencies.", "Kong et al. (2014) further studied the challenges in annotating tweets and presented a tweet treebank (TWEEBANK ), consisting of 12,149 tokens and largely following conventions suggested by Schneider et al. (2013), fairly close to Yamada and Matsumoto (2003) dependencies (without labels).", "Both annotation efforts were highly influenced by the PTB, whose guidelines have good grammatical coverage on newswire.", "However, when it comes to informal, unedited, user-generated text, the guidelines may leave many annotation decisions unspecified.", "Universal Dependencies (Nivre et al., 2016, UD) were introduced to enable consistent annotation across different languages.", "To allow such consistency, UD was designed to be adaptable to different genres (Wang et al., 2017) and languages (Guo et al., 2015; Ammar et al., 2016).", "We propose that analyzing the syntax of tweets can bene-fit from such adaptability.", "In this paper, we introduce a new English tweet treebank of 55,607 tokens that follows the UD guidelines, but also contends with social media-specific challenges that were not covered by UD guidelines.", "1 Our annotation includes tokenization, part-of-speech (POS) tags, and (labeled) Universal Dependencies.", "We characterize the disagreements among our annotators and find that consistent annotation is still challenging to deliver even with the extended guidelines.", "Based on these annotations, we nonetheless designed a pipeline to parse raw tweets into Universal Dependencies.", "Our pipeline includes: a bidirectional LSTM (bi-LSTM) tokenizer, a word clusterenhanced POS tagger (following Owoputi et al., 2013), and a stack LSTM parser with character-based word representations (Ballesteros et al., 2015), which we refer to as our baseline parser.", "To overcome the noise in our annotated 1 We developed our treebank independently of a similar effort for Italian tweets (Sanguinetti et al., 2017).", "See 2.5 for a comparison.", "data and achieve better performance without sacrificing computational efficiency, we distill a 20-parser ensemble into a single greedy parser (Hin-ton et 
"We show further that learning directly from the exploration of the ensemble parser is more beneficial than learning from the gold-standard oracle transition sequence.", "Experimental results show that an improvement of more than 2.2 points in LAS over the baseline parser can be achieved with our distillation method.", "It outperforms other state-of-the-art parsers in both accuracy and speed.", "The contributions of this paper include: We study the challenges of annotating tweets in UD (Section 2) and create a new tweet treebank (TWEEBANK V2), which includes tokenization, part-of-speech tagging, and labeled Universal Dependencies.", "We also characterize the difficulties of creating such annotation.", "We introduce and evaluate a pipeline system to parse raw tweet text into Universal Dependencies (Section 3).", "Experimental results show that it performs better than a pipeline of the state-of-the-art alternatives.", "We propose a new distillation method for training a greedy parser, leading to better performance than existing methods and without efficiency sacrifices.", "Our dataset and system are publicly available at https://github.com/Oneplus/Tweebank and https://github.com/Oneplus/twpipe .", "We first review TWEEBANK V1 of Kong et al. (2014), the previously largest Twitter dependency annotation effort (Section 2.1).", "Then we introduce the differences of our tokenization (Section 2.2) and part-of-speech (Section 2.3) (re)annotation from O'Connor et al. (2010) and Gimpel et al. (2011), respectively, on which TWEEBANK V1 was built.", "We describe our effort to adapt the UD conventions to cover tweet-specific constructions (Section 2.4).", "Finally, we present our process of creating the new tweet treebank, TWEEBANK V2, and characterize the difficulties in reaching consistent annotations (Section 2.6).", "The annotation effort we describe stands in contrast to the previous work by Kong et al. (2014).", "Their aim was the rapid development of a dependency parser for tweets, and to that end they contributed a new annotated corpus, TWEEBANK, consisting of 12,149 tokens.", "Their annotations added unlabeled dependencies to a portion of the data annotated with POS tags by Gimpel et al. (2011) and Owoputi et al. (2013) after rule-based tokenization (O'Connor et al., 2010).", "Kong et al. also contributed a system for parsing; we defer the discussion of their parser to Section 3.", "Kong et al.'s rapid, small-scale annotation effort was heavily constrained.", "It was carried out by annotators with only cursory training, no clear annotation guidelines, and no effort to achieve consensus on controversial cases.", "Annotators were allowed to underspecify their analyses.", "Most of the work was done in a very short amount of time (a day).", "Driven both by the style of the text they sought to annotate and by exigency, some of their annotation conventions included: allowing an annotator to exclude tokens from the dependency tree.", "A clear criterion for exclusion was not given, but many tokens were excluded because they were deemed non-syntactic.",
"Allowing an annotator to merge a multiword expression into a single node in the dependency tree, with no internal structure.", "Annotators were allowed to take the same step with noun phrases.", "Allowing multiple roots, since a single tweet might contain more than one sentence.", "These conventions were justified on the grounds of making the annotation easier for non-experts, but they must be revisited in our effort to apply UD to tweets.", "Our tokenization strategy lies between the strategy of O'Connor et al. (2010) and that of UD.", "There is a tradeoff between preserving the original tweet content and respecting the UD guidelines.", "The regex-based tokenizer of O'Connor et al. (2010), which was originally designed for an exploratory search interface called TweetMotif rather than for NLP, preserves most whitespace-delimited tokens, including hashtags, at-mentions, emoticons, and unicode glyphs.", "They also treat contractions and acronyms as whole tokens and do not split them.", "UD tokenization (see http://universaldependencies.org/u/overview/tokenization.html), in order to better serve dependency annotation, treats each syntactic word as a token.", "It therefore more aggressively splits clitics from contractions (e.g., gonna is tokenized as gon and na; its is tokenized as it and s when s is a copula).", "But acronyms are not touched by the UD tokenization guidelines.", "Thus, we follow the UD tokenization for contractions and leave acronyms like idc (I don't care) as single tokens; a toy sketch of these conventions follows below.", "In the opposite direction of splitting tokens, the UD guidelines also suggest merging multi-token words (e.g., 20 000) into one single token in some special cases.", "We witnessed a small number of tweets that contain multi-token words (e.g., Y O and R E T W E E T) but did not combine them, for simplicity.", "Such tokens account for only 0.07%, and we use the UD goeswith relation to resolve these cases in the dependency annotations.",
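"A toy sketch of the contraction and acronym conventions just described; the lookup tables are illustrative assumptions rather than the actual annotation tooling, and the copula check needed for its is omitted because it requires syntactic context.",

```python
# Toy illustration of the tokenization conventions described above.
CONTRACTION_SPLITS = {
    "gonna": ["gon", "na"],
    "wanna": ["wan", "na"],
    "gotta": ["got", "ta"],
}
ACRONYMS = {"idc", "mfw", "imo"}  # abbreviations kept as single tokens

def ud_style_split(token):
    lower = token.lower()
    if lower in ACRONYMS:
        return [token]                    # acronyms are never split
    if lower in CONTRACTION_SPLITS:
        return CONTRACTION_SPLITS[lower]  # clitics split from contractions
    return [token]                        # everything else left intact here
```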
"Before turning to UD annotations, we (re)annotated the data with POS tags, for consistency with other UD efforts, which adopt the universal POS tagset (a revised and extended version of Petrov et al. (2012) with 17 tags).", "In some cases, conflicting tags arose between the conventions of the UD English Web Treebank (UD_English-EWT; de Marneffe et al., 2014; https://github.com/UniversalDependencies/UD_English-EWT) and the conventions of Gimpel et al. (2011).", "In these cases, we always conformed to UD, enabling consistency (e.g., when we exploit the existing UD_English-EWT treebank in our parser for tweets, Section 3).", "For example, the nominal URL in Figure 2 is tagged as other (X) and + is tagged as symbol (SYM) rather than conjunction (CCONJ).", "Tokens that do not have a syntactic function (see Figure 1, discussed at greater length in the next section) were usually annotated as other (X), except for emoticons, which are tagged as symbol (SYM), following UD_English-EWT.", "Tokens that abbreviate multiple words (such as idc) are resolved to the POS of the syntactic head of the expression, following UD conventions (in this example, the head care is a verb, so idc is tagged as a verb).", "When the token is not phrasal, we use the POS of the left-most sub-phrase.", "For example, mfw (my face when) is tagged as a noun (for face).", "Compared to the effort of Gimpel et al. (2011), our approach simplifies some matters.", "For example, if a token is not considered syntactic by UD conventions, it gets an other (X) tag (Gimpel et al. had more extensive conventions).", "Other phenomena, like abbreviations, are more complicated for us, as discussed above; Gimpel et al. used a single part of speech for such expressions.", "Another important difference follows from the difference in tokenization.", "As discussed in Section 2.2, UD calls for more aggressive tokenization than that of O'Connor et al. (2010), which opted out of splitting contractions and possessives.", "As a consequence of adopting O'Connor et al.'s (2010) tokenization, Gimpel et al. introduced new parts of speech for these cases instead (these tags account for only 2.7% of tokens, leading to concerns about data sparseness in tagging and all downstream analyses).", "For us, these tokens must be split, but universal parts of speech can be applied.", "We adopt UD version 2 guidelines to annotate the syntax of tweets.", "In applying UD annotation conventions to tweets, the choices of Kong et al. (2014) must be revisited.", "We consider the key questions that arose in our annotation effort, and how we resolved them.", "Acronym abbreviations. We follow Kong et al. (2014) and annotate the syntax of an acronym as a single word without normalization.", "Their syntactic functions are decided according to their context.", "Eisenstein (2013) studied the necessity of normalization in social media text and argued that such normalization is problematic.", "Our solution to the syntax of abbreviations follows the spirit of his argument.", "Because abbreviations which clearly carry syntactic functions constitute only 0.06% of the tokens in our dataset, we believe that normalization for acronyms is an unnecessarily complicated step.", "Non-syntactic tokens. The major characteristic that distinguishes tweets from standard texts is that a large proportion of tokens don't carry any syntactic function.", "In our annotation, there are five types of non-syntactic tokens commonly seen in tweets: sentiment emoticons, retweet markers and their following at-mentions, topical hashtags, referential URLs, and truncated words.", "(The tweets we analyze have at most 140 characters; although Twitter has doubled the tweet length limit to 280 characters since our analysis, we believe this type of token will still remain.)", "Figure 1 illustrates examples of these non-syntactic tokens.", "As discussed above, these are generally tagged with the other (X) part of speech, except emoticons, which are tagged as symbol (SYM).", "In our annotation, 7.55% of all tokens belong to one of the five types; detailed statistics can be found in Table 1.", "It is important to note that these types may, in some contexts, have syntactic functions.", "For example, besides being a discourse marker, RT can abbreviate the verb retweet; emoticons and hashtags may be used as content words within a sentence; and at-mentions can be normal vocative proper nouns: see Figure 2.",
"Therefore, the criteria for annotating a token as non-syntactic must be context-dependent.", "Inspired by the way UD deals with punctuation (which is canonically non-syntactic), we adopt the following conventions: if a non-syntactic token is within a sentence that has a clear predicate, it is attached to this predicate.", "The retweet construction is a special case, and we discuss its treatment in the following paragraph.", "If the whole sentence is a sequence of non-syntactic tokens, we attach all these tokens to the first one.", "Non-syntactic tokens are mostly labeled as discourse, but URLs are always labeled as list, following the UD_English-EWT dataset.", "Kong et al. (2014) proposed an additional preprocessing step, token selection, in their annotation process.", "They required the annotators to first select the non-syntactic tokens and exclude them from the final dependency annotation.", "In order to keep our annotation conventions in line with UD norms and preserve the original tweets as much as possible, we include non-syntactic tokens in our annotation, following the conventions above.", "Compared with Kong et al. (2014), we also give a clear definition of non-syntactic tokens, which helped us avoid confusion during annotation.", "Retweet construction. Figure 1 shows an example of the retweet construction (RT @coldplay :).", "This might be treated as a verb phrase, with RT as a verb and the at-mention as an argument.", "This solution would lead to an uninformative root word and, since this expression is idiomatic to Twitter, might create unnecessary confusion for downstream applications aiming to identify the main predicate(s) of a tweet.", "We therefore treat the whole expression as non-syntactic: we assign the other (X) part of speech to both RT and @coldplay, attach the at-mention to RT with the discourse label and the colon to RT with the punct(uation) label, and attach RT to the predicate of the following sentence.", "Constructions handled by UD. A number of constructions that are especially common in tweets are handled by UD conventions: ellipsis, irregular word orders, and paratactic phrases and sentences not explicitly delineated by punctuation.", "Vocative at-mentions. Another idiomatic construction on Twitter is the vocative at-mention (sometimes a signal that a tweet is a reply to a tweet by the mentioned user).", "We treat these at-mentions as vocative expressions, labeling them with the POS tag proper noun (PROPN) and attaching them to the main predicate of the sentence they occur in with the label vocative, as in the UD guidelines (see Figure 2 for an example).", "The first Twitter treebank annotated with Universal Dependencies was the PoSTWITA-UD corpus for Italian (Sanguinetti et al., 2017), which consists of 6,738 tweets (119,726 tokens).", "In their convention, tokenization tends to preserve the original tweet content, but two special cases, articulated prepositions (e.g., nella as in la) and clitic clusters (e.g., guardandosi as guardando si), are tokenized.",
"Their lemmas include spelling normalization, whereas our lemmas only normalize casing and inflectional morphology.", "The current UD guidelines on lemmas are flexible, so variation between treebanks is expected (see http://universaldependencies.org/u/overview/morphology.html#lemmas).", "With respect to tweet-specific constructions, Sanguinetti et al.'s (2017) and our interpretations of headedness are the same, but we differ in the relation labels.", "For topical hashtags, we use discourse while they used parataxis.", "For referential URLs, we use list (following the precedent of UD_English-EWT) while they used dep.", "Our choice of discourse for sentiment emoticons is inspired by the observation that emoticons are annotated as discourse in UD_English-EWT; Sanguinetti et al. (2017) used the same relation for the emoticons.", "Retweet constructions and truncated words were not explicitly touched by Sanguinetti et al. (2017).", "Judging from the released treebank (https://github.com/UniversalDependencies/UD_Italian-PoSTWITA), the RT marker, at-mention, and colon in the retweet construction are all attached to the predicate of the following sentence with dep, vocative:mention, and punct.", "We expect that the official UD guidelines will eventually adopt standards for these constructions so the treebanks can be harmonized.", "Following the guidelines presented above, we create a new Twitter dependency treebank, which we call TWEEBANK V2.", "Data Collection. TWEEBANK V2 is built on the original data of TWEEBANK V1 (840 unique tweets, 639/201 for training/test), along with an additional 210 tweets sampled from the POS-tagged dataset of Gimpel et al. (2011) and 2,500 tweets sampled from the Twitter stream from February 2016 to July 2016 (data downloaded from https://archive.org/).", "The latter data source consists of 147.4M English tweets after being filtered by the lang attribute in the tweet JSON and by langid.py.", "As done by Kong et al. (2014), the annotation unit is always the tweet in its entirety (which may consist of multiple sentences), not the sentence alone.", "Before annotation, we use a simple regular expression to anonymize usernames and URLs.", "Our annotation process was conducted in two stages.", "In the first stage, 18 researchers worked on the TWEEBANK V1 portion and the additional 210 tweets and created the initial annotations in one day.", "Before annotating, they were given a tutorial overview of the general UD annotation conventions and our guidelines specific to annotating tweets.", "Both the guidelines and annotations were further refined by the authors of this paper to increase the coverage of our guidelines and resolve inconsistencies between different annotators during this exercise.", "In the second stage, a tokenizer, a POS tagger, and a parser were trained on the annotated data from the first stage (1,050 tweets in total) and used to automatically analyze the sampled 2,500 tweets.", "Authors of this paper manually corrected the parsed data and finally obtained 3,550 labeled tweets (manual annotation was done with Arborator (Gerdes, 2013), a web platform for drawing dependency trees).", "Newly created annotations are split into train, development, and test sets and appended to the original splits of TWEEBANK V1.", "Statistics of our annotations and data splits are shown in Table 2.",
"We report the inter-annotator agreement between the annotators in the second stage.", "There is very little disagreement on the tokenization annotation.", "The agreement rate is 96.6% on POS, 88.8% on unlabeled dependencies, and 84.3% on labeled dependencies.", "Further analysis shows that the major disagreements on POS involve entity names (30.6%) and topical hashtags (18.1%).", "Taking the example in Figure 1, Fix you can be understood as a verbal phrase but also as the name of the Coldplay single and thus tagged as a proper noun.", "An example of a disagreement on dependencies is shown in Figure 3.", "Depending on whether this is an example of a zero copula construction or a clause-modified noun, either annotation is plausible.", "Tokenization, as the initial step of many NLP tasks, is non-trivial for informal tweets, which include hashtags, at-mentions, and emoticons (O'Connor et al., 2010).", "Context is often required for tokenization decisions; for example, the asterisk in 4*3 is a separate token signifying multiplication, but the asterisk in sh*t works as a mask to evoke censorship and should not be segmented.", "We introduce a new character-level bidirectional LSTM (bi-LSTM) sequence-labeling model (Huang et al., 2015; Ma and Hovy, 2016) for tokenization.", "Our model takes the raw sentence and tags each character in this sentence as to whether it is the beginning of a word (1 for the beginning and 0 otherwise).", "Figure 4 shows the architecture of our tokenization model.", "Space is treated as an input but is deterministically assigned a special tag $.",
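"A minimal PyTorch sketch of such a character-level labeler; the embedding size is an illustrative assumption.",

```python
import torch
import torch.nn as nn

TAGS = {"0": 0, "1": 1, "$": 2}  # not-beginning, beginning-of-word, space

class CharBiLSTMTokenizer(nn.Module):
    """Character-level bi-LSTM sequence labeler for tokenization."""
    def __init__(self, n_chars, char_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden, bidirectional=True,
                              batch_first=True)
        self.out = nn.Linear(2 * hidden, len(TAGS))

    def forward(self, char_ids):            # (batch, n_characters)
        h, _ = self.bilstm(self.embed(char_ids))
        return self.out(h)                  # per-character tag scores
```

"At inference time, predictions at space positions would simply be overwritten with the deterministic $ tag.",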
"Experimental results. Our preliminary results showed that our model trained on the combination of UD_English-EWT and TWEEBANK V2 outperformed the one trained only on UD_English-EWT or TWEEBANK V2, consistent with previous work on dialect treebank parsing (Wang et al., 2017).", "So we trained our tokenizer on the training portion of TWEEBANK V2 combined with the UD_English-EWT training set and tested it on the TWEEBANK V2 test set.", "We report F1 scores, combining precision and recall for token identification.", "Table 3 shows the tokenization results, compared to other available tokenizers.", "Table 3: Tokenizer comparison on the TWEEBANK V2 test set (System, F1): Stanford CoreNLP, 97.3; Twokenizer, 94.6; UDPipe v1.2, 97.4; our bi-LSTM tokenizer, 98.3.", "Stanford CoreNLP (Manning et al., 2014) and Twokenizer (O'Connor et al., 2010; we use the updated version from Owoputi et al. (2013)) are rule-based systems and were not adapted to the UD tokenization scheme.", "The UDPipe v1.2 (Straka and Straková, 2017) model was re-trained on the same data as our system.", "Compared with UDPipe, we use an LSTM instead of a GRU in our model, and we also use a larger size for the hidden units (64 vs. 20), which has stronger representational power.", "Our bi-LSTM tokenizer achieves the best accuracy among all these tokenizers.", "These results speak to the value of statistical modeling in tokenization for informal texts.", "Part-of-speech tagging for tweets has been extensively studied (Ritter et al., 2011; Gimpel et al., 2011; Derczynski et al., 2013; Owoputi et al., 2013; Gui et al., 2017).", "We therefore consider existing POS taggers for tweets instead of developing our own.", "On the annotation scheme designed in Section 2.3, based on UD and adapted for Twitter, we compared several existing systems: the Stanford CoreNLP tagger, Owoputi et al.'s (2013) word cluster-enhanced tagger (both greedy and CRF variants), and Ma and Hovy's (2016) neural network tagger, which achieves state-of-the-art performance on the PTB.", "Gui et al. (2017) presented a state-of-the-art neural tagger for Twitter, but their implementation works only with the PTB tagset, so we exclude it.", "All compared systems were re-trained on the combination of the UD_English-EWT and TWEEBANK V2 training sets.", "We use the Twitter-specific GloVe embeddings released by Pennington et al. (2014) in all neural taggers and parsers.", "Table 5: Owoputi et al. (2013) POS tagging performance with automatic tokenization on the TWEEBANK V2 test set (Tokenization system, F1): Stanford CoreNLP, 92.3; our bi-LSTM tokenizer (Section 3.1), 93.3.", "Experimental results. We tested the POS taggers on the TWEEBANK V2 test set.", "Results with gold-standard tokenization are shown in Table 4.", "Careful feature engineering and Brown et al. (1992) clusters help Owoputi et al.'s (2013) feature-based POS taggers outperform Ma and Hovy's (2016) neural network model.", "Results of the Owoputi et al. (2013) tagger with non-greedy inference on automatically tokenized data are shown in Table 5.", "We see that errors in tokenization do propagate, but tagging performance is above 93% with our tokenizer.", "Social media applications typically require processing large volumes of data, making speed an important consideration.", "We therefore begin with the neural greedy stack LSTM parser introduced by Ballesteros et al. (2015), which can parse a sentence in linear time and harnesses character representations to construct word vectors, which should help mitigate the challenge of spelling variation.", "We encourage the reader to refer to their paper for more details about the model.", "In our initial experiments, we train our parser on the combination of the UD_English-EWT and TWEEBANK V2 training sets.", "Gold-standard tokenization and automatic POS tags are used.", "Automatic POS tags are assigned with 5-fold jackknifing.", "Hyperparameters are tuned on the TWEEBANK V2 development set.", "Unlabeled attachment score (UAS) and labeled attachment score (LAS), including punctuation, are reported.", "All the experiments were run on a Xeon E5-2670 2.6 GHz machine.", "Table 6: Dependency parser comparison on the TWEEBANK V2 test set, with automatic POS tags (System, UAS, LAS, Kt/s): Kong et al. (2014), 81.4, 76.9, 0.3; Dozat et al. (2017), 81.8, 77.7, 1.7; Ballesteros et al. (2015), 80.2, 75.7, 2.3; Ensemble (20), 83.4, 79.4, 0.2; Distillation (α = 1.0), 81.8, 77.6, 2.3; Distillation (α = 0.9), 82.0, 77.8, 2.3; Distillation w/ exploration, 82.1, 77.9, 2.3.",
"Reimers and Gurevych (2017) and others have pointed out that neural network training is nondeterministic and depends on the seed for the random number generator.", "Our preliminary experiments confirm this finding, with a gap of 1.4 LAS on development data between the best (76.2) and worst (74.8) runs.", "To control for this effect, we report the average of five differently-seeded runs, for each of our models and the compared ones.", "Initial results. The first section of Table 6 compares the stack LSTM with TWEEBOPARSER (the system of Kong et al., 2014) and the state-of-the-art parser in the CoNLL 2017 evaluations, due to Dozat et al. (2017).", "Kong et al.'s (2014) parser is a graph-based parser with lexical and word cluster features, and it uses dual decomposition for decoding.", "The parser of Dozat et al. (2017) is also a graph-based parser, but it includes character-based word representations and uses a biaffine classifier to predict whether an attachment exists between two words.", "Both of the compared systems require superlinear runtime due to graph-based parsing.", "They are re-trained on the same data as our system.", "Our baseline lags behind by nearly two LAS points but runs faster than both of them.", "Ensemble. Due to ambiguity in the training data, which most loss functions are not robust to (Frénay and Verleysen, 2014), including the log loss we use (following Ballesteros et al., 2015), and due to the instability of neural network training, we follow Dietterich (2000) and consider an ensemble of twenty parsers trained using different random initializations.", "To parse at test time, the transition probabilities of the twenty members of the ensemble are averaged.", "The result achieves an LAS of 79.4, outperforming all three systems above (Table 6).",
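"The test-time combination is a simple per-state average; the sketch below assumes each ensemble member exposes a distribution over legal transitions, an interface name invented here for illustration.",

```python
import torch

def ensemble_action_probs(members, state):
    """Average the transition probabilities of the ensemble members for the
    current parser state. `member.action_probs(state)` is an assumed
    interface returning a probability vector over transitions."""
    return torch.stack([m.action_probs(state) for m in members]).mean(dim=0)
```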
"Distillation. The shortcoming of the 20-parser ensemble is, of course, that it requires twenty times the runtime of a single greedy parser, making it the slowest system in our comparison.", "Kuncoro et al. (2016) proposed distilling 20 greedy transition-based parsers into a single graph-based parser; they transformed the votes of the ensemble into a structured loss function.", "However, as Kuncoro et al. pointed out, it is not straightforward to use a structured loss in a transition-based parsing algorithm.", "Because fast runtime is so important for NLP on social media, we introduce a new way to distill our greedy ensemble into a single transition-based parser (the first such attempt, to our knowledge).", "Our approach applies techniques from Hinton et al. (2015) and Kim and Rush (2016) to parsing.", "Note that training a transition-based parser typically involves the transformation of the training data into a sequence of oracle state-action pairs.", "Let $q(a \mid s)$ denote the distilled model's probability of an action $a$ given parser state $s$; let $p(a \mid s)$ be the probability under the ensemble (i.e., the average of the 20 separately-trained ensemble members).", "To train the distilled model, we minimize the interpolation between the distillation loss and the conventional log loss: $\operatorname*{argmin}_q \; -\alpha \sum_i \sum_a p(a \mid s_i) \log q(a \mid s_i) \; - \; (1 - \alpha) \sum_i \log q(a_i \mid s_i), \quad (1)$ where the first term is the distillation loss and the second is the log loss.", "Distilling from this parser leads to a single greedy transition-based parser with 77.8 LAS, better than past systems but worse than our more expensive ensemble.", "The effect of α is illustrated in Figure 5 (Figure 5: The effect of α on distillation); generally, paying closer attention to the ensemble, rather than to the conventional log loss objective, leads to better performance.", "Learning from exploration. When we set α = 1, we eliminate the oracle from the estimation procedure (for the distilled model).", "This presents an opportunity to learn with exploration, by randomly sampling transitions from the ensemble, which has been found useful in recent methods for training greedy models that use dynamic oracles (Goldberg and Nivre, 2012, 2013; Kiperwasser and Goldberg, 2016; Ballesteros et al., 2016).", "We find that this approach outperforms the conventional distillation model, coming in 1.5 points behind the ensemble (last line of Table 6).",
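"A minimal PyTorch rendering of the interpolated objective in Eq. (1); tensor shapes are assumptions, and note that with alpha = 1 the oracle term vanishes, which is what enables the exploration variant.",

```python
import torch.nn.functional as F

def distillation_objective(student_logits, ensemble_probs, oracle_actions, alpha):
    """Eq. (1): cross-entropy against the averaged ensemble distribution
    p(a|s_i), interpolated with the conventional log loss on oracle actions.
    Shapes: (T, n_actions), (T, n_actions), (T,)."""
    log_q = F.log_softmax(student_logits, dim=-1)
    distill = -(ensemble_probs * log_q).sum(dim=-1).mean()   # -sum_a p log q
    log_loss = F.nll_loss(log_q, oracle_actions)             # -log q(a_i|s_i)
    return alpha * distill + (1.0 - alpha) * log_loss
```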
"Pipeline evaluation. Finally, we report our full pipeline's performance in Table 7.", "Table 7: Evaluating our pipeline against a state-of-the-art pipeline (Pipeline stage, Ours, SOTA): Tokenization F1, 98.3, 97.3; POS tagging F1, 93.3, 92.2; UD parsing LAS-F1, 74.0, 71.4.", "We also compare our model with a pipeline of state-of-the-art systems (labeled SOTA): the Stanford CoreNLP tokenizer (chosen in the spirit of comparing rule-based and statistical methods), Owoputi et al.'s (2013) tagger, and Dozat et al.'s (2017) parser.", "Our system differs from the state-of-the-art pipeline in the tokenization and parser components.", "From Table 7, our pipeline outperforms the state of the art when evaluated in a pipeline manner.", "The results also emphasize the importance of tokenization: without gold tokenization, UD parsing performance drops by about four points.", "We study the problem of parsing tweets into Universal Dependencies.", "We adapt the UD guidelines to cover special constructions in tweets and create TWEEBANK V2, which has 55,607 tokens.", "We characterize the disagreements among our annotators and argue that inherent ambiguity in this genre makes consistent annotation a challenge.", "Using this new treebank, we build a pipeline system to parse tweets into UD.", "We also propose a new method to distill an ensemble of 20 greedy parsers into a single one to overcome annotation noise without sacrificing efficiency.", "Our parser achieves an improvement of 2.2 in LAS over a strong baseline and outperforms other state-of-the-art parsers in both accuracy and speed.", "We thank Elizabeth Clark, Lucy Lin, Nelson Liu, Kelvin Luu, Phoebe Mulcaire, Hao Peng, Maarten Sap, Chenhao Tan, and Sam Thomson at the University of Washington, and Austin Blodgett, Lucia Donatelli, Joe Garman, Emma Manning, Angela Yang, and Yushi Zhang at Georgetown University for their annotation efforts in the first round.", "We are grateful for the support from Lingpeng Kong at the initial stage of this project.", "We also thank the anonymous reviewers for their helpful comments and suggestions.", "This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grant 61632011." ]
[ "method", "objective", "abstain", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "method", "method", "abstain", "abstain", "result", "result", "result", "abstain", "objective", "method", "method", "objective", "objective", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "abstain", "result", "other", "other", "other", "other" ]
[ "Subword segmentation is widely used to address the open vocabulary problem in machine translation.", "The dominant approach to subword segmentation is Byte Pair Encoding (BPE), which keeps the most frequent words intact while splitting the rare ones into multiple tokens.", "While multiple segmentations are possible even with the same vocabulary, BPE splits words into unique sequences; this may prevent a model from better learning the compositionality of words and being robust to segmentation errors.", "So far, the only way to overcome this BPE imperfection, its deterministic nature, was to create another subword segmentation algorithm (Kudo, 2018).", "In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word.", "We introduce BPE-dropout simple and effective subword regularization method based on and compatible with conventional BPE.", "It stochastically corrupts the segmentation procedure of BPE, which leads to producing multiple segmentations within the same fixed BPE framework.", "Using BPE-dropout during training and the standard BPE during inference improves translation quality up to 2.3 BLEU compared to BPE and up to 0.9 BLEU compared to the previous subword regularization.", "Using subword segmentation has become de-facto standard in Neural Machine Translation (Bojar et al., 2018; Barrault et al., 2019).", "Byte Pair Encoding (BPE) (Sennrich et al., 2016) is the dominant approach to subword segmentation.", "It keeps the common words intact while splitting the rare and unknown ones into a sequence of subword units.", "This potentially allows a model to make Equal contribution.", "use of morphology, word composition and transliteration.", "BPE effectively deals with an open-vocabulary problem and is widely used due to its simplicity.", "There is, however, a drawback of BPE in its deterministic nature: it splits words into unique subword sequences, which means that for each word a model observes only one segmentation.", "Thus, a model is likely not to reach its full potential in exploiting morphology, learning the compositionality of words and being robust to segmentation errors.", "Moreover, as we will show further, subwords into which rare words are segmented end up poorly understood.", "A natural way to handle this problem is to enable multiple segmentation candidates.", "This was initially proposed by Kudo (2018) as a subword regularization a regularization method, which is implemented as an on-the-fly data sampling and is not specific to NMT architecture.", "Since standard BPE produces single segmentation, to realize this regularization the author had to propose a new subword segmentation, different from BPE.", "However, the introduced approach is rather complicated: it requires training a separate segmentation unigram language model, using EM and Viterbi algorithms, and forbids using conventional BPE.", "In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word.", "BPE builds a vocabulary of subwords and a merge table, which specifies which subwords have to be merged into a bigger subword, as well as the priority of the merges.", "During segmentation, words are first split into sequences of characters, then the learned merge operations are applied to merge the characters into larger, known symbols, till no merge can be done (Fig-ure", "1(a)).", "We introduce BPE-dropout a subword regularization method based on and compatible with conventional BPE.", "It uses a 
"This results in different segmentations for the same word (Figure 1(b)).", "Our method requires no segmentation training in addition to BPE and uses standard BPE at test time; it is therefore simple.", "BPE-dropout is superior to both BPE and Kudo (2018) on a wide range of translation tasks; it is therefore effective.", "Our key contributions are as follows: We introduce BPE-dropout, a simple and effective subword regularization method; We show that our method outperforms both BPE and the previous subword regularization on a wide range of translation tasks; We analyze how training with BPE-dropout affects a model and show that it leads to a better quality of learned token embeddings and to a model being more robust to noisy input.", "In this section, we briefly describe BPE and the concept of subword regularization.", "We assume that our task is machine translation, where a model needs to predict the target sentence $Y$ given the source sentence $X$, but the methods we describe are not task-specific.", "To define a segmentation procedure, BPE (Sennrich et al., 2016) builds a token vocabulary and a merge table.", "The token vocabulary is initialized with the character vocabulary, and the merge table is initialized with an empty table.", "First, each word is represented as a sequence of tokens plus a special end-of-word symbol.", "Then, the method iteratively counts all pairs of tokens and merges the most frequent pair into a new token.", "This token is added to the vocabulary, and the merge operation is added to the merge table.", "This is done until the desired vocabulary size is reached.", "The resulting merge table specifies which subwords have to be merged into a bigger subword, as well as the priority of the merges.", "In this way, it defines the segmentation procedure.", "First, a word is split into distinct characters plus the end-of-word symbol.", "Then, the pair of adjacent tokens which has the highest priority is merged.", "This is done iteratively until no merge from the table is available (Figure 1(a)).",
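"A minimal Python sketch of this inference procedure; representing the merge table as a dictionary from a token pair to its priority rank (lower rank = higher priority) is a common convention, not necessarily the paper's exact implementation.",

```python
def bpe_segment(word, merge_ranks):
    """Segment one word with a learned BPE merge table."""
    tokens = list(word) + ["</w>"]  # characters plus the end-of-word symbol
    while True:
        pairs = [(merge_ranks.get(pair, float("inf")), i)
                 for i, pair in enumerate(zip(tokens, tokens[1:]))]
        if not pairs:
            return tokens
        rank, i = min(pairs)
        if rank == float("inf"):    # no merge from the table applies
            return tokens
        tokens[i:i + 2] = ["".join(tokens[i:i + 2])]
```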
"Subword regularization (Kudo, 2018) is a training algorithm which integrates multiple segmentation candidates.", "Instead of maximizing the log-likelihood, this algorithm maximizes the log-likelihood marginalized over different segmentation candidates.", "Formally, $L = \sum_{(X,Y) \in D} \mathbb{E}_{x \sim P(x \mid X)} \, \mathbb{E}_{y \sim P(y \mid Y)} \log P(y \mid x, \theta), \quad (1)$ where $x$ and $y$ are sampled segmentation candidates for sentences $X$ and $Y$ respectively, $P(x \mid X)$ and $P(y \mid Y)$ are the probability distributions the candidates are sampled from, and $\theta$ is the set of model parameters.", "In practice, at each training step only one segmentation candidate is sampled.", "Since standard BPE segmentation is deterministic, to realize this regularization Kudo (2018) proposed a new subword segmentation.", "The introduced approach requires training a separate segmentation unigram language model to predict the probability of each subword, the EM algorithm to optimize the vocabulary, and the Viterbi algorithm to sample segmentations.", "Subword regularization was shown to achieve significant improvements over the method using a single subword sequence.", "However, the proposed method is rather complicated and forbids using conventional BPE.", "We show that to realize subword regularization it is not necessary to reject BPE, since multiple segmentation candidates can be generated within the BPE framework.", "We introduce BPE-dropout, a method which exploits the innate ability of BPE to be stochastic.", "It alters the segmentation procedure while keeping the original BPE merge table.", "During segmentation, at each merge step some merges are randomly dropped with probability p.", "This procedure is described in Algorithm 1:
Algorithm 1: BPE-dropout
  current_split <- characters from the input word
  repeat
      merges <- all possible merges of tokens from current_split
      for each merge in merges do
          /* the only difference from BPE */
          remove merge from merges with probability p
      end for
      if merges is not empty then
          merge <- the merge with the highest priority in merges
          apply merge to current_split
      end if
  until merges is empty
  return current_split", "(In case of multiple occurrences of the same merge in a word (for example, m-e-r-g-e-r has two occurrences of the merge (e, r)), we decide independently for each occurrence whether to drop it or not.)", "If p is set to 0, the segmentation is equivalent to standard BPE; if p is set to 1, the segmentation splits words into distinct characters.", "Values between 0 and 1 can be used to control the segmentation granularity.", "We use p > 0 (usually p = 0.1) at training time to expose a model to different segmentations, and p = 0 during inference, which means that at inference time we use the original BPE.", "We discuss the choice of the value of p in Section 5.", "When some merges are randomly forbidden during segmentation, words end up segmented into different subwords; see for example Figure 1(b).", "We hypothesize that exposing a model to different segmentations may help it better learn the compositionality of words and be more robust to segmentation errors.",
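"Algorithm 1 amounts to a one-line change to the BPE sketch given earlier; the rendering below is ours, under the same assumed merge-table representation.",

```python
import random

def bpe_dropout_segment(word, merge_ranks, p=0.1):
    """BPE-dropout (Algorithm 1): each candidate merge is dropped
    independently with probability p. p = 0 recovers standard BPE;
    p = 1 splits the word into characters."""
    tokens = list(word) + ["</w>"]
    while True:
        merges = [(merge_ranks[pair], i)
                  for i, pair in enumerate(zip(tokens, tokens[1:]))
                  if pair in merge_ranks]
        # The only difference from BPE: randomly drop candidate merges.
        merges = [(rank, i) for rank, i in merges if random.random() >= p]
        if not merges:
            return tokens
        rank, i = min(merges)   # apply the highest-priority surviving merge
        tokens[i:i + 2] = ["".join(tokens[i:i + 2])]
```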
"Our baselines are the standard BPE and the subword regularization by Kudo (2018).", "Subword regularization by Kudo (2018) has segmentation sampling hyperparameters l and α: l specifies how many best segmentations for each word are produced before sampling one of them, and α controls the smoothness of the sampling distribution.", "In the original paper, (l = ∞, α = 0.2/0.5) and (l = 64, α = 0.1) were shown to perform best on different datasets.", "Since overall they show comparable results, in all experiments we use (l = 64, α = 0.1).", "There are two ways of building a vocabulary for models trained with BPE-dropout: (1) take the vocabulary built by BPE; then text segmented with BPE-dropout will contain a small number of unknown tokens (UNKs) (for example, for the English part of the IWSLT15 En-Vi corpora, these UNKs make up 0.00585 and 0.00085 of all tokens for 32k and 4k vocabularies, respectively); or (2) add to the BPE vocabulary all tokens which can appear when segmenting with BPE-dropout.", "In preliminary experiments, we did not observe any difference in quality; therefore, either of the methods can be used.", "We choose the first option to stay in the same setting as the standard BPE.", "Besides, a model exposed to some UNKs in training can be more reliable for practical applications where unknown tokens can be present.", "We conduct our experiments on a wide range of datasets with different corpora sizes and languages; information about the datasets is summarized in Table 1.", "These datasets are used in the main experiments (Section 5.1) and were chosen to match the ones used in the prior work (Kudo, 2018).", "In the additional experiments (Sections 5.2-5.5), we also use random subsets of the WMT14 English-French data; in this case, we specify the dataset size for each experiment.", "(Note that Chinese and Japanese have no explicit word boundaries, and the Moses tokenizer (https://github.com/moses-smt/mosesdecoder) does not segment sentences into words; for these languages, subword segmentations are trained almost from unsegmented raw sentences.)", "Relying on a recent study of how the choice of vocabulary size influences translation quality (Ding et al., 2019), we choose the vocabulary size depending on the dataset size (Table 1).", "In training, translation pairs were batched together by approximate sequence length.", "For the main experiments, the values of batch size we used are given in Table 1 (batch size is the number of source tokens).", "In the experiments in Sections 5.2, 5.3 and 5.4, for datasets not larger than 500k sentence pairs we use a vocabulary size and batch size of 4k, and 32k for the rest (a large batch size can be reached by using several GPUs or by accumulating the gradients for several batches and then making an update).", "In the main text, we train all models on lowercased data.", "In the appendix, we provide additional experiments with the original case and case-sensitive BLEU.", "The NMT system used in our experiments is Transformer base (Vaswani et al., 2017).", "More precisely, the number of layers is N = 6 with h = 8 parallel attention layers, or heads.", "The dimensionality of input and output is $d_{model} = 512$, and the inner layer of the feed-forward networks has dimensionality $d_{ff} = 2048$.", "We use the regularization and optimization procedures described in Vaswani et al. (2017).", "We train models until convergence.", "For all experiments, we provide the number of training batches in the appendix (Tables 6 and 7).", "To produce translations, for all models we use beam search with a beam of 4 and length normalization of 0.6.", "In addition to the main results, Kudo (2018) also reports scores using n-best decoding.", "To translate a sentence, this strategy produces multiple segmentations of a source sentence, generates a translation for each of them, and rescores the obtained translations.", "While investigating different sampling and rescoring strategies could be interesting future work, in the current study we use 1-best decoding to fit the standard decoding paradigm.", "For evaluation, we average the 5 latest checkpoints and use BLEU (Papineni et al., 2002) computed via SacreBLEU (Post, 2018).", "For Chinese, we add the option --tok zh to SacreBLEU.", "For Japanese, we use character-based BLEU.",
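"Checkpoint averaging is a standard recipe; a common implementation looks like the sketch below, which assumes each checkpoint file stores a state_dict with identical keys and is not the paper's exact script.",

```python
import torch

def average_checkpoints(paths):
    """Average the parameters of the latest checkpoints before decoding."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
```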
"The results are provided in Table 2.", "For all datasets, BPE-dropout improves significantly over the standard BPE: by more than 1.5 BLEU for En-Vi, Vi-En, En-Zh, Zh-En, Ar-En, and De-En, and by 0.5-1.4 BLEU for the remaining pairs.", "The improvements are especially prominent for smaller datasets; we will discuss this further in Section 5.4.", "Compared to Kudo (2018), among the 12 datasets we use, BPE-dropout is beneficial for 8 datasets with improvements of up to 0.92 BLEU, is not significantly different on 3 datasets, and underperforms only on En-Ja.", "While Kudo (2018) uses another segmentation, our method operates within the BPE framework and changes only the way a model is trained.", "Thus, the lower performance of BPE-dropout on En-Ja and the only small or insignificant differences for Ja-En, En-Zh, and Zh-En suggest that Japanese and Chinese may benefit from a language-specific segmentation.", "Note also that Kudo (2018) reports larger improvements over BPE from using their method than we show in Table 2.", "This might be explained by the fact that Kudo (2018) used large vocabulary sizes (16k, 32k), which have been shown to be counterproductive for small datasets (Sennrich and Zhang, 2019; Ding et al., 2019).", "While this may not be an issue for models trained with subword regularization (see Section 5.4), it causes a drastic drop in the performance of the baselines.", "In this section, we investigate whether BPE-dropout should be used only on one side of a translation pair or for both source and target languages.", "We select random subsets of different sizes from the WMT14 En-Fr data to understand how the results are affected by the amount of data.", "We show that: for small and medium datasets, full regularization performs best; for large datasets, BPE-dropout should be used only on the source side.", "Since full regularization performs best for most of the considered dataset sizes, in the subsequent sections we use BPE-dropout on both the source and target sides.", "Table 3 indicates that using BPE-dropout on the source side is more beneficial than on the target side; for datasets not smaller than 0.5m sentence pairs, BPE-dropout can be used only on the source side.", "We can speculate that it is more important for the model to understand the source sentence than to be exposed to different ways of generating the same target sentence.", "For larger corpora (e.g., starting from 4m instances), it is better to use BPE-dropout only on the source side (Table 3).", "Interestingly, using BPE-dropout for both source and target languages hurts performance for large datasets.", "Figure 2 shows BLEU scores for models trained with BPE-dropout with different values of p (the probability of a merge being dropped).", "Models trained with high values of p are unable to translate due to a large mismatch between the training segmentation (which is close to character-level) and the inference segmentation (BPE).", "The best quality is achieved with p = 0.1.", "In our experiments, we use p = 0.1 for all languages except Chinese and Japanese.", "For Chinese and Japanese, we take the value p = 0.6 to match the increase in length of segmented sentences for the other languages.", "Varying corpora and vocabulary size. Now we will look more closely at how the improvement from using BPE-dropout depends on corpus and vocabulary size.", "First, we see that BPE-dropout performs best for all dataset sizes (Figure 3).", "Next, models trained with subword regularization are less sensitive to the choice of vocabulary size: the differences in performance between models with 4k and 32k vocabularies are much smaller than for models trained with the standard BPE.",
"5.4 Varying corpora and vocabulary size: Now we look more closely at how the improvement from using BPE-dropout depends on the corpus and vocabulary size.", "First, we see that BPE-dropout performs best for all dataset sizes (Figure 3).", "(Figure 3: BLEU scores for models trained on random subsets of WMT14 En-Fr.)", "Next, models trained with subword regularization are less sensitive to the choice of vocabulary size: the differences in performance between models with 4k and 32k vocabularies are much smaller than for models trained with the standard BPE.", "This makes BPE-dropout attractive since it allows (i) not tuning the vocabulary size for each dataset, and (ii) choosing the vocabulary size depending on the desired model properties: models with smaller vocabularies are beneficial in terms of the number of parameters, while models with larger vocabularies are beneficial in terms of inference time (Table 4 shows that inference for models with a 4k vocabulary is more than 1.4 times longer than for models with a 32k vocabulary).", "Finally, we see that the effect of using BPE-dropout vanishes as the corpus size gets bigger.", "This is not surprising: the effect of any regularization is smaller in high-resource settings; however, as we show later in Section 6.3, when applied to a noisy source, models trained with BPE-dropout show substantial improvements of up to 2 BLEU even in high-resource settings.", "Note that for larger corpora, we recommend using BPE-dropout only for the source language (Section 5.2).", "Since BPE-dropout produces a more fine-grained segmentation, sentences segmented with BPE-dropout are longer; the distributions of sentence lengths are shown in Figure 4(a) (with p = 0.1, sentences are on average about 1.25 times longer).", "Thus there is a potential danger that models trained with BPE-dropout may tend to use a more fine-grained segmentation at inference time and hence slow inference down.", "However, in practice this is not the case: the distributions of the lengths of generated translations for models trained with BPE and with BPE-dropout are close (Figure 4(b)).", "This is a result of using beam search: while samples from a model reproduce the training data distribution quite well, beam search favors more frequent tokens (Ott et al., 2018); therefore, beam search translations tend not to use the less frequent fine-grained segmentation.", "Table 4 confirms these observations and shows that the inference time of models trained with BPE-dropout is not substantially different from that of models trained with BPE.", "In this section, we analyze qualitative differences between models trained with BPE and BPE-dropout.", "We find that when using BPE, frequent sequences of characters rarely appear in a segmented text as individual tokens, being instead parts of bigger ones, and that BPE-dropout alleviates this issue; by analyzing the learned embedding spaces, we show that using BPE-dropout leads to a better understanding of rare tokens; and, as a consequence of the above, models trained with BPE-dropout are more robust to misspelled input.", "Here we highlight one of the drawbacks of BPE's deterministic nature: since it splits words into unique subword sequences, only rare words are split into subwords.", "This forces frequent sequences of characters to mostly appear in a segmented text as parts of bigger tokens, and not as individual tokens.", "To show this, for each token in the BPE vocabulary we calculate how often it appears in a segmented text as an individual token and how often as a sequence of characters (which may be part of a bigger token); a sketch of this computation is given below.", "(Figure 5: Distribution of the token-to-substring ratio for texts segmented using BPE or BPE-dropout with the same vocabulary of 32k tokens; only the 10% most frequent substrings are shown.)",
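The following is a rough sketch of the token-to-substring statistic described above; it is our reading of the computation, not the authors' code, and the exact counting details in the paper may differ. `words` and `segmented_words` are assumed parallel corpus representations.

```python
from collections import Counter

def token_to_substring_ratio(words, segmented_words, vocab):
    """`words` is the corpus as a list of words; `segmented_words` holds the
    corresponding subword segmentations (a list of token lists). For each
    vocabulary token, return the ratio between its frequency as an individual
    token and its frequency as a character substring of the words; a ratio
    near zero means the substring almost never surfaces as a token on its own."""
    token_counts = Counter(tok for seg in segmented_words for tok in seg)
    substr_counts = Counter()
    for w in words:                      # naive O(|words| * |vocab|) scan
        for tok in vocab:
            if tok in w:
                substr_counts[tok] += w.count(tok)
    return {t: token_counts[t] / substr_counts[t]
            for t in vocab if substr_counts[t] > 0}
```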
"Figure 5 shows the distribution of the ratio between a substring's frequency as an individual token and its frequency as a sequence of characters (for the top 10% most frequent substrings).", "For frequent substrings, the distribution of the token-to-substring ratio is clearly shifted towards zero, which confirms our hypothesis: frequent sequences of characters rarely appear in a segmented text as individual tokens.", "When a text is segmented using BPE-dropout with the same vocabulary, this distribution shifts significantly away from zero, meaning that frequent substrings appear in a segmented text as individual tokens more often.", "Now we analyze the embedding spaces learned by different models.", "We take the embeddings learned by models trained with BPE and BPE-dropout and, for each token, look at its closest neighbors in the corresponding embedding space.", "Figure 6 shows several examples.", "(Figure 6: Examples of nearest neighbors in the source embedding space of models trained with BPE and BPE-dropout; models trained on WMT14 En-Fr (4m).)", "In contrast to BPE, the nearest neighbors of a token in the embedding space of BPE-dropout are often tokens that share sequences of characters with the original token.", "To verify this observation quantitatively, we computed the character 4-gram precision of the top-10 neighbors: the proportion of the 4-grams of the top-10 closest neighbors that are present among the 4-grams of the original token (see the sketch after this passage).", "As expected, the embeddings of BPE-dropout have a higher character 4-gram precision (0.29) compared to the precision of BPE (0.18).",
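A minimal sketch of the character 4-gram precision used above; the paper's exact formulation may differ slightly, so treat this as one plausible reading.

```python
def char_ngram_precision(token, neighbors, n=4):
    """Proportion of the character n-grams of the given nearest neighbors
    that are also present among the character n-grams of `token`."""
    def ngrams(s):
        return [s[i:i + n] for i in range(len(s) - n + 1)]
    reference = set(ngrams(token))
    candidate = [g for nb in neighbors for g in ngrams(nb)]
    if not candidate:
        return 0.0
    return sum(g in reference for g in candidate) / len(candidate)

# e.g. char_ngram_precision("compute", ["computing", "computed"]) is high,
# since the neighbors share long character sequences with the token.
```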
"This also relates to the study by Gong et al. (2018).", "For several tasks, they analyze the embedding space learned by a model.", "The authors find that while a popular token usually has semantically related neighbors, a rare word usually does not: a vast majority of the closest neighbors of rare words are rare words.", "To confirm this, we reduce the dimensionality of the embeddings by SVD and visualize them (Figure 7).", "For the model trained with BPE, rare tokens are in general separated from the rest; for the model trained with BPE-dropout, this is not the case.", "While Gong et al. (2018) propose to use adversarial training for embedding layers to alleviate this issue, we showed that a model trained with BPE-dropout does not have this problem.", "Models trained with BPE-dropout better learn the compositionality of words and the meaning of subwords, which suggests that these models should be more robust to noise.", "We verify this by measuring the translation quality of models on a test set augmented with synthetic misspellings.", "We augment the source side of a test set by modifying each word with a probability of 10%, applying one of several predefined operations (see the sketch after this passage).", "The operations we consider are (1) removal of one character from a word, (2) insertion of a random character into a word, and (3) substitution of a character in a word with a random one.", "This augmentation produces words with an edit distance of 1 from the unmodified words.", "Edit distance is commonly used to model misspellings (Brill and Moore, 2000; Ahmad and Kondrak, 2005; Pinter et al., 2017).", "Table 5 shows the translation quality of the models trained on the WMT14 datasets when given the original source and the source augmented with misspellings.", "(Table 5: BLEU scores for models trained on WMT14 data, evaluated on the original and misspelled source. En-De: original 27.41 BPE vs 28.01 BPE-dropout, diff +0.6; misspelled 24.45 vs 26.03, diff +1.58. De-En: original 32.69 vs 34.19, +1.5; misspelled 29.71 vs 32.03, +2.32. En-Fr (4m): original 33.38 vs 33.85, +0.47; misspelled 30.30 vs 32.13, +1.83. En-Fr (16m): original 34.37 vs 34.82, +0.45; misspelled 31.23 vs 32.94, +1.71.)", "We deliberately chose large datasets, where the improvements from using BPE-dropout are smaller.", "We can see that while for the original test sets the improvements from using BPE-dropout are usually modest, for the misspelled test sets the improvements are much larger: 1.6-2.3 BLEU.", "This is especially interesting since the models have not been exposed to misspellings during training.", "Therefore, even for large datasets, using BPE-dropout can result in substantially better quality for practical applications where the input is likely to be noisy.",
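Below is a minimal sketch of the misspelling augmentation described above, assuming whitespace-tokenized input and a lowercase Latin alphabet for the random characters (the authors' character inventory is not specified here).

```python
import random
import string

def misspell(sentence, word_prob=0.10, rng=random):
    """Corrupt each word with probability `word_prob` by one random edit
    (character deletion, insertion, or substitution), producing words at
    edit distance 1 from the originals."""
    def corrupt(word):
        i = rng.randrange(len(word))
        op = rng.choice(("delete", "insert", "substitute"))
        ch = rng.choice(string.ascii_lowercase)
        if op == "delete" and len(word) > 1:
            return word[:i] + word[i + 1:]
        if op == "insert":
            return word[:i] + ch + word[i:]
        return word[:i] + ch + word[i + 1:]  # substitute
    return " ".join(corrupt(w) if rng.random() < word_prob else w
                    for w in sentence.split())
```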
"Closest to our work in motivation is the work by Kudo (2018), who introduced the subword regularization framework of training with multiple segmentation candidates, along with a new segmentation algorithm.", "Other segmentation algorithms include Creutz and Lagus (2006), Schuster and Nakajima (2012), Chitnis and DeNero (2015), Kunchukuttan and Bhattacharyya (2016), Wu and Zhao (2018), and Banerjee and Bhattacharyya (2018).", "Regularization techniques are widely used for training deep neural networks.", "Among the regularizations applied to network weights, the most popular are Dropout (Srivastava et al., 2014) and L2 regularization.", "Data augmentation techniques in natural language processing include dropping tokens at random positions or swapping tokens at close positions (Iyyer et al., 2015; Artetxe et al., 2018; Lample et al., 2018), replacing tokens at random positions with a placeholder token (Xie et al., 2017), and replacing tokens at random positions with a token sampled from some distribution (e.g., based on token frequency or a language model) (Fadaee et al., 2017; Xie et al., 2017; Kobayashi, 2018).", "While BPE-dropout can be thought of as a regularization, our motivation is not to make a model robust by injecting noise.", "By exposing a model to different segmentations, we want to teach it to better understand the composition of words as well as subwords, and to make it more flexible in the choice of segmentation during inference.", "Several works study how translation quality depends on the level of granularity of the segmentation (Cherry et al., 2018; Kreutzer and Sokolov, 2018; Ding et al., 2019).", "Cherry et al. (2018) show that character-level models trained long enough tend to have better quality, but this comes with an increase in computational cost for both training and inference.", "Kreutzer and Sokolov (2018) find that, given flexibility in choosing the segmentation level, the model prefers to operate on an (almost) character level.", "Ding et al. (2019) explore the effect of the BPE vocabulary size and find that it is better to use a small vocabulary in a low-resource setting and a large vocabulary in a high-resource setting.", "Following these observations, in our experiments we use different vocabulary sizes depending on the dataset size to ensure the strongest baselines.", "We introduce BPE-dropout, a simple and effective subword regularization method that operates within the standard BPE framework.", "The only difference from BPE is in how a word is segmented during model training: BPE-dropout randomly drops some merges from the BPE merge table, which results in different segmentations for the same word.", "Models trained with BPE-dropout (1) outperform BPE and the previous subword regularization on a wide range of translation tasks, (2) have better quality of learned embeddings, and (3) are more robust to noisy input.", "Future research directions include adaptive dropout rates for different merges and an in-depth analysis of other pathologies in learned token embeddings for different segmentations.", "We thank the anonymous reviewers for the helpful feedback, Rico Sennrich for valuable comments on the first version of this paper, and the Yandex Machine Translation team for discussions and inspiration." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Paraphrase generation has been widely used in various downstream tasks.", "Most tasks benefit mainly from high quality paraphrases, namely those that are semantically similar to, yet linguistically diverse from, the original sentence.", "Generating high-quality paraphrases is challenging as it becomes increasingly hard to preserve meaning as linguistic diversity increases.", "Recent works achieve nice results by controlling specific aspects of the paraphrase, such as its syntactic tree.", "However, they do not allow to directly control the quality of the generated paraphrase, and suffer from low flexibility and scalability.", "Here we propose QCPG, a quality-guided controlled paraphrase generation model, that allows directly controlling the quality dimensions.", "Furthermore, we suggest a method that given a sentence, identifies points in the quality control space that are expected to yield optimal generated paraphrases.", "We show that our method is able to generate paraphrases which maintain the original meaning while achieving higher diversity than the uncontrolled baseline.", "The models, the code, and the data can be found in https://github.com/IBM/quality-c ontrolled-paraphrase-generation .", "Paraphrase generation, namely rewriting a sentence using different words and/or syntax while preserving its meaning (Bhagat and Hovy, 2013), is an important technique in natural language processing, that has been widely used in various downstream tasks including question answering (Fader et al., 2014a; McCann et al., 2018), summarization (Rush et al., 2015), data augmentation (Yu et al., 2018) and adversarial learning (Iyyer et al., 2018).", "However, not all paraphrases are equally useful.", "For most real-world applications, paraphrases which are too similar to the original sentence are of limited value, while those with high linguistic diversity, High Quality Region !", "i.e. 
"The quality of paraphrases is often evaluated along three dimensions, where high quality paraphrases are those with high semantic similarity as well as high lexical and/or syntactic diversity (McCarthy et al., 2009).", "Generating high quality paraphrases can be challenging (for both humans and automatic models) since it is increasingly difficult to preserve meaning with increasing linguistic diversity.", "Indeed, when examining the quality of paraphrases in paraphrase generation datasets, one can find a wide range of paraphrase qualities, where the area of high quality is often very sparse (see Figure 1).", "This in turn results in a scarcity of supervised data for high-quality paraphrase generation.", "A recent approach aiming to produce high quality paraphrases is controlled paraphrase generation, which exposes control mechanisms that can be manipulated to produce diversity.", "While the controlled generation approaches have yielded impressive results, they require providing the model with very specific information regarding the target sentence, such as its parse tree (Iyyer et al., 2018) or the list of keywords it needs to contain (Zeng et al., 2019).", "However, for most downstream applications, the important property of the paraphrase is its overall quality, rather than its specific syntactic or lexical form.", "The over-specificity of existing control-based methods not only complicates their usage and limits their scalability, but also hinders their coverage.", "Thus, it would be desirable to develop a paraphrase generation model that uses a simple mechanism for directly controlling paraphrase quality, while avoiding the unnecessary complications associated with fine-grained controls.", "In this paper we propose QCPG, a Quality Controlled Paraphrase Generation model that, given an input sentence and quality constraints, represented by a three-dimensional vector of semantic similarity and syntactic and lexical distances, produces a target sentence that conforms to the quality constraints.", "Our constraints are much simpler than previously suggested ones, such as parse trees or keyword lists, and leave the model the freedom to choose how to attain the desired quality levels.", "Enabling direct control of the three quality dimensions allows flexibility with respect to the specific requirements of the task at hand, and opens a range of generation possibilities: paraphrases of various flavors (e.g. syntactically vs. lexically diverse), quasi-paraphrases (with lower semantic similarity), and even non-paraphrases which may be useful for downstream tasks (e.g. hard negative examples of sentences that are linguistically similar but have different meanings (Guo et al., 2018; Reimers and Gurevych, 2020)).", "Furthermore, even though the training data is of mixed quality and exhibits scarcity in the high quality area (see Figure 1), our model is able to learn high quality paraphrasing behavior, i.e. it increases the linguistic diversity of the generated paraphrases without decreasing the semantic similarity compared to the uncontrolled baseline.",
"In this section we provide a general description of our approach.", "We first explain how the different quality dimensions are measured.", "We then describe the controlled paraphrase generation model, QCPG, and finally we suggest a method that, given the task requirements, detects the input control values which maximize the quality of the generated paraphrases.", "Figure 2 summarizes our proposed solution for generating controlled paraphrases, which is detailed in the rest of the section.", "2.1 Quantifying Paraphrase Quality: The most common dimensions for measuring paraphrase quality are the semantic, syntactic and lexical dimensions.", "Several previous works also used a fluency evaluation metric (Siddique et al., 2020).", "However, since our focus is on the supervised setting, we rely on the gold paraphrases as fluency guidance for the model (McCarthy et al., 2009).", "Thus, given a sentence s and a paraphrase s', we define the paraphrase quality as a three-dimensional vector q(s, s') = (q_sem(s, s'), q_syn(s, s'), q_lex(s, s')), where q_sem is a measure of semantic similarity, and q_syn and q_lex are measures of syntactic and lexical variation, respectively.", "For the syntactic score, inspired by Iyyer et al. (2018), we choose q_syn(s, s') to be the normalized tree edit distance (Zhang and Shasha, 1989) between the third-level constituency parse trees of s and s', after removing the tokens, to increase the decoupling from the lexical distance metric.", "We define the lexical score q_lex(s, s') to be the normalized character-level minimal edit distance between the bags of words (a sketch of one possible implementation is given after this passage).", "This measure is independent of word order, and hence increases the decoupling from the syntactic measures.", "Additionally, calculating the token distances at the character level enables capturing tokens that share the same stem/lemma.", "Character-level distance is also more robust to typos that may be found in noisy data.", "As for the semantic score, several strong metrics have recently been proposed for measuring semantic similarity between sentences.", "In order to select q_sem(s, s'), we studied the agreement between the candidate metrics and human judgments, using only development data, and found Bleurt (Sellam et al., 2020) to have the highest correlation with human judgments (see Appendix A).", "Thus, we define q_sem(s, s') based on Bleurt.", "(Figure 2: Solution architecture.)", "For ease of presentation, all metrics are presented on a 0-100 scale.",
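As an illustration, here is a sketch of one plausible implementation of the lexical score q_lex. The paper's exact normalization is not spelled out here, so the normalizer below (the length of the longer string) is an assumption; sorting the words removes word-order effects, which is what decouples this score from the syntactic dimension.

```python
def q_lex(src: str, tgt: str) -> float:
    """Normalized character-level edit distance between the bags of words
    of two sentences (a hedged sketch of the metric described above),
    reported on a 0-100 scale."""
    a = " ".join(sorted(src.lower().split()))
    b = " ".join(sorted(tgt.lower().split()))
    # Standard dynamic-programming Levenshtein distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 100.0 * prev[-1] / max(len(a), len(b), 1)

# Identical sentences score 0; fully disjoint ones approach 100.
```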
"The main component of our solution is a quality controlled paraphrase generation model (QCPG), which is an encoder-decoder model trained on the task of controlled paraphrase generation.", "Given an input sentence s and a control vector c = (c_sem, c_syn, c_lex), the goal of QCPG is to generate an output paraphrase QCPG(s, c) that conforms to c.", "We train QCPG using the training set pairs (s, t), by setting c to q(s, t) and maximizing P(t | s, c = q(s, t)) over the training set via the autoregressive cross-entropy loss.", "A major challenge in the research of controlled paraphrase generation is selecting appropriate input control values that can be achieved by the model (Goyal and Durrett, 2020).", "Clearly, given a sentence, not all paraphrase qualities are achievable.", "Some sentences are more amenable to paraphrasing than others.", "For example, named entities and numbers are much harder to replace while keeping the sentence meaning, and hence the potential lexical diversity of paraphrases involving such terms is relatively limited.", "Forcing QCPG to conform to quality control values that are too high with respect to the input sentence may lead to suboptimal quality of the resultant paraphrases.", "Thus, for a more effective use of QCPG, the control values should be determined with respect to the input sentence.", "Below we describe the second part of our solution, namely a method that, given a sentence, predicts the input control values, c(s), that optimize the expected quality of the paraphrases generated by QCPG.", "For simplicity we assume that the quality distribution p(q | s) of all paraphrases of a sentence s is approximately normally distributed around a sentence-dependent mean q_0(s), and that the variance is approximately sentence-independent.", "We further assume that, given an input sentence s, the difficulty of generating a paraphrase of a given quality q is dominated by p(q | s) rather than by the quality vector q itself.", "Following our assumptions, the level of difficulty can be expressed by the offset, o = (o_sem, o_syn, o_lex), of q from q_0(s).", "Thus, the input control, c(s), for QCPG is the sum of q_0(s) and an offset o.", "Our aim is to analyze the model results for varying levels of difficulty, namely under different offsets, o, from q_0(s).", "The Quality Predictor (QP): Since q_0(s) is unknown, we introduce QP, a regressor whose output, termed the reference of s, r(s) = (r_sem(s), r_syn(s), r_lex(s)), approximates q_0(s).", "During training, QP aims to predict q(s, t) given s, where (s, t) are the input-output pairs of the training data.", "To summarize, we define sentence-aware quality control by decomposing the QCPG input control, c, into a sum of a sentence-dependent reference point, r(s), and a sentence-independent offset, o (see the sketch after this passage).",
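A small sketch of the decomposition c(s) = r(s) + o; `qp` stands for the QP regressor, and its `predict` interface is hypothetical.

```python
def control_vector(qp, sentence, offset):
    """Sentence-aware control for QCPG: the QP regressor estimates the
    reference point r(s) (the typical paraphrase quality for `sentence`),
    to which a sentence-independent, user-chosen offset o is added."""
    r = qp.predict(sentence)              # (r_sem, r_syn, r_lex), 0-100 scale
    return tuple(ri + oi for ri, oi in zip(r, offset))

# Example: a moderate push toward more syntactic and lexical diversity.
# c = control_vector(qp, "The service was slow.", offset=(0, 10, 10))
```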
"To test the ability of our model to learn high quality behavior from mixed quality data, we use weakly annotated datasets.", "These datasets are large but noisy, and contain only a relatively small amount of high quality paraphrases.", "MSCOCO: This dataset consists of 123K images, where each image contains at most five human-labeled captions (Lin et al., 2014).", "Similar to previous works, we consider different captions of the same image as paraphrases.", "WikiAnswers (WikiAns for short): The WikiAnswers corpus contains clusters of questions tagged by wiki-answers.com users as similar.", "There are 30,370,994 clusters with 25 questions each on average.", "In total, the corpus contains over 70 million question pairs (Fader et al., 2014b).", "ParaBank2.0: A dataset containing clusters of sentential paraphrases, produced from a bilingual corpus using negative constraints, inference sampling, and clustering (Hu et al., 2019).", "The dataset contains an average of 5 paraphrases per cluster and close to 100 million pairs in total.", "To get comparable results across all datasets, we randomly sub-sampled ParaBank2.0 and WikiAns to the same size as MSCOCO, and split them into train, dev and test sets of sizes 900K, 14K and 14K, respectively.", "We carefully made sure that there are no pairs from the same cluster in different splits of the data.", "The full data splits will be published with our code.", "All models are trained with a batch size of 32 on 2 NVIDIA A100 GPUs for 6 epochs.", "Full details as well as train and dev results can be found in Appendix C.1.", "QCPG: We use the pre-trained T5-base (Raffel et al., 2020) as the encoder-decoder model.", "The control input vector to QCPG is quantized in every dimension into 20 equally spaced values ranging from 0 to 100.", "Each value is assigned a special saved token.", "The three tokens corresponding to the quantized values of the control vector c are concatenated to the head of the input sentence, and together they are used as the input to the model (a sketch of this input construction is given after this passage).", "r(s) and o are also quantized in a similar way.", "QP: An Electra base model (Clark et al., 2020) finetuned with an MSE loss to predict the typical quality values (see Section 2.3).", "For all the models, we adopt the experimental setup used in (Devlin et al., 2019), i.e. we train the model with several learning rates and choose the one that achieves the highest dev set performance (see Appendix C.1).",
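To illustrate the quantization described above, here is a sketch of how a control vector on a 0-100 scale could be binned into 20 values per dimension and prepended as special tokens. The token naming scheme is hypothetical; the actual saved-token strings are defined in the released code.

```python
def build_qcpg_input(sentence: str, control, n_bins: int = 20) -> str:
    """Quantize each control dimension (0-100) into one of `n_bins` equally
    spaced values and prepend one special token per dimension to the input."""
    names = ("sem", "syn", "lex")
    step = 100 // n_bins                         # width of each bin: 5 points
    tokens = []
    for name, value in zip(names, control):
        value = max(0.0, min(100.0, value))      # clamp r(s) + o to the scale
        bin_value = min(int(value // step) * step, 100 - step)
        tokens.append(f"<{name}_{bin_value}>")   # hypothetical token format
    return " ".join(tokens) + " " + sentence

# build_qcpg_input("The service was slow.", (72, 35, 18))
# -> "<sem_70> <syn_35> <lex_15> The service was slow."
```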
"The aim of the following analysis is to study the level of control achieved by QCPG.", "To this end, we measure the model's response to changes in the input offsets.", "We compute the expected difference in paraphrase quality as a result of applying an input offset o, compared to a zero offset as a reference.", "More formally, we define the 3-dimensional responsiveness vector of QCPG at an offset o, R(o), as Q(o) - Q((0, 0, 0)), where Q(o) is the expected quality of the paraphrases generated by QCPG at an offset o.", "We estimate Q(o) by averaging q(QCPG(s, r(s) + o)) over the input sentences s of the dev set; we denote this estimate by Q̂(o) = (Q̂_sem(o), Q̂_syn(o), Q̂_lex(o)), and the corresponding estimate of R(o) by R̂(o) (a sketch of this estimation is given at the end of this passage).", "Specifically, in the following analysis we are interested in studying the model's response in each of the dimensions separately, i.e. how changing the input offset along a given quality dimension (the controlled dimension), while keeping the two other dimensions constant, affects the responsiveness in each of the three dimensions.", "A good control mechanism would imply that increasing the input offset in one dimension results in a monotonically increasing responsiveness in that dimension, with relatively small responsiveness in the other two dimensions.", "Figure 3 shows, for each of the three datasets, the responsiveness in the three quality dimensions when changing the input offset along each of the three dimensions, while fixing the input offsets in the other two dimensions at 0.", "Examining the actual quality values in the paraphrases of the dev sets reveals that the standard deviation is different in each dimension.", "Hence, for clarity of presentation, we present the input offset values and the responsiveness in units of standard deviation as measured in the respective dimension and dev set.", "For the range of offsets displayed in Figure 3, the responsiveness in the controlled dimension increases monotonically with the input offsets across all datasets and dimensions.", "As expected, the responsiveness in the uncontrolled dimensions is not zero, due to the inherent coupling between the dimensions.", "For example, many changes that increase syntactic diversity also increase lexical diversity (e.g. a move from passive to active voice).", "Still, our control mechanism is able to increase the responsiveness in the controlled dimension with relatively low responsiveness in the uncontrolled dimensions.", "Specifically, focusing on the relation between semantic similarity and expression diversity, the figure shows that there is only a minor decrease in semantic similarity in response to an increase in lexical and syntactic diversity.", "In the next section, we will show that this does not prevent our model from generating paraphrases that are not only more lexically and syntactically diverse, but also more semantically similar to the source sentences, compared to the paraphrases generated by the uncontrolled baseline.", "Figure 3 focused on small to moderate input offsets, i.e. offsets up to 2 standard deviations from the reference point.", "However, as we speculated before, with increasing offsets, i.e. the more the requested control value deviates from the typical value, it becomes increasingly difficult to generate a paraphrase that conforms to the requested control value.", "Figure 4 depicts the responsiveness in the syntactic and lexical dimensions for a larger range of offset values.", "For the semantic dimension, the typical values are too high to allow large positive offsets, which for most sentences would result in exceeding the upper limit of the semantic score.", "Indeed, as can be seen in Figure 4, when moving to high offset values, the responsiveness in the syntactic and lexical dimensions starts to decrease.", "This behavior is in line with our aforementioned hypothesis, and reflects the detrimental effect of feeding QCPG with input control values that are too far from the typical paraphrase qualities of the input sentence.", "The non-monotonic behavior of the responsiveness implies that the input offsets should be selected carefully in order to optimize the quality of the resultant paraphrases.", "In Section 4.2 we suggest a method for identifying these optimal offsets.",
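A sketch of the responsiveness estimation described in this section; `qcpg`, `qp`, and `quality` are hypothetical callables standing for the generator, the quality predictor, and the quality measurement q(s, s'), respectively.

```python
import numpy as np

def estimate_Q(qcpg, qp, quality, dev_sentences, offset):
    """Estimate Q(o): the average quality of paraphrases generated when
    driving QCPG at offset `offset` from each sentence's reference r(s)."""
    scores = []
    for s in dev_sentences:
        c = tuple(r + o for r, o in zip(qp(s), offset))
        scores.append(quality(s, qcpg(s, c)))   # (q_sem, q_syn, q_lex)
    return np.mean(scores, axis=0)

def responsiveness(qcpg, qp, quality, dev_sentences, offset):
    """R(o) = Q(o) - Q((0, 0, 0)), per the definition above."""
    zero = estimate_Q(qcpg, qp, quality, dev_sentences, (0, 0, 0))
    return estimate_Q(qcpg, qp, quality, dev_sentences, offset) - zero
```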
"In this section, we suggest a method that, given the task requirements, selects the input offsets that are expected to yield the desired quality of paraphrases.", "The idea is to compute the estimated expected quality, Q̂(o), for each input offset o, using the dev set as described in Section 4.1, and then search the 3D grid of input offsets to find the point for which Q̂(o) is best suited to the user's requirements.", "We envision this analysis as a preliminary step in which the user chooses the input control parameters that best achieve their desired paraphrasing operation point, and then uses the chosen values at inference, which is why we use the dev set.", "We study the behavior of Q̂(o) as a function of the 3D grid of offset points in the relevant range, i.e. every o where o_sem, o_syn and o_lex are in {0, 5, 10, ..., 50}.", "Figure 5 depicts Q̂(o) for WikiAns, on a slice of the full offset grid.", "The results for the full grid on all datasets are shown in Figure 6.", "The right-hand-side map depicts the estimated linguistic diversity (the average of Q̂_syn(o) and Q̂_lex(o)) and the left-hand-side map depicts the semantic similarity, Q̂_sem(o).", "The maps are presented for o_sem = 50, and for different values of o_syn and o_lex.", "As expected, the two measures are anti-correlated: areas with increased semantic similarity are characterized by decreased linguistic diversity.", "The QCPG results are compared to two reference points, which are invariant to o and are marked on the colorbars with black squares: 'Dataset' is the average semantic-similarity/linguistic-diversity value over the corresponding dev set paraphrases, and 'Baseline' is the average semantic-similarity/linguistic-diversity of the uncontrolled baseline over the corresponding dev set.", "Notice that the average diversity level achieved by the uncontrolled baseline is lower than the dev set mean, reflecting the difficulty of this model in generating diverse paraphrases.", "QCPG, on the other hand, with suitable input offset values, is able to generate paraphrases which are on average higher than the baseline both in their linguistic diversity and in their semantic similarity, and in many cases even higher than the values of the ground truth paraphrases in the dev set.", "In general, the estimates of the expected quality achieved by QCPG at different input offsets enable a user to generate paraphrases at different operation points by manipulating the input offset control o to meet their desired quality values.", "Consider for example a typical use case of aiming to maximize linguistic diversity under a constraint on semantic similarity (a sketch of this selection is given after this passage).", "An example of such a case is an operation point, denoted QCPG*, which aims to exemplify the advantage of QCPG over the baseline by maximizing linguistic diversity under the constraint that the semantic similarity is at least 5 points higher than the baseline.", "The input offset values needed to obtain this operation point depend on the dataset, and can be found using heatmaps such as the one in Figure 5.", "For WikiAns, the input offset values for the QCPG* operation point are (50, 35, 5) (the entry marked by the black square).",
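The operation-point selection can be sketched as a small grid search over the dev-set estimates. `Q_hat` is assumed to map each offset tuple to its (semantic, syntactic, lexical) estimate, and the 5-point margin mirrors the QCPG* constraint above.

```python
def select_operation_point(Q_hat, baseline_sem, margin=5.0):
    """Pick the offset maximizing estimated linguistic diversity (mean of the
    syntactic and lexical estimates), subject to the estimated semantic
    similarity exceeding the uncontrolled baseline by at least `margin`."""
    best_offset, best_diversity = None, float("-inf")
    for offset, (sem, syn, lex) in Q_hat.items():
        diversity = (syn + lex) / 2
        if sem >= baseline_sem + margin and diversity > best_diversity:
            best_offset, best_diversity = offset, diversity
    return best_offset

# For WikiAns this procedure would return an offset such as (50, 35, 5),
# the entry highlighted in Figure 5 (values taken from the text above).
```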
"In the previous section we saw, using estimates based on the dev sets, that there are many operation points which generate paraphrases with higher quality than those achieved by the uncontrolled baseline.", "We now turn to evaluate one such operation point, namely QCPG*, using the source sentences of the test sets, which were not used in the selection of the input offset values.", "Automatic Evaluation: We use four quality measures to evaluate different aspects of the generated paraphrases: the three quality measures used in the control of QCPG (Section 2.1), and Self-BLEU (Zhu et al., 2018), as adapted in Li et al. (2019) and Liu et al. (2020a), which aims to measure the linguistic diversity of the generated paraphrases by penalizing copying from the input sentences.", "As can be seen in Table 1, QCPG* outperforms the baseline in all metrics across all datasets, as predicted using the dev-set heatmaps.", "(Table 1: Automatic evaluation of the QCPG model on the test set, reporting q_sem / q_syn / q_lex / Self-BLEU. MSCOCO: Gold 29.9 / 34.5 / 28.0 / 8.7; BL 50.0 / 27.8 / 23.0 / 18.8; QCPG 56.6 / 29.6 / 42.4 / 18.0. WikiAns: Gold 34.6 / 30.7 / 24.4 / 16.4; BL 46.6 / 24.7 / 20.9 / 23.4; QCPG 48.5 / 41.5 / 24.8 / 21.4. ParaBank2: Gold 75.0 / 18.5 / 20.9 / 23.9; BL 77.8 / 16.8 / 18.6 / 29.4; QCPG 81.4 / 18.9 / 19.6 / 27.1.)", "A clear advantage is obtained even for Self-BLEU, which was not part of the metrics used as input controls.", "Importantly, the quality of the paraphrases generated by our model is comparable to, or at times better than, the quality of the paraphrases in the ground truth of the datasets.", "Examples of paraphrases generated by QCPG* compared to the ground truth paraphrases appear in Table 10.", "This is an important step towards the goal of obtaining paraphrases in the sparse area of high quality (recall the top right corner of Figure 1).", "Additionally, we examined QCPG from another perspective: the effect of the quality guidance on the model's ability to predict the ground truth paraphrases.", "Tables 5 and 6 show the BLEU scores (Papineni et al., 2002) obtained by QCPG and the uncontrolled baseline, respectively.", "The results verify that the input quality vectors induced by the target sentences are effectively utilized by QCPG to achieve better prediction performance.", "Human Evaluation: While linguistic diversity can be automatically measured by reliable metrics such as Self-BLEU, measuring semantic similarity is more challenging.", "We therefore rely on automatic metrics for evaluating the lexical and syntactic diversity, but use human annotation for validating the semantic evaluation.", "To this end, we selected a sample of 50 source sentences from each test set, and generated one paraphrase using the uncontrolled baseline and one using QCPG*.", "The annotators were shown the source sentence along with the two generated paraphrases (randomly ordered), and were asked which of the two better preserves the semantic meaning of the source sentence (ties were also allowed).", "In total, 150 triplets were evaluated by 5 judges.", "Table 2 demonstrates an advantage for QCPG* on all datasets, with a large margin on MSCOCO and WikiAns.", "This advantage is statistically significant (p-value < 0.05), as obtained by applying the Wilcoxon signed-rank test to the difference between the number of annotators that voted for QCPG* and those who voted for the baseline, across all datasets.",
"Thus, the human evaluation is in line with the results of the automatic semantic similarity measure.", "We also verified that the results on this sample, in terms of linguistic diversity, are very similar to those shown in Table 1.", "For examples of paraphrases generated by QCPG*, see Table 10 in the Appendix.", "Many recent works on paraphrase generation have focused on attempting to achieve high-quality paraphrases.", "These works can be divided into supervised and unsupervised approaches.", "Supervised Approaches: To achieve diversity, some works focused on diverse decoding, using heuristics such as Hamming distance or distinct n-grams to preserve diverse options during beam search (Vijayakumar et al., 2018).", "Other works generate multiple outputs by perturbing latent representations (Gupta et al., 2018; Park et al., 2019) or by using distinct generators (Qian et al., 2019).", "These methods achieve some diversity, but do not control generation in an interpretable manner.", "The works that are most similar to ours strive to gain diversity using controlled paraphrase generation, by exposing control mechanisms that are manipulated to produce either lexically (Zeng et al., 2019; Thompson and Post, 2020) or syntactically (Chen et al., 2019; Goyal and Durrett, 2020) diverse paraphrases.", "One approach is to use an exemplar sentence to guide the syntax of the generated paraphrase (Chen et al., 2019; Bao et al., 2019; Hosking and Lapata, 2021).", "An alternative is to directly employ a constituency tree as the syntax guidance (Iyyer et al., 2018; Li and Choi, 2020).", "Goyal and Durrett (2020) promote syntactic diversity by conditioning over possible syntactic rearrangements of the input.", "Zeng et al. (2019) use keywords as lexical guidance for the generation process.", "Here we introduce a simple model for jointly controlling the lexical, syntactic and semantic aspects of the generated paraphrases.", "Unsupervised Approaches: Niu et al. (2020) rely on neural models to generate high quality paraphrases, using a decoding method that enforces diversity by preventing repetitive copying of the input tokens.", "Liu et al. (2020b) optimize a quality-oriented objective by casting paraphrase generation as an optimization problem and searching the sentence space to find the optimal point.", "Garg et al. (2021) and Siddique et al. (2020) use reinforcement learning with a quality-oriented reward combining textual entailment, semantic similarity, expression diversity and fluency.",
"In this paper, we propose a novel controlled paraphrase generation model that leverages measures of paraphrase quality to encourage the generation of paraphrases with the desired quality.", "We demonstrate the high level of control achieved by the model, and suggest a method for coping with the challenging problem of finding suitable control values.", "Aside from offering a simple and effective way of controlling a model's output quality, the quality control paradigm enables a holistic view of the data, the training process and the final model analysis.", "Namely: (I) Examination of the training data through the lens of data quality enables characterizing the data at hand, with its strengths and limitations.", "(II) A quality-aware training process can be viewed as multi-task learning, where each quality level is a separate task with its own accurate supervision, as opposed to the standard quality-agnostic approach, where low quality data is in fact used as poor supervision for a model which aims at generating higher quality output.", "(III) Analyzing the model's behavior under different quality controls allows a finer understanding of the different model behaviors and the trade-offs between their output qualities.", "Better understanding the expected output quality of neural NLG models, for different input quality controls, can increase the trust in their output.", "Finally, our model analysis consistently shows that although the models generally follow the quality requirements, there is still room for improvement.", "A possible direction for future research is exploring methods, such as reinforcement learning, for further improving the ability of the model to satisfy the quality requirements." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "result", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain" ]
[ "Aspect-based sentiment analysis is a fine-grained sentiment classification task.", "Recently, graph neural networks over dependency trees have been explored to explicitly model connections between aspects and opinion words.", "However, the improvement is limited due to the inaccuracy of the dependency parsing results and the informal expressions and complexity of online reviews.", "To overcome these challenges, in this paper, we propose a dual graph convolutional networks (Du-alGCN) model that considers the complementarity of syntax structures and semantic correlations simultaneously.", "Particularly, to alleviate dependency parsing errors, we design a SynGCN module with rich syntactic knowledge.", "To capture semantic correlations, we design a SemGCN module with self-attention mechanism.", "Furthermore, we propose orthogonal and differential regularizers to capture semantic correlations between words precisely by constraining attention scores in the SemGCN module.", "The orthogonal regularizer encourages the SemGCN to learn semantically correlated words with less overlap for each word.", "The differential regularizer encourages the SemGCN to learn semantic features that the SynGCN fails to capture.", "Experimental results on three public datasets show that our DualGCN model outperforms state-of-the-art methods and verify the effectiveness of our model.", "Sentiment analysis has become a popular topic in natural language processing (Liu, 2012; Li and Hovy, 2017).", "Aspect-based sentiment analysis (ABSA) talks an entity-level oriented fine-grained sentiment analysis task that aims to determine sentiment polarities of given aspects in a sentence.", "In Corresponding author.", "Figure 1, the comment is about a restaurant review.", "The sentiment polarity of the two aspects price and service are positive and negative, respectively.", "Thus, ABSA can precisely identify user's attitudes towards a certain aspect, rather than simply assigning a sentiment polarity for a sentence.", "The key point in solving the ABSA task is to model the dependency relationship between an aspect and its corresponding opinion expressions.", "Nevertheless, there probably exist multiple aspects and different opinion expressions in a sentence.", "To judge the sentiment of a particular aspect, previous studies (Wang et al., 2016; Tang et al., 2016a; Ma et al., 2017; Chen et al., 2017; Fan et al., 2018; Huang et al., 2018; Gu et al., 2018) have proposed various recurrent neural networks (RNNs) with attention mechanisms to generate aspect-specific sentence representations and have achieved appealing results.", "However, an inherent defect makes the attention mechanism vulnerable to noise in the sentence.", "Take Figure 1 as an example; for the aspect service , the opinion word reasonable may receive more attention than the opinion word poor .", "However, the reasonable refers to another aspect, i.e., price .", "More recent efforts (Zhang et al., 2019; Sun et al., 2019b; Huang and Carley, 2019; Zhang and Qian, 2020; Chen et al., 2020; Liang et al., 2020; Wang et al., 2020; Tang et al., 2020) have been devoted to graph convolutional networks (GCNs) and graph attention networks (GATs) over dependency trees, which explicitly exploit the syntactic structure of a sentence.", "Consider the dependency tree in Figure 1; the syntactic dependency can establish connections between the words in a sentence.", "For example, a dependency relation exists between the aspect price and the opinion word reasonable .", "However, two 
"In this paper, we propose a novel architecture, the dual graph convolutional network (DualGCN), as shown in Figure 2, to solve the aforementioned challenges.", "For the first challenge, we use the probability matrix of all dependency arcs from a dependency parser to build a syntax-based graph convolutional network (SynGCN).", "The idea behind this approach is that the probability matrix representing dependencies between words contains rich syntactic information compared with the final discrete output of a dependency parser.", "For the second challenge, we construct a semantic-correlation-based graph convolutional network (SemGCN) by utilizing a self-attention mechanism.", "The idea behind this approach is that the attention matrix produced by self-attention, which can also be viewed as an edge-weighted directed graph, can represent the semantic correlations between words.", "Moreover, motivated by the work of DGEDT (Tang et al., 2020), we utilize a BiAffine module to bridge relevant information between the SynGCN and SemGCN modules.", "Furthermore, we design two regularizers to enhance our DualGCN model.", "We observe that the semantically related terms of each word should not overlap.", "Therefore, we encourage the attention probability distributions over words to be orthogonal.", "To this end, we incorporate an orthogonal regularizer on the attention probability matrix of the SemGCN module.", "Moreover, the two representations learned from the SynGCN and SemGCN modules should contain significantly distinct information, captured by the syntactic dependencies and the semantic correlations, respectively.", "Therefore, we expect the SemGCN module to learn semantic representations different from the syntactic ones.", "Thus, we propose a differential regularizer between the SynGCN and SemGCN modules.", "We propose a DualGCN model for the ABSA task.", "Our DualGCN considers both the syntactic structure and the semantic correlations within a given sentence.", "Specifically, our DualGCN integrates the SynGCN and SemGCN networks through a mutual BiAffine module.", "We propose orthogonal and differential regularizers.", "The orthogonal regularizer encourages the SemGCN network to learn an orthogonal semantic attention matrix, whereas the differential regularizer encourages the SemGCN network to learn semantic features distinct from the syntactic ones built by the SynGCN network.", "We conduct extensive experiments on the SemEval 2014 and Twitter datasets.", "The experimental results demonstrate the effectiveness of our DualGCN model.", "Additionally, the source code and preprocessed datasets used in our work are provided on GitHub (https://github.com/CCChenhao997/DualGCN-ABSA).", "Traditional sentiment analysis tasks are sentence-level or document-level oriented.", "In contrast, ABSA is an entity-level-oriented and more fine-grained task for sentiment analysis.", "Earlier methods (Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Vo and Zhang, 2015) are usually based on handcrafted features and fail to model the dependency between the given aspect and its context.", "Recently, various attention-based neural networks have been proposed to implicitly model the semantic relation of an aspect and its context in order to capture the opinion expression component (Wang et al., 2016; Tang et al., 2016a,b; Ma et al., 2017; Chen et al., 2017; Fan et al., 2018; Huang et al., 2018; Gu et al., 2018; Li et al., 2018a; Tan et al., 2019).",
"For instance, (Wang et al., 2016) proposed attention-based LSTMs for aspect-level sentiment classification.", "(Tang et al., 2016b) and (Chen et al., 2017) both introduced hierarchical attention networks to identify important sentiment information related to the given aspect.", "(Fan et al., 2018) exploited a multi-grained attention mechanism to capture the word-level interaction between aspects and their context.", "(Tan et al., 2019) designed a dual attention network to recognize conflicting opinions.", "In addition, the pre-trained language model BERT (Devlin et al., 2019) has achieved remarkable performance in many NLP tasks, including ABSA.", "(Sun et al., 2019a) transformed the ABSA task into a sentence-pair classification task by constructing an auxiliary sentence.", "(Xu et al., 2019) proposed a post-training approach on BERT to enhance the performance of the fine-tuning stage for the ABSA task.", "Another trend explicitly leverages syntactic knowledge.", "This type of knowledge helps to establish connections between the aspects and the other words in a sentence in order to learn syntax-aware feature representations of aspects.", "(Dong et al., 2014) proposed a recursive neural network to adaptively propagate the sentiment of words to the aspect along the dependency tree.", "(He et al., 2018) introduced an attention model that incorporates syntactic information to compute attention weights.", "(Phan and Ogunbona, 2020) utilized the syntactic relative distance to reduce the impact of irrelevant words.", "Following this line, a few works extend GCN and GAT models by means of a syntactic dependency tree and develop several outstanding models (Zhang et al., 2019; Sun et al., 2019b; Huang and Carley, 2019; Wang et al., 2020; Tang et al., 2020).", "These works explicitly exploit the syntactic structure information to learn node representations from adjacent nodes.", "Thus, the dependency tree shortens the distance between the aspects and opinion words of a sentence and alleviates the problem of long-range dependency.", "Most recently, several works have explored the idea of combining different types of graphs for the ABSA task.", "For instance, (Chen et al., 2020) combined a dependency graph and a latent graph to generate the aspect representation.", "(Zhang and Qian, 2020) observed the characteristics of word co-occurrence in linguistics and designed hierarchical syntactic and lexical graphs.", "(Liang et al., 2020) constructed aspect-focused and inter-aspect graphs to learn the dependency features of the key aspect words and the sentiment relations between different aspects.", "In this paper, we propose a GCN-based method combining syntactic and semantic features.", "We use a dependency probability matrix with richer syntactic information and elaborately design orthogonal and differential regularizers to enhance the ability to precisely capture semantic associations.", "Motivated by conventional convolutional neural networks (CNNs) and graph embedding, a GCN is an efficient CNN variant that operates directly on graphs (Kipf and Welling, 2017).", "For graph-structured data, a GCN can apply the convolution operation to directly connected nodes to encode local information.", "Through the message passing of multilayer GCNs, each node in a graph can learn more global information.",
"Given a graph with n nodes, the graph can be represented as an adjacency matrix A ∈ R^(n×n).", "Most previous works (Zhang et al., 2019; Sun et al., 2019b) extend GCN models by encoding dependency trees and incorporating dependency paths between words.", "They build the adjacency matrix A over the syntactic dependency tree of a sentence.", "Thus, an element A_ij in A indicates whether the i-th node is connected to the j-th node.", "Specifically, A_ij = 1 if the i-th node is connected to the j-th node, and A_ij = 0 otherwise.", "In addition, the adjacency matrix A, composed of 0s and 1s, can be deemed the final discrete output of a dependency parser.", "For the i-th node at the l-th layer, formally, its hidden state representation, denoted h_i^l, is updated by the following equation: h_i^l = σ( Σ_{j=1}^n A_ij W^l h_j^(l-1) + b^l ) (1), where W^l is a weight matrix, b^l is a bias term, and σ is an activation function (e.g., ReLU).", "Figure 2 provides an overview of DualGCN.", "In the ABSA task, a sentence-aspect pair (s, a) is given, where a = {a_1, a_2, ..., a_m} is an aspect.", "It is also a sub-sequence of the entire sentence s = {w_1, w_2, ..., w_n}.", "Then, we utilize BiLSTM or BERT as the sentence encoder to extract hidden contextual representations.", "For the BiLSTM encoder, we first obtain the word embeddings x = {x_1, x_2, ..., x_n} of the sentence s from an embedding lookup table E ∈ R^(|V|×d_e), where |V| is the size of the vocabulary and d_e denotes the dimensionality of the word embeddings.", "Next, the word embeddings of the sentence are fed into a BiLSTM to produce hidden state vectors H = {h_1, h_2, ..., h_n}, where h_i ∈ R^(2d) is the hidden state vector at time t from the BiLSTM, and d is the dimensionality of a hidden state output by a unidirectional LSTM.", "For the BERT encoder, we construct a sentence-aspect pair [CLS] sentence [SEP] aspect [SEP] as the input to obtain aspect-aware hidden representations of the sentence.", "Moreover, in order to match the wordpiece-based representations of BERT with the word-based results of the syntactic dependency parse, we expand the dependencies of a word to all of its subwords.", "Then, the hidden representations of the sentence are input into the SynGCN and SemGCN modules, respectively.", "A BiAffine module is then adopted for effective information flow.", "Finally, we aggregate all the aspect nodes' representations from the SynGCN and SemGCN modules via pooling and concatenation to form the final aspect representation.", "Next, we elaborate on the details of our proposed DualGCN model.", "The SynGCN module takes the syntactic encoding as its input.", "To encode the syntactic information, we utilize the probability matrix of all dependency arcs from a dependency parser.", "Compared to the final discrete output of a dependency parser, the dependency probability matrix can capture rich structural information by providing all latent syntactic structures.", "Therefore, the dependency probability matrix is used to alleviate dependency parsing errors.", "Here, we use the state-of-the-art dependency parsing model LAL-Parser (Mrini et al., 2019).", "With the syntactic encoding of an adjacency matrix A^syn ∈ R^(n×n), the SynGCN module takes the hidden state vectors H from the BiLSTM as the initial node representations in the syntactic graph.", "The syntactic graph representation H^syn = {h^syn_1, h^syn_2, ..., h^syn_n} is then obtained from the SynGCN module using Eq. (1); a sketch of one such layer is given below.", "Here, h^syn_i ∈ R^d is the hidden representation of the i-th node.",
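A minimal dense sketch of the graph convolution in Eq. (1), using NumPy; the weighted adjacency A can be either the 0/1 dependency matrix or the parser's dependency probability matrix used by SynGCN.

```python
import numpy as np

def gcn_layer(A, H, W, b):
    """One GCN layer, h_i^l = ReLU(sum_j A_ij W h_j^{l-1} + b), in matrix
    form: A is n x n, H is n x d_in, W is d_in x d_out, b is d_out."""
    return np.maximum(A @ H @ W + b, 0.0)

# Stacking two layers, matching DualGCN's 2-layer SynGCN/SemGCN setting:
# H1 = gcn_layer(A, H0, W1, b1); H2 = gcn_layer(A, H1, W2, b2)
```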
"Figure 2 provides an overview of DualGCN.", "In the ABSA task, a sentence-aspect pair (s, a) is given, where a = {a_1, a_2, ..., a_m} is an aspect.", "It is also a sub-sequence of the entire sentence s = {w_1, w_2, ..., w_n}.", "We utilize BiLSTM or BERT as the sentence encoder to extract hidden contextual representations.", "For the BiLSTM encoder, we first obtain the word embeddings x = {x_1, x_2, ..., x_n} of the sentence s from an embedding lookup table E ∈ R^{|V| × d_e}, where |V| is the size of the vocabulary and d_e denotes the dimensionality of word embeddings.", "Next, the word embeddings of the sentence are fed into a BiLSTM to produce hidden state vectors H = {h_1, h_2, ..., h_n}, where h_i ∈ R^{2d} is the hidden state vector at time step i and d is the dimensionality of a hidden state output by a unidirectional LSTM.", "For the BERT encoder, we construct a sentence-aspect pair [CLS] sentence [SEP] aspect [SEP] as input to obtain aspect-aware hidden representations of the sentence.", "Moreover, in order to match the wordpiece-based representations of BERT with the word-based output of syntactic dependency parsing, we expand the dependencies of a word to all of its subwords.", "Then, the hidden representations of the sentence are input into the SynGCN and SemGCN modules, respectively.", "A BiAffine module is then adopted for effective information flow.", "Finally, we aggregate all the aspect nodes' representations from the SynGCN and SemGCN modules via pooling and concatenation to form the final aspect representation.", "Next, we elaborate on the details of our proposed DualGCN model.", "The SynGCN module takes the syntactic encoding as input.", "To encode syntactic information, we utilize the probability matrix of all dependency arcs from a dependency parser.", "Compared to the final discrete output of a dependency parser, the dependency probability matrix can capture rich structural information by providing all latent syntactic structures.", "Therefore, the dependency probability matrix is used to alleviate dependency parsing errors.", "Here, we use the state-of-the-art dependency parsing model LAL-Parser (Mrini et al., 2019).", "With the syntactic encoding of an adjacency matrix A^{syn} ∈ R^{n × n}, the SynGCN module takes the hidden state vectors H from the BiLSTM as initial node representations in the syntactic graph.", "The syntactic graph representation H^{syn} = {h_1^{syn}, h_2^{syn}, ..., h_n^{syn}} is then obtained from the SynGCN module using Eq. (1).", "Here, h_i^{syn} ∈ R^d is a hidden representation of the i-th node.", "Note that for aspect nodes, we use the symbols {h_{a_1}^{syn}, h_{a_2}^{syn}, ..., h_{a_m}^{syn}} to denote their hidden representations.", "Instead of utilizing additional syntactic knowledge, as in SynGCN, SemGCN obtains an attention matrix as an adjacency matrix via a self-attention mechanism.", "On the one hand, self-attention can capture the semantically related terms of each word in a sentence, which is more flexible than the syntactic structure.", "On the other hand, SemGCN can adapt to online reviews that are not sensitive to syntactic information.", "Self-Attention: Self-attention (Vaswani et al., 2017) computes the attention score of each pair of elements in parallel.", "In our DualGCN, we compute the attention score matrix A^{sem} ∈ R^{n × n} using a self-attention layer.", "We then take the attention score matrix A^{sem} as the adjacency matrix of our SemGCN module, which can be formulated as: A^{sem} = softmax(QW^Q (KW^K)^T / \sqrt{d}) (2), where the matrices Q and K are both equal to the graph representation of the previous layer of our SemGCN module, while W^Q and W^K are learnable weight matrices.", "In addition, d is the dimensionality of the input node features.", "Note that we use only one self-attention head to obtain an attention score matrix for a sentence.", "Similar to the SynGCN module, the SemGCN module obtains the graph representation H^{sem}.", "Additionally, we use the symbols {h_{a_1}^{sem}, h_{a_2}^{sem}, ..., h_{a_m}^{sem}} to denote the hidden representations of all aspect nodes.", "BiAffine Module: To effectively exchange relevant features between the SynGCN and SemGCN modules, we adopt a mutual BiAffine transformation as a bridge.", "We formulate the process as follows: H^{syn'} = softmax(H^{syn} W_1 (H^{sem})^T) H^{sem} (3), H^{sem'} = softmax(H^{sem} W_2 (H^{syn})^T) H^{syn} (4), where W_1 and W_2 are trainable parameters.", "Finally, we apply average pooling and concatenation operations on the aspect nodes of the SynGCN and SemGCN modules.", "Thus, we obtain the final feature representation for the ABSA task, i.e., h_a^{syn} = f(h_{a_1}^{syn}, h_{a_2}^{syn}, ..., h_{a_m}^{syn}) (5), h_a^{sem} = f(h_{a_1}^{sem}, h_{a_2}^{sem}, ..., h_{a_m}^{sem}) (6), r = [h_a^{syn}, h_a^{sem}] (7), where f(·) is an average pooling function applied over the aspect node representations.", "Then, the obtained representation r is fed into a linear layer, followed by a softmax function, to produce a sentiment probability distribution p, i.e., p(a) = softmax(W_p r + b_p) (8), where W_p and b_p are the learnable weight and bias.", "To precisely capture semantic correlations, we further design two regularizers for the SemGCN module, i.e., orthogonal and differential regularizers.", "Orthogonal Regularizer: Intuitively, the related items of each word should be in different regions of a sentence, so the attention score distributions should rarely overlap.", "Therefore, we expect a regularizer to encourage orthogonality among the attention score vectors of all words.", "Given an attention score matrix A^{sem} ∈ R^{n × n}, the orthogonal regularizer is formulated as follows: R_O = ||A^{sem} (A^{sem})^T - I||_F (9), where I is an identity matrix and the subscript F denotes the Frobenius norm.", "As a result, each non-diagonal element of A^{sem} (A^{sem})^T is minimized to keep the matrix A^{sem} orthogonal.", "Differential Regularizer: We expect the two types of feature representations learned by the SynGCN and SemGCN modules to represent the distinct information contained in the syntactic dependency trees and the semantic correlations, respectively.", "Therefore, we adopt a differential regularizer between the two adjacency matrices of the SynGCN and SemGCN modules.", "Note that the regularizer only restricts A^{sem}, and it is given as R_D = 1 / ||A^{sem} - A^{syn}||_F.",
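The two regularizers translate directly into code. A minimal PyTorch sketch, assuming unbatched n x n matrices; detaching A^syn reflects that only A^sem is restricted, and the small epsilon guarding the division is our addition:

```python
import torch

def orthogonal_regularizer(a_sem):
    # Eq. (9): R_O = || A_sem @ A_sem^T - I ||_F
    eye = torch.eye(a_sem.size(0), device=a_sem.device)
    return torch.norm(a_sem @ a_sem.t() - eye, p="fro")

def differential_regularizer(a_sem, a_syn, eps=1e-8):
    # R_D = 1 / || A_sem - A_syn ||_F; a_syn is detached so only A_sem is penalized
    return 1.0 / (torch.norm(a_sem - a_syn.detach(), p="fro") + eps)
```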
"Therefore, we adopt a differential regularizer between the two adjacency matrices of the SynGCN and SemGCN modules.", "Note that the regularizer is only restrictive to A sem and is given as RD = 1 (cid:107) A sem A syn (cid:107) F .", "Our training goal is to minimize the following total objective function:", "where 1 , 2 and 3 are regularization coefficients and represents all trainable model parameters.", "(cid:96)", "C is a standard cross-entropy loss and is defined for the ABSA task as follows: (cid:96) C = (cid:88) ( s,a ) D (cid:88) c C log p ( a ) (12) where D contains all sentence-aspect pairs and C is the collection of distinct sentiment polarities.", "We conduct experiments on three public standard datasets.", "The Restaurant and Laptop datasets Dataset Division # Positive # Negative # Neutral Restaurant Training 2164 807 637 Testing 727 196 196 Laptop Training 976 851 455 Testing 337 128 167 Twitter Training 1507 1528 3016 Testing 172 169 336 Table 1: Statistics for the three experimental datasets.", "are made public from the SemEval ABSA challenge (Pontiki et al., 2014).", "Following (Chen et al., 2017), we remove the instances using the conflict label.", "In addition, the Twitter dataset is a collection of tweets (Dong et al., 2014).", "All three datasets have three sentiment polarities: positive, negative and neutral.", "Each sentence in these datasets is annotated with marked aspects and their corresponding polarities.", "Statistics for the three datasets are shown in Table 1.", "The LAL-Parser (Mrini et al., 2019), which is used for dependency parsing, provides an off-the-shelf parser 2 .", "For all the experiments, we use pretrained 300-dimensional Glove 3 vectors (Pennington et al., 2014) to initialize the word embeddings.", "The dimensionality of the position (i.e., the relative position of each word in a sentence with respect to the aspect) embeddings and part-of-speech (POS) embeddings is set to 30.", "Thus, we concatenate the word, POS and position embeddings and then input them into a BiLSTM model, whose hidden size is set to 50.", "To alleviate overfitting, we apply dropout at a rate of 0.7 to the input word embeddings of the BiLSTM.", "The dropout rate of the SynGCN and SemGCN modules is set to 0.1, and the number of SynGCN and SemGCN layers is set to 2.", "All the model weights are initialized from a uniform distribution.", "We use the Adam optimizer with a learning rate of 0.002.", "The DualGCN model is trained in 50 epochs with a batch size of 16.", "The regularization coefficients, 1 and 2 are set to (0.2, 0.3), (0.2, 0.2) and (0.3, 0.2) for the three datasets, respectively, and 3 is set to 10 4 .", "For DualGCN+BERT, we use the bert-base-uncased 4 English version.", "See our code for more details about BERT's experiments.", "Additionally, following (Marcheggiani and Titov, 2017), we add a self-loop for each node in 2 https://github.com/KhalilMrini/LAL-Parser 3 https://nlp.stanford.edu/projects/glove/ 4 https://github.com/huggingface/transformers the SynGCN and SemGCN modules.", "We compare DualGCN with state-of-the-art baselines.", "The models are briefly described as follows.", "1) ATAE-LSTM (Wang et al., 2016) utilizes aspect embedding and the attention mechanism in aspect-level sentiment classification.", "2) IAN (Ma et al., 2017) employs two LSTMs and an interactive attention mechanism to generate representations for the aspect and sentence.", "3) RAM (Chen et al., 2017) uses multiple attention and memory networks to learn the sentence representation.", 
"4) MGAN (Fan et al., 2018) designs a multigrained attention mechanism to capture word-level interactions between the aspect and context.", "5) TNet (Li et al., 2018b) transforms BiLSTM embeddings into target-specific embeddings and uses CNN to extract final embeddings for classification.", "6) ASGCN (Zhang et al., 2019) first proposed using GCN to learn the aspect-specific representations for aspect-based sentiment classification.", "7) CDT (Sun et al., 2019b) utilizes a GCN over a dependency tree to learn aspect representations with syntactic information.", "8) BiGCN (Zhang and Qian, 2020) uses hierarchical graph structure to integrate word co-occurrence information and dependency type information.", "9) kumaGCN (Chen et al., 2020) employs a latent graph structure to complement syntactic features.", "10) InterGCN (Liang et al., 2020) utilizes a GCN over a dependency tree to learn aspect representations with syntactic information.", "11) R-GAT (Wang et al., 2020) proposes a aspect-oriented dependency tree structure and then encodes new dependency trees with a relational GAT.", "12) DGEDT (Tang et al., 2020) proposes a dependency graph enhanced dual-transformer network by jointly considering flat representations and graph-based representations.", "13) BERT (Devlin et al., 2019) is the vanilla BERT model by feeding the sentence-aspect pair and using the representation of [CLS] for predictions.", "14) R-GAT+BERT (Wang et al., 2020) is the R-GAT model that uses a pre-trained BERT to replace BiLSTM as an encoder.", "15) DGEDT+BERT (Tang et al., 2020) is the DGEDT model that uses a pre-trained BERT to replace BiLSTM as an encoder.", "To evaluate the ABSA models, we use the accuracy and macro-averaged F1-score as the main evaluation metrics.", "The main experimental results are reported in Table 2.", "Our DualGCN model consistently outperforms all attention-based and syntax-based methods on the Restaurant, Laptop and Twitter datasets.", "These results demonstrates that our DualGCN effectively integrates syntactic knowledge and semantic information.", "In addition, the DualGCN accurately fits datasets that contain formal, informal or complicated reviews.", "Compared to attention-based methods such as ATAE-LSTM, IAN and RAM, our DualGCN model utilizes syntactic knowledge to establish dependencies between words, so it can avoid noises introduced by the attention mechanism.", "Moreover, the syntax-based methods, such as ASGCN, CDT, R-GAT and so on, achieve better performance than attention-based methods, but they ignore the semantic correlation between words.", "However, when considering informal or complicated sentences, using only syntactic knowledge results in poor performance.", "In Table 2, on the other side, the results from the last group shows that the basic BERT outperforms most of the models based on static word embedding.", "Moreover, based on BERT, our DualGCN+BERT achieves better performance.", "To further investigate the role of modules in the DualGCN model, we conduct extensive ablation studies.", "The results are reported in Table 2.", "The SynGCN-head model uses the discrete outputs of a dependency parser to construct the adjacency matrix of the GCNs.", "In contrast, SynGCN leverages the probability matrix generated in a dependency parser as the adjacency matrix.", "The SynGCN model outperforms the SynGCN-head on the Restaurant and Laptop datasets, which demonstrates that rich syntactic knowledge can alleviate dependency parsing errors.", "The SemGCN model utilizes a self-attention 
layer to construct the adjacency matrix of the semantic graph.", "This SemGCN model outperforms the SynGCN on the Twitter dataset because the reviews from Twitter, compared to those from Restaurant and Laptop datasets, are largely informal and insensitive to syntactic information.", "DualGCN w/o BiAffine means that we remove the BiAffine module so that the SynGCN and SemGCN modules cannot interact with each other.", "Therefore, the performance degrades substantially on the Restaurant and Laptop datasets.", "DualGCN w/o RO & RD indicates that we remove both the orthogonal and differential regularizers.", "Similarly, DualGCN w/o RO or RD denotes that we remove only one of the regularizers.", "The experimental results show that our two regularizers encourage the DualGCN to capture semantic correlations precisely.", "Overall, our DualGCN with all modules achieves the best performance.", "Table 4 shows a few sample cases analyzed using different models.", "The notations P, N and O represent positive, negative and neutral sentiment, respectively.", "We highlight the aspect words in red and in blue.", "For the aspect food in the first sample, the attention-based methods, i.e., ATAE-LSTM and IAN, are prone to attend to the noisy word dreadful .", "Although the syntactic dependency can establish direct connections between an aspect and some words, no association exists between the aspect and the opinion words for complicated sentences.", "Take the second sample as an example; the aspect apple os is far from the opinion word happy in terms of syntactic distance.", "Thus, the SynGCN model fails.", "Additionally, in the third sample, feature representations of the key words did not are not captured by the SynGCN model.", "In contrast, the SemGCN model can attend to the semantic correlation between words.", "The last two samples demonstrate that our DualGCN, which fully considers the complementarity of syntactic knowledge and semantic information, can address complicated and informal sentences with the help of the orthogonal and differential regularizers.", "To investigate the effectiveness of the two regularizers in capturing the semantic correlations between words, we visualized the attention score matrix of the DualGCN w/o RO & RD and the intact DualGCN.", "Consider the sample sentence, i.e., Web browsing is very quick with Safari browser. 
with Safari browser as an aspect.", "As shown in Figure 3(a), the attention score matrix is dense, and the related terms of each word overlap in the DualGCN w/o RO & RD model.", "This result is attributed to the lack of semantic constraints in the self-attention layers.", "The overlap of semantic correlations leads to redundancy and noise during information propagation.", "Table 2: Experimental results comparison on three publicly available datasets (Accuracy / Macro-F1).
Model                              Restaurant       Laptop           Twitter
ATAE-LSTM (Wang et al., 2016)      77.20 / -        68.70 / -        -     / -
IAN (Ma et al., 2017)              78.60 / -        72.10 / -        -     / -
RAM (Chen et al., 2017)            80.23 / 70.80    74.49 / 71.35    69.36 / 67.30
MGAN (Fan et al., 2018)            81.25 / 71.94    75.39 / 72.47    72.54 / 70.81
TNet (Li et al., 2018b)            80.69 / 71.27    76.54 / 71.75    74.90 / 73.60
ASGCN (Zhang et al., 2019)         80.77 / 72.02    75.55 / 71.05    72.15 / 70.40
CDT (Sun et al., 2019b)            82.30 / 74.02    77.19 / 72.99    74.66 / 73.66
BiGCN (Zhang and Qian, 2020)       81.97 / 73.48    74.59 / 71.84    74.16 / 73.35
kumaGCN (Chen et al., 2020)        81.43 / 73.64    76.12 / 72.42    72.45 / 70.77
InterGCN (Liang et al., 2020)      82.23 / 74.01    77.86 / 74.32    -     / -
R-GAT (Wang et al., 2020)          83.30 / 76.08    77.42 / 73.76    75.57 / 73.82
DGEDT (Tang et al., 2020)          83.90 / 75.10    76.80 / 72.30    74.80 / 73.40
Our DualGCN                        84.27 / 78.08    78.48 / 74.74    75.92 / 74.29
BERT-SPC (Devlin et al., 2019)     86.15 / 80.29    81.01 / 76.69    75.18 / 74.01
R-GAT+BERT (Wang et al., 2020)     86.60 / 81.35    78.21 / 74.07    76.15 / 74.88
DGEDT+BERT (Tang et al., 2020)     86.30 / 80.00    79.80 / 75.60    77.90 / 75.40
Our DualGCN+BERT                   87.13 / 81.16    81.80 / 78.10    77.40 / 76.02", "The seventh and eighth rows of the attention score matrix are the attention probability distributions of safari and browser, respectively.", "The information to which safari browser pays attention is redundant, and it does not pay more attention to the key opinion word quick.", "Thus, the DualGCN w/o RO & RD fails.", "In comparison, in Figure 3(b), the attention score matrix produced by our DualGCN is relatively sparse.", "Both safari and browser are semantically related to quick, and their other attended items are also semantically reasonable.", "In addition, the attention scores of the related terms of each word tend to be distinct and precise due to the semantic constraints of these two regularizers.", "Therefore, our DualGCN model can readily predict the correct sentiment polarity of the aspect safari browser.", "To investigate the impact of the number of DualGCN layers, we evaluate our DualGCN model with one to eight layers on the Restaurant and Laptop datasets.", "As shown in Figure 4, our model with two DualGCN layers performs the best.", "On the one hand, node representations cannot propagate far when the number of layers is small.", "On the other hand, if the number of layers is excessive, the model becomes unstable due to vanishing gradients and information redundancy.", "(Table 4: sample reviews with predictions from ATAE-LSTM, IAN, SynGCN, SemGCN and DualGCN; e.g., review 1: Great food but the service was dreadful!)", "In this paper, we propose a DualGCN architecture to address the disadvantages of attention-based and dependency-based methods for ABSA tasks.", "Our DualGCN model integrates syntactic knowledge and semantic information by means of the SynGCN and SemGCN modules.", "Moreover, to effectively capture the semantic correlation between words, we propose orthogonal and differential regularizers in the SemGCN module.", "These regularizers can attend to the semantically related items of each word with less overlap and capture feature representations that differ from the syntactic structure.", "Extensive 
experiments on benchmark datasets show that our DualGCN model outperforms baselines.", "This work was supported in part by the National Key R&D Program of China under Grant 2019YFF0303300 and Subject II under Grant 2019YFF0303302, in part by the National Natural Science Foundation of China under Grants 61906018 and 62076032, in part by the 111 Project under Grant B08004, and in part by the Fundamental Research Funds for the Central Universities under Grant 2021RC36." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "objective", "method", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "result", "other" ]
[ "Recent work on entity coreference resolution (CR) follows current trends in Deep Learning applied to embeddings and relatively simple task-related features.", "SOTA models do not make use of hierarchical representations of discourse structure.", "In this work, we leverage automatically constructed discourse parse trees within a neural approach and demonstrate a significant improvement on two benchmark entity coreference-resolution datasets.", "We explore how the impact varies depending upon the type of mention.", "Historically, theories of discourse coherence (Chafe, 1976; Hobbs, 1979; Grosz and Sidner, 1986; Clark and Brennan, 1991) have offered elaborate expositions on how the patterns of anaphoric references in discourse are constrained by limitations in human capacity to manage attention and resolve ambiguity.", "Hobbs (1979) acknowledges that these human limitations have meant that coreference resolution in natural text can be achieved with relatively high accuracy using a combination of recency and simple semantic constraints.", "State-of-the-art neural approaches for coreference resolution (Lee et al., 2017; Joshi et al., 2019, 2020) have therefore not surprisingly shown strong performance relying on surface-level features and local-context (i.e., extracted from a small text window around the mention).", "Traditional approaches, on the other hand, make an attempt to formally model the process of managing attention, for example, the stack in Grosz and Sidner (1986)'s model.", "Their stack-based model suggests specific places where recency might fail while a more explicit model of discourse structure might make a correct prediction, for example, where an anaphor and a nearby potential (but incorrect) antecedent are in adjacent but separate discourse segments.", "Because of the potential existence of such cases, we hypothesize that formally incorporating a representation of discourse structure would have a small but non-random positive impact on the ability to correctly resolve anaphoric references.", "This effect might vary depending upon the semantic informativeness of alternative types of anaphoric expressions, since they impose different constraints on where their antecedent can be located within a hierarchical discourse structure.", "There is also a danger that the level of accuracy with which the hierarchical structure of discourse can be obtained in practice might reduce the positive impact still further.", "The contribution of this paper is an empirical investigation of the impact of including a representation of the hierarchical structure of discourse within a neural entity coreference approach.", "To this end, we leverage a state-of-the-art RST discourse-parser to convert a flat document into a tree-like structure from which we can derive features that model the structural constraints.", "We embed this representation within an architecture that is enabled to learn to use this information deferentially depending upon the type of mention.", "The results demonstrate that this level of nuance enables a small but significant improvement in coreference accuracy, even with automatically constructed RST trees.", "Though recency is the strongest predictor for coreference resolution (CR), prior work in CR has bene-fited from the inclusion of semantic features such as type-information on top of the surface and syntax-level features.", "Soon et al. 
(2001); Bengtson and Roth (2008) used dictionaries like WordNet to extract the semantic class for a noun.", "More recently, Khosla and Rose (2020) showed that adding NER style type-information to Lee et al. (2017) substantially improves performance across multiple datasets.", "employed in multiple downstream NLP tasks like summarization (Louis et al., 2010), sentiment analysis (Somasundaran et al., 2009), and student writing evaluation (Burstein et al., 2013).", "For coreference resolution, Cristea et al. (1999) showed that the potential of natural language systems to correctly determine co-referential links, can be increased by exploiting the hierarchical structure of texts.", "Their discourse model was informed by Vein Theory (Fox, 1987), which identifies chains of elementary discourse units, over discourse structure trees that are built according to the RST (Mann and Thompson, 1987) requirements.", "Haghighi and Klein (2010) proposed an entity-centered model that leveraged discourse features like dependency-parse tree distance, sentence distance, and the syntactic positions (subject, object, and oblique) of the mention and antecedent to perform coreference.", "In this work, we use Yu et al. (2018)'s RST parser to convert documents into RST discourse-structure trees (Mann and Thompson, 1987; Taboada and Mann, 2006).", "From these trees, we derive distance and coverage-based features to model the discourse-level structural constraints, which are passed as input to a neural-network based coreference resolver.", "To our knowledge, ours is the first work that tries to explicitly incorporate discourse-level constraints for coreference resolution in a neural setting.", "In this section, we explain how we introduce discourse-level features into a neural CR system.", "We leverage Lee et al. (2017) as our baseline.", "We replace the word-embeddings with a BERT encoder.", "A preprocessing step for CR is to identify the mentions within the text that need to be resolved.", "Following Bamman et al. (2020) and Khosla and Rose (2020), we remove this possible source of error from our evaluation of entity coreference accuracy by using gold-standard mentions.", "The baseline model's prediction of coreference for a pair of mentions, S ( m i , m j ) , is computed as follows.", "The representations of the two mentions m i and m j along with their element-wise product ( m i (cid:12) m j ) and other features like distance between the mentions ( d m ), and distance between the sentences that contain the mentions ( d s ), are joined together and passed through a fully-connected layer F (blue boxes in Figure 1).", "By incorporating a representation of the hierarchical discourse structure into the representation that is input to the neural model, we seek to add the capability for reasoning that is not possible in the baseline for each mention-pair ( mm ij ).", "None of the features included in the baseline distinguish between pairs that occur within the same or different discourse segments, for example.", "The closest feature in the baseline that approximates document-level relationships is d s , since it can be assumed that mentions are less likely to occur within the same segment the further apart they are in the discourse.", "RST (Mann and Thompson, 1987) offers a theoretical framework in which documents can be parsed into trees that capture the hierarchical discourse structure of the text.", "In this work, we incorporate structural features from such discourse trees, obtained automatically from Yu et al. 
(2018).", "We concatenate three structural features, extracted from the discourse-tree of the document, with mm ij to model these constraints (as shown in Figure 1).", "We use binarized RST-trees to represent the discourse hierarchy and relationships within each document.", "Discourse-units identified by the parser occur at the leaves ( l ) of the output tree.", "Consider the document under consideration doc and its RST-tree t doc .", "For the current mention m j and candidate mention m i , and the position of the smallest discourse-unit they belong to in the tree ( l u mj and l u mi respectively): DistLCA ( d jlca ) encodes the distance between l u mj and LCA ( l u mi , l u mj ) .", "This feature provides information about the amount of generality required to have the two mentions in the same discourse subtree.", "The smaller the DistLCA , the closer the two mentions are assumed to be in the discourse.", "LeafCoverageLCA ( lc lca ) encodes the number of sentences that are covered by the discourse subtree with LCA ( l u mi , l u mj ) as its root.", "This feature captures the coverage of the level of discourse that encloses both mentions.", "The larger the LeafCoverageLCA , the more the document area that needs to be covered to include both mentions.", "WordCoverageLCA ( wc lca ) encodes the number of words that are covered by the discourse subtree with LCA ( l u mi , l u mj ) as its root.", "This feature is analogous to LeafCoverageLCA but operates on word-level rather than the discourse-unit-level.", "Across different types of anaphoric mentions, depending upon how much information about the antecedent is made apparent, there are differences with respect to the cognitive load imposed on the reader.", "Because this places differential constraints on the interpretation process, we hypothesize that enabling the model to learn different strategies depending upon the mention-type will be advantageous.", "We divide mentions into three types ( type ) motivated by the above-mentioned intuition:", "(i) pronouns (low lexical information, high cognitive load on the reader),", "(ii) named-entities (al-ready grounded mentions), and", "(iii) all other noun phrases.", "A mention is put in the second category if it contains at least one named-entity as predicted by an off-the-shelf NER system.", "1 To identify pronouns, we compare the mention against a manually curated list of English pronouns.", "Ultimately, the discourse and mention-type features are concatenated with mm ij and passed through a fully-connected layer for scoring (Figure 1).", "S ( m i , m j ) = F ([ mm ij ; d jlca ; lc lca ; wc lca ; type j ]) 4 Experimental Setup In this section, we describe the datasets and evaluation metrics we use in our experiments.", "We gauge the benefits of using RST-tree features on two state-of-the-art entity CR datasets discussed below.", "Since, our off-the-shelf RST parser (Yu et al., 2018) is trained on news articles, the choice of datasets is motivated by the attempt at reducing the distribution shift between training and inference while ensuring that the parser was trained on different data than we are using for testing.", "We use the English subset of OntoNotes (Pradhan et al., 2012).", "The corpus contains multiple sub-genres ranging from news articles to telephone conversations.", "We also evaluate our approach on a subset of the RST sub-genre of the ARRAU corpus (Poesio et al., 2018) ( A-RST(gt) ), which contains RST ground-truth parse-tree annotations in the RST Discourse-Treebank (Carlson et al., 
2003).", "Following Yu et al. (2018), we keep 347 A-RST(gt) articles for training (out of which we set aside 22 articles for de-velopment), and 38 articles for testing.", "Although ARRAU also annotates bridging (Clark, 1975) and abstract anaphora (Webber, 1991), in this work, we only focus on entity anaphora.", "Both OntoNotes and A-RST(gt) are input to the system in the CoNLL 2012 format.", "We evaluate the systems on the F1-score for MUC, B3, and CEAF metrics using the CoNLL-2012 offi-cial scripts.", "However, we only show the average F1-score of the above-mentioned metrics in this 1 https://demo.allennlp.org/named-entity-recognition Model OntoNotes A-RST(gt) Lee et al. (2017) 83.36 85.80 + type 83.70 85.95 + disc 83.63 86.19 + disc + type 83.89 86.51 + disc(gt) -86.41 + disc(gt) + type -86.70 + disc(gt) + type d s -86.66 Table 1: Performance (Avg. F1) of discourse-informed model variants (gold-mentions) on OntoNotes and A-RST(gt).", "We report the mean score of 5 independent runs with different seeds.", "3 5 Results Ground-truth RST-Trees: To establish an upper-bound for the improvement through introduction of the discourse-tree features, we use features extracted from ground-truth trees.", "We evaluate the upper-bound performance on A-RST(gt) as it contains documents with annotations for coreference as well as RST-structures.", "Our results show that incorporating ground-truth tree features along with the mention's type ( + disc(gt) + type ) gives a boost of 0 .", "90 Avg.", "F1 ( p < 0 . 01 ) over the baseline (Ta-ble 1), suggesting that discourse-level features are beneficial on A-RST(gt).", "Furthermore, we also find that removing d s from this discourse-informed model does not cause a statistically-significant drop in performance.", "We believe that this happens because when discourse-structure features are included in the model, the signal from d s becomes redundant and sub-optimal.", "Predicted RST-Trees: In our second set of experiments we use discourse-trees extracted using Yu et al. (2018)'s RST-parser.", "As shown in Table 1, adding predicted discourse-tree features improves over the baseline on both datasets, with A-RST(gt) corpus witnessing the highest absolute gain of 0 .", "71 Avg.", "F1 points.", "Please note that the results are statistically significant with p < 0 .", "01 .", "4 The relative improvement on OntoNotes is smaller than A-RST(gt) ( 0 . 53 absolute Avg. F1 points).", "This could partially be explained by the fact that the 2 We leave the evaluation of the impact of including RST structural features in the end-to-end CR setting as future work.", "RST-parser is trained on news articles, and therefore, might not generalize well on conversational sub-genres of OntoNotes like tc or bc.", "Ablation Study: To evaluate the contribution of each feature separately, we also perform an ablation study (Table 1).", "On A-RST(gt), we find that the type feature by itself does not provide a considerable boost over the baseline.", "Use of RST-tree based structural features, on the other hand, shows statistically significant improvements ( p < 0 . 01 ), however, the jump is small (from 85 . 80 to 86 . 
19 ).", "Our final model which includes both RT-tree features and type gives the best results.", "+ disc + type performs much better than + disc on both datasets (improvement of 0 .", "32 Avg.", "F1 points on A-RST(gt) and 0 .", "26 points on Onto) suggesting that the use of type as a feature enhances the discriminative power of discourse-tree features.", "Mention Type Analysis: To study the influence of different mention-types on the discriminative power of discourse features, we analyze the distribution of d lca across different mention-pair categories in the A-RST(gt) training set.", "Setup.", "To this end, we firstly extract relevant coreferent mention-pairs from the ground-truth clusters.", "To create a pair for each mention m j , we choose the mention m i that belongs to the same cluster ( C ) as m j , occurs before it in the document ( i < j ), and is the closest instance of C to m j .", "Pairs created using this algorithm do not have other supporting mentions from the same cluster in between them.", "We then extract three types of mention-pairs from these relevant pairs for our analysis:", "(i) m j is a pronoun and m i is not a pronoun ( PRP-N );", "(ii) m j contains a named entity and m i is not a pronoun ( NE-N ); and", "(iii) m j is neither a pronoun nor contains a named entity, m i is not a pronoun, and m i , m j have no lexical overlap ( NP-N ).", "Results.", "Figure 2 shows that there is indeed a dependence between d lca and mention-pair type.", "Most of the PRP-N pairs have a d lca < 5 even though the the full RST-tree of a document can be as deep as 24 levels.", "This corroborates our intuition that anaphors with higher ambiguity occur closer to their antecedents in the discourse.", "For NP-N, we find that 90% of the pairs have d lca < 8 , whereas, d lca can go as large as 10 for NE-N.", "This trend explains, at least partially, the difference between the performance of discourse-informed models with and without the mention-type feature.", "In this paper, we show that a representation of hierarchical discourse structure is beneficial for entity coreference resolution.", "Our proposed discourse-informed model observes small but statistically significant improvements over a state-of-the-art neural baseline on two coreference resolution datasets.", "Our analysis shows that the impact of the representation on performance is related to the cognitive load imposed by the type of anaphoric mention.", "While the model proposed in this work could serve as a useful baseline for the benefits of including discourse structure-based features in neural coreference resolution models, we realize that there is potential for achieving additional improvements by including more complex constraints (e.g. Right Frontier Constraint (Asher et al., 2003)).", "We plan to study the affect of such features in future work.", "We thank the anonymous NAACL reviewers for their insightful comments.", "We are also grateful to the members of the TELEDIA group at LTI, CMU for the invaluable feedback.", "This work was funded in part by NSF grants 1949110, 1822831, 1546393, and funding from Dow Chemical." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "method", "other", "other", "other" ]
[ "Relational triple extraction is critical to understanding massive text corpora and constructing large-scale knowledge graph, which has attracted increasing research interest.", "However, existing studies still face some challenging issues, including information loss, error propagation and ignoring the interaction between entity and relation.", "To intuitively explore the above issues and address them, in this paper, we provide a revealing insight into relational triple extraction from a stereoscopic perspective, which rationalizes the occurrence of these issues and exposes the shortcomings of existing methods.", "Further, a novel model is proposed for relational triple extraction, which maps relational triples to a three-dimension (3-D) space and leverages three decoders to extract them, aimed at simultaneously handling the above issues.", "Extensive experiments are conducted on five public datasets, demonstrating that the proposed model outperforms the recent advanced baselines.", "Relational triple is a common structural representation of semantic facts.", "A triple is always in form of (subject, relation, object), where subject and object are two entities connected by a type of predefined semantic relation.", "Relational triple extraction from unstructured texts is critical to understanding massive text corpora and constructing large-scale knowledge graph (Ren et al., 2017; Wei et al., 2020), which is widely concerned in recent years.", "Early researches (Zhou et al., 2005; Chan and Roth, 2011; Zhang et al., 2017) first recognize entities and predict the relations for each entity pair.", "Such approaches suffer from error propagation problem and thus recent researches (Zheng et al., 2017; Zeng et al., 2018; Fu et al., 2019; Corresponding author: Liping Jing. Nayak and Ng, 2020; Wei et al., 2020; Liu et al., 2020) try to build a jointly-decoding schema for entities and relations.", "However, relational triple extraction still faces the following challenging issues: Information loss ( I-IL ).", "Information loss includes entity incompleteness (Zeng et al., 2020) and entity overlapping (Zeng et al., 2018; Wei et al., 2020).", "Entity incompleteness (I-IL-EI) refers to that only head or tail token rather than completed entity is recognized, while entity overlapping (I-IL-EO) is that one entity belonging to multiple triples cannot be marked.", "Error propagation ( I-EP ).", "Error propagation comes from the prediction process with strict order.", "For examples, pipeline models (Zhang et al., 2017; Takanobu et al., 2019) recognize entities first and predict relations based on each specific entity pair.", "Generative models (Zeng et al., 2018, 2019) extract subject, object and relation with a predetermined order.", "Ignoring the interaction between entity and relation ( I-II ).", "Subjects (or objects) in different predefined relations should have different recognition patterns, which are not modelled when ignoring the interaction between entity and relation.", "To intuitively explore the above issues and address them, from a stereoscopic perspective, we map the relational triples of a text to a three-dimensional (3-D) space, which is like a cube as Figure 1. The relational triples are actually some small cubes in the whole cube.", "Existing researches are actually to model the cube from different perspectives and further extract the triples.", "Based on the representation of triples in 3-D space, three operations (i.e. 
slice, projection and shrinkage) are defined, as illustrated in Figure 2, to understand why existing methods suffer from the above issues.", "(Figure 1: The representation of triples in 3-D space, where a text corresponds to a cube of size (|L| · |T|) × (|L| · |T|) × |R|, while each triple is mapped to a small cube of size (m · |T|) × (n · |T|) × 1.)", "Furthermore, we propose a novel model for relational triple extraction, which can simultaneously handle the above issues, named StereoRel.", "More precisely, the cube is modelled from three perspectives, including the (x, z)-plane projection, the (y, z)-plane projection and z-slices, which indicate the subjects, objects and their correspondences for each predefined relation.", "Correspondingly, the proposed method leverages three decoders to extract relational triples in a unified model.", "This work has the following main contributions: We provide a revealing insight into relational triple extraction from a stereoscopic perspective, where the occurrence of several challenging issues and the shortcomings of existing methods are rationalized.", "We propose a novel StereoRel model for relational triple extraction, which can simultaneously reduce information loss, avoid error propagation and account for the interaction between entity and relation.", "Extensive experiments are conducted on five public datasets, demonstrating that the proposed model outperforms the recent advanced baselines.", "In the form (subject, relation, object), triples can naturally be mapped to a three-dimensional (3-D) space, which is elaborated in this section.", "Meanwhile, we define three operations (i.e. slice, projection and shrinkage) in 3-D space, to make it easy to understand the strengths and shortcomings of previous research.", "Given a text L with length |L| and a predefined relation set R with |R| relations, L may have several triples, that is, p([s, r, o] | L), where [ ] represents a collection.", "Each triple consists of a subject (s), an object (o) and one relation (r) belonging to R.", "A subject is one entity, that is, an n-gram in L, and so is an object.", "To model p([s] | L) or p([o] | L), the common strategy is to leverage sequence tagging on L, for which there are some existing strategies, such as BMES tagging (Zhang and Yang, 2018; Li et al., 2020) and start-and-end binary tagging (Wei et al., 2020; Sui et al., 2020).", "In any case, there is a tag set T, and thus p([s] | L) or p([o] | L) can be represented by a vector of length |L| · |T|.", "Meanwhile, due to r ∈ R, modeling p([r] | L) can be taken as a classification task, which requires a vector of length |R| to represent.", "Therefore, when modeling p([s, r, o] | L) by considering all possible connections, it should be equivalent to a cube of size (|L| · |T|)^2 × |R| in a 3-D space.", "As shown in Figure 1, the line segments of the cube mapped onto the x-axis, y-axis and z-axis are respectively regarded as the representations of subjects, objects and relations, that is, p([s] | L), p([o] | L) and p([r] | L).", "Similarly, the rectangles of the cube mapped onto the (x, y)-plane, (x, z)-plane and (y, z)-plane are respectively regarded as p([s, o] | L), p([s, r] | L) and p([o, r] | L).", "Further, each triple is mapped to a small cube in the space.",
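To make the geometry concrete, a toy NumPy sketch of the cube under start-and-end binary tagging (|T| = 2); all sizes and the span-marking convention are illustrative, not the paper's exact labeling scheme:

```python
import numpy as np

# Token t owns coordinates 2t (start) and 2t + 1 (end) on each axis.
L, T, R = 6, 2, 3                               # tokens, tags per token, relations
cube = np.zeros((L * T, L * T, R), dtype=np.int8)

def mark_triple(cube, subj, obj, rel, tags=2):
    """Mark a triple's small cube; for brevity only the (start, start) and
    (end, end) entity-tagging combinations are set."""
    (ss, se), (os_, oe) = subj, obj             # inclusive token spans
    cube[ss * tags, os_ * tags, rel] = 1        # (start, start) combination
    cube[se * tags + 1, oe * tags + 1, rel] = 1 # (end, end) combination

mark_triple(cube, subj=(0, 1), obj=(3, 5), rel=2)
```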
"Based on the stereoscopic representation of relational triples, we define the following operations.", "Slice, denoted as sli(·).", "As shown in Figure 2(a), when some elements (i.e., subject, object or relation) are specified, the representation space is reduced.", "The operation is like slicing the cube.", "For instance, a specific relation corresponds to a z-slice of size (|L| · |T|)^2 × 1.", "Both subject and object being specified leads to an xy-slice of size (m · |T|) × (n · |T|) × |R|, which can be seen as the intersection of an x-slice and a y-slice.", "If subject, object and relation are all specified, the representation reduces to a single small cube.", "(Table 1: The correspondence between the issues of relational triple extraction and the operations in 3-D space.)", "Projection, denoted as pro(·).", "As depicted in Figure 2(b), two types of projection are defined: cube-to-plane and plane-to-axis.", "The former is like looking at the whole cube from a certain plane.", "For example, in the projection from the cube to the (x, y)-plane, two triples with the same subject and object are indistinguishable.", "Similarly, in the projection from the cube to the (x, z)-plane, there is only subject and relation information but no object information.", "The latter is like looking at a plane from a certain axis, such as the (x, z)-plane to x-axis projection, where the subjects in different z-slices may have the same representation on the x-axis.", "Hereafter, for easy reading, the (x, z)-plane to x-axis and (y, z)-plane to y-axis projections are denoted as pro_x(·) and pro_y(·) respectively.", "The projection from the cube to the (x, y)-plane is denoted as pro_xy(·).", "The remaining cases are similar.", "Shrinkage, denoted as shr(·).", "In the cube representation, each token pair is represented by an xy-slice of size |T| × |T| × |R|.", "Such an xy-slice can reflect all possible entity-tagging combinations of a token pair.", "As described in Figure 2(c), a shrinkage over a cube only represents whether the token pair satisfies one specific entity-tagging combination, such as (start, start).", "Thus, the size of a shrinkage is |L| × |L| × |R|.", "As aforementioned, relational triple extraction faces three challenging issues: information loss (I-IL-EI or I-IL-EO), error propagation (I-EP) and ignoring the interaction between entity and relation (I-II).", "These issues can be clearly matched to the three operations in 3-D space, as shown in Table 1. sli(·) corresponds to the prediction process with strict order, and thus leads to error propagation.", "Without considering the nested entities in a text, pro_{xz/yz/z}(·) do not result in any problems, while pro_{x/y/xy}(·) is the opposite.", "Both pro_{x/y}(·) and pro_{xy}(·) lead to ignoring the interaction between entity and relation.", "Meanwhile, pro_xy(·) makes the triples with overlapped entities indistinguishable.", "The cube can be disassembled into |T| × |T| shrinkages shr(·).", "Modeling only one shr(·) will cause entity incompleteness.", "(Table 2: The analysis of previous researches from the stereoscopic perspective.)", "To get deep insights on relational triple extraction, based on the correspondence between the operations and issues, we analyze previous researches as shown in Table 2. 
Early researches (Zelenko et al., 2002; Zhou et al., 2005; Chan and Roth, 2011) adopt pipeline approaches, where the entities are recognized first and the relations for each entity pair are predicted.", "Arguing that such approaches neglect the inherent relevance between entity recognition and relation extraction, some solutions (Miwa and Bansal, 2016; Zhang et al., 2017; Takanobu et al., 2019) still extract entities and relations sequentially, but make two tasks share the same encoder.", "These methods model p ([ s, r, o ] |L ) as p ([ s ] [ o ] |L ) and p ([ s i , r, o j ] |L , sli xy ( s i , o j )) , where p ([ s ] [ o ] |L ) = p ( pro x ( pro xz ([ s, r, o ])) pro y ( pro yz ([ s, r, o ])) |L ) .", "Therefore, pipeline paradigm suffers from I-EP and I-II issues.", "MHS (Bekoulis et al., 2018) is another two-stage method.", "The model recognizes entities firstly and extracts relational triples with a multi-head selection strategy on each subject, where pro x/y ( ) and sli x ( ) lead to I-EP and I-II issues respectively.", "In the following researches on relational triple extraction, several methods with joint decoding schema are proposed.", "Specifically, NovelTagging (Zheng et al., 2017) and PA-Tagging (Dai et al., 2019) achieve joint decoding by designing a unified tagging scheme and convert relational triple extraction to an end-to-end sequence tagging problem.", "Such a tagging schema has to model p ( pro xy ([ s, r, o ]) |L ) and thus suffers from I-IL-EO and I-II issues.", "CopyRE (Zeng et al., 2018) and CopyRRL (Zeng et al., 2019) leverage sequence-to-sequence model with copy mechanism.", "GraphRel (Fu et al., 2019) introduces graph convolutional network jointly learn entities and relations.", "Despite their initial success, the three methods only model p ( shr ([ s, r, o ]) |L ) and thus suffer from I-IL-EI issue.", "Sequence generation models, CopyRE and CopyRRL, predict triples one by one and model p ( sli xyz ( s i , r i , o i )) via p ( sli z ( r i )) p ( sli xz ( s i , r i ) | sli z ( r i )) p ( sli xyz ( s i , r i , o i ) | sli xz ( s i , r i )) , which leads to I-EP issue.", "GraphRel cannot avoid I-IL-EO and I-II issues due to its utilizing pro xy ( ) .", "Recently, to address I-IL issue, CopyMTL (Zeng et al., 2020) proposes a multi-task learning framework based on CopyRE, to simultaneously predict completed entities and capture relational triples.", "However, the model still does not solve I-EP issue.", "Meanwhile, entity recognition is implemented by modeling p ([ s ] [ o ] |L ) via a standalone module, which leads to I-II issue.", "Following sequence-to-sequence schema, WDec and PNDec (Nayak and Ng, 2020) design specific decoder block which can generate triples with completed entities.", "Such models ease I-II issue, but still suffers from I-EP issue since that it models p ( sli xyz ( s i , r i , o i )) via p ( sli x ( s i )) p ( sli xy ( s i , o i ) | sli x ( s i )) p ( sli xyz ( s i , r i , o i ) | sli xy ( s i , o i )) .", "CasRel (Wei et al., 2020) regards relations as functions that map subjects to objects in a text.", "It is necessary to recognize subjects first and then objects, which leads to I-EP issue.", "To recognize subjects, p ([ s ] |L ) is modelled via p ( pro x ( pro xz ([ s, r, o ])) |L ) , where pro x ( ) leads to I-II issue.", "Att-as-Rel (Liu et al., 2020) models the triples by multi-head attention, where completed entities are recognized by modeling p ([ s ] [ o ] |L ) separately and thus there is I-II issue.", "Similarly, TPLinker (Wang et al., 
2020b) regards joint extraction as a token pair linking problem, where entity recognition is also modelled separately via p([s] ∪ [o] | L).", "To handle the above three issues simultaneously, we avoid the operations in Table 1 and instead model p([s, r, o] | L) via \sum_{i}^{|R|} [p([s, r_i] | L) + p([r_i, o] | L) + p(shr([s, r_i, o]) | L)].", "As depicted in Figure 3, the proposed StereoRel model first leverages a BERT encoder to extract the text representation of the original text.", "(Figure 3: The proposed StereoRel model.)", "Then, for each predefined relation, the text representation is transformed to its subject and object spaces.", "Based on them, three decoders are built to separately model p(pro_xz([s, r, o]) | L), p(pro_yz([s, r, o]) | L) and p(shr([s, r, o]) | L).", "To sufficiently capture the textual information, the encoder is built on a pre-trained language model, BERT (Devlin et al., 2019).", "The BERT encoder tokenizes a text L using a predefined vocabulary and generates a corresponding sequence by concatenating a [CLS] token, the tokenized text and a [SEP] token.", "The detailed steps can be found in (Devlin et al., 2019).", "The BERT encoder embeds a text L into a matrix T ∈ R^{(|L|+2) × d_b}, where d_b is the hidden size of BERT, and T_j can be seen as the word embedding of the j-th token, j ∈ [0, |L|+1].", "After this, for each relation r_i, T is transformed to a new text representation T_i ∈ R^{(|L|+2) × d_r} by: T_i = \sigma(T W_i + b_i) (1), where W_i ∈ R^{d_b × d_r} and b_i ∈ R^{1 × d_r}, i = 1, ..., |R|, are trainable parameters and \sigma(·) is a predetermined activation function.", "The subject decoder models \sum_{i}^{|R|} p([s, r_i] | L), that is, the (x, z)-plane projection p(pro_xz([s, r, o]) | L), which recognizes the subjects for each predefined relation.", "For one specific relation r_i, we transform its text representation to T_i^{sub} ∈ R^{(|L|+2) × d_e} in r_i's subject space, with d_e being the hidden size.", "The transformation is implemented by T_i^{sub} = T_i^{sub_q} + T_i^{sub_k} + T_i^{sub_b} (2), T_i^{sub_{q/k/b}} = \sigma(T_i W_i^{sub_{q/k/b}} + b_i^{sub_{q/k/b}}) (3), where T_i^{sub_q}, T_i^{sub_k} and T_i^{sub_b} are linear transformations on top of T_i.", "T_i^{sub_q} and T_i^{sub_k} will also be used by the shrinkage decoder, while T_i^{sub_b} only works for the subject decoder.", "W_i^{sub_{q/k/b}} ∈ R^{d_r × d_e} and b_i^{sub_{q/k/b}} ∈ R^{1 × d_e}, i = 1, ..., |R|, are trainable parameters.",
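A minimal PyTorch sketch of the relation-specific transformations in Eqs. (1)-(3); the class and variable names are our own, and ReLU stands in for the unspecified activation sigma(.):

```python
import torch
import torch.nn as nn

class SubjectSpace(nn.Module):
    """Project BERT output T into relation r_i's representation T_i, then into
    query/key/bias views of its subject space (Eqs. (1)-(3))."""
    def __init__(self, num_rel, d_b=768, d_r=64, d_e=32):
        super().__init__()
        self.rel = nn.ModuleList([nn.Linear(d_b, d_r) for _ in range(num_rel)])
        self.q = nn.ModuleList([nn.Linear(d_r, d_e) for _ in range(num_rel)])
        self.k = nn.ModuleList([nn.Linear(d_r, d_e) for _ in range(num_rel)])
        self.b = nn.ModuleList([nn.Linear(d_r, d_e) for _ in range(num_rel)])

    def forward(self, T, i):
        # T: (seq_len, d_b) token representations from BERT
        T_i = torch.relu(self.rel[i](T))   # Eq. (1)
        q = torch.relu(self.q[i](T_i))     # Eq. (3), query view (shared with shrinkage)
        k = torch.relu(self.k[i](T_i))     # Eq. (3), key view (shared with shrinkage)
        b = torch.relu(self.b[i](T_i))     # Eq. (3), bias view (subject decoder only)
        return q + k + b, q, k             # Eq. (2): T_i^sub, plus q/k for later use
```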
representation in object space, T obji R |R| ( |L| +2) d e , is obtained in object decoder as: T obji = T obj q i + T obj k i + T obj b i , (6) T obj q/k/b i = ( T i W obj q/k/b i + b obj q/k/b i ) , (7) where T obj q i , T obj k i , T obj b i are linear transformations on top of T i .", "{ W obj q i } |R| i =1 , { W obj k i } |R| i =1 , { W obj b i } |R| i =1 R d r d e , { b obj q i } |R| i =1 , { b obj k i } |R| i =1 , { b obj b i } |R| i =1 R 1 d e are trainable parameters.", "In like wise, objects of the i -th predefined relation are tagged as y obji = [ y obji 1 , y obji 2 , ..., y obji |L| ] via another CRF layer as: P( y obji |L ) = (cid:81) |L| j =1 j ( y obji ( j 1) , y objij |L ) (cid:80) y (cid:48) Y (cid:81) |L| j =1 j ( y (cid:48) j 1 , y (cid:48) j |L ) , (8) j ( y, y |L ) = exp ( T obj w crf obj y, y + b crf obj y, y ) , (9) where w crf obj y, y and b crf obj y, y are trainable parameters.", "To extract the correspondences between subjects and objects, shrinkage decoder is leveraged to model (cid:80) |R| i p ( shr ([ s, r i , o ]) |L ) , where each element of shr ( ) denotes whether the corresponding token pair is one specific position of a (subject, object) pair, such as (start, start) or (end, end).", "To model this, a pair-wise classification function f is established as: p shrijj (cid:48) = f ( T subij , T obj ij (cid:48) ) , (10) indicating the probability that j -token and j (cid:48) -token is the specifc position of a (subject, object) pair, which satisfies the i -th predefined relation.", "We design the function as follows: p shrijj (cid:48) = ( p sub obj ijj (cid:48) , p obj sub ijj (cid:48) ) , (11) p sub obj ijj (cid:48) = softmax j ( ( T sub q ij , T obj k ij (cid:48) )) , (12) p obj sub ijj (cid:48) = softmax j (cid:48) ( ( T obj q ij (cid:48) , T sub k ij )) , (13) where ( ) is implemented by dot product or neural network to provide an initial probability.", "p sub obj ijj (cid:48) and p obj sub ijj (cid:48) respectively indicate the probability distributions for a subject searching for its objects and an object searching for its subjects, which are integrated via a predetermined function ( ) , such as minimum, maximum and multiplication.", "Subject and object decoders are learned by text-level log-likelihood loss, while shrinkage decoder is learned by token-level binary cross-entropy loss.", "Thus, the unified model is learned by a combined loss function L total = L sub + L obj + L shr , where L sub = |R| (cid:88) i log (P( y subi |L )) , (14) L obj = |R| (cid:88) i log (P( y obji |L )) , (15) L shr = |R| (cid:88) i |L| (cid:88) j |L| (cid:88) j (cid:48) (cid:104) p shrijj (cid:48) log ( p shrijj (cid:48) )+ (1 p shrijj (cid:48) ) log (1 p shrijj (cid:48) ) (cid:105) .", "(16)", "The relational triples can be inferred based on the three decoders.", "Concretely, for each predefined relation r i , the subjects and objects can be obtained by y subi and y obji respectively.", "For the subject s ij and object o ij (cid:48) with ( j -th token, j (cid:48) -th token) satisfying the specific position, if p shrijj (cid:48) is greater than a predetermined threshold , ( s ij , r i , o ij (cid:48) ) will be extracted as a relational triple.", "To evaluate the proposed StereoRel model, we conduct a performance comparison on five public datasets in this section.", "Evaluation Metrics and Datasets .", "Generally, the performance on relational triple extraction is evaluated by precision (Pre.), recall (Rec.) 
"To evaluate the proposed StereoRel model, we conduct a performance comparison on five public datasets in this section.", "Evaluation Metrics and Datasets.", "Generally, performance on relational triple extraction is evaluated by precision (Pre.), recall (Rec.) and F1-score (F1), where a triple is regarded as correct if its subject, relation and object are all matched.", "Notably, previous works use two evaluation modes: Partial Match and Exact Match.", "The former holds that a subject (or object) is correct as long as its head or tail is correct, while the latter requires it to be recognized completely.", "To properly compare our model with various baselines, benchmark datasets are selected for the two modes separately.", "Concretely, we utilize the NYT (Riedel et al., 2010), WebNLG (Gardent et al., 2017), NYT10 (Takanobu et al., 2019) and NYT11 (Takanobu et al., 2019) datasets for Partial Match, and the NYT (Riedel et al., 2010) and Wiki-KBP (Dai et al., 2019) datasets for Exact Match.", "The details are shown in Table 3.", "The validation splits are the same as in previous research.", "Implementation Details.", "For a fair comparison, we utilize the cased BERT-base model (https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip) in our experiments, the same as CasRel (Wei et al., 2020) and TPLinker (Wang et al., 2020b), and thus d_b = 768.", "The Adam optimizer (Kingma and Ba, 2015) is used to train the proposed method with an initial learning rate of 1e-5.", "The hidden sizes d_r and d_e are set to 64 and 32.", "The threshold \theta is tuned for each relation and determined on the validation set.", "\sigma(\cdot) is set to the relu activation function, \phi(\cdot) is set to the dot product, and \psi(\cdot) is set to the multiplication function.", "Table 4: Performance comparison by Partial Match on NYT.", "We employ recent advanced methods as baselines, mainly including the models analyzed in Table 2.", "Tables 4, 5, 6 and 7 report the results of our method against the baselines in the Partial Match evaluation mode, and Tables 8 and 9 report the results for Exact Match.", "The models before CasRel do not employ a BERT encoder, while the rest do.", "As noted in Table 2, existing models do not handle the three challenging issues simultaneously, while our proposed StereoRel model does.", "Among the baselines, Att-as-Rel is the first work to extract triples for each predefined relation, avoiding the I-IL and I-EP issues, and thus achieves a large performance improvement over previous methods.", "Built on a BERT encoder, CasRel and TPLinker further improve performance on relational triple extraction.", "Table 6: Performance comparison by Partial Match on NYT10.", "Due to having no I-EP issue, TPLinker outperforms CasRel.", "However, TPLinker still suffers from the I-II issue.", "Our proposed StereoRel model further addresses it and achieves better performance.", "From the results, compared with the second-best baseline, the performance improvements of the existing best baseline on the five datasets are 2.5%, 0.1%, 1.4%, 0.1% and 1.5% respectively in terms of F1-score.", "Our model obtains performance gains of about 0.3%, 0.2%, 0.2%, 0.7% and 0.6% over the best baseline.", "It can be seen that the improvement is satisfactory.", "For relational triple extraction from the stereoscopic perspective, the following two aspects are worth discussing.", "The first one is about the learning strategy.", "Most previous studies, including ours, employ a binary cross-entropy loss to learn the models.", "However, since the label space of relational triples in 3-D space is huge, binary cross-entropy is usable but not necessarily optimal.",
"Meanwhile, cross-entropy is a permutation-sensitive loss function (Sui et al., 2020), which is incompatible with generative models (Zeng et al., 2018, 2019, 2020), since it requires predetermining the extraction order of multiple triples.", "To address this, CGT (Ye et al., 2020) incorporates a contrastive learning strategy, and SPN (Sui et al., 2020) transforms relational triple extraction into a set prediction problem learned with a bipartite matching loss.", "Table 8: Performance comparison by Exact Match on NYT.", "These ideas may be incorporated in the future.", "The second one is recognizing nested entities in relational triples.", "Nested entities are entities between which there are substring relationships, like U.N. being a substring of U.N. Ambassador.", "Such entities definitely affect the overall performance in the Exact Match mode.", "Taking the NYT dataset as an example, about 2.5% of its sentences contain nested entities.", "Nested entity recognition has been widely studied (Li et al.; Wang et al., 2020a), but most studies on relational triple extraction have not considered it.", "TPLinker (Wang et al., 2020b) provides a solution for recognizing nested entities via token pair tagging, but it ignores the interaction between entity and relation.", "Although the StereoRel model does not focus on nested entities, it is not much affected by them.", "The reason is that StereoRel recognizes subjects and objects for each predefined relation separately.", "In this case, only 0.06% of the nested entities in NYT cannot be marked.", "Nevertheless, modeling nested entities from the stereoscopic perspective is worth exploring in the future.", "Relational triple extraction is critical to understanding massive text corpora.", "However, existing studies face some challenging issues, including information loss, error propagation and ignoring the interaction between entity and relation.", "In this paper, aiming to handle the above issues simultaneously, we provide a revealing insight into relational triple extraction from a stereoscopic perspective, which rationalizes the occurrence of these issues and exposes the shortcomings of existing methods.", "Further, we propose a novel model leveraging three decoders to respectively extract subjects, objects and their correspondences for each predefined relation.", "Extensive experiments are conducted on five public datasets, demonstrating that the proposed model outperforms the recent advanced baselines.", "This work was supported in part by the National Natural Science Foundation of China under Grants 61822601 and 61773050; the Beijing Natural Science Foundation under Grant Z180006; the Open Project Program Foundation of the Key Laboratory of Opto-Electronics Information Processing, Chinese Academy of Sciences (OEIP-O-202004); and the National Key Research and Development Project No. 2019YFB1405202." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other" ]
[ "Transformer architecture has become the defacto model for many machine learning tasks from natural language processing and computer vision.", "As such, improving its computational efficiency becomes paramount.", "One of the major computational inefficiency of Transformer-based models is that they spend the identical amount of computation throughout all layers.", "Prior works have proposed to augment the Transformer model with the capability of skimming tokens to improve its computational efficiency.", "However, they suffer from not having effectual and end-to-end optimization of the discrete skimming predictor.", "To address the above limitations, we propose the Transkimmer architecture, which learns to identify hidden state tokens that are not required by each layer.", "The skimmed tokens are then forwarded directly to the final output, thus reducing the computation of the successive layers.", "The key idea in Transkimmer is to add a parameterized predictor before each layer that learns to make the skimming decision.", "We also propose to adopt reparameterization trick and add skim loss for the end-to-end training of Transkimmer.", "Transkimmer achieves 10 .", "97 average speedup on GLUE benchmark compared with vanilla BERT base baseline with less than 1% accuracy degradation.", "The Transformer model (Vaswani et al., 2017) has pushed the accuracy of various NLP applications to a new stage by introducing the multi-head attention (MHA) mechanism (Lin et al., 2017).", "Further, the BERT (Devlin et al., 2019) model advances its performances by introducing self-supervised pre-training, and has reached the state-of-the-art accuracy on many NLP tasks.", "Compared to the recurrent fashion models, e.g. RNN (Rumelhart et al., 1986), LSTM (Hochreiter and Schmidhuber, 1997), the Transformer model leverages the above attention mechanism to process Layer 1 Layer 2 Layer 3 Transkimmer TransformerLayers [CLS] It is a goodfilm .", "all the input sequence.", "By doing so, extremely large scale and long span models are enabled, resulting in a huge performance leap in sequence processing tasks.", "However, the computation complexity of the attention mechanism is O ( N 2 ) with the input length of N , which leads to the high computation demand of the Transformer model.", "Some prior works (Goyal et al., 2020; Kim and Cho, 2021; Kim et al., 2021; Ye et al., 2021) explore the opportunity on the dynamic reduction of input sequence length to improve the Transformer's computational efficiency.", "Its intuition is similar to the human-being's reading comprehension capability that does not read all words equally.", "Instead, some words are focused with more interest while others are skimmed.", "For Transformer models, this means adopting dynamic computation budget for different input tokens according to their contents.", "To excavate the efficiency from this insight, we propose to append a skim predictor module to the Transformer layer to conduct fine-grained dynamic token pruning as shown in Fig.", "1. 
"When processed by the Transformer layers, the sequence of token hidden state embeddings is pruned at each layer with reference to its current state.", "Less relevant tokens are skimmed without further computation and forwarded directly to the final output.", "Only the significant tokens continue on to successive layers for further processing.", "This improves the Transformer model's inference latency by reducing the input tensors along the sequence length dimension.", "However, the optimization problem of such skim decision prediction is non-trivial.", "To conduct pruning of dynamic tensors, non-differentiable discrete skim decisions are applied.", "Prior works have proposed to use soft-masking approximation or reinforcement learning to resolve this, which leads to approximation mismatch or non-unified optimization.", "Transkimmer instead adopts the reparameterization technique (Jang et al., 2017) to estimate the gradient for the skim prediction.", "As such, we achieve an end-to-end joint optimization objective and training paradigm.", "By jointly training the downstream task and the skim objective, the Transformer learns to selectively skim input contents.", "In our evaluation, we show that Transkimmer outperforms all prior input reduction works in inference speedup gain and model accuracy.", "Specifically, BERT-base is accelerated by 10.97x on the GLUE benchmark, and by 2.81x without counting the padding tokens.", "Moreover, we demonstrate that the method proposed by Transkimmer is generally applicable to pre-trained language models and compression methods, using the RoBERTa, DistilBERT and ALBERT models.", "This paper makes the following three contributions.", "We propose the Transkimmer model, which accelerates Transformer inference with dynamic token skimming.", "We further propose an end-to-end joint optimization method that trains the skim strategy together with the downstream objective.", "We evaluate the proposed method on various datasets and backbone models to demonstrate its generality.", "Recurrent Models with Skimming.", "The idea of skipping or skimming irrelevant sections or tokens of the input sequence has been studied in NLP models, especially recurrent neural networks (RNN) (Rumelhart et al., 1986) and long short-term memory networks (LSTM) (Hochreiter and Schmidhuber, 1997).", "When processing recurrently, skimming the computation of a token simply means jumping over the current step and keeping the hidden states unchanged.", "LSTM-Jump (Yu et al., 2017), Skim-RNN (Seo et al., 2018), Structural-Jump-LSTM (Hansen et al., 2019) and Skip-RNN (Campos et al., 2018) adopt this skimming design for acceleration in recurrent models.", "Transformer with Input Reduction.", "Unlike the sequential processing of recurrent models, the Transformer model calculates all input sequence tokens in parallel.", "As such, skimming can be regarded as a reduction of the hidden states tensor along the sequence length dimension.", "Universal Transformer (Dehghani et al., 2019) proposes a dynamic halting mechanism that determines the refinement steps for each token.", "DeFormer (Cao et al., 2020) proposes a dual-tower structure that processes the question and context parts separately at shallow layers, specifically for the QA task.", "The context branch is preprocessed offline and pruned at shallow layers.", "Also dedicated to QA tasks, Block-Skim (Guan et al., 2021) proposes to predict and skim irrelevant context blocks by analyzing the attention weight patterns.", "Progressive Growth (Gu et al., 2021) randomly drops a portion of input tokens during training to achieve better pre-training efficiency.",
"Another track of research performs such input token selection dynamically during inference, which is the closest to our idea.", "POWER-BERT (Goyal et al., 2020) extracts the input sequence at the token level while processing.", "During the fine-tuning process for downstream tasks, Goyal et al. propose a soft-extraction layer to train the model jointly.", "Length-Adaptive Transformer (Kim and Cho, 2021) improves on it by forwarding the dropped tokens to the final downstream classifier as a recovery mechanism.", "Learned Token Pruning (Kim et al., 2021) improves POWER-BERT by making its pre-defined sparsity ratio a parameterized threshold.", "TR-BERT (Ye et al., 2021) adopts reinforcement learning to independently optimize a policy network that drops tokens.", "Comparisons to these works are discussed in detail in Sec. 3.", "Moreover, SpAttn (Wang et al., 2021) augments the POWER-BERT design with a domain-specific hardware design for better acceleration and proposes to make skimming decisions with attention values from all layers.", "Early Exit.", "Early exit (Panda et al., 2016; Teerapittayanon et al., 2016) is another method for executing a neural network with input-dependent computational complexity.", "The idea is to halt execution at some early exit during model processing.", "When processing sequential inputs, early exit can be viewed as a coarse-grained case of input skimming.", "With the hard constraint that all input tokens are skimmed at the same time, early exit methods lead to worse accuracy and performance results compared to input skimming methods.", "However, the early exit method is also generally applicable to other domains, such as convolutional neural networks (CNN).", "DeeBERT (Xin et al., 2020), PABEE (Zhou et al., 2020) and FastBERT (Liu et al., 2020) are some recent works adopting early exit in Transformer models.", "Magic Pyramid (He et al., 2021) proposes to combine the early exit and input skimming ideas.", "Tokens are skimmed at a fine granularity following the POWER-BERT design, and the whole input sequence is halted at some early exit.", "Efficient Transformer.", "There are also many efforts on designing efficient Transformers (Zhou et al., 2020; Wu et al., 2020; Tay et al., 2020).", "For example, researchers have applied well-studied compression methods to Transformers, such as pruning (Guo et al.), quantization (Wang and Zhang, 2020; Guo et al., 2022), distillation (Sanh et al., 2019) and weight sharing.", "Other efforts focus on dedicated efficient attention mechanisms, considering attention's quadratic complexity in sequence length (Kitaev et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020), or on efficient feed-forward network (FFN) designs, given the FFN's dominant share of the Transformer's complexity (Dong et al., 2021).", "Transkimmer is orthogonal to these techniques, reducing along the input dimension.", "In this section, we discuss the challenges of the dynamic input skimming idea in detail.", "Moreover, we compare techniques and design decisions from prior works, summarized in Table 1.",
"3.1 Optimization Method The first challenge of input skimming is optimization with discrete skimming decisions.", "Specifically, the decision to prune the hidden state tensors (i.e., to reduce their sequence length) is a binary prediction.",

Table 1: Summary of prior token reduction works and their design choices, including POWER-BERT, Length-Adaptive Transformer (LAT), Learned Token Pruning (LTP) and TR-BERT.

Model                           | Optimization   | Input     | Discard | Strategy
POWER-BERT (Goyal et al., 2020) | Soft-Masking   | Attention | Discard | Searched
LAT (Kim and Cho, 2021)         | Soft-Masking   | Attention | Forward | Searched
LTP (Kim et al., 2021)          | Soft-Masking   | Attention | Discard | Learned
TR-BERT (Ye et al., 2021)       | RL             | Embedding | Forward | Searched
Transkimmer (ours)              | Reparameterize | Embedding | Forward | Learned

"As such, the skim prediction model is non-differentiable and cannot be directly optimized by gradient back-propagation.", "Prior works handle the discrete binary skimming decision with a set of complicated training techniques, which we categorize in Table 1.", "Soft-Masking.", "Some works (Goyal et al., 2020; Kim and Cho, 2021; Kim et al., 2021) use a soft-masking training trick that predicts a continuous value as the skimming decision.", "During training, the predicted value is multiplied with the hidden state embedding vectors, so no actual pruning happens.", "In the inference phase, this continuous skimming prediction is binarized by a threshold-based step function.", "The threshold value is pre-defined or determined through a hyper-parameter search.", "Obviously, there exists a training-inference paradigm mismatch, where actual skimming only happens at inference time.", "Such a mismatch leads to significant accuracy degradation.", "Reinforcement Learning.", "TR-BERT (Ye et al., 2021) proposes to use reinforcement learning (RL) to solve the discrete skimming decision problem.", "It uses a separate policy network as the skimming predictor, and the backbone Transformer model is considered the value network.", "At first, the backbone Transformer is fine-tuned separately.", "It then updates the skimming policy network using the RL algorithm.", "This multi-step training paradigm is tedious.", "Moreover, training the backbone Transformer and the skimming policy network separately is sub-optimal compared to a joint optimization paradigm.", "In addition, the large search space of such an RL objective makes it difficult to converge, especially on small downstream datasets.", "Reparameterization.", "In this work, we propose to use the reparameterization technique to address the discrete skimming decision challenge.", "Its core idea is to sample the gradient for backward propagation during training; we describe the details in Sec. 4.", "The advantage of our method is that it enables joint optimization of the skim predictor and the backbone Transformer model, and therefore achieves the optimal solution.", "For example, we will later demonstrate in Fig. 4 that different tasks and datasets prefer different layer-wise skimming strategies, which are learned by our method.", "We will further explain the results in Sec. 5.4.",
"In our work, we also jointly consider other design choices regarding skimming optimization, including the choice of input to the skimming module and how to deal with the skimmed input.", "We first explain the choices made by prior works, and then explain the choice of our method.", "Strategy.", "For the skimming optimization methods described above, there can be different strategies regarding the implementation details.", "Generally, a skimming strategy can be categorized as a search-based or a learning-based approach, as described in Table 1.", "However, when applied to various downstream NLP tasks and datasets, the dynamic skimming scheme prefers different layer-wise strategies, as mentioned above.", "This layer-wise skimming characteristic makes the search-based approach neither scalable nor generally applicable.", "In contrast, our method enables joint training of the skimming strategy and the downstream task, which leads to better skimming decisions with respect to both efficiency and accuracy.", "LTP is the only prior work adopting a learning-based method; however, it uses the soft-masking approach and suffers from the training-inference mismatch.", "Input for Skimming.", "POWER-BERT, LAT and LTP treat the attention weight value as an importance score and use it as the criterion for making the skimming decision.", "Compared to this value-based method (Guan et al., 2020), TR-BERT uses the hidden state embeddings as input features.", "In our work, we use the hidden state embeddings because they enclose contextual information about the corresponding input token.", "Our work shows that, with joint training of the skimming module and the backbone Transformer model, the embeddings also learn to carry features for the skimming prediction.", "Skimming Tokens.", "For the tokens pruned dynamically by the skimming decision during processing, it is natural to remove them from all successive layers.", "However, LAT and TR-BERT propose to forward such tokens to the final output of the Transformer encoder, which keeps the dimension of the Transformer output unchanged.", "Our work adopts this forward-based design because it is more friendly to the Transformer decoder module on downstream tasks.", "To predict which tokens to prune, we append an extra prediction module before each layer, as shown in Figure 2.",
"This prediction module outputs a skimming mask M, which is used to gather the hidden state embeddings H along the sequence length dimension.", "The pruned embeddings are then fed to the Transformer layer as its input.", "In the skim mask, an output of 1 denotes a remaining token and 0 denotes a pruned token.", "The gathering operation selects entries from the input tensor with the provided mask.", "By optimizing this stand-alone skim module, syntactically redundant and semantically irrelevant tokens are skimmed and pruned.", "The proposed skim predictor module is a multi-layer perceptron (MLP) composed of 2 linear layers with a layer normalization operation (Ba et al., 2016) and a GeLU activation (Hendrycks and Gimpel, 2016), followed by an activation function with discrete output that yields the skim decision.", "This skim predictor introduces extra model parameters and computation overhead.", "However, both are very small compared to the vanilla Transformer model, about 7.9% and 6.5% respectively.", "We demonstrate later that the computation overhead of the skim module is much smaller than the benefit brought by the reduction of the input tensor through skimming.", "For the tokens pruned by the skim module at each layer, we forward these pruned hidden state embeddings to the last Transformer layer.", "As such, the final output of the whole Transformer model is composed of the token embeddings skimmed at all layers together with the ones processed by all layers without being skimmed.", "This output is used by the classification layers on various downstream tasks.", "This makes the skimming operation also compatible with token classification tasks such as extractive question answering (QA) and named entity recognition (NER).", "It also restores the once-abandoned information for downstream tasks.", "In the above discussion, we have described how Transkimmer can easily augment a backbone model without modification to its existing structure.", "Furthermore, Transkimmer is capable of utilizing pre-trained model parameters and fine-tuning the Transkimmer-augmented Transformer-based models on downstream tasks.", "With an extra skim loss appended to the optimization objective, this fine-tuning process is also performed end-to-end without changing the original paradigm.", "Skim Attention.", "During training, Transkimmer does not prune the hidden state tensors as it does at inference time, because gathering and pruning a portion of tokens prevents the back-propagation of their gradients.", "The absence of error signals from negative samples interferes with the convergence of the Transkimmer model.", "Therefore, we propose skim attention, which masks the reduced tokens during training instead of actually pruning them.", "The attention weights toward the skimmed tokens are set to 0, making them unreachable by the other tokens.", "By doing so, the remaining tokens have computational values identical to those under actual pruning.", "And the gradient signal is passed to the skim predictor module through the skim attention multiplication.",
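A minimal sketch of the skim-attention trick just described: rather than physically pruning tokens during training, keys of skimmed tokens are masked out before the softmax, so the remaining tokens compute the same values as under actual pruning while gradients still reach the skim predictor. The shapes and the additive masking are assumptions of this sketch, and it presumes at least one token (e.g., [CLS]) is never skimmed.

```python
import torch

def skim_attention(q, k, v, skim_mask):
    """q, k, v: (batch, heads, seq, d_head); skim_mask: (batch, seq), 1 = keep, 0 = skim."""
    scores = q @ k.transpose(-1, -2) / k.size(-1) ** 0.5
    keep = skim_mask[:, None, None, :].bool()          # broadcast over heads and queries
    scores = scores.masked_fill(~keep, float("-inf"))  # skimmed keys receive zero weight
    return torch.softmax(scores, dim=-1) @ v
```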
"Gumbel-Softmax.", "Following the discussion in Sec. 3.1, the output decision mask of the skim predictor is discrete and non-differentiable.", "To overcome this inability to back-propagate, we use the reparameterization method (Jang et al., 2017) to sample the discrete skim prediction from the output probability distribution \pi_i of the MLP.", "The gradient of the non-differentiable activation function is estimated from the Gumbel-Softmax distribution during back-propagation: M_{ij} = Activation(\pi_{ij}) = GumbelSoftmax(\pi_{ij}) = exp((log(\pi_{ij}) + g_{ij}) / \tau) / \sum_{k=0}^{1} exp((log(\pi_{ik}) + g_{ik}) / \tau), for j = 0, 1. (5)", "The g_{ij} are independently and identically sampled from the Gumbel(0, 1) distribution.", "\tau is the temperature hyper-parameter controlling the sharpness of the one-hot prediction distribution.", "We take \tau = 0.1 for all experiments.", "To achieve a better token sparsification ratio, we further add a skim loss term to the overall optimization objective: Loss_skim = (1 / (L - 1)) \sum_{i=1}^{L-1} sum(M_i) / len(M_i). (6)", "The skim loss is essentially the ratio of tokens remaining in each layer, and thus reflects the computation complexity speedup.", "By decreasing this objective, more tokens are forced to be pruned during processing.", "To balance it with the original downstream task loss, we use a harmony coefficient \lambda between the two loss terms.", "As such, the total loss used for training is formulated as Loss_total = Loss_downstream + \lambda Loss_skim. (7)", "With the settings above, the Transkimmer model is trained end-to-end without any change to its original training paradigm.", "Unbalanced Initialization.", "Another obstacle is that skimming tokens during the training process makes training much less stable and decreases accuracy.", "With the pre-trained language modeling parameters in place, the skim predictor module is randomly initialized and predicts random decisions.", "This induces a significant processing mismatch in the backbone Transformer model, where all tokens used to be accessible.", "Consequently, the randomly initialized skim predictor makes training unstable and divergent.", "We propose an unbalanced initialization technique to solve this issue.", "The idea is to force positive (keep) predictions at first and let the model learn to skim gradually.", "Generally, parameters are initialized from a zero-mean distribution N(0, \sigma).", "We propose to initialize the bias vector of the last linear layer in the skim predictor MLP with an unbalanced bias, b_i ~ N(\mu_i, \sigma), (8) where i indexes the bias for prediction 1 or 0.", "Consequently, the skim predictor tends to retain tokens rather than skim them before it has learned anything.", "The mean value \mu of the unbalanced distribution is set to 5 for all experiments.",
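The following sketch puts these pieces together: a two-layer MLP predictor, hard Gumbel-Softmax decisions with straight-through gradients, the skim loss of Eq. (6), and the unbalanced bias initialization. The temperature 0.1 and the bias mean 5 follow the reported settings; the exact layer sizes and layout are assumptions of this sketch, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkimPredictor(nn.Module):
    def __init__(self, d_model, tau=0.1, init_bias=5.0):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, 2),   # logits for [skim = 0, keep = 1]
        )
        self.tau = tau
        # Unbalanced init: bias the "keep" logit so early training retains all tokens.
        nn.init.constant_(self.mlp[-1].bias, 0.0)
        with torch.no_grad():
            self.mlp[-1].bias[1] = init_bias

    def forward(self, hidden):  # hidden: (batch, seq, d_model)
        logits = self.mlp(hidden)
        # hard=True yields one-hot decisions forward and soft gradients backward.
        mask = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 1]
        return mask             # (batch, seq), 1 = keep, 0 = skim

def skim_loss(masks):
    # Eq. (6): mean ratio of kept tokens across the per-layer masks.
    return torch.stack([m.mean() for m in masks]).mean()
```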
"Datasets.", "We evaluate the proposed Transkimmer method on various datasets.", "We use the GLUE benchmark (Wang et al., 2019), which includes 9 classification/regression datasets, the extractive question answering dataset SQuAD-v2.0, and the sequence classification datasets 20News (Lang, 1995), YELP (Zhang et al., 2015) and IMDB (Maas et al., 2011).", "These datasets are all publicly accessible, and a summary is shown in Table 2.", "The diversity of tasks and text contexts demonstrates the general applicability of the proposed method.", "Models.", "We follow the setting of the BERT model, using the Transformer encoder structure and a linear classification layer for all datasets.", "We evaluate the base setting with 12 heads and 12 layers, as in prior work (Devlin et al., 2019).", "We implement Transkimmer upon the BERT and RoBERTa pre-trained language models on downstream tasks.", "Baselines.", "We compare our work to prior token reduction works, including POWER-BERT (Goyal et al., 2020), Length-Adaptive Transformer (LA-Transformer) (Kim and Cho, 2021), Learned Token Pruning (LTP) (Kim et al., 2021), DeFormer (Cao et al., 2020) and TR-BERT (Ye et al., 2021).", "We also compare our method with the model compression methods of knowledge distillation and weight sharing.", "Knowledge distillation uses a teacher model to transfer its knowledge to a smaller student model.", "Here we adopt the DistilBERT (Sanh et al., 2019) setting, distilling a 6-layer model from the BERT-base model.", "By sharing weight parameters among layers, the number of weight parameters is reduced.", "Note that weight sharing does not impact the computation FLOPs (floating-point operations).", "We evaluate Transkimmer on ALBERT (Lan et al., 2020), which shares weight parameters among all layers.", "To show that the token reduction method is compatible with these model compression methods, we further implement the Transkimmer method on top of these works to demonstrate their combined effect.", "Besides, DeeBERT (Xin et al., 2020) is a Transformer early exit baseline, which can be regarded as coarse-grained input skimming.", "Padding.", "When processing batched input samples, Transformer models perform a padding operation on the input sequences to align the input lengths.", "Sequences are appended with a special padding token [PAD] up to a predefined sequence length for the convenience of subsequent computation.", "This is a trivial setting for general evaluation, but it can lead to pseudo speedup for token reduction works, because the padded tokens can be pruned without any prediction.", "Among prior works, there are three evaluation settings with respect to padding: padding to a fixed sequence length, padding to the mini-batch maximum length, and no padding (denoted as Sequence, Batch and No in Figs. 3 and 4).", "We indicate the padding methods of prior works and evaluate Transkimmer under different padding settings for a fair comparison.", "The speedup of the padding-to-mini-batch-maximum-length setting depends on the batch size and the processing order of input samples, so it is difficult to make a direct comparison under this setting.", "However, it can be estimated with padding to a fixed sequence length as the upper bound and no padding as the lower bound.", "The sequence length for each dataset is determined following prior works' settings (Goyal et al., 2020; Kim et al., 2021).", "We measure inference FLOPs as a general measurement of model computational complexity across all platforms.", "We use the TorchProfile tool to calculate the FLOPs for each model.",
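For illustration, counting FLOPs with the torchprofile package mentioned above might look as follows; torchprofile reports MACs, with FLOPs approximately 2 x MACs. The model choice and dummy input here are placeholders, not the paper's exact measurement script.

```python
import torch
from torchprofile import profile_macs
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased").eval()
dummy_input = torch.randint(0, 28996, (1, 128))  # (batch, seq_len) token ids
macs = profile_macs(model, dummy_input)
print(f"MACs: {macs / 1e9:.2f} G, approx. FLOPs: {2 * macs / 1e9:.2f} G")
```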
"Training Setting.", "We implement the proposed method based on the open-source library from Wolf et al. (2020).", "For each baseline model, we use the released pre-trained checkpoints.", "We follow the training settings used by Devlin et al. (2019) and Liu et al. (2019) to perform fine-tuning on the above datasets.", "We perform all reported experiments with random seed 42.", "We use four V100 GPUs for the training experiments.", "The harmony coefficient \lambda is determined by a hyper-parameter grid search on a development set consisting of 20% of the data randomly picked from the training set.", "The search space is from 0.1 to 1 with a step of 0.1.", "We show the overall results on several datasets and present our observations.", "Table 3 reports the accuracy and speedup evaluated on the GLUE benchmark.", "Table 4 further reports the results on other datasets with longer inputs.", "Comparison to the vanilla model baseline.", "Generally, Transkimmer achieves considerable speedup over the vanilla models with minor accuracy degradation, less than 1% in nearly all cases.", "The average speedup is 2.81x on the GLUE benchmark and over 2x on the other datasets.", "This demonstrates the inference efficiency improvement of the Transkimmer input reduction method.", "We also evaluate Transkimmer with the RoBERTa model as the backbone and reach a 3.24x average speedup on the GLUE benchmark.", "This result further shows the general applicability of Transkimmer across different Transformer-based pre-trained language models.", "Among all the datasets we evaluated, Transkimmer tends to achieve better acceleration ratios on the easier ones.", "For example, sequence classification tasks like QQP and STS-B are better accelerated than QA or NLI datasets.", "We suggest that the Transformer backbone is able to process the information at shallower layers and skim the redundant part earlier.", "This is also demonstrated in the post-hoc analysis in Sec. 5.4.", "Comparison to prior input reduction works.", "As shown in Table 3, Transkimmer outperforms all the input reduction methods by a margin on the GLUE benchmark.", "To make a fair comparison, we evaluate Transkimmer with two padding settings: padding to a fixed sequence length, or no padding.", "In most cases, Transkimmer has better accuracy and a higher speedup ratio at the same time.", "When taking the special padding token into account, Transkimmer is able to accelerate the BERT-base model by 10.97x on the GLUE benchmark.", "Figure 4: Layer-wise skim strategies analysis of datasets from the GLUE benchmark.", "Transkimmer also outperforms the other methods on the tasks shown in Table 4.", "TR-BERT has the closest performance to Transkimmer, but with a much more complicated RL paradigm and a larger search space.", "Comparison to model compression methods.", "The comparison to the two model compression methods is shown in Table 3.", "Transkimmer outperforms the knowledge distillation and weight sharing baselines by a margin.", "Besides, the dynamic skimming idea itself is orthogonal to these existing model compression methods.", "To elaborate, we further adopt the proposed Transkimmer method on the DistilBERT and ALBERT models.", "With the proposed end-to-end training objective, Transkimmer is easily added to these methods.", "There is also no need to change the original training process.", "The result shows that the Transkimmer method further accelerates the inference of compressed models with nearly no extra accuracy degradation.",
"Figure 3 demonstrates the accuracy-performance trade-off analysis obtained by tuning the harmony coefficient \lambda.", "We show the results on the MRPC and SQuAD-v2.0 datasets to give comparisons with different baselines.", "It is shown that Transkimmer achieves a better accuracy-to-speedup Pareto curve compared to prior works.", "Transkimmer is able to provide better acceleration gains with less accuracy degradation.", "Notably, Transkimmer achieves a 1.5x speedup without any accuracy loss.", "This result validates the design decisions analyzed in the input reduction design space.", "Skim Strategy.", "Figure 4 shows the number of tokens remaining for processing at each Transformer layer.", "The normalized area under each curve is a rough approximation of the speedup ratio with respect to token numbers.", "Through end-to-end optimization, Transkimmer learns significantly different strategies for different tasks.", "On the WNLI dataset, over 90% of tokens are pruned within the first 3 layers, which guarantees a high acceleration gain.", "The steep cliff at layer 7 on COLA indicates a large portion of skimming at this particular position.", "We suggest that this is because the processing of contextual information is sufficient for the skimming decision at this specific layer.", "Post-Hoc Case Study.", "Moreover, several post-hoc case studies are presented in Table 5.", "In the SST-2 sentiment analysis example, the definite articles and apostrophes are discarded at the beginning.", "All words are encoded into contextual hidden state embeddings and gradually discarded, except for a few significant keywords.", "Only the special token [CLS] is fully processed in this example for the final sentiment classification.", "In contrast, in the token classification example from the SQuAD dataset, all tokens are given to the downstream classifier to predict the answer position.", "The answer tokens are processed by all Transformer layers.", "Similarly, the question part is also kept, with its tokens containing enough information.", "Another detail worth mentioning is that we use subword tokenization for the SQuAD dataset.", "As such, subword tokens of the same word might be discarded at different layers.", "For instance, the word Francia is tokenized into the two subword tokens fran- and -cia, which are pruned at layers 4 and 6 respectively.", "Input skimming, or dynamic input reduction, is an emerging Transformer model acceleration method studied by many recent works.", "This idea utilizes the semantic structure of language and the syntactic information of the input context for inference acceleration.", "Compared to static model weight compression methods, input skimming explores the redundancy in the input and hidden state tensors.", "As such, it is orthogonal to and compatible with those model compression algorithms, owing to its dynamic nature.", "In this work, we propose an accurate and efficient Transformer inference acceleration method that teaches the model how to skim input contents.", "The proposed Transkimmer method is trained with a simple, end-to-end paradigm.", "Furthermore, Transkimmer is generally applicable to various Transformer-based model structures.", "It is even compatible with static model compression methods like knowledge distillation and weight sharing.", "We believe that the above features give the Transkimmer method a wide range of applicable production scenarios.",
"This work was supported in part by the National Key R&D Program of China under Grant 2021ZD0110104, the National Natural Science Foundation of China (NSFC) grants (U21B2017, 62106143, 62072297, and 61832006), and the Shanghai Pujiang Program.", "We would like to thank the reviewers of ACL Rolling Review for their supportive comments and suggestions.", "Jingwen Leng and Minyi Guo are the corresponding authors of this paper." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "The commonly used framework for unsupervised machine translation builds initial translation models of both translation directions, and then performs iterative back-translation to jointly boost their translation performance.", "The initialization stage is very important since bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance.", "In this paper, we propose a novel retrieval and rewriting based method to better initialize unsupervised translation models.", "We first retrieve semantically comparable sentences from monolingual corpora of two languages and then rewrite the target side to minimize the semantic gap between the source and retrieved targets with a designed rewriting model.", "The rewritten sentence pairs are used to initialize SMT models which are used to generate pseudo data for two NMT models, followed by the iterative back-translation.", "Experiments show that our method can build better initial unsupervised translation models and improve the final translation performance by over 4 BLEU scores.", "Recent work has shown successful practices of unsupervised machine translation (UMT) (Artetxe et al., 2017; Lample et al., 2017, 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019; Lample and Conneau, 2019).", "The common framework is to build two initial translation models (i.e., source to target and target to source) and then do iterative back-translation (Sennrich et al., 2016a; Zhang et al., 2018) with pseudo data generated by each other.", "The initialization stage is important because bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance.", "Previous methods for UMT (Lample et al., 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019) usually use the following n-gram embeddings based initialization.", "They first build phrase translation tables with the help of unsupervised cross-lingual n-gram embeddings (Conneau et al., 2017; Artetxe et al., 2018a), and then use them to build two initial Phrase-based Statistical Machine Translation (PBSMT) (Koehn et al., 2003) models with two language models.", "However, there are two problems with their initialization methods.", "(1) Some complex sentence structures of original training sentences are hard to be recovered with the n-gram translation tables.", "(2) The initial translation tables inevitably contain much noise, which will be amplified in the subsequent process.", "In this paper, we propose a novel retrieve-and-rewrite initialization method for UMT.", "Specifically, we first retrieve semantically similar sentence pairs from monolingual corpora of two languages with the help of unsupervised cross-lingual sentence embeddings.", "Next, with those retrieved similar sentence pairs, we run GIZA++ (Och and Ney, 2003) to get word alignments which are used to delete unaligned words in the target side of the retrieved sentences.", "The modified target sentences are then rewritten with a designed sequence-to-sequence rewriting model to minimize the semantic gap between the source and target sides.", "Taking the pairs of the source sentences and corresponding rewritten targets as pseudo parallel data, we then build two initial PBSMT models (source-to-target and target-to-source), which are used to generate pseudo parallel data to warm up NMT models, followed by an iterative back-translation training process.", "Our code is released at 
"Our contributions are threefold.", "(1) We propose a novel method to initialize unsupervised MT models with a retrieve-and-rewrite schema, which can build better initial translation models.", "(2) We design an effective sequence-to-sequence architecture based on the Transformer to rewrite sentences under semantic constraints.", "(3) Our method significantly outperforms previous non-pre-training based UMT results on the en-fr and en-de translation tasks, and gives the first unsupervised en-zh translation results on WMT17.", "Figure 1: Method overview.", "Our method can be divided into three steps, as shown in Figure 1.", "First, we do similar sentence retrieval (Sec. 2.1) from the two monolingual corpora with the help of unsupervised cross-lingual sentence embeddings.", "Next, to minimize the semantic gap between the source and the retrieved targets, we do target sentence rewriting (Sec. 2.2): we delete unaligned words on the target side and generate complete, better-aligned targets via our rewriting model, using the missing information provided by the source.", "After that, we treat the rewritten pairs as pseudo parallel data for translation model initialization and training (Sec. 2.3).", "Given two monolingual corpora D_x and D_y of two languages X and Y respectively, we first build unsupervised cross-lingual word embeddings of X and Y using fastText (Bojanowski et al., 2017) and vecmap (Artetxe et al., 2018a), and then obtain cross-lingual sentence embeddings from the cross-lingual word embeddings via SIF (Arora et al., 2017).", "After that, we use the marginal-based scoring of Artetxe and Schwenk (2018) to retrieve semantically similar sentence pairs.", "Figure 2: Example of rewriting.", "Examples retrieved from monolingual English and Chinese corpora are shown in Figure 1 in Appendix A.",
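As an illustration of the retrieval step, the ratio variant of the margin-based score of Artetxe and Schwenk (2018) over SIF sentence embeddings could be computed as below. The embeddings are assumed to be L2-normalized, and the brute-force similarity matrix here stands in for the HNSW index used in practice for efficiency.

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, k=4):
    """src_emb: (n_src, d), tgt_emb: (n_tgt, d), both L2-normalized."""
    sim = src_emb @ tgt_emb.T                       # cosine similarities
    # Average similarity of each sentence to its k nearest neighbors on each side.
    fwd = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)   # (n_src, 1)
    bwd = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)   # (1, n_tgt)
    # Ratio margin: cosine divided by the mean of the two neighborhood averages.
    return sim / ((fwd + bwd) / 2)
```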
"2.2 Target Sentences Rewriting As shown in Figure 2, having retrieved similar sentence pairs {x, y}, we first run GIZA++ (Och and Ney, 2003) on these pairs to obtain word alignment information.", "Then, for each target sentence y, we remove the unaligned words according to the lexical translation probabilities of the GIZA++ output.", "We replace each deleted word with \langle DEL \rangle in y to get the incomplete target sentence y'.", "Meanwhile, we record the unaligned words in the source as x_1^m, where m is the number of unaligned source words.", "Next, we feed y' and x_1^m into a sequence-to-sequence model to generate the refined target sentence \tilde{y}.", "The rewritten pairs {x, \tilde{y}} are then used as pseudo parallel data.", "(Footnote 1: For each source sentence, we choose 30 nearest neighbors in the target language that have approximately similar lengths to the source, within a difference of 5 words, and keep the neighbors with scores above 0.6.)", "Our rewriting model is a modification of the Transformer (Vaswani et al., 2017), shown in Figure 3.", "We initialize the embedding layer of the second input part with pre-trained cross-lingual word embeddings, because its content should be independent of languages.", "We keep it fixed during training.", "Thus the second part acts like a memory recording the semantic information of words.", "We concatenate the readout embeddings of both parts with a separator and feed them to the Transformer encoder, so that the attention mechanism takes effect on both parts together.", "For model training, due to the lack of references, we need to build training data for the rewriting model from the monolingual corpus D_y.", "First, we remove 20 to 30 percent of the words from a given sentence y in D_y and replace them with \langle DEL \rangle to get y'.", "Next, we randomly swap contiguous words in y' with a probability of 0.2 to introduce some noise.", "Then we record the removed words as the set s_1^m, and randomly drop/add some words from/to this set.", "We then treat y' and s_1^m as the inputs, and y as the output, to train the model.", "For model inference, we feed the incomplete sentence y' and the unaligned source words x_1^m into the trained model and generate the refined sentence \tilde{y}.", "Note that there seems to be a bias between training and inference: s_1^m during training is in the same language as y, while during inference the words come from the source language X.", "But this bias is eliminated, because the second input part of the encoder is the readout cross-lingual embeddings, which are independent of languages.",
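A sketch of the synthetic training-data construction just described: delete 20-30% of the words (replaced by the DEL placeholder), lightly shuffle for noise, and perturb the removed-word set. The deletion range and the 0.2 swap probability follow the text; the drop/add probabilities are illustrative assumptions.

```python
import random

def make_rewriting_example(sentence, vocab):
    """Build one (corrupted sentence, removed words, target) training triple."""
    words = sentence.split()
    n_del = max(1, int(len(words) * random.uniform(0.2, 0.3)))
    del_idx = set(random.sample(range(len(words)), n_del))
    corrupted = ["<DEL>" if i in del_idx else w for i, w in enumerate(words)]
    # Swap adjacent words with probability 0.2 to introduce noise.
    for i in range(len(corrupted) - 1):
        if random.random() < 0.2:
            corrupted[i], corrupted[i + 1] = corrupted[i + 1], corrupted[i]
    removed = [words[i] for i in sorted(del_idx)]
    # Randomly drop/add some words from/to the removed set (rates assumed here).
    removed = [w for w in removed if random.random() > 0.1]
    if random.random() < 0.1:
        removed.append(random.choice(vocab))
    return corrupted, removed, words   # encoder input 1, input 2, decoder target
```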
"Once we get {x, \tilde{y}} as generated above, we use them to train initial PBSMT models, and use the SMT models to produce pseudo data to set up two NMT models, followed by iterative back-translation.", "In our experiments, we consider three language pairs: English-French (en-fr), English-German (en-de) and English-Chinese (en-zh).", "For en, fr and de, we use 50 million monolingual sentences from NewsCrawl 2007-2017.", "As for zh, we use the Chinese side of the WMT17 en-zh parallel data.", "For the convenience of comparison, we use newstest2014 as the test set for en-fr, newstest2016 for en-de, and newstest2017 for en-zh.", "The data preprocessing is described in Appendix D.", "Baselines.", "Our method is compared with eight baselines of unsupervised MT systems listed in the upper area of Table 1.", "The first three baselines are unsupervised NMT models, and the fourth baseline is an unsupervised PBSMT model.", "The fifth baseline is an extract-and-edit schema for unsupervised neural machine translation.", "The sixth and seventh baselines are hybrid models of NMT and PBSMT.", "And the last baseline is a pre-training based method.", "The comparison results are reported in Table 1.", "From the table, we find that our method significantly outperforms the best non-pre-training based baseline by an average of 4.63 BLEU points over all pairs.", "Note that Lample and Conneau (2019) is based on pre-training, which uses much more monolingual data than our method.", "Even so, we reach comparable results on the en-fr pair.", "(Footnote 2: Note that we only retrieve similar sentences from 20 million sentences sampled from each monolingual corpus, and use Hierarchical Navigable Small World (HNSW) (Malkov and Yashunin, 2018) to build the embedding index for space and time efficiency; during the iterative back-translation process in Sec. 2.3, we use the whole monolingual corpora.)", "These three baselines initialize their SMT models with phrase tables inferred from n-gram embeddings and language models.", "From the table, we find that our proposed method gives better initialization to the SMT models.", "Even the SMT models trained with only the retrieved sentences reach higher performance than previous methods, which verifies that the noise within the retrieved sentences is largely random and can be easily eliminated by SMT models; this is consistent with Khayrallah and Koehn (2018).", "With the target sentences rewritten by our rewriting model, the quality of the extracted phrases can be further improved.", "We also tried to directly train NMT models with the rewritten pseudo data, but only obtained BLEU scores under 10, which means there is still much noise in the pseudo pairs for SMT to eliminate.", "We build two test sets to quantify the performance of our rewriting models.", "The first test set, denoted in-domain, is from our synthetic training data.", "As described before, we build training samples using monolingual data according to the rules in Sec. 2.2.", "We select 8M sentences from the monolingual corpus of a certain language for model training, and randomly sample 8k sentences as development and test sets respectively.", "In addition, we also test our rewriting model on newstest2014 (en-fr), which is denoted out-domain.", "We first run GIZA++ on the parallel sentences in the original test set to find the golden alignments between source and target words.", "Next, we randomly delete up to 30% of the words on the target side and record their aligned source words.", "Then we feed the incomplete target sentence and the recorded source words into our model to recover the original target.", "The BLEU scores on both test sets are listed in Table 3, which shows that our rewriting model performs well.", "Unsupervised machine translation has become a hot research topic in recent years.", "The pioneering methods are based on NMT models (Transformer) (Artetxe et al., 2017; Lample et al., 2017; Yang et al., 2018) trained with denoising auto-encoders (Vincent et al., 2010) and iterative back-translation.", "Follow-up work shows that SMT methods and hybrids of NMT and SMT can be more effective (Artetxe et al., 2018b; Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019).", "They build the initial PBSMT models with language models and phrase tables inferred from unsupervised cross-lingual n-gram embeddings.",
"Recently, Lample and Conneau (2019) proposed a pre-training method and achieved state-of-the-art performance on the unsupervised en-fr and en-de translation tasks.", "But they use much more monolingual data from Wikipedia than previous work and this paper.", "We must also mention the work of Wu et al. (2019).", "They similarly use a retrieval-and-rewriting framework for unsupervised MT; however, ours differs from theirs in two aspects.", "First, we efficiently calculate the cross-lingual sentence embeddings via the training-free method SIF rather than a pre-trained language model.", "Second, our rewriting method is based on word alignment information, which is more explicit than their max-pooling, and our rewriting model is simpler yet effective, so that the rewriting results can be used directly without extra training techniques.", "In this paper, we propose a novel method for unsupervised machine translation with a retrieve-and-rewrite schema.", "We first retrieve similar sentences from monolingual corpora and then rewrite the targets with a rewriting model.", "With the resulting pseudo parallel data, we better initialize PBSMT models and significantly improve the final iteration performance, as the experiments show.", "This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "other", "objective", "objective", "method", "objective", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "method", "abstain", "other", "objective", "abstain", "objective", "objective", "result", "other" ]
[ "Self-attention networks (SANs) with selective mechanism has produced substantial improvements in various NLP tasks by concentrating on a subset of input words.", "However, the underlying reasons for their strong performance have not been well explained.", "In this paper, we bridge the gap by assessing the strengths of selective SANs (SSANs), which are implemented with a flexible and universal Gumbel-Softmax.", "Experimental results on several representative NLP tasks, including natural language inference, semantic role labelling, and machine translation, show that SSANs consistently outperform the standard SANs.", "Through well-designed probing experiments, we empirically validate that the improvement of SSANs can be attributed in part to mitigating two commonly-cited weaknesses of SANs: word order encoding and structure modeling .", "Specifically, the selective mechanism improves SANs by paying more attention to content words that contribute to the meaning of the sentence.", "The code and data are released at https://github.com/xwgeng/SSAN .", "Self-attention networks (SANs) (Lin et al., 2017) have achieved promising progress in various natural language processing (NLP) tasks, including machine translation (Vaswani et al., 2017), natural language inference (Shen et al., 2018b), semantic role labeling (Tan et al., 2018; Strubell et al., 2018) and language representation (Devlin et al., 2019).", "The appealing strength of SANs derives from high parallelism as well as flexibility in modeling dependencies among all the input elements.", "of NLP tasks.", "For example, some researchers incorporated a hard constraint into SANs to select a subset of input words, on top of which self-attention is conducted (Shen et al., 2018c; Hou et al., 2019; Yang et al., 2019b).", "Yang et al. (2018) and Guo et al. (2019) proposed a soft mechanism by imposing a learned Gaussian bias over the original attention distribution to enhance its ability of capturing local contexts.", "Shen et al. 
(2018c) incorporated reinforced sampling to dynamically choose a subset of input elements, which are fed to SANs.", "Although the general idea of the selective mechanism works well across NLP tasks, previous studies only validate their own implementations on a few tasks, either classification tasks (Shen et al., 2018c; Guo et al., 2019) or sequence generation tasks (Yang et al., 2018, 2019b).", "This poses a potential threat to the conclusive effectiveness of the selective mechanism.", "In response to this problem, we adopt a flexible and universal implementation of the selective mechanism using Gumbel-Softmax (Jang et al., 2017), called selective self-attention networks (i.e., SSANs).", "Experimental results on several representative types of NLP tasks, including natural language inference (i.e., classification), semantic role labeling (i.e., sequence labeling), and machine translation (i.e., sequence generation), demonstrate that SSANs consistently outperform the standard SANs (Section 3).", "Despite demonstrating the effectiveness of SSANs, the underlying reasons for their strong performance have not been well explained, which poses great challenges for further refinement.", "In this study, we bridge this gap by assessing the strengths of the selective mechanism in capturing essential linguistic properties via well-designed experiments.", "The starting point for our approach is recent findings: the standard SANs suffer from two representation limitations, on modeling word order encoding (Shaw et al., 2018; Yang et al., 2019a) and syntactic structure modeling (Tang et al., 2018; Hao et al., 2019a), which are essential for natural language understanding and generation.", "Experimental results on targeted linguistic evaluation lead to the following observations: SSANs can identify improper word orders in both local (Section 4.1) and global (Section 4.2) ranges by learning to attend to the expected words.", "SSANs produce more syntactic representations (Section 5.1) with a better modeling of structure by selective attention (Section 5.2).", "The selective mechanism improves SANs by paying more attention to content words that possess semantic content and contribute to the meaning of the sentence (Section 5.3).", "SANs (Lin et al., 2017), as a variant of attention model (Bahdanau et al., 2015; Luong et al., 2015), compute attention weights between each pair of elements in a single sequence.", "Given the input layer $H = \{h_1, \dots, h_N\} \in \mathbb{R}^{N \times d}$, SANs first transform the layer $H$ into the queries $Q \in \mathbb{R}^{N \times d}$, the keys $K \in \mathbb{R}^{N \times d}$, and the values $V \in \mathbb{R}^{N \times d}$ with three separate weight matrices.", "The output layer $O$ is calculated as: $O = \mathrm{ATT}(Q, K)\,V$ (1), where the alternatives to $\mathrm{ATT}(\cdot)$ can be additive attention (Bahdanau et al., 2015) or dot-product attention (Luong et al., 2015).", "For time and space efficiency, we used dot-product attention in this study, which is computed as: $\mathrm{ATT}(Q, K) = \mathrm{softmax}\big(\frac{QK^{T}}{\sqrt{d}}\big)$ (2), where $\sqrt{d}$ is the scaling factor, with $d$ being the dimensionality of the layer states (Vaswani et al., 2017).",
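A minimal PyTorch sketch of the dot-product self-attention in Equations (1)-(2); in practice the projection matrices are learned parameters, and the toy shapes below are purely illustrative:

```python
import torch
import torch.nn.functional as F

def self_attention(H, W_q, W_k, W_v):
    """H: (N, d) input layer; W_q, W_k, W_v: (d, d) projection matrices."""
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    d = Q.size(-1)
    attn = F.softmax(Q @ K.T / d ** 0.5, dim=-1)  # Eq. (2)
    return attn @ V                               # Eq. (1)

# Toy usage with random parameters:
N, d = 6, 64
H = torch.randn(N, d)
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
O = self_attention(H, W_q, W_k, W_v)  # (N, d) output layer
```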
"Although SANs have demonstrated their effectiveness on various NLP tasks, recent studies empirically revealed that SANs suffer from two representation limitations: modeling word order encoding (Yang et al., 2019a) and syntactic structure modeling (Tang et al., 2018).", "In this work, we concentrate on these two commonly-cited issues.", "Word Order Encoding: SANs rely merely on the attention mechanism, with neither recurrence nor convolution structures.", "In order to incorporate sequence order information, Vaswani et al. (2017) proposed to inject position information into the input word embeddings with an additional position embedding.", "Nevertheless, SANs are still weak at learning word order information (Yang et al., 2019a).", "Recent studies have shown that incorporating recurrence (Chen et al., 2018; Hao et al., 2019b,c), convolution (Song et al., 2018; Yang et al., 2019b), or advanced position encoding (Shaw et al., 2018; Wang et al., 2019a) into vanilla SANs can further boost their performance, confirming their shortcomings at modeling sequence order.", "Structure Modeling: Due to the lack of supervision signals for learning structural information, recent studies have paid considerable attention to incorporating syntactic structure into SANs.", "For instance, Strubell et al. (2018) utilized one attention head to learn to attend to the syntactic parents of each word.", "Towards generating better sentence representations, several researchers propose phrase-level SANs by performing self-attention across words inside an n-gram phrase or syntactic constituent (Wu et al., 2018; Hao et al., 2019a; Wang et al., 2019b).", "These studies show that the introduction of syntactic information can achieve further improvement over SANs, demonstrating their potential weakness in structure modeling.", "In this study, we implement the selective mechanism on SANs by introducing an additional selector, namely SSANs, as illustrated in Figure 1.", "The selector aims to select a subset of elements from the input sequence, on top of which the standard self-attention (Equation 1) is conducted.", "We implement the selector with Gumbel-Softmax, which has proven effective for computer vision tasks (Shen et al., 2018a; Yang et al., 2019c).", "Selector: Formally, we parameterize the selection action $a \in \{\text{SELECT}, \text{DISCARD}\}$ for each input element with an auxiliary policy network, where SELECT indicates that the element is selected for self-attention while DISCARD means the element is abandoned.", "The output action sequence $A \in \mathbb{R}^{N}$ is calculated as: $\pi(A) = \mathrm{sigmoid}(E_s)$ (3), with $E_s = Q_s K_s^{T}$ (4).", "[Figure 1: Illustration of SSANs that select a subset of input elements with an additional selector network, on top of which self-attention is conducted; example: 'Bush held a talk with Sharon' with selector output 1 1 0 0 0 1.]", "Here, $Q_s \in \mathbb{R}^{N \times d}$ and $K_s \in \mathbb{R}^{N \times d}$ are transformed from the input layer $H$ with distinct weight matrices.", "We utilize sigmoid as the activation function to calculate the distribution $\pi$, choosing the action SELECT with probability $\pi$ or DISCARD with probability $1 - \pi$.", "Gumbel Relaxation: There are two challenges for training the selector: (1) the ground-truth labels indicating which words should be selected are unavailable; and (2) the discrete variables in $A$ lead to a non-differentiable objective function.",
"In response to this problem, Jang et al. (2017) proposed Gumbel-Softmax to give a continuous approximation to sampling from the categorical distribution.", "We adopt a similar approach by adding Gumbel noise (Gumbel, 1954) in the sigmoid function, which we refer to as Gumbel-Sigmoid.", "Since sigmoid can be viewed as a special 2-class case ($E_s$ and $0$ in our case) of softmax, we derive the Gumbel-Sigmoid as: $\text{Gumbel-Sigmoid}(E_s) = \mathrm{sigmoid}\big((E_s + G' - G'')/\tau\big) = \frac{\exp((E_s + G')/\tau)}{\exp((E_s + G')/\tau) + \exp(G''/\tau)}$ (5), where $G'$ and $G''$ are two independent Gumbel noises (Gumbel, 1954), and $\tau \in (0, \infty)$ is a temperature parameter.", "As $\tau$ diminishes to zero, a sample from the Gumbel-Sigmoid distribution becomes cold and resembles a one-hot sample.", "At training time, we can use Gumbel-Sigmoid to obtain a differentiable sample $A$ as $\text{Gumbel-Sigmoid}(E_s)$.",
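A minimal sketch of the selector with the Gumbel-Sigmoid relaxation of Equations (3)-(5). The temperature value is an assumption, and since the text leaves the exact shape of E_s open, the relaxation is simply applied elementwise to whatever score tensor the selector produces:

```python
import torch

def gumbel_noise(shape):
    # G = -log(-log(U)), U ~ Uniform(0, 1)
    u = torch.rand(shape).clamp(1e-9, 1 - 1e-9)
    return -torch.log(-torch.log(u))

def gumbel_sigmoid(E_s, tau=0.5):
    """Eq. (5): differentiable relaxation of the binary SELECT/DISCARD action."""
    g1, g2 = gumbel_noise(E_s.shape), gumbel_noise(E_s.shape)
    return torch.sigmoid((E_s + g1 - g2) / tau)

def selector_scores(H, W_qs, W_ks):
    """Eqs. (3)-(4): selection scores from separate query/key projections."""
    Q_s, K_s = H @ W_qs, H @ W_ks
    return Q_s @ K_s.T  # E_s

# Toy usage: soft selection mask for a length-6 input.
N, d = 6, 64
H = torch.randn(N, d)
E_s = selector_scores(H, torch.randn(d, d), torch.randn(d, d))
A = gumbel_sigmoid(E_s, tau=0.5)  # approaches binary samples as tau -> 0
```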
(2013).", "We evaluated selective mechanism on top of DEEPATT 1 (Tan et al., 2018), which consists of 1 https://github.com/XMUNLP/Tagger .", "stacked SAN layers and a following softmax layer.", "Following their configurations, we set the number of SAN layers as 10 with hidden size being 200, the number of attention heads as 8 and the dimension of word embeddings as 100.", "We use the GloVe embeddings (Pennington et al., 2014), which are pre-trained on Wikipedia and Gigaword, to initialize our networks, but they are not fixed during training.", "We choose the better feed-forward networks (FFN) variants of DEEPATT as our standard settings.", "Machine Translation is a conditional generation task, which aims to translate a sentence from a source language to its counterpart in a target language.", "We carry out experiments on several widely-used datasets, including small English Japanese (En Ja) and English Romanian (En Ro) corpora, as well as a relatively large English German (En De) corpus.", "For En De and En Ro, we respectively follow Li et al. (2018) and He et al. (2018) to prepare WMT2014 2 and IWSLT2014 3 corpora.", "For En Ja, we use KFTT 4 dataset provided by Neubig (2011).", "All the data are tokenized and then segmented into subword symbols using BPE (Sennrich et al., 2016) with 32K operations.", "We implemented the approach on top of advanced TRANSFORMER model (Vaswani et al., 2017).", "On the large-scale En De dataset, we followed the base configurations to train the NMT model, which consists of 6 stacked encoder and decoder layers with the layer size being 512 and the number of attention heads being 8.", "On the small-scale En Ro and En Ja datasets, we followed He et al. (2018) to decrease the layer size to 256 and the number of attention heads to", "4. For all the tasks, we applied the selector to the first layer of encoder to better capture lexical and syntactic information, which is empirically validated by our further analyses in Section", "4. 3.2 Experimental Results Table 1 shows the results on the three NLP benchmarks.", "Clearly, introducing selective mechanism significantly and consistently improves performances in all tasks, demonstrating the universality and effectiveness of the selective mechanism for SANs.", "Concretely, SSANs relatively improve prediction accuracy over SANs by +0.8% and +0.5% 2 http://www.statmt.org/wmt14 .", "respectively on the NLI and SRL tasks, showing their superiority on structure modeling.", "Shen et al. (2018c) pointed that SSANs can better capture dependencies among semantically important words, and our results and further analyses ( 5) provide supports for this claim.", "In the machine translation tasks, SSANs consistently outperform SANs across language pairs.", "Encouragingly, the improvement on translation performance can be maintained on the large-scale training data.", "The relative improvements on the En Ro, En Ja, and En De tasks are respectively +3.0%, +1.9%, and +3.3%.", "We attribute the improvement to the strengths of SSANs on word order encoding and structure modeling, which are empirically validated in Sections 4 and", "5. Shen et al. (2018c) implemented the selection mechanism with the REINFORCE algorithm.", "Jang et al. 
"Jang et al. (2017) revealed that compared with Gumbel-Softmax (Maddison et al., 2014), REINFORCE (Williams, 1992) suffers from high variance, which consequently leads to slow convergence.", "In our preliminary experiments, we also implemented REINFORCE-based SSANs, but it underperforms the Gumbel-Softmax approach on the benchmark En-De translation task (BLEU: 27.90 vs. 28.50, not shown in the paper).", "The conclusion is consistent with Jang et al. (2017), and we thus use Gumbel-Softmax instead of REINFORCE in this study.", "In this section, we investigate the ability of SSANs to capture both local and global word order on the bigram order shift detection (Section 4.1) and word reordering detection (Section 4.2) tasks.", "Task Description: Conneau et al. (2018) propose a bigram order shift detection task to test whether an encoder is sensitive to local word order.", "Given a monolingual corpus, a certain portion of sentences are randomly extracted to construct instances with illegal word order.", "Specifically, given a sentence $X = \{x_1, \dots, x_N\}$, two adjacent words (i.e., $x_n$, $x_{n+1}$) are swapped to generate an illegal instance $X'$ as a substitute for $X$.", "Given the processed data, which consists of intact and inverted sentences, examined models are required to distinguish intact sentences from inverted ones.", "To detect the shift of bigram word order, the models should learn to recognize normal and abnormal word orders.", "The model consists of a 6-layer SAN encoder and a 3-layer MLP classifier.", "The layer size is 128, and the filter size is 512.", "We trained the model on the open-source dataset provided by Conneau et al. (2018).", "The accuracy of the SAN-based encoder is higher than the previously reported result on the same task (Li et al., 2019) (52.23 vs. 49.30).", "Detection Accuracy: Table 2 lists the results on the local bigram order shift detection task, in which SSANs are applied to different encoder layers.", "Clearly, all the SSANs variants consistently outperform SANs, demonstrating the superiority of SSANs at capturing local order information.", "Applying the selective mechanism to the first layer achieves the best performance, which improves the prediction accuracy by +19.8% over SANs.", "The performance gap between the SSANs variants is very large (i.e., 19.8% vs. around 4%), which we attribute to the fact that detecting local word reordering depends more on the lexical information embedded in the bottom layer.", "Attention Behaviors: The objective of the local reordering task is to distinguish the swap of two adjacent words, which requires the examined model to pay more attention to the adjacent words.", "Starting from this intuition, we investigate the attention distribution over the attended words with different relative distances from the query word, as illustrated in Figure 2.", "We find that both SANs and SSANs focus on neighbouring words (e.g., distance < 3), and SSANs pay more attention to the adjacent words (distance = 1) than SANs (14.6% vs. 12.4%).", "The results confirm our hypothesis that the selective mechanism helps to exploit more bigram patterns to accomplish the task objective.", "Figure 3 shows an example, in which SSANs attend most to the adjacent words except for the inverted bigram 'he what'.", "In addition, the surrounding words 'exactly' and 'wanted' also pay more attention to the exceptional word 'he'.", "We believe such features help to distinguish the abnormal local word order.",
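The corrupted instances for this probing task require only a single adjacent swap; a minimal sketch (the 50/50 corruption rate is an assumption):

```python
import random

def make_bigram_shift_example(tokens, p_invert=0.5):
    """Return (tokens, label): label 1 = intact order, 0 = inverted.
    Inversion swaps one random adjacent pair (x_n, x_{n+1})."""
    if len(tokens) < 2 or random.random() > p_invert:
        return tokens, 1
    n = random.randrange(len(tokens) - 1)
    corrupted = list(tokens)
    corrupted[n], corrupted[n + 1] = corrupted[n + 1], corrupted[n]
    return corrupted, 0

# e.g. (['exactly', 'he', 'what', 'wanted'], 0) for one possible swap
print(make_bigram_shift_example("exactly what he wanted".split(), p_invert=1.0))
```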
"Task Description: Yang et al. (2019a) propose a word reordering detection task to investigate the ability of SAN-based encoders to extract global word order information.", "Given a sentence $X = \{x_1, \dots, x_N\}$, a random word $x_i$ is popped and inserted into another position $j$ ($i \neq j$).", "The objective is to detect both the original position from which the word is popped (labeled O), and the position at which it is inserted (labeled I).", "The model consists of 6-layer SANs and an output layer.", "The layer size is 512, and the filter size is 2048.", "We trained the model on the open-source dataset (https://github.com/baosongyang/WRD) provided by Yang et al. (2019a).", "Detection Accuracy: Table 3 lists the results on the global reordering detection task, in which all the SSANs variants improve prediction accuracy.", "Similarly, applying the selective mechanism to the first layer achieves the best performance, which is consistent with the results on the local word reordering task (Table 2).", "However, the performance gap between the SSANs variants is much lower than that on the local reordering task (i.e., 4% vs. 15%).", "One possible reason is that the detection of global word reordering may also need syntactic and semantic information, which are generally embedded in the high-level layers (Peters et al., 2018).", "Attention Behaviors: The objective of the WRD task is to detect a global reordering (average distance: 8.7 words), which requires the examined model to pay more attention to distant words.", "Figure 4 depicts the attention distribution according to different relative distances.", "SSANs alleviate the leaning-to-local nature of SANs and pay more attention to distant words (e.g., distance > 5), which better accomplishes the task of detecting global reordering.", "Figure 5 illustrates an example, in which more queries in SSANs than in SANs attend most to the inserted word 'the'.", "Particularly, SANs pay more attention to the surrounding words (e.g., distance < 3), while the inserted word 'the' receives only subtle attention.", "In contrast, SSANs dispense much attention over words centred on the inserted position (i.e., 'the') regardless of distance, especially for the queries 'current rules for now'.", "We speculate that SSANs benefit from such features in detecting global word reordering.", "In this section, we investigate whether SSANs better capture the structural information of sentences.", "To this end, we first empirically evaluate the syntactic structure knowledge embedded in the learned representations (Section 5.1).", "Then we investigate the attention behaviors by extracting constituency trees from the attention distribution (Section 5.2).", "Task Description: We leverage two linguistic probing tasks to assess the syntactic information embedded in a given representation.", "Both tasks are cast as multi-class classification problems based on the representation of a given sentence, which is produced by an examined model: the Tree Depth (TreeDepth) task (Conneau et al., 2018) checks whether the examined model can group sentences by the depth of the longest path from root to any leaf in their parsing tree.", "Tree depth values range from 5 to 11, and the task is to categorize sentences into the class corresponding to their depth (7 classes).", "The Top Constituent (TopConst) task (Shi et al., 2016) classifies the sentence in terms of the sequence of top constituents immediately below the root node, such as 'ADVP NP VP .'.", "The top constituent sequences fall into 20 categories: 19 classes for the most
frequent top constructions, and one for all other constructions.", "We trained the model on the open-source dataset provided by Conneau et al. (2018), and used the same model architecture as in Section 4.1.", "On the TreeDepth task, SSANs outperform SANs by 22.4% on overall performance.", "Concretely, the performance of SANs dramatically drops as the depth of the sentences increases.", "On the other hand, SSANs are more robust to the depth of the sentences, demonstrating the superiority of SSANs at capturing complex structures.", "Table 5 shows the results on the TopConst task.", "We categorize the 20 classes into 4 categories based on the types of sentences: question sentences (* SQ .), declarative sentences (* NP VP * etc.), clause sentences (SBAR * and S *), and others (OTHER).", "Similarly, the performance of SANs drops as the complexity of sentence patterns increases (e.g., Ques. → Others, 95.90 → 50.67).", "SSANs significantly improve the prediction F1 score as the complexity of sentences increases, which reconfirms the superiority of SSANs at capturing complex structures.", "Task Description: We evaluate the ability of self-attention to model structure by constructing constituency trees from the attention distributions.", "Under the assumption that the attention distribution within phrases is stronger than across them, Marecek and Rosa (2018) define the score of a constituent spanning positions $i$ to $j$ as the attention merely inside the span, denoted $score(i, j)$.", "Based on these scores, a binary constituency tree is generated by recursively splitting the sentence.", "When splitting a phrase with span $(i, j)$, the target is to look for a position $k$ maximizing the scores of the two resulting phrases: $k = \arg\max_{k'} \big( score(i, k') \cdot score(k', j) \big)$ (6).", "We utilized the Stanford CoreNLP toolkit to annotate English sentences with gold constituency trees.", "We used EVALB to evaluate the generated constituency trees, including bracketing precision, bracketing recall, and bracketing F1 score.", "Parsing Accuracy: As shown in Table 6, SSANs consistently outperform SANs by 4.6% on all the metrics, demonstrating that SSANs better model structures than SANs.", "Figure 6 shows an example of generated trees.", "As seen, the phrases 'he ran' and 'heart pumping' are well composed by both SANs and SSANs.", "However, SANs fail to parse the phrase structure 'legs churning', which is correctly parsed by SSANs.",
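A minimal sketch of the recursive splitting in Equation (6), assuming score(i, j) measures the attention mass inside span [i, j) and that the two sub-span scores are combined by a product, as in the reconstruction above:

```python
import numpy as np

def build_tree(score, i, j):
    """Recursively split span [i, j) at the k maximizing
    score(i, k) * score(k, j), per Eq. (6); returns nested tuples of leaves."""
    if j - i <= 1:
        return i  # leaf: a single word position
    k = max(range(i + 1, j), key=lambda m: score(i, m) * score(m, j))
    return (build_tree(score, i, k), build_tree(score, k, j))

def make_score(attn):
    # score(i, j): mean attention strictly inside span [i, j)
    def score(i, j):
        return float(attn[i:j, i:j].mean())
    return score

# Toy usage with a random stand-in for a self-attention matrix:
attn = np.random.rand(6, 6)
tree = build_tree(make_score(attn), 0, attn.shape[0])
```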
"In this section, we follow He et al. (2019) to analyze the linguistic characteristics of the attended words in the above structure modeling tasks, as listed in Table 7.", "A larger relative increase (Δ) denotes more attention assigned by SSANs.", "Clearly, SSANs pay more attention to content words in all cases, although there are considerable differences among NLP tasks.", "Content words possess semantic content and contribute to the meaning of the sentence, which is essential in various NLP tasks.", "For example, the depth of constituency trees mainly relies on the nouns, while the modifiers (e.g., adjectives and content-free words) generally contribute less.", "The top constituents mainly consist of the VP (95% of examples) and NP (75% of examples) categories, whose head words are verbs and nouns respectively.", "In machine translation, content words carry essential information, which should be fully transferred to the target side to produce adequate translations.", "Without explicit annotations, SANs are able to learn the required linguistic features, especially on the machine translation task (e.g., dominating attention on nouns).", "SSANs further enhance this strength by paying more attention to the content words.", "However, due to their high frequency within a limited vocabulary (e.g., around 150 words; see https://en.wikipedia.org/wiki/Function_word), content-free words, or function words, generally receive a lot of attention, although they have very little substantive meaning.", "This is more serious in the structure probing tasks (i.e., TreeDepth and TopConst), since the scalar guiding signal (i.e., class labels) for a whole sentence is non-informative, as it does not necessarily preserve a picture of the intermediate syntactic structure of the sentence that is being generated for the prediction.", "On the other hand, the problem with content-free words is alleviated on machine translation tasks due to the informative sequence signals.", "SSANs can further alleviate this problem in all cases with a better modeling of structures.", "In this work, we make an early attempt to assess the strengths of the selective mechanism for SANs, which is implemented with a flexible Gumbel-Softmax approach.", "Through several well-designed experiments, we empirically reveal that the selective mechanism mitigates two major weaknesses of SANs, namely word order encoding and structure modeling, which are essential for natural language understanding and generation.", "Future directions include validating our findings on other SAN architectures (e.g., BERT (Devlin et al., 2019)) and more general attention models (Bahdanau et al., 2015; Luong et al., 2015).", "We thank the anonymous reviewers for their insightful comments.", "We also thank Xiaocheng Feng, Heng Gong, Zhangyin Feng, and Xiachong Feng for helpful discussion.", "This work was supported by the National Key R&D Program of China via grant 2018YFB1005103 and the National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772156." ]
[ "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "other", "other" ]
[ "Recent pre-trained abstractive summarization systems have started to achieve credible performance, but a major barrier to their use in practice is their propensity to output summaries that are not faithful to the input and that contain factual errors.", "While a number of annotated datasets and statistical models for assessing factuality have been explored, there is no clear picture of what errors are most important to target or where current techniques are succeeding and failing.", "We explore both synthetic and human-labeled data sources for training models to identify factual errors in summarization, and study factuality at the word-, dependency-, and sentence-level.", "Our observations are threefold.", "First, exhibited factual errors differ significantly across datasets, and commonly-used training sets of simple synthetic errors do not reflect errors made on abstractive datasets like XSUM .", "Second, human-labeled data with fine-grained annotations provides a more effective training signal than sentence-level annotations or synthetic data.", "Finally, we show that our best factuality detection model enables training of more factual XSUM summarization models by allowing us to identify non-factual tokens in the training data.", "1 1 Introduction Hallucination of unsupported or incorrect facts is a known shortcoming of current text generation and summarization models (Cao et al., 2018; Falke et al., 2019).", "This has been established for both abstractive summarization models (Maynez et al., 2020) and extractive summarization models (Kryscinski et al., 2020; Falke et al., 2019).", "Past work has explored using off-the-shelf frameworks such as entailment models (Falke et al., 2019) or QA systems (Durmus et al., 2020; Wang et al., 1 Code and data available at https://github.com/ tagoyal/factuality-datasets 2020) to detect and sometimes correct errors in generated summaries.", "Another line of recent work has used synthetically generated data to specifically train models on the factuality detection task (Kryscinski et al., 2020; Zhao et al., 2020; Goyal and Durrett, 2020a).", "However, these efforts have focused on different datasets, summarization systems, and error types, often shedding little light on what errors state-of-the-art systems are actually making and how to fix them.", "In this paper, we aim to answer two main questions.", "First, while synthetic data generation approaches are specifically designed for factuality evaluation, do these align with actual errors made by generation models?", "We find the answer is no: techniques using surface-level data corruption (Kryscinski et al., 2020; Zhao et al., 2020; Cao et al., 2020) or paraphrasing (Goyal and Durrett, 2020a) target inherently different error distributions than those seen in actual model generations, and factuality models trained on these datasets perform poorly in practice.", "Furthermore, we show that different summarization domains, CNN/Daily Mail (Hermann et al., 2015; Nallapati et al., 2016) and XSum (Narayan et al., 2018) (which differ in the style of summaries and degree of abstraction), exhibit substantially different error distributions in generated summaries, and the same dataset creation approach cannot be used across the board.", "Second, we investigate the best approach for modeling and learning factuality, particularly for highly abstractive summarization settings (Narayan et al., 2018).", "Specifically, we compare the utility of fine-grained human annotations (such as error highlighting at the wordor 
span-level) with sentence-level factuality annotations.", "We use a prior factuality detection model capable of leveraging such fine-grained annotations (Goyal and Durrett, 2020a) and show that these allow us to more reliably detect errors as well as localize those errors within generated texts.", "In fact, fine-grained Reference Summary: An early-medieval gold pendant created from an imitation of a Byzantine coin that was found in a Norfolk field is a rare find, a museum expert has said.", "human annotations are almost essential for any of our techniques to work well with high-performing summarizers in the challenging XSUM setting.", "Finally, we demonstrate a practical application for such error localization capabilities beyond in-terpretibility.", "Given noisy training data for summarization, we employ a modified training objective that leverages information about error spans in gold summaries, derived from factuality models, to train the summarizer.", "Our results show that models trained using this approach are inherently more factual than standard training objectives when dealing with error-prone gold datasets.", "We first seek to answer how well synthetic training data can help address factuality errors observed in real summarization datasets.", "Figure 1 shows a summary of the approaches we consider, which we describe in detail in Section 2.1 and 2.2.", "The summarization models we analyse are trained on two English-language domains: (1) XSUM , an extreme summarization dataset from British Broadcasting Corporation (BBC) articles, where the first sentence of the article is treated as a summary of the article.", "These summaries are highly abstractive in nature: summarization models trained on this dataset have to learn to model long-range dependencies and may still be unable to recover all information in the gold summary.", "(2) CNN /D AILYMAIL , a multi-sentence abstractive summary dataset.", "The level of abstraction in this dataset is considerably lower and reference summaries exhibit high overlap with source articles (Zhang et al., 2018).", "For both of these domains, we compare the distribution of factuality errors from synthetic training data with the distribution of observed factuality errors from models trained on that data.", "In Section 4, we further dive into factuality models' performance in these settings.", "A recent thread of work has focused on leveraging synthetic data transformations for evaluating factuality (Kryscinski et al., 2020), imposing decoding-time constraints (Zhao et al., 2020), or post-correction of summaries (Cao et al., 2020).", "Each of these approaches assumes that corruption strategies will yield useful non-factual summaries, while gold summaries are treated as factual.", "Figure 1 illustrates this process: these approaches apply transformations to either the source article (shown) or a reference summary to obtain a corrupted summary ( Ohio instead of Norfolk ).", "We call this set of approaches entity-centric because the transformations largely focus on perturbing entities and noun phrases and addressing these types of hallucinations.", "The approach from Kryscinski et al. 
"The approach from Kryscinski et al. (2020) has the broadest set of transformations out of this line of prior work, so we follow them to generate training examples representative of this class of techniques.", "The data corruptions or transformations included are entity and number swapping, pronoun swapping, sentence negation, and arbitrary noise injection.", "Additionally, backtranslation is used to paraphrase summaries and further augment the dataset.", "Figure 2 illustrates the complete set of transformations applied to the reference summary to construct the synthetic dataset.", "[Figure 2 gold summary example: Apple has been accused of misleading customers over its new iPad 3.0 version.]", "For CNN/DM, we use a dataset of 50k labeled pairs that is a subset of the data distributed by Kryscinski et al. (2020); this subset is sufficient to reproduce the performance of their factuality classifier.", "We generate a similarly-sized dataset for XSUM.", "Note that although the data creation procedure produces sentence-level annotations, since data corruptions are introduced in a rule-based manner, we can highlight spans within the summaries where the error was actually introduced to get span-level factuality annotations as well.", "Figure 1 illustrates these spans in red.", "The figure also demonstrates how to obtain dependency-level factuality judgements from error span highlights; what these mean and how they are derived is explained in Section 2.2.", "Goyal and Durrett (2020a) introduce a different method for obtaining factuality annotations that more closely align with errors made by generation models.", "The core assumption of that generation-centric approach (see Figure 1) is that generated paraphrases at the bottom of a paraphrasing model's beam (the 10th-best paraphrase) are more likely to contain factual errors than 1-best generations, and new information in these generations can be labeled non-factual.", "Moreover, these generations align with realistic errors made by generation models, unlike purely synthetic entity swaps.", "In addition to sentence-level annotations, this approach also extracts factuality labels corresponding to each dependency arc of the generated summary.", "According to the definition given in Goyal and Durrett (2020a), an arc is factual (or entailed) if the semantic relationship described by that particular dependency arc is entailed by the source article.", "Figure 1 shows a non-factual created → necklace collapsed dependency arc.", "To adapt this data creation approach for our current experimental setting, we generated paraphrases of gold summaries using the paraphrase generation model of Goyal and Durrett (2020b).", "We use the 10th-best generated summaries to generate both sentence-level and dependency-level annotations automatically.", "See Figure 1 for an example of this process.", "We generate 40k training examples for both the CNN/DM and XSUM domains.", "The two techniques, Ent-C and Gen-C, naturally generate annotations at different levels.", "We take steps to unify these formats to enable an apples-to-apples comparison of them.", "For Ent-C as well as human-labeled data (discussed later), we have access to span highlights within the summary that are non-factual with respect to the source article.", "From these, we can derive dependency-level annotations in the following way: for each arc in the summary, if either the head word or the child word is highlighted as non-factual, the dependency arc is annotated as non-factual.", "Otherwise, the arc is factual.", "This process is demonstrated in Figure 1.",
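A minimal sketch of that span-to-arc mapping over a spaCy dependency parse; the character-offset input format is an assumption:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def arc_labels_from_spans(summary, bad_spans):
    """bad_spans: (start, end) character offsets highlighted as non-factual.
    An arc is non-factual if its head or child token starts inside any
    highlighted span; otherwise it is factual."""
    doc = nlp(summary)

    def highlighted(tok):
        return any(start <= tok.idx < end for start, end in bad_spans)

    labels = {}
    for tok in doc:
        if tok.head is tok:  # skip the root's self-arc
            continue
        bad = highlighted(tok) or highlighted(tok.head)
        labels[(tok.head.i, tok.i)] = "non-factual" if bad else "factual"
    return labels

# Toy usage: highlight the swapped-in "Ohio" in a corrupted summary.
s = "A pendant found in an Ohio field is a rare find."
print(arc_labels_from_spans(s, [(s.index("Ohio"), s.index("Ohio") + 4)]))
```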
"Table 1 gives a summary of the type of annotations available for the 3 types of training datasets.", "Mapping Gen-C dependency-level annotations to word-level classification decisions is less well-defined, so we do not attempt to do this.", "Our focus in this work will be on training sentence-level and dependency-level classification models, which is possible on all our sources of data.", "[Figure 3 source article example: US technology firm Apple has offered to refund Australian customers who felt misled about the 4G capabilities of the new iPad. The country's consumer watchdog has taken Apple to court for false advertising because the tablet computer does not work on Australia's 4G network. Apple's lawyers said they were willing to publish a clarification. [...] At a preliminary hearing, Apple lawyer Paul Anastassiou said Apple had never claimed the device would work fully on the current 4G network operated by Telstra. Apple says the new iPad works on what is globally accepted to be a 4G network. The matter will go to a full trial on 2 May.]", "Past work using synthetic training data implicitly assumes that training a factuality model on such data will allow it to transfer to realistic settings.", "We start by qualitatively analyzing the actual errors produced by summarization models to see how these align with the synthetic data, which helps us better understand this assumption.", "We identify four broad categories of errors (see Figure 3) through manual inspection.", "Each of these categories is further divided into Intrinsic (errors that arise as a result of misinterpreting information from the source article) and Extrinsic (errors that hallucinate new information or facts not present in the source article), following the characterization from Maynez et al. (2020).", "1. Entity-Related: errors involving entities, such as names, quantities, dates, etc.", "Hallucination of new entities is an extrinsic error; incorrectly combining distinct entities from the source article is an intrinsic error (Paul Telstra in Figure 3).", "2. Event-Related: errors with incorrect claims about events in the summary, such as predicates with arguments filled by incorrect entities.", "Hallucinations of new events (held a press conference in Figure 3) are extrinsic; mixed-up attributes from within the source article are intrinsic (Apple lawyer never claimed in Figure 3, incorrect agent).", "3. Noun Phrase-Related: errors related to noun phrases other than the entity-specific errors.", "Examples include hallucinating new NP modifiers (extrinsic) or combining with a wrong modifier from the article (intrinsic).", "4. Other Errors: errors such as ungrammatical text, repeated words, highly erroneous spans, etc. that don't fall into one of the above categories.", "These are not broken down by intrinsic/extrinsic.", "Our taxonomy of summarization errors differs from that of Lux et al.
(2020): theirs is targeted at the effects on the reader, whereas ours is more directly tied to the grammatical role of the error, which we believe is more useful to improve our data and our systems.", "[Figure 4: Fractions of examples in each dataset exhibiting different error types, split into extrinsic (Ext) and intrinsic (Int) Entity-Related, Event-Related, NP-Related, and Other errors, for XSUM, CNN/DM, Ent-C (* only pronoun swap errors), and Gen-C; fractions of non-factual summaries in the test sets: 75% (XSUM) and 14% (CNN/DM). Note a single example may have multiple errors.]", "We use the above taxonomy to annotate examples from both summarization domains.", "For XSUM, we use the state-of-the-art BART model (Lewis et al., 2020) to generate summaries, followed by manual annotation (100 examples).", "For CNN/DM, annotation was done on the 50 summaries across 10 different models collected by Kryscinski et al. (2020).", "We additionally do this annotation for the artificially introduced errors in Ent-C and Gen-C (discussion of inter-annotator agreement is included in Appendix A).", "Results: Figure 4 shows the distribution of errors for these different settings.", "First, we see that summarization models from different domains make substantially different types of errors.", "Models trained on XSUM learn to hallucinate new content and consequently produce extrinsic errors: 60% of the errors made by BART models are extrinsic.", "One reason for this is that the XSUM data was automatically constructed and contains gold summaries that are noisy or non-factual (75% of gold summaries, according to Maynez et al. (2020)).", "In addition to this, the gold summaries are also highly abstractive, and XSum-trained summarization models learn to combine information from different parts of an article, leading to models making long-range dependency errors.", "This misinterpretation of content is largely responsible for the 40% of the errors which are intrinsic.", "On the other hand, the CNN/DM summarization dataset contains human-written gold summaries and is therefore generally much more reliable.", "The models trained on this dataset reflect that.", "Only 14% of the generated summaries in the CNN/DM validation set from Kryscinski et al. (2020) contain errors.", "Of these, the bulk of the errors produced are intrinsic errors, primarily event-related errors caused by sentence compression or fusion, which is common in this dataset (Lebanoff et al., 2019).", "For example, the two Delaware boys are in critical condition at the U.S. Virgin Islands should instead be ...at the hospital after a trip to the U.S.
Virgin Islands.", "The generation models rarely makes extrinsic hallucinations, and we observed that these are even less common in recent systems like PEGASUS (Zhang et al., 2020a).", "This aligns with the findings from prior work analysing summarization models (Fabbri et al., 2021).", "Comparing these with synthetic error distributions, we can see that synthetic datasets do not reflect the error distributions of actual generation models .", "To the extent that Ent-C covers intrinsic event-related errors, these are almost exclusively from pronoun swaps.", "Moreover, because CNN /D M and XSUM feature such different errors, a synthetic dataset inspired by observed errors on one setting is not likely to be effective on the other.", "Later (in Section 5.1), we provide further evidence of this mismatch for both datasets: models trained on this synthetic data perform poorly when evaluated on actual generation errors.", "Also, models trained on human annotated XSUM training data do not transfer to the CNN /D M domain.", "Next, we investigate how factuality models trained on these synthetic datasets perform on real generation errors.", "Given a document D , a factuality model predicts whether all the information in a generated summary S is supported by the source document D .", "3 We consider two factuality modeling formulations: (1) a Sentence-Factuality model that 3 Factuality is ill-defined: whether inferences, world knowledge, implicatures, etc. are viewed as factual is not standardized and is dependent on human annotators for each dataset or task.", "However, existing generation models only rarely exhibit tricky cases along these dimensions.", "makes a factuality judgment at the entire summary-level, and (2) an Arc-Factuality model (Goyal and Durrett, 2020a) that makes independent factuality judgments for dependency arcs of the generated summary, which are then combined to obtain a sentence-level decision.", "This helps in localizing factuality errors and was shown to be more effective than sentence-level models in prior work.", "4 4.1 Sentence-Factuality Model Prior work (Kryscinski et al., 2020) used a BERT based sequence-pair classification model (Devlin et al., 2019) as follows: the source document D and the generated summary S are concatenated and fed into a pre-trained transformer encoder model (BERT , ELECTRA , etc.).", "The representation of the [CLS] token is fed into a linear and softmax layer that outputs a probability distribution over the output labels ( y = { Factual, Non-Factual } ).", "This model can be trained on any data with summary-level factuality labels.", "The Dependency Arc Entailment (DAE) model (Goyal and Durrett, 2020a) evaluates factuality at the dependency arc level.", "Let d ( S ) be the dependency-parse of the generated summary S .", "For each arc a d ( S ) , the DAE model predicts whether the relationship described by the arc is entailed by the input document.", "Note that these factuality judgements are made independently for each arc in the summary, and can differ across arcs within the same summary.", "For instance, in the ex-4 We describe models for single-sentence summaries to align with the available human-annotated test set (described later in Section 5.1).", "However, it is straightforward to extend these frameworks for multi-sentence summaries.", "ample in Figure 5, the arc arrested games is non-factual: in context, it is not the case that the games are being arrested.", "However, the arc seven games is supported by the input (there are seven games) 
and hence, entailed.", "The model architecture is detailed in Figure", "5. First, the document D and summary S are concatenated and fed through a pre-trained encoder E .", "Arc representations r a are derived for each dependency arc a d ( S ) : r a = [ E ( D ; S ) a h ; E ( D ; S ) a c ] .", "Here, a h and a c correspond to the head and child words of arc a respectively.", "The arc representation r a is fed into a classification layer that outputs a probability distribution over the output labels ( y a = { Factual, Non-Factual } ).", "Finally, summary-level judgments are extracted from these arc-level decisions: if any dependency arc is non-factual, the generated summary is labeled as non-factual.", "The DAE model is trained from arc-labeled examples of the form ( D, S, { y a } a d ( S ) ) .", "These are derived from either synthetic or human-labeled data, as described in Section", "2. DAE with weak supervision (DAE-Weak) DAE training requires gold annotations at the dependency-level; however, such fine-grained annotations may not always be available.", "We extend the DAE framework to address this.", "The core idea behind our approach is that the sentence-level labels naturally impose loose constraints on the arc-level labels.", "The constraints are as follows: for a factual example, all individual arcs in the summary must be factual.", "For a non-factual example, at least one arc must be non-factual, and this arc should be one not present in the source document.", "The DAE-Weak model is trained to maximize the marginal likelihood of all labelings that obey these constraints.", "Let F be the set of all arcs that should be factual (contains all arcs with sent-label = 1 and arcs common with the source article for sent-label = 0).", "The above constraints are formulated as the following training objective: L = log (cid:34)(cid:89) a FP ( y a = 1 | D, S ) (cid:35) + log 1 (cid:89) a D ( S ) \\ FP ( y a = 1 | D, S ) The second term in the above equation is the probability of predicting at least one non-factual arc in Ent-C Gen-C Majority Label 50 50 Kryscinski et al. (2020) 74.1 Sent-Factuality 72.3 64.4 DAE 76.7 72.1 DAE-Weak 75.2 71.1 Table 2: Label-balanced accuracy of factuality models when trained on synthetic factuality training datasets in the CNN /D AILYMAIL domain.", "CNN /D AILYMAIL First, we compare the performance of the three models (Sent-Factuality, DAE and DAE-Weak) trained on the two synthetic factuality datasets (outlined in Section 2) on the CNN /D AILYMAIL domain.", "We compare their performance on the human-annotated test dataset from Kryscinski et al. 
(2020).", "The test set contains human-annotated sentence-level factuality judgements for 503 (article, summary) pairs for summaries generated using 10 different generation models.", "We use the validation set provided by the authors to choose the best model checkpoint across all settings.", "Similar to the original paper, we report class-balanced accuracy values .", "Table 7 outlines our results.", "The results show that models trained on Ent-C perform slightly better than those trained on Gen-C, but many of the systems are in the same range, with accuracy values of around 75%.", "However, the reported accuracy values on held-out Ent-C/Gen-C examples are consistently over 90% (results included in Appendix B).", "This demonstrates that while models trained on these factuality datasets are able to fit the synthetic data distributions well, these are inherently different from actual generation errors.", "The Appendix also includes graphs of how the human annotated dev set performance varies with training iterations, showing that constant performance on the held-out training set corresponds with highly fluctuating performance on the human annotated data, further 5 This techniques resembles posterior regularization (Ganchev et al., 2010); however, these constraints are enforced in a hard way on individual examples rather than in expectation at the corpus level.", "It can also be viewed as an instance of constraint-driven learning (Chang et al., 2007).", "XSUM Next, we similarly evaluate the synthetic datasets and factuality models on the more challenging XSUM domain.", "Again, we evaluate on a human annotated dataset collected by prior work (Maynez et al., 2020).", "The dataset contains span highlights indicating hallucinated/incorrect content or information with respect to the source article for 4 different summarization models trained on the XSUM domain (as well as for gold summaries).", "Figure 1 illustrates this.", "Similar to prior work, if any word in a summary is marked as hallucinated, we mark the sentence as non-factual.", "Therefore, for XSUM-HUMAN , the annotation is available at both the sentence-level and span-level.", "In total, this dataset contains 2500 ( A, S ) pairs (along with their factuality labels).", "We use 500 examples from these to construct our test dataset.", "The remaining 2000 examples are used to train models, explained in Section 5.2.", "Table 3 outlines the results.", "Unlike on CNN /D M , we see that all models trained on synthetic factuality datasets perform very poorly, achieving close to the majority label baseline.", "Again, the performance on the held-out synthetic datasets was observed to be very high (see Appendix B).", "There is a fundamental difference between the errors that are produced by XSUM summarization models and those introduced by artificial data corruption mechanisms.", "Other data that more closely resembles the generation errors is needed to train factuality models in this setting.", "To investigate whether human annotated data is useful to train factuality models, we train our 3 factuality models on the remaining 2000 human annotated examples from XSUM-HUMAN .", "In order to train DAE model on this dataset, we use the span highlights to derive dependency-level gold annotations, using the same strategy from 2.3 (illustrated Model Balanced-Acc Sent-Factuality 65.6 DAE 78.7 DAE-Weak 70.9 Table 4: Comparison of different factuality models when trained on human annotated data and evaluated on XSUM (compare to Table 3).", "The results are 
"The results are shown in Table 4.", "Comparing these with the results from Table 3, we see that a small number of human annotated examples can outperform large auto-generated training datasets by a large margin.", "Notably, we see that the availability of fine-grained factuality annotations significantly boosts performance, with models that leverage that information (DAE) significantly outperforming sentence-level models.", "Even in the absence of fine-grained annotations, we see that the DAE-Weak model, which decomposes the error computation and explicitly tries to localize errors, is better than the sentence-level model.", "However, these factuality models do not transfer to CNN/DM: the best model achieves an accuracy of 55.9, substantially lower than the 76.7% in Table 2.", "This demonstrates that summarization models make different types of errors on different domains, and data collection and modeling efforts for factuality should account for these differences.", "Our evaluation so far has focused on the sentence-level performance of factuality models.", "Next, we evaluate the models' ability to localize errors within the generated summary as well as show how such a capability can be leveraged to train less error-prone summarization models.", "We evaluate the error localization performance of the models at two granularity levels: (1) dependency arc-level and (2) word-level (we can approximately extract word-level decisions from the dependency-level predictions: if any arc containing word w is non-factual, then w is non-factual; otherwise, it is factual).", "Table 5 outlines the results of our experiments.", "The DAE model outperforms the DAE-Weak model at both levels of granularity.", "This reiterates our earlier claim that fine-grained annotations lead to better factuality models with more reliable localization.", "However, the DAE-Weak model is able to achieve comparable recall at the dependency level; both models are more recall-oriented, which is desirable for certain applications.", "For Section 6.2, we select our DAE model's best checkpoint on the test data (best-ckpt), which achieves a recall of 83.9, a significant gain when we directly optimize for this metric.", "Localizing errors potentially allows for post-hoc correction (Zhao et al., 2020; Cao et al., 2020); however, repairing a summary to be fully factual is a very hard problem and past work has focused on a subset of errors as a result.", "Instead, we show that even our imperfect error localization techniques can be used to meaningfully improve the training data for summarization.", "We use our DAE model to identify unsupported facts in the XSUM training data and ignore the corresponding tokens when training our summarization model.", "Training on a subset of tokens: Summarization models are trained to maximize the log likelihood of the summary given the source article: $\mathcal{L} = \sum_{i=1}^{|S|} \log p(S_i \mid D, S_{1:i-1})$.", "When a word in the summary is non-factual, training on it encourages the model to hallucinate new content.", "In our approach, we modify the training objective to only maximize the likelihood of factual words in the summary, factuality being determined by the DAE model from the previous sections: $\mathcal{L} = \sum_{i=1}^{|S|} M_i \log p(S_i \mid D, S_{1:i-1})$, where $M_i = 1$ if the word $w_i$ is factual, and $M_i = 0$ otherwise.", "A similar objective has been used by prior work (Song et al., 2020b) to encourage the model to copy words present in the source.",
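A minimal sketch of this masked objective as it might appear in a standard seq2seq training loop; the tensor names and shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def masked_nll(logits, target_ids, factual_mask):
    """logits: (T, V) decoder scores for each summary position;
    target_ids: (T,) gold summary token ids;
    factual_mask: (T,) floats, 1.0 where the DAE model judges the token
    factual, 0.0 otherwise (the M_i in the objective above)."""
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    # average only over the tokens we keep
    return -(factual_mask * tok_logp).sum() / factual_mask.sum().clamp(min=1.0)

# Toy usage:
T, V = 5, 100
loss = masked_nll(torch.randn(T, V),
                  torch.randint(V, (T,)),
                  torch.tensor([1., 1., 0., 1., 1.]))
```

Zeroing out the loss on DAE-flagged tokens simply removes the gradient signal that would otherwise teach the model to reproduce hallucinated content in noisy gold summaries.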
from Kang and Hashimoto (2020).", "All models are trained on 50k examples using a BART summarization model initialized from the BART-XSUM-LARGE checkpoint.", "For all these approaches, summaries generated on the original XSUM test set (11k examples) are compared.", "Evaluation: First, we use our trained DAE model to evaluate the performance of our summarization models.", "That is, we generate summaries for all examples in the test set using the three models; the DAE model is then used to compute the word error rate (fraction of words determined to be non-factual according to the DAE model) and the sentence error rate (fraction of sentences determined to be non-factual).", "Table 6 outlines the results, which show that our DAE-masked training leads to better factuality performance.", "Next, we perform human evaluation to compare the factuality of summaries generated by the three models using Amazon Mechanical Turk.", "We randomly sampled 50 articles from the test set and generated summaries corresponding to the 3 models.", "We asked 7 human annotators to classify each (article, summary) pair as either factual (score = 1) or non-factual (score = 0).", "An average score is computed for each summary by aggregating the 7 annotator scores.", "Table 6 reports the average summary scores for the 50 (article, summary) pairs across the 3 summarization models.", "The results show that the proposed approach outperforms both the baseline model and the loss truncation approach.", "This demonstrates that factuality models trained on a small number of annotated examples can be used to train factual summarization models, even when the underlying summarization dataset is noisy.", "Earlier work on abstraction (Barzilay et al., 1999; Carenini and Cheung, 2008) and compression (Knight and Marcu, 2000; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012; Durrett et al., 2016) in summarization has typically focused evaluation on content selection and grammaticality, with little heed paid to factuality.", "Human evaluation similarly focused on content selection (Gillick and Liu, 2010).", "Methods such as Pyramid (Nenkova and Passonneau, 2004) that could in principle have been used to evaluate factuality were primarily used to understand content selection.", "Recent work has explored different methods for enforcing factuality: modifying the model, such as encoding SRL structures in the input (Cao et al., 2018), post-hoc correction (Dong et al., 2020), or constrained decoding (Song et al., 2020a; Mao et al., 2020).", "However, these techniques fundamentally struggle to handle the whole range of factual errors; factuality is a fuzzy notion and cannot be easily encapsulated into a set of discrete rules.", "Faithfulness and factuality have also been tackled in related tasks, including summarizing radiology reports (Zhang et al., 2020b) and data-to-text generation tasks (Tian et al., 2019).", "Another recent line of work has looked at fact verification (Thorne et al., 2018; Nie et al., 2019; Atanasova et al., 2020).", "In this literature, the claims are usually human-authored and a straightforward statement of a fact, whereas generated summaries might feature claims buried in nominal modifiers like 'two-time winner'.", "In this work, we showed that existing synthetic datasets are not well-suited to factuality evaluation of recent summarization models (like BART) in challenging domains (like XSUM).", "Models trained on human-annotated data, especially those that leverage fine-grained annotations, can enable
training of more factual summarization models.", "We hope future work will explore better modeling and data creation to address the pressing issues in current systems.", "This work was partially supported by NSF Grant IIS-1814522, a gift from Salesforce Inc, and an equipment grant from NVIDIA.", "Thanks as well to Jiacheng Xu and the anonymous reviewers for their helpful comments." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "result", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "result", "abstain", "objective", "other", "other" ]
[ "Transformers have advanced the field of natural language processing (NLP) in many ways.", "At the heart of the Transformer architecture is the multi-head attention (MHA) mechanism which models pairwise interactions between the elements of the sequence.", "Despite its massive success, the current framework ignores interactions among different heads, leading to the problem that many of the heads are redundant in practice, which underutilizes the capacity of the model.", "To improve parameter efficiency, we re-formulate the MHA as a latent variable model from a probabilistic perspective.", "We present c ascaded head-c o lli d ing a ttention ( CODA ) which explicitly models the interactions between attention heads through a hierarchical variational distribution.", "We conduct extensive experiments and demonstrate that CODA outperforms the transformer baseline, by 0 .", "6 perplexity on Wikitext-103 in language modeling, and by 0 .", "6 BLEU on WMT14 EN-DE in machine translation, due to its improvements on the parameter efficiency.", "1 1 Introduction Transformers (Vaswani et al., 2017) have advanced the field of natural language processing (NLP) on a variety of important tasks, including language modeling (Dai et al., 2019; Baevski and Auli, 2019), language understanding (Devlin et al., 2019; Yang et al., 2019b), and machine translation (Vaswani et al., 2017; Dehghani et al., 2019; Liu et al., 2020).", "It has also found its place in computer vision (Dosovitskiy et al., 2020), and in intelligent agents (Vinyals et al., 2019) where sequence modeling plays a key role as well.", "The cornerstone of the transformer architecture is the multi-head attention (MHA) mechanism which models pairwise interactions between the elements of the sequence.", "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors.", "The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.", "A multi-head attention (MHA) mechanism extends the idea through performing multiple separately parameterized attention functions acting in parallel to contextualize the input representations.", "Their outputs are then gathered by an affine transformation, allowing the model to jointly attend to information from different representation subspaces at different positions.", "Despite its massive success, the current framework ignores the interactions among different heads, leading to the problem that many of the heads are redundant in practice (i.e., attending to the same regions of the sequence), which underutilizes the capacity of the model (Voita et al., 2019; Michel et al., 2019a).", "At the same time, recent research (Tang et al., 2018; Clark et al., 2019; Voita et al., 2019; Wu et al., 2020, inter alia ) demonstrates that heads in MHA have the potential to capture distinct information from input sequences, ranging from syntactic and semantic features to alignment information between source and target sentence pairs.", "These observations suggest that multiple heads should be encouraged to extract complementary information.", "Therefore, it is highly appealing to take into account the interactions among different attention heads from the perspective of parameter efficiency and the expressiveness of the model.", "In this work, we introduce head-colliding attention (3).", "We formulate MHA as a probabilistic model, 
where each attention head is represented by a latent variable and all of them collide into the observed sequence data (Figure 1a).", "In this probabilistic graphical model structure, attention heads work as individual factors to explain the data.", "Although each factor is independent of the others a priori, they interact with each other automatically, conditioning on observations, thanks to the explaining-away effect (Pearl, 1989; Wellman and Henrion, 1993).", "The head-colliding attention mechanism introduces new computational challenges in training the model.", "We will discuss how we tackle these using variational methods (Blei et al., 2017).", "We propose cascaded head-colliding attention (CODA, Figure 1b).", "As our main model, CODA adopts a hierarchical variational distribution (Ranganath et al., 2016) to allow both rich head interactions and effective computations (Section 4).", "We validate our method in language modeling and machine translation experiments (Section 5).", "CODA outperforms the vanilla MHA transformer on both tasks,", "on Wikitext-103 by 0.6 perplexity and on WMT14 EN-DE", "by 0.6 BLEU.", "Further analysis shows that CODA learns to encourage diversity in different heads (Figure 2) and to promote parameter efficiency when increasing the number of heads (Section 5.3).", "The multi-head attention (MHA) mechanism plays an important role in the modern transformer architecture (Vaswani et al., 2017).", "It extends the classical attention mechanism by running multiple attention function heads in parallel.", "An MHA module is composed of h identical blocks (usually referred to as attention heads).", "Each head will generate a hidden state $H_i$ based on the input Query, Key and Value matrices, denoted as Q, K, and V respectively.", "The hidden states from different heads are then aggregated as the output of the MHA module: $\sum_{i=1}^{h} H_i W_i^o$, where the $W_i^o$ are model parameters.", "In the i-th head, the input matrices Q, K and V are first linearly projected into different subspace representations $\tilde{Q}_i$, $\tilde{K}_i$, and $\tilde{V}_i$, based on different learnable parameters.", "After that, we compute the inner product over all projected queries and keys as the attention logits $z_i$, which are then passed through a row-wise softmax to obtain the head attention weights $a_i$: $a_i = \mathrm{softmax}(z_i) = \mathrm{softmax}(\tilde{Q}_i \tilde{K}_i^T)$ (1).", "As we can see, the core of MHA is to calculate $a_i$ in each head.", "We thus refer to $a_i$ as the i-th attention head.", "In sequence prediction tasks, the model takes as input a source sequence of length m and outputs a target sequence of length n in an autoregressive manner.", "It predicts each token Y within the target sequence through a categorical distribution $p_{\text{vanilla}}(Y \mid X)$, where X includes the source sequence as well as a previously generated prefix.", "With respect to an MHA block $a_1, \ldots, a_h$, the model predicts target tokens Y by first feeding these heads into a complex non-linear transformation denoted by $\phi(\cdot)$, and then passing it through a softmax function over the entire vocabulary.", "Therefore, the output probability can be written as $p_{\text{vanilla}}(Y \mid X) = f(a_1, \ldots, a_h)$, where $f(a_1, \ldots, a_h) := \mathrm{softmax}(\phi(a_1, \ldots, a_h))$.", "In this section, we introduce head-colliding attention.", "Specifically, we formulate MHA as a probabilistic model, where each attention head is represented by a latent variable.", "The name reflects a collider in the context of probabilistic graphical models (Figure 1a).", "We will first explain how head-colliding attention permits the modeling of interactions among different heads and then discuss how vanilla MHA can be viewed as a marginalized version of head-colliding attention, which ignores any head interactions.", "Considering a single MHA block, we cast each attention head $a_i$ as a latent variable.", "The probability of target Y conditioned on input X can be obtained by marginalizing over all heads A (we denote $A := \{a_1, \ldots, a_h\}$): $p(Y \mid X) = \int_A p(Y \mid A, X)\, p(A \mid X)\, dA = \mathbb{E}_{p(A \mid X)}[f(A)]$.", "$p(A \mid X)$ is the joint prior distribution.", "The corresponding directed graphical model is demonstrated in Figure 1a, where the links from different heads collide on the observation variable Y.", "(Since a transformer typically stacks several attentive layers, for an MHA block in some layer, subsequent layers will induce a non-linear transformation $\phi(\cdot)$ for its attention heads.", "For instance, $\phi(\cdot)$ may include several other MHA blocks and feed-forward networks.)", "A crucial property of this graphical model is the explaining-away effect (Pearl, 1989; Wellman and Henrion, 1993) of the attention heads A when observing the output Y.", "In other words, if a head $a_i$ attends to a part of the input which accords well with the observation, it immediately discourages other heads from attending to the same part of the input but encourages them to look into complementary information.", "This mechanism effectively reduces head redundancy and in turn improves parameter efficiency.", "Vanilla vs. head-colliding attention: We now take a closer look at the vanilla MHA (Section 2).", "Recall that in vanilla MHA, all attention heads are deterministic.", "From the perspective of latent variable models, this is computationally equivalent to taking expectations of the latent head variables.", "The output probability distribution $p_{\text{vanilla}}(Y \mid X)$ can then be expressed as: $f(\mathbb{E}_{p(a_1 \mid X)}[a_1], \ldots, \mathbb{E}_{p(a_h \mid X)}[a_h])$ (2).", "This means we are only interested in the individual expectations when using the attention heads in vanilla MHA for predictions.", "On the contrary, in head-colliding attention the distribution of Y is defined as: $p(Y \mid X) = \mathbb{E}_{p(a_1, \ldots, a_h \mid X)}[f(a_1, \ldots, a_h)]$.", "(In other words, if we confirm that some head accords well with the observation, then the probability of other heads should be reduced, since there is less need to invoke them, according to Occam's razor.)", "Note the inherent difference in when the expectation is taken in vanilla and head-colliding attention.", "Since $f(\cdot)$ is a complex non-linear function (Section 2), these two formulations are not equivalent in general and there may be a large gap between the two distributions.", "Concretely, vanilla MHA ignores any possible interactions among different heads.", "As indicated in Equation 2, it first marginalizes out every single head before observing targets: one head will not learn what other heads are attending to, despite the fact that Y is observed.", "This is why vanilla MHA is prone to redundancy, as many previous studies (Voita et al., 2019; Michel et al., 2019a, inter alia) discovered.", "Head-colliding attention, on the other hand, permits rich head interactions due to the expressive non-linear function $f(\cdot)$ inside the expectation over the different latent variables $a_1, \ldots, a_h$.", "However, the complexity of head interactions also leads to intractability in training the model, which we will discuss in the next section.", "We train the model by performing maximum likelihood estimation.", "Here, the log marginal likelihood can be expressed as: $\log p(Y \mid X) = \log \mathbb{E}_{p(A \mid X)}[p(Y \mid A, X)]$.", "Unfortunately, this is intractable in general because it requires marginalizing over all possible configurations of attention heads.", "The standard technique is to use variational inference, which optimizes the log marginal by maximizing its evidence lower bound (ELBO) (Blei et al., 2017): $\mathcal{L} := \mathbb{E}_{q(A \mid X)}\left[\log \frac{p(Y \mid A, X)\, p(A \mid X)}{q(A \mid X)}\right] = \log p(Y \mid X) - \mathrm{KL}(q(A \mid X) \,\|\, p(A \mid X, Y)) \leq \log p(Y \mid X)$ (3), where $q(A \mid X)$ is the variational distribution over the latent variables A.", "$p(A \mid X, Y)$ is the intractable posterior distribution of all heads given the observations Y and the input X, which encodes the rich head interactions we desire, as discussed in Section 3.", "(Although the variational distribution q should depend on the target Y in principle, such conditioning renders testing difficult since the target information is not available during testing.", "For this reason, we only consider the source X hereafter.)", "Therefore, an ideal variational distribution $q(A \mid X)$ should be close to the true posterior $p(A \mid X, Y)$.", "In this case, the samples would accurately reflect the head interactions and the variational distribution would yield a tighter bound on $\log p(Y \mid X)$ to facilitate the training.", "A straightforward choice of $q(A \mid X)$ is to use the mean-field approximation (Kingma and Welling, 2013): $q(A \mid X) = q(a_1, a_2, \ldots, a_h \mid X) = \prod_{i=1}^{h} q(a_i \mid X)$.",
"However, it has similar drawbacks as the vanilla MHA.", "The mean-field approximation assumes the independence of different heads and hence the interactions are greatly limited.", "Alternatively, one could parameterize $q(A \mid X)$ using an auto-regressive model.", "Although this is much more expressive, its sequential nature severely slows down training, making it infeasible in practice.", "Cascaded head-colliding attention: Our solution to this problem is to employ hierarchical structures for head-colliding attention, where interactions among heads can be effectively incorporated into the model (Sønderby et al., 2016; Ranganath et al., 2016).", "Conveniently, the hierarchical nature of the transformer architecture offers an effective way of constructing such proposal distributions.", "Given a transformer with L layers, we denote the sets of all attention heads at layers l-1 and l as $A^{l-1}$ and $A^l$, respectively.", "Following the bottom-up computation of the transformer, the distribution of $A^l$ must rely on the instantiated values of $A^{l-1}$.", "In this sense, $A^{l-1}$ can be seen as the common variables that govern $A^l$ (Figure 1b).", "Formally, we have: $q(A^1, \ldots, A^L \mid X) = q(A^1 \mid X) \prod_{j=2}^{L} q(A^j \mid X, A^{j-1})$.", "Despite the fact that each attention head $a_i^l \in A^l$ at the l-th layer is conditionally independent given $A^{l-1}$, the heads become dependent when we marginalize $A^{l-1}$ out.", "(Note that the vanilla MHA does not define distributions over heads in its original context.)", "In particular, the marginal distribution of each $A^l$ becomes: $q(A^l \mid X) = \int_{A^{l-1}} q(A^{l-1} \mid X)\, q(A^l \mid X, A^{l-1})\, dA^{l-1}$.", "This corresponds to an infinite mixture of the mean-field distributions $q(A^l \mid X, A^{l-1})$ and is able to capture rich head interactions (Ranganath et al., 2016).", "Our main model adopts this cascaded proposal distribution in Figure 1b, and therefore we name it cascaded head-colliding attention (CODA).", "The only problem left now is how to specify the conditional distribution $q(A^l \mid X, A^{l-1})$ for all $l = 1, 2, \ldots, L$.", "We first impose the basic constraints on head values as in vanilla MHA; that is, all head values must lie within the simplex $\Delta^{n-1}$: $\Delta^{n-1} = \{A^l \mid \sum_{k=1}^{n} a^l_{i,:k} = \mathbf{1},\ i = 1, \ldots, h\}$.", "Here $a^l_{i,:k}$ is the k-th column of the i-th attention head at layer l and $\mathbf{1}$ denotes the vector of all 1's.", "For efficient training and inference, we adopt Gaussian-logistic distributions (Blei and Lafferty, 2006; Cohen et al., 2008), which not only satisfy the constraints above but also benefit from the effective reparameterization trick (Kingma and Welling, 2013; Rezende et al., 2014; Titsias and Lázaro-Gredilla, 2014).", "In particular, recall that in vanilla MHA, $a_i = \mathrm{softmax}(z_i) = \mathrm{softmax}(\tilde{Q}_i \tilde{K}_i^T)$ (Equation 1).", "We also denote the attention logits at the l-th layer as $Z^l := \{z_1^l, \ldots, z_h^l\}$.", "For head i at layer l, we first sample from a multivariate Gaussian distribution $q(z^l_{i,j:} \mid z^{l-1}_{i,j:})$ and pass the samples through a row-wise softmax function to yield head values: $z^l_{i,j:} \sim \mathcal{N}(\mu^l_{i,j:}, \Sigma)$, $a^l_{i,j:} = \mathrm{softmax}(z^l_{i,j:})$, where $z^l_{i,j:}$ and $a^l_{i,j:}$ represent the j-th row of the i-th attention logit and attention head at layer l, respectively.", "To explicitly model hierarchical structures among attention heads, we propose to add a direct connection between attention heads at adjacent layers (Figure 1b).", "Such connections offer direct access to the information of attention in the previous layer.", "Specifically, for each head i at layer l we set the mean $\mu_i^l$ as the sum of two parts: $\mu_i^l = \underbrace{\tilde{Q}_i \tilde{K}_i^T}_{\text{vanilla MHA}} + \underbrace{\phi_i(Z^{l-1})}_{\text{direct connection}}$ (4), where $\phi_i(\cdot)$ is a two-layer multilayer perceptron (MLP) used to fuse information from the different heads $Z^{l-1}$ (see the cascading connections in Figure 1b for an illustration).", "(We only explicitly define the attention logits z as random variables, while the distribution of the heads a is induced via a deterministic transformation (i.e., the softmax function) of z; therefore it suffices to build dependencies between the attention logits instead.)", "We set the covariance matrix $\Sigma$ to the identity matrix for all attention logits.", "We give the prior the same form as the variational posterior, and parameters are shared between $q(A^1, \ldots, A^L \mid X)$ and $p(A^1, \ldots, A^L \mid X)$ for our objective (Equation 3).", "With the help of parameter sharing, the KL term in Equation 3 is also cancelled out due to the identical distributions.", "(Therefore, the objective can also be derived by directly applying Jensen's inequality on the log marginal likelihood.)", "This choice works well in practice, where it not only allows CODA to use almost the same amount of parameters as the vanilla Transformer, but also eliminates the need to invoke advanced training techniques for amortized variational inference.", "(For instance, training a standard variational auto-encoder (VAE) for NLP tasks often suffers from the posterior collapse problem due to the heavy KL regularization (Bowman et al., 2016), where tricks such as KL annealing have to be used to achieve good performance.)", "More details can be found in Appendix A.", "We conduct experiments on language modeling and machine translation tasks.", "Datasets: First, we conducted experiments for token-level language modeling on the large-scale benchmark dataset Wikitext-103 (Merity et al., 2016), which consists of articles from Wikipedia with around 103M/218K/246K tokens for the training/validation/testing splits, respectively.", "The vocabulary size is 267,744.", "For machine translation, we consider two standard datasets: WMT14 EN-DE (Bojar et al., 2014), which contains about 4.5M/3K/3K sentence pairs for the training/validation/testing splits, respectively.", "We follow Ott et al. (2018) and Peng et al. (2020) to preprocess the dataset, and obtain a shared vocabulary between the source and target languages of around 32K byte pair encoding (BPE, Sennrich et al. (2016)) types.", "IWSLT14 DE-EN (Cettolo et al., 2014).", "Following standard practice (Edunov et al., 2018; Peng et al., 2020), we pre-process the 160K/7K/7K sentence pairs and build training/validation/testing sets accordingly.", "This generates a vocabulary of around 9K (7K) BPE types for the source (target).", "Implementation details: We implement our model with PyTorch (Paszke et al., 2019) and the FairSeq toolkit (Ott et al., 2019).", "In particular, our model is based on the vanilla transformer architecture (Vaswani et al., 2017).", "For CODA, we replace all vanilla MHA blocks with the cascaded head-colliding attention, for both self-attention and cross-attention (if any).", "In language modeling, we use adaptive input embeddings (Baevski and Auli, 2019) and set the context size to 512 and 480 for training and testing respectively, due to constraints on computational resources.", "In machine translation, we set the beam size to 5 and adopt the hyper-parameters from Peng et al. (2020) for IWSLT14 DE-EN.", "For WMT14 EN-DE we set the beam size to 4, the length penalty to 0.6, and average the last 10 checkpoints for testing, following Vaswani et al. (2017).", "Further implementation details can be found in Appendix A.", "The results of language modeling on the Wikitext-103 dataset are reported in Table 1.", "As we can see from the table, CODA barely introduces any additional parameters.", "However, by taking into account head interactions, CODA significantly outperforms TRANSFORMER by over 0.6 perplexity.", "For reference, we also report the best setting (denoted by TRANSFORMER) in Baevski and Auli (2019), which uses a much larger context size (3072/2560 vs. 512/480 for training/testing); CODA still outperforms it by a substantial margin of 0.3 perplexity.", "This indicates that encouraging head interactions can improve parameter efficiency.", "To show whether CODA has promoted head interactions and reduced head redundancy, we qualitatively visualize the attention heads in both CODA and TRANSFORMER via heatmaps.", "Concretely, we compute the Jensen-Shannon Divergence (JSD) between each pair of attention heads at the same layer.", "In particular, we assume head values define a categorical distribution in both the TRANSFORMER and CODA models to facilitate comparison.", "That is, an attentive head $a_i$ induces n categorical distributions, one for each query position.", "For the j-th distribution, it indicates how the j-th target position attends to all m source positions, and is denoted by $p(x \mid a_{i,j:})$.", "For two heads i and i', we first compute their average distribution as $m := \frac{p(x \mid a_{i,j:}) + p(x \mid a_{i',j:})}{2}$.", "Then the JSD value between the i-th and i'-th attention heads is computed by summing over all n induced distributions: $\sum_{j=1}^{n} \frac{1}{2}\big(\mathrm{KL}(p(x \mid a_{i,j:}) \,\|\, m) + \mathrm{KL}(p(x \mid a_{i',j:}) \,\|\, m)\big)$.", "We average the computed JSDs over all validation samples.", "Note that a larger JSD value (darker color) indicates that two heads are behaving more differently (i.e.
less redundancy between them), and vice versa.", "As shown in Figure 2, the JSD heatmaps for CODA are clearly darker than those for TRANSFORMER.", "This suggests that CODA permits richer head interactions, which fosters communication between different heads and encourages them to become complementary.", "Consequently, our model effectively reduces head redundancy in MHA and improves parameter efficiency.", "The results on the IWSLT14 DE-EN and WMT14 EN-DE datasets are shown in Table 2.", "We see that CODA exhibits clear improvements over TRANSFORMER: a 1.1 point gain in BLEU on the IWSLT14 DE-EN dataset and a 0.6 BLEU improvement on the WMT14 EN-DE dataset.", "Despite such significant gains over the baseline, CODA introduces very few additional parameters (e.g., 0.03% extra parameters on IWSLT14 DE-EN).", "This, again, shows that CODA is more parameter-efficient than the vanilla Transformer due to the cascaded head-colliding attention we proposed.", "Similar to the experiments on language modeling, we also visualize the head behaviors to measure attentive head interactions (see Figure 5 and Figure 6 in Appendix B), where we observe similar phenomena on the translation tasks.", "Specifically, different heads in CODA are often complementary to each other and focus on quite different regions of the sequences, rather than becoming redundant or even identical, as observed in TRANSFORMER models.", "Although one would hope that increasing the number of heads in MHA leads to a free ride to better performance, in practice it is often not the case, as vanilla MHA suffers from the problem of parameter redundancy.", "Following Vaswani et al. (2017), we vary the number of attention heads (4, 8, 16, 32), but keep the amount of computation constant.", "Our results on IWSLT14 DE-EN are shown in Table 3.", "We observe that the translation quality of the baseline transformer (which uses vanilla MHA as its main building block) decreases almost linearly with an increasing number of attention heads (Figure 3), which agrees with previous studies (Vaswani et al., 2017; Voita et al., 2019; Michel et al., 2019b).", "Intuitively, since the total number of parameters in the model remains unchanged, more heads mean that the number of parameters allocated to each head is reduced, which limits the representational power of every single attention head.", "Due to the independence assumption between the heads, many of them tend to focus on similar regions of the sequence, leading to a great waste of modeling capacity.", "In the case of CODA, we observe better BLEU scores in response to an increasing head number.", "Rich interactions in CODA could encourage different heads to cover broader regions of the input sequence, which in turn offers more useful information for training.", "The perplexity (PPL) reflects ...", "[Figure 2: Jensen-Shannon Divergences (JSD) for each pair of attention heads at all 16 layers on the Wikitext-103 validation dataset.]", "In this section, we present an ablation study to investigate the effects of the different components in CODA.", "Concretely, we compare four models on the IWSLT14 DE-EN machine translation task:", "(i) the full model CODA,", "(ii) a variant of CODA ablating the cascaded structure (Section 4),", "(iii) a variant of CODA without using head-colliding attention (Section 3), and", "(iv) the baseline TRANSFORMER model.", "In more detail, for model (ii) we remove the second term in Equation 4, which turns off the direct cascading structure, while still being a proper hierarchical latent variable model.", "(Note that the first term $\tilde{Q}_i \tilde{K}_i^T$ in Equation 4 also depends on the instantiated value of $z^{l-1}_{i,j:}$, which induces an implicit hierarchical dependency for attention between adjacent layers.)", "In model (iii), attention heads are deterministic (instead of being latent variables) as in vanilla Transformers, but cascading connections are incorporated.", "We observe its close connection with the recently proposed REALFORMER (He et al., 2020), a TRANSFORMER model that adds a residual connection between attention logits at adjacent layers.", "Since in model (iii) all attention heads are deterministic, it is unnecessary to fuse different heads (see Section 4).", "In this case, we simply implement model (iii) as a REALFORMER (and thus refer to it as REALFORMER hereafter) to demonstrate the effect of cascading-like structures more clearly.", "(The main difference between the residual connections in REALFORMER and the cascading connections in CODA is that the former directly performs a head-wise addition of previous-layer attention logits; in contrast, our cascading connection makes use of an MLP $\phi(\cdot)$ to mix different attention heads, which enhances head interactions for CODA.)", "We report the BLEU score for translation quality, and the Jensen-Shannon Divergence (JSD) averaged over all head pairs of all MHA blocks for a quantitative evaluation of head interactions.", "As demonstrated in Table 4 and Figure 4, even without cascading connections for explicit hierarchical structures, head-colliding attention has the ability (albeit limited) to induce reasonable correlations among different heads, reflected in the average JSD.", "This is due to the explaining-away effects and the native hierarchical structure in the transformers, as discussed in Section 3.", "In CODA, because individual heads have access to the other heads from a probabilistic perspective, they are more prone to offering complementary information for each other to jointly explain the observed data.", "This effect is further enhanced when cascading connections are added to the model.",
"In contrast, if we simply incorporate such cascading connections into a vanilla TRANSFORMER model, we find that doing so does not significantly encourage head interactions and only improves the baseline marginally.", "In this case, the performance improvement might be mainly due to the residual connections, which are often considered effective in facilitating training (He et al., 2016).", "Interestingly, we note a positive correlation between average JSD and BLEU, suggesting that encouraging complementary attention heads may help improve translation quality.", "Attention mechanisms were first applied to recurrent networks by Bahdanau et al. (2014).", "The idea was then extended to multi-head attention (MHA) and became the key component in transformer architectures (Vaswani et al., 2017).", "To study the utility of multiple attention heads, Voita et al. (2019) focused on identifying the individual contributions of each attention head.", "Michel et al. (2019a) conducted extensive experiments to demonstrate that pruning out most heads after training does not lead to a drop in performance during inference.", "You et al. (2020) further revealed that replacing learnable attention heads with samples from fixed Gaussian distributions can achieve almost the same performance as the original models.", "Additionally, Behnke and Heafield (2020) proposed to iteratively prune attention heads during training based on the lottery ticket hypothesis.", "These works indicate that there is a lot of head redundancy in MHA transformer architectures.", "Instead of pruning unnecessary parameters and down-sizing transformer models, there are also works that propose to improve parameter efficiency in transformers.", "For instance, Li et al. (2018) introduced a regularization term to explicitly promote diversity among different heads.", "Yang et al. (2019a) proposed to use convolutional kernels to capture correlations among not only local windows of sequences, but also different heads.", "An et al. (2020) considered each head as a sample from the same distribution, and presented a sampling algorithm that prevents samples from collapsing into local modes.", "It hence explicitly encouraged repulsiveness in MHA.", "Besides, MAE (Peng et al., 2020) converted a vanilla MHA into a mixture-of-experts model, where each expert component activates only a subset of attention heads.", "With learned probabilities, different experts can be specialized on different inputs.", "Different from these works, CODA does not explicitly promote head diversity nor specialize different heads.", "Instead, we focus on studying head interactions from a probabilistic perspective, which reveals the close connection between vanilla MHA and CODA.", "Another line of research related to our work is to incorporate latent variables into attention modules.", "Xu et al. (2015) investigated the connection between vanilla deterministic single-head attention and its stochastic counterpart.", "Deng et al.
(2018) explored this further and proposed to use variational inference techniques for training the model.", "They considered both the discrete and continuous latent variable cases.", "Bayesian attention modules (Fan et al., 2020) introduced continuous latent distributions for attention that are amenable to reparameterization tricks.", "Our work is different from them in that we mainly investigate the MHA mechanism and aim to improve parameter efficiency by recovering potential interactions among different heads, which are ignored in vanilla MHA.", "Concurrently, He et al. (2020) proposed to add residual connections between attention scores at adjacent layers, similar to our cascading connections.", "Nevertheless, our motivation for using the cascaded structure is quite different: we aim to construct direct hierarchical dependencies for latent variable models, while He et al. (2020) is mainly motivated by improving transformer architectures and obtaining performance gains.", "We present CODA by re-formulating the multi-head attention (MHA) as a latent variable model from a probabilistic perspective.", "CODA explicitly models the interactions among attention heads through a hierarchical variational distribution.", "We conduct extensive experiments and demonstrate that CODA outperforms the transformer baseline in language modeling and machine translation.", "The analysis shows that CODA learns to encourage diversity in different heads and to promote parameter efficiency when increasing the number of heads.", "In this framework, we will be able to impose explicit constraints or regularization on different attention heads in a principled way (e.g., informative priors that promote diversity).", "Besides, we can also consider more expressive (data-driven) variational distributions.", "We leave these for future work.", "Our code is publicly available at https://github.com/LZhengisme/CODA .", "We thank the anonymous reviewers whose suggestions helped clarify this work.", "This research was supported in part by the University of Hong Kong Research Committee under account 104006039.111994.14200.301.01." ]
[ "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Detecting stance on Twitter is especially challenging because of the short length of each tweet, the continuous coinage of new terminology and hashtags, and the deviation of sentence structure from standard prose.", "Fine-tuned language models using large-scale in-domain data have been shown to be the new state-of-the-art for many NLP tasks, including stance detection.", "In this paper, we propose a novel BERT-based fine-tuning method that enhances the masked language model for stance detection.", "Instead of random token masking, we propose using a weighted log-odds-ratio to identify words with high stance distinguishability and then model an attention mechanism that focuses on these words.", "We show that our proposed approach outperforms the state of the art for stance detection on Twitter data about the 2020 US Presidential election.", "Stance detection refers to the task of classifying a piece of text as either being in support, opposition, or neutral towards a given target.", "While this type of labeling is useful for a wide range of opinion research, it is particularly important for understanding the public's perception of given targets, for example, candidates during an election.", "For this reason, our focus in this paper is on detecting stance towards political entities, namely Joe Biden and Donald Trump during the 2020 US Presidential election.", "Stance detection is related to, but distinct from the task of sentiment analysis, which aims to extract whether the general tone of a piece of text is positive, negative, or neutral.", "Sobhani and colleagues (Sobhani et al., 2016) show that measures of stance and sentiment are only 60% correlated.", "For example, the following sample tweet 1 has an 1 All of the sample tweets in this paper are invented by the authors.", "obvious positive sentiment, but an opposing stance towards Donald Trump.", "Stance detection is an especially difficult problem on Twitter.", "A large part of this difficulty comes from the fact that Twitter content is short, highly dynamic, continually generating new hashtags and abbreviations, and deviates from standard prose sentence structure.", "Recently, learning models using pre-training (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019) have shown a strong ability to learn semantic representation and outperform many state-of-the-art approaches across different natural language processing (NLP) tasks.", "This is also true for stance detection.", "The strongest models for stance detection on Twitter use pre-trained BERT (Ghosh et al., 2019; Sen et al., 2018).", "A recent study that proposed models for sentiment analysis (Tian et al., 2020) showed that focusing the learning model on some relevant words, i.e. 
sentiment words extracted using Pointwise Mutual Information (PMI) (Bouma, 2009), performed better than using the standard pre-trained BERT model.", "We are interested in understanding whether or not focusing attention on specific stance-relevant vocabulary during the learning process will improve stance detection.", "To accomplish this, we consider the following two questions.", "First, how do we identify the most important stance-relevant words within a data set?", "And second, how much attention needs to be paid to these words versus random domain words?", "Toward that end, we propose building different knowledge-enhanced learning models that integrate an understanding of important context-specific stance words into the pre-training process.", "While we consider PMI as a way to identify important stance words, we find that using the log-odds-ratio performs better.", "We also consider different options for fine-tuning an attention-based language model.", "To fine-tune an attention-based language model for a specific task, the most common approach is to fine-tune using unlabeled data with random masking (Devlin et al., 2019; Liu et al., 2019).", "Because of the noise within social media posts, random tokens that are not task-relevant can impact the sentence representation negatively.", "Therefore, instead of letting the model pay attention to random tokens, we introduce Knowledge Enhanced Masked Language Modeling (KE-MLM), where significant tokens generated using the log-odds-ratio are incorporated into the learning process and used to improve a downstream classification task.", "To the best of our knowledge, this is the first work that identifies significant tokens using the log-odds-ratio for a specific task and integrates those tokens into an attention-based learning process for better classification performance.", "In summary, we study stance detection on English tweets and our contributions are as follows.", "(i) We propose using the log-odds-ratio with Dirichlet prior for knowledge mining to identify the most distinguishable stance words.", "(ii) We propose a novel method to fine-tune a pre-trained masked language model for stance detection that incorporates background knowledge about the stance task.", "(iii) We show that our proposed knowledge mining approach and our learning model outperform the fine-tuned BERT in a low-resource setting in which the data set contains 2500 labeled tweets about the 2020 US Presidential election.", "(iv) We release our labeled stance data to help the research community continue to make progress on stance detection methods (https://github.com/GU-DataLab/stance-detection-KE-MLM).", "In the NLP community, sentiment analysis is a more established task that has received more attention than stance detection.", "A sub-domain of sentiment analysis is target-directed or aspect-specific sentiment, which refers to the tone with which an author writes about a specific target/entity or an aspect of a target (Mitchell et al., 2013; Jiang et al., 2011).", "One common use case is breaking down sentiment toward different aspects of a product in reviews, e.g., the price of a laptop versus its CPU performance (Schmitt et al., 2018; Chen et al., 2017; Poddar et al., 2017; Tian et al., 2020).", "Different approaches have been proposed to tackle this problem.", "Chen and colleagues combine attention with recurrent neural networks (Chen et
al., 2017).", "Schmitt and colleagues propose combining a convolutional neural network and fastText embeddings (Schmitt et al., 2018).", "A recent study proposes modifying the learning objective of the masked language model to pay attention to a specific set of sentiment words extracted by PMI (Tian et al., 2020).", "The model achieves new state-of-the-art results on most of the test data sets.", "Because stance is a different task, we will adjust their target-directed sentiment approach for stance and compare to it in our empirical evaluation.", "(Stance detection aims to detect the opinion s towards a specific target e, while aspect-based sentiment focuses on extracting the aspect a of the target e and the corresponding opinion s (Wang et al., 2019).)", "The most well-known data for political stance detection is published by SemEval 2016 (Mohammad et al., 2016b; Aldayel and Magdy, 2019).", "The paper describing the data set provides a high-level review of approaches to stance detection using Twitter data.", "The best user-submitted system was a neural classifier from MITRE (Zarrella and Marsh, 2016), which utilized a language model pre-trained on a large amount of unlabeled data.", "An important contribution of this study was using pre-trained word embeddings from an auxiliary task where a language model was trained to predict a missing hashtag from a given tweet.", "The runner-up model was a convolutional neural network for text classification (Wei et al., 2016).", "Following the MITRE model, a number of both traditional and neural models were proposed for stance detection.", "A study focusing on traditional classifiers proposed using a support vector machine (SVM) with lexicon-based features, sentiment features, and a textual entailment feature (Sen et al., 2018).", "Another SVM-based model consisted of two-step SVMs (Dey et al., 2017).", "In the first step, the model predicts whether an input sequence is relevant to a given target.", "The next step detects the stance if the input sequence is relevant.", "The target-specific attention neural network (TAN) is a novel bidirectional LSTM-based attention model.", "In this study, Du and colleagues trained it on unpublished unlabeled data to learn the domain context (Du et al., 2017).", "Recently, a neural ensemble model consisting of bi-LSTM, nested LSTMs, and an attention model was proposed for stance detection on Twitter (Siddiqua et al., 2019).", "The model's embedding weights were initialized with the pre-trained embeddings from fastText (Bojanowski et al., 2017).", "The emergence of transformer-based deep learning models has led to large improvements on many NLP tasks, including stance detection (Ghosh et al., 2019; Küçük and Can, 2020; AlDayel and Magdy, 2020).", "BERT (Devlin et al., 2019) is the most used deep transformer encoder.", "More specifically, BERT uses Masked Language Modeling (MLM) to pre-train a transformer encoder by predicting masked tokens in order to learn the semantic representation of a corpus.", "Ghosh and colleagues (Ghosh et al., 2019) show that the original pre-trained BERT, without any further fine-tuning, outperforms other former state-of-the-art models on the SemEval set, including the model that utilizes both text and user information (Del Tredici et al., 2019).", "Because we are interested in the 2020 US Presidential election and many temporal factors relevant to stance exist (e.g.
background corpus.", "To measure the significance of each word, we first compute the variance ( 2 ) of the log-odds-ratio using Equation 2, and then compute the Z-score using Equation 3. A higher score indicates more significance of word w within corpus i compared to corpus j .", "A lower score means more significance of word w within corpus j compared to corpus i .", "Since stance has three different classes (support, opposition and neutral), we need to adjust the log-odds-ratio technique in order to obtain a set of significant stance words.", "Using a training set, we find stance tokens which are significant tokens for support/non-support or opposition/non-opposition as follows: Supportive & Non-supportive tokens are the highest and lowest Z-score tokens, respectively when i only contains the support class and j contains only the opposition and neutral classes.", "Opposing & Non-opposing tokens are the highest and lowest Z-score tokens, respectively when i only contains the opposition class and j only contains the support and neutral classes.", "We select the highest and lowest k tokens based on Z-score from each token list above.", "This results in four k -token lists.", "The combined tokens of these lists after removing duplicates are defined to be the stance tokens .", "We hypothesize that these stance tokens will play a key role during stance detection.", "There are two main approaches to train a transformer encoder, Causal Language Modeling (CLM) and Masked Language Modeling (MLM).", "CLM has a standard language modeling objective, predicting the next token given all previous tokens in the input sequence.", "This means that it needs to learn tokens in order and can only see the previous tokens.", "On the other hand, MLM uses a masking technique that is more flexible, allowing researchers to explicitly assign which tokens to mask.", "The other tokens are used for masked token recovery.", "Intuitively, a language model that learns to recover a specific set of tokens well will tend to produce a better semantic representation for sequences containing those tokens (Tian et al., 2020; Ke et al., 2020; Zhou et al., 2020).", "Generally, randomly masking tokens is preferred when the task requires the language model to learn to recover all tokens equally.", "This tends to result in a semantic representation that is equally good for any input sequences.", "In many BERT-based models, when training the transformer encoder with masked language modeling, the input sequence is modified by randomly substituting tokens of the sequence.", "Specifically, BERT uniformly chooses 15% of input tokens of which 80% are replaced with a special masked token [ MASK ] , 10% are replaced with a random token, and 10% are not replaced and remain unchanged.", "The goal of significant token masking is to produce a corrupted version of the input sequence by masking the significant tokens rather than random tokens.", "We keep the same ratio of masked tokens by masking up to 15% of the significant tokens.", "If fewer than 15% of the tokens are significant, we randomly mask other tokens to fill up to 15%.", "4 Formally, significant word masking creates a corrupted version X for an input sequence X that is influenced by the extracted knowledge G .", "Tokens of sequences X and X are denoted by x i and x i , respectively.", "In the fine-tuning process, the transformer encoder is trained using a masked word prediction objective that is supervised by recovering masked significant words using the final state of the encoder x 1 
, "After constructing this corrupted version of the sequence, MLM aims to predict the masked tokens to recover the original tokens.", "In this paper, we inject knowledge for our specific classification task during MLM, causing the model to pay more attention to stance tokens instead of random tokens.", "Formally, we get an embedding vector x̃_i from the transformer encoder by feeding it the corrupted version X̃ of input sequence X.", "Next, the embedding vector is fed into a single-layer neural network with a softmax activation in order to produce a normalized probability vector ŷ_i over the entire vocabulary, as shown in Equation 4, where W is a weight matrix and b is a bias vector.", "Therefore, the prediction objective L is to maximize the probability of the original token x_i computed in Equation 5, where m_i = 1 if the token at the i-th position is masked and m_i = 0 otherwise, and y_i is a one-hot representation of the original token.", "ŷ_i = softmax(x̃_i W + b) (4)", "L = Σ_{i=1}^{n} m_i · y_i · log ŷ_i (5)", "Finally, we fine-tune a pre-trained BERT with unlabeled in-domain (election 2020) data.", "The representation learned by the language model is expected to be customized for the stance detection task.", "In this section we describe our experimental design, beginning with the knowledge mining decisions, followed by the decisions and parameters used for the language models.", "We begin by determining the number of significant stance words to identify.", "Based on a sensitivity analysis, we set k = 10 to extract the top-10 significant words for each stance category as described in Section 3.1 (support, non-support, oppose, non-oppose).", "Examples of significant tokens from the strong supportive/opposing stance are shown in Table 1.", "Our stance detection models are independently trained for each candidate, so overlapping tokens are allowed (e.g., the word patriots tends to support Trump but oppose Biden)."
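Returning to Equations 4 and 5 above, the masked-prediction objective restricted to selected positions can be written compactly; a minimal PyTorch sketch, where the (dim, vocab) projection and the -100 label convention are assumptions carried over from the masking sketch:

```python
import torch
import torch.nn.functional as F

def masked_prediction_loss(hidden_states, labels, W, b):
    """Eqs. 4-5: softmax over the vocabulary, log-likelihood at masked spots.

    hidden_states: (batch, seq, dim) final encoder states for the corrupted
    input; labels: (batch, seq) original ids at masked positions, -100
    elsewhere; W: (dim, vocab); b: (vocab,). Returns the negative of L so an
    optimizer can minimize it.
    """
    log_probs = F.log_softmax(hidden_states @ W + b, dim=-1)  # Eq. 4 (log form)
    mask = labels != -100                                      # m_i in Eq. 5
    token_ll = log_probs.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return -(token_ll * mask.float()).sum()                    # -L of Eq. 5
```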
, "Once we have a set of tokens for the four categories, we union these four token sets.", "After removing duplicates, there are roughly 30 stance tokens for each candidate.", "Because the state-of-the-art models for stance detection are neural models with language models pre-trained on a large amount of in-domain data (Zarrella and Marsh, 2016; Küçük and Can, 2020), we use both the original pre-trained BERT and a BERT fine-tuned on the unlabeled election data as our benchmarks.", "We fine-tuned BERT for two epochs since this gives the best perplexity score (perplexity is a performance measure of a masked language model; a lower score is better).", "For KE-MLM, we first initialize the weights of the model using the same values as the original BERT, then we fine-tune the model with unlabeled election data, masking the identified stance tokens.", "We exhaustively fine-tuned KE-MLM to produce a language model that focuses attention on the stance tokens from the training set.", "Because BERT's tokenizer uses WordPiece (Wu et al., 2016), a subword segmentation algorithm, it cannot learn new tokens after pre-training is finished unless they are explicitly added.", "However, adding new tokens with random embedding weights could cause the pre-trained model to work differently, since it was not pre-trained with those new tokens.", "We realize that some significant tokens for the stance of Election 2020 are new to BERT and were not in the original BERT pre-training vocabulary.", "Therefore, we consider adding all the stance words to the BERT tokenizer.", "We hypothesize that adding such a small number of tokens will barely affect the pre-trained model.", "To test the effect of adding stance tokens into the normal fine-tuning process, we train language models in which stance tokens are added, but we fine-tune them with the normal random masking method.", "We refer to this model as a-BERT, where stance tokens are added to the BERT tokenizer, but only the standard fine-tuning method is performed.", "To compare our performance to the sentiment knowledge enhanced pre-training method, or SKEP (Tian et al., 2020), we use the pre-training method proposed in their paper and then fine-tune the model using our election 2020 data (SKEP).", "We hypothesize that applying KE-MLM alone may guide the language model to focus too much attention on the stance knowledge and learn less semantic information about the election itself.", "Therefore, we consider a hybrid fine-tuning strategy.", "We begin by fine-tuning BERT for one epoch.", "Then we fine-tune using KE-MLM in the next epoch.", "This hybrid strategy forces the model to continually learn stance knowledge along with semantic information about the election.", "We expect that this dual learning will construct a language model biased toward the necessary semantic information about the election, as well as the necessary embedded stance knowledge.", "We refer to this hybrid approach, with continuous fine-tuning, as KE-MLM (hybrid), while KE-MLM (stance-only) refers to a model that is overly fine-tuned with only stance token masking.", "To summarize, the language models we will evaluate are as follows: the original pre-trained BERT (o-BERT), a normally fine-tuned BERT that uses our election data (f-BERT), a normally fine-tuned BERT that uses stance tokens as part of its tokenizer (a-BERT), a fine-tuned BERT using the SKEP method (Tian et al., 2020) (SKEP), our overly fine-tuned model (KE-MLM (stance-only)), and our hybrid fine-tuned model (KE-MLM (hybrid))."
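For the a-BERT variant just listed, extending the vocabulary is a two-line operation in the HuggingFace transformers library; a minimal sketch (the two example tokens are the ones the text cites from Table 1, and the checkpoint name is an assumption):

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Stance tokens mined from the election corpus; tokens already present in
# the WordPiece vocabulary are skipped by add_tokens.
num_added = tokenizer.add_tokens(["maga", "demconvention"])

# The new embedding rows are randomly initialized; since only ~30 tokens per
# candidate are added, the pre-trained model is barely perturbed.
model.resize_token_embeddings(len(tokenizer))
```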
, "For all the language models, we truncate the size of an input sequence to 512 tokens.", "The learning rate is constant at 1e-4 and the batch size is 16.", "In masked language modeling, we fine-tune the model using a neural layer on top with the learning objective of predicting masked tokens.", "In this step, we substitute that layer with a new neural layer serving as a stance classifier layer.", "Its weights are randomly initialized.", "The prediction equation is similar to Equation 4, but now the input is not corrupted and the output is a vector of normalized probabilities over the three stance classes.", "We use a cross-entropy loss function and the objective is to minimize it.", "We use the Adam optimizer (Kingma and Ba, 2015) with five different learning rates: 2e-5, 1e-5, 5e-6, 2e-6 and 1e-6.", "The batch size is fixed at 32 during the classification learning process.", "We train and test our models on each candidate independently with the five different learning rates.", "The best model is determined by the best macro-average F1 score over the three classes among the five learning rates.", "Because the weights of the classifier layer are randomly initialized, we run each model five times.", "The average F1 score is reported in Table 2 as the classification performance.", "After describing our data set (Section 5.1), we present our experimental evaluation, both quantitative (Section 5.2) and qualitative (Section 5.3).", "For this study, our research team collected English tweets related to the 2020 US Presidential election.", "Through the Twitter Streaming API, we collected data using election-related hashtags and keywords.", "Between January 2020 and September 2020, we collected over 5 million tweets, not including quotes and retweets.", "These unlabeled tweets were used to fine-tune all of our language models.", "Our specific stance task is to determine the stance for the two presidential candidates, Joe Biden and Donald Trump.", "For each candidate, we had three stance classes: support, opposition, and neutral.", "We consider two stance-labeled data sets, one for each candidate, Biden and Trump.", "Our data were labeled using Amazon Mechanical Turk (MTurk) workers (Crowston, 2012).", "These workers were not trained.", "Instead, we provided a set of examples for each stance category that they could refer to as they conducted the labeling task.", "Examples of statements presented to MTurk workers are shown in Table 3.", "We asked annotators to carefully review each tweet t_c^i from the tweet set T_C = {t_c^1, t_c^2, ...} and determine whether the tweet t_c^i is", "(i) clearly in support of C,", "(ii) clearly in opposition to C, or", "(iii) not clearly in support of or opposition to C, where t_c^i ∈ T_C and C ∈ {Donald Trump, Joe Biden}.", "To increase the labeling yield, we verify that the two tweet sets T_{C = Donald Trump} and T_{C = Joe Biden} are mutually exclusive.", "Each tweet was labeled by three annotators and the majority vote is considered to be the true label.", "If all three annotators vote for three different classes, we assume the tweet's label is neutral because the stance is ambiguous.", "(Our definition of stance labels is consistent with the definition from (Mohammad et al., 2016a).)", "Our data set contains 1250 stance-labeled tweets for each candidate.", "The stance label distributions are shown in Table 4."
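The label aggregation rule just described (majority vote, with a neutral fallback when all three annotators disagree) is small enough to state exactly; a sketch:

```python
from collections import Counter

def aggregate_stance(votes):
    """Resolve three annotator votes into one stance label.

    The majority label wins; if all three annotators choose different
    classes, the stance is ambiguous and the tweet is labeled neutral.
    """
    label, freq = Counter(votes).most_common(1)[0]
    return label if freq >= 2 else "neutral"

assert aggregate_stance(["support", "support", "neutral"]) == "support"
assert aggregate_stance(["support", "opposition", "neutral"]) == "neutral"
```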
, "The distributions of both candidates are skewed towards the opposition label.", "Overall, the stance class proportions vary from 27% to 39%.", "The inter-annotator agreement scores from different metrics are shown in Table 5.", "The task-based and worker-based metrics are recommended by the official MTurk site (Amazon, 2011), given their annotating mechanism.", "All scores range from 86% to 89%, indicating high inter-rater reliability for these data sets.", "We conducted experiments using a 70:30 train-test split for both the Biden and Trump data sets.", "(Because we do not have sufficient unlabeled election data from 2016, we cannot fairly test our model with the SemEval 2016 stance data.)", "We evaluate the classification performance using the macro-average F1 score along with the F1 score of each class.", "The results presented in Table 2 show the average F1 scores over five runs with different random seeds.", "The highest score for each evaluation metric is highlighted in bold.", "For Biden-stance, every fine-tuning method (f-BERT, a-BERT, SKEP, KE-MLM (stance-only) and KE-MLM (hybrid)) improves the average F1 score over the original pre-trained model, by 3.2%, 4.1%, 3.6%, 3.2% and 4.6%, respectively.", "For Trump-stance, the average F1 scores are also improved, by 1.3%, 1.8%, 1.8%, 0.9% and 3.3%.", "The improvement is roughly twice as large for Biden as for Trump.", "This is an indication that the additional background knowledge is more important for detecting stance for Biden than for Trump.", "In general, our knowledge-enhanced model performs better than all the other models and outperforms the original BERT by three to five percent.", "a-BERT performs similarly to SKEP for Trump, but its performance is better for Biden.", "a-BERT's overall performance is second-best, with a difference of only 0.5% and 1.5% in average macro-F1 compared to KE-MLM (hybrid) for Biden and Trump, respectively.", "These results further highlight the importance of incorporating stance tokens into the tokenizer.", "While adding stance tokens to the tokenization is important, the additional improvement of KE-MLM (hybrid) comes from focusing attention on both the stance tokens and the general election data.", "The result also supports our hypothesis that training KE-MLM (stance-only) for two epochs results in better accuracy than the original BERT (o-BERT), but lower accuracy than the normally fine-tuned BERT (f-BERT), because it learns stance knowledge but lacks in-domain election knowledge.", "To better understand the robustness of our models, we analyze the variance in the F1 scores across the different runs.", "Figure 1 shows the box plots of the macro-average F1 scores for each model.", "The scores of both candidates follow a similar pattern.", "For Biden, KE-MLM (hybrid) has the highest F1 score and the lowest variance.", "For Trump, KE-MLM (hybrid) has the highest F1 score, but its variance is comparable to the other models.", "The model with the lowest variance is SKEP.", "These figures further emphasize KE-MLM's ability to detect stance better than normal fine-tuning methods.", "Interestingly, a-BERT performs second-best (see the gray boxes in Figure 1), further highlighting the importance of not ignoring stance tokens.", "Forcefully adding unseen stance tokens to the BERT tokenizer with random initial weights benefits overall classification performance.", "Additionally, we conducted a sensitivity analysis on different sizes of unlabeled data for pre-training, to verify that the large unlabeled data set is actually beneficial.", "We fine-tune f-BERT using different sizes of data (100K, 500K, 1M, 2M) and compare the results to those of BERT with zero fine-tuning (o-BERT) and fine-tuning using the entire 5M tweets (f-BERT)."
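Because the classifier head is randomly initialized, the protocol above averages the macro-F1 over five runs; a sketch of that evaluation loop with scikit-learn (the function name is ours):

```python
import numpy as np
from sklearn.metrics import f1_score

def average_macro_f1(run_predictions, y_true):
    """Mean and std of macro-averaged F1 over repeated runs.

    run_predictions: one array of predicted labels per run (e.g., five runs
    with different random seeds); y_true: gold labels for the test split.
    """
    scores = [f1_score(y_true, y_pred, average="macro")
              for y_pred in run_predictions]
    return float(np.mean(scores)), float(np.std(scores))
```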
, "We train each pre-trained language model on the training set and evaluate on the test set five times.", "The average F1 scores are shown in Figure 2.", "For Biden, the average F1 score is 3% lower when there is no fine-tuning compared to using all 5M tweets.", "For Trump, the score improves by only a little over 1%.", "Interestingly, as the size of the unlabeled data increases, the F1 score also increases, even though the increase is not always large.", "Therefore, pre-training on a smaller unlabeled data set still produces benefits, but when possible, using a large sample does lead to further improvement.", "While we see from Table 2 that KE-MLM outperforms all baselines on average, we are interested in understanding, when there is labeling disagreement between other methods and KE-MLM, what features are driving the disagreement.", "Therefore, we manually investigate samples for which f-BERT and a-BERT produced incorrect predictions, while KE-MLM produced correct ones.", "On average over multiple runs, 28.8% and 38.5% of the tweets misclassified by f-BERT are correctly predicted by KE-MLM for Biden and Trump, respectively.", "For a-BERT, the corresponding figures are 22.5% and 25.7% on average.", "As a case example, Table 6 illustrates the attention distribution of the sequence representation learned by each language model for a few mislabeled tweets.", "Significant words are colored.", "The color darkness is determined by the attention weights of the representation learned for the classification token.", "(The representation of the classification token produced by a transformer encoder is usually referred to as [CLS]; please see (Devlin et al., 2019) for details about the attention weight calculation.)", "The darker the color, the more important the word.", "From the selected samples, we know from the knowledge mining step that the words \"maga\" and \"demconvention\" are two of the most distinguishing stance words (see Table 1), but both f-BERT and a-BERT fail to identify these strong stance words and therefore produce incorrect predictions.", "In contrast, KE-MLM produces the correct predictions by paying reasonable attention to the stance information, further supporting the notion that KE-MLM is using meaningful, interpretable tokens.", "Intuitively, a language model fine-tuned using in-domain unlabeled data should result in better classification performance than the vanilla pre-trained BERT.", "Since our goal is to maximize the accuracy of a specific classification task, we train an attention-based language model to pay attention to words that help distinguish between the classes.", "We have shown that for stance detection, using the log-odds-ratio to identify significant tokens that separate the classes provides important knowledge for this classification task.", "Once these important tokens are identified, forcing the language model to pay attention to them further improves performance compared to standard fine-tuning.", "To the best of our knowledge, our approach outperforms the other state-of-the-art approaches for stance detection.", "Additionally, we are releasing our data set to the community to help other researchers continue to make progress on the stance detection task.", "We believe this is the first stance-labeled Twitter data set for the 2020 US Presidential election.", "There are several future directions of this work."
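The attention inspection behind Table 6 can be reproduced with standard tooling; a sketch that reads the last-layer attention row for the [CLS] position from a HuggingFace BERT (averaging over heads is our simplification; the exact weight calculation follows Devlin et al., 2019):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def cls_attention(text):
    """Attention weight from [CLS] to every token, last layer, head-averaged."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_attentions=True)
    last = out.attentions[-1]               # (batch, heads, seq, seq)
    weights = last[0, :, 0, :].mean(dim=0)  # row 0 = the [CLS] position
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return sorted(zip(tokens, weights.tolist()), key=lambda t: -t[1])
```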
, "First, to relax the trade-off between learning election semantics in general and learning stance knowledge, instead of fine-tuning one epoch with the normal fine-tuning method and another epoch with KE-MLM, we could reduce the masking probability of stance-distinguishing words from 100% to something lower based on the distinguishability of the token.", "Theoretically, this would give a higher weight to words that are more polarizing.", "This also reduces the potential overfitting that may occur when learning only stance knowledge and lets the model learn more tokens at random.", "Another future direction is to test our language modeling method on other classification tasks (e.g., sentiment analysis, spam detection).", "Also, this paper uses BERT as the base language model.", "There are many variations of BERT that could be further investigated (e.g., RoBERTa).", "Finally, we view stance detection as an important task for understanding public opinion.", "As our models get stronger, using them to gain insight into public opinion on the issues of the day is another important future direction.", "This research was funded by National Science Foundation awards #1934925 and #1934494, and the Massive Data Institute (MDI) at Georgetown University.", "We would like to thank our funders, the MDI staff, and the members of the Georgetown DataLab for their support.", "We would also like to thank the anonymous reviewers for their detailed and thoughtful reviews." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "objective", "abstain", "result", "method", "abstain", "abstain", "result", "objective", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Spelling error correction is an important yet challenging task, because a satisfactory solution to it essentially needs human-level language understanding ability.", "Without loss of generality, we consider Chinese spelling error correction (CSC) in this paper.", "A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model.", "The accuracy of the method can be sub-optimal, however, because BERT does not have sufficient capability to detect whether there is an error at each position, apparently due to the way of pre-training it using masked language modeling.", "In this work, we propose a novel neural architecture to address the aforementioned issue, which consists of a network for error detection and a network for error correction based on BERT, with the former being connected to the latter with what we call the soft-masking technique.", "Our method of using 'Soft-Masked BERT' is general, and it may be employed in other language detection-correction problems.", "Experimental results on two datasets demonstrate that the performance of our proposed method is significantly better than the baselines, including the one solely based on BERT.", "Spelling error correction is an important task which aims to correct spelling errors in a text either at word-level or at character-level (Yu and Li, 2014; Yu et al., 2014; Zhang et al., 2015; Wang et al., 2018b; Hong et al., 2019; Wang et al., 2019).", "It is crucial for many natural language applications such as search (Martins and Silva, 2004; Gao et al., 2010), optical character recognition (OCR) (Afli et al., 2016; Wang et al., 2018b), and essay scoring (Burstein and Chodorow, 1999).", "In this paper, we consider Chinese spelling error correction (CSC) at the character level.", "(Table 1: Examples of Chinese spelling errors. Wrong: Egypt has golden towers. Correct: Egypt has pyramids. Wrong: He has a strong desire to win and is digging for a prison break. Correct: He has a strong desire to survive and is digging for a prison break.)", "Spelling error correction is also a very challenging task, because to completely solve the problem the system needs to have human-level language understanding ability.", "There are at least two challenges here, as shown in Table 1.", "First, world knowledge is needed for spelling error correction.", "Character 字 in the first sentence is mistakenly written as 子, where 金子塔 means golden tower and 金字塔 means pyramid.", "Humans can correct the typo by referring to world knowledge.", "Second, sometimes inference is also required.", "In the second sentence, the 4-th character 生 is mistakenly written as 胜.", "In fact, 胜 and the surrounding characters form a new valid word 求胜欲 (desire to win), rather than the intended word 求生欲 (desire to survive).", "Many methods have been proposed for CSC or, more generally, spelling error correction.", "Previous approaches can be mainly divided into two categories.", "One employs traditional machine learning and the other deep learning (Yu et al., 2014; Tseng et al., 2015; Wang et al., 2018b).", "Zhang et al. (2015), for example, proposed a unified framework for CSC consisting of a pipeline of error detection, candidate generation, and final candidate selection using traditional machine learning."
, "Wang et al. (2019) proposed a Seq2Seq model with a copy mechanism which transforms an input sentence into a new sentence with spelling errors corrected.", "More recently, BERT (Devlin et al., 2018), the language representation model, has been successfully applied to many language understanding tasks including CSC (cf. Hong et al., 2019).", "In the state-of-the-art method using BERT, a character-level BERT is first pre-trained using a large unlabeled dataset and then fine-tuned using a labeled dataset.", "The labeled data can be obtained via data augmentation, in which examples of spelling errors are generated using a large confusion table.", "Finally, the model is utilized to predict the most likely character from a list of candidates at each position of the given sentence.", "The method is powerful because BERT has a certain ability to acquire knowledge for language understanding.", "Our experimental results show that the accuracy of the method can be further improved, however.", "One observation is that the error detection capability of the model is not sufficiently high, and once an error is detected, the model has a better chance of making the right correction.", "We hypothesize that this might be due to the way of pre-training BERT with masked language modeling, in which only about 15% of the characters in the text are masked, and thus it only learns the distribution of masked tokens and tends to choose not to make any correction.", "This phenomenon is prevalent and represents a fundamental challenge for using BERT in certain tasks like spelling error correction.", "To address the above issue, we propose a novel neural architecture in this work, referred to as Soft-Masked BERT.", "Soft-Masked BERT contains two networks, a detection network and a correction network based on BERT.", "The correction network is similar to that in the method of solely using BERT.", "The detection network is a Bi-GRU network that predicts the probability that the character at each position is an error.", "The probability is then utilized to conduct soft-masking of the character embedding at that position.", "Soft masking is an extension of conventional 'hard masking' in the sense that the former degenerates to the latter when the probability of error equals one.", "The soft-masked embedding at each position is then inputted into the correction network.", "The correction network conducts error correction using BERT.", "This approach can force the model to learn the right context for error correction with the help of the detection network, during end-to-end joint training.", "We conducted experiments to compare Soft-Masked BERT and several baselines, including the method of using BERT alone.", "As datasets we utilized the benchmark dataset of SIGHAN.", "We also created a large and high-quality dataset for evaluation named News Title.", "The dataset, which contains titles of news articles, is ten times larger than the previous datasets.", "Experimental results show that Soft-Masked BERT significantly outperforms the baselines on the two datasets in terms of accuracy measures.", "The contributions of this work include (1) proposal of the novel neural architecture Soft-Masked BERT for the CSC problem, and (2) empirical verification of the effectiveness of Soft-Masked BERT.", "Chinese spelling error correction (CSC) can be formalized as the following task.", "Given a sequence of n characters (or words) X = (x_1, x_2, ..., x_n), the goal is to transform it into another sequence of characters Y = (y_1, y_2, ..., y_n) with the same length, where the incorrect characters in X are replaced with the correct characters to obtain Y."
, "The task can be viewed as a sequential labeling problem in which the model is a mapping function f : X → Y.", "The task is an easier one, however, in the sense that usually no or only a few characters need to be replaced and all or most of the characters should be copied.", "The state-of-the-art method for CSC is to employ BERT to accomplish the task.", "Our preliminary experiments show that the performance of the approach can be improved if the erroneous characters are designated (cf. Section 3.6).", "In general, the BERT-based method tends to make no correction (or just copy the original characters).", "Our interpretation is that in the pre-training of BERT only 15% of the characters are masked for prediction, resulting in a model which does not possess enough capacity for error detection.", "This motivates us to devise a new model.", "We propose a novel neural network model called Soft-Masked BERT for CSC, as illustrated in Figure 1.", "Soft-Masked BERT is composed of a detection network based on a Bi-GRU and a correction network based on BERT (Figure 1: Architecture of Soft-Masked BERT).", "The detection network predicts the probabilities of errors and the correction network predicts the probabilities of error corrections, while the former passes its prediction results to the latter using soft masking.", "More specifically, our method first creates an embedding for each character in the input sentence, referred to as the input embedding.", "Next, it takes the sequence of embeddings as input and outputs the probabilities of errors for the sequence of characters (embeddings) using the detection network.", "After that, it calculates the weighted sum of the input embeddings and [MASK] embeddings, weighted by the error probabilities.", "The calculated embeddings mask the likely errors in the sequence in a soft way.", "Then, our method takes the sequence of soft-masked embeddings as input and outputs the probabilities of error corrections using the correction network, which is a BERT model whose final layer consists of a softmax function for all characters.", "There is also a residual connection between the input embeddings and the embeddings at the final layer.", "Next, we describe the details of the model.", "The detection network is a sequential binary labeling model.", "The input is the sequence of embeddings E = (e_1, e_2, ..., e_n), where e_i denotes the embedding of character x_i, which is the sum of the word embedding, position embedding, and segment embedding of the character, as in BERT.", "The output is a sequence of labels G = (g_1, g_2, ..., g_n), where g_i denotes the label of the i-th character; 1 means the character is incorrect and 0 means it is correct.", "For each character there is a probability p_i representing the likelihood of its label being 1.", "The higher p_i is, the more likely the character is incorrect.", "In this work, we realize the detection network as a bidirectional GRU (Bi-GRU).", "For each character of the sequence, the probability of error p_i is defined as p_i = P_d(g_i = 1 | X) = σ(W_d h_i^d + b_d) (1), where P_d(g_i = 1 | X) denotes the conditional probability given by the detection network, σ denotes the sigmoid function, h_i^d denotes the hidden state of the Bi-GRU, and W_d and b_d are parameters.", "Furthermore, the hidden state is defined as →h_i^d = GRU(→h_{i-1}^d, e_i) (2), ←h_i^d = GRU(←h_{i+1}^d, e_i) (3), h_i^d = [→h_i^d ; ←h_i^d] (4), where [→h_i^d ; ←h_i^d] denotes the concatenation of the GRU hidden states from the two directions and GRU is the GRU function."
, "Soft masking amounts to a weighted sum of input embeddings and mask embeddings with error probabilities as weights.", "The soft-masked embedding e′_i for the i-th character is defined as e′_i = p_i · e_mask + (1 − p_i) · e_i (5), where e_i is the input embedding and e_mask is the mask embedding.", "If the probability of error is high, then the soft-masked embedding e′_i is close to the mask embedding e_mask; otherwise it is close to the input embedding e_i.", "The correction network is a sequential multi-class labeling model based on BERT.", "The input is the sequence of soft-masked embeddings E′ = (e′_1, e′_2, ..., e′_n) and the output is a sequence of characters Y = (y_1, y_2, ..., y_n).", "BERT consists of a stack of 12 identical blocks taking the entire sequence as input.", "Each block contains a multi-head self-attention operation followed by a feed-forward network, defined as: MultiHead(Q, K, V) = Concat(head_1; ...; head_h) W^O (6), head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) (7), FFN(X) = max(0, X W_1 + b_1) W_2 + b_2 (8), where Q, K, and V are the same matrices representing the input sequence or the output of the previous block; MultiHead, Attention, and FFN denote multi-head self-attention, self-attention, and the feed-forward network, respectively; and W^O, W_i^Q, W_i^K, W_i^V, W_1, W_2, b_1, and b_2 are parameters.", "We denote the sequence of hidden states at the final layer of BERT as H^c = (h_1^c, h_2^c, ..., h_n^c).", "For each character of the sequence, the probability of error correction is defined as P_c(y_i = j | X) = softmax(W h′_i + b)[j] (9), where P_c(y_i = j | X) is the conditional probability that character x_i is corrected as character j in the candidate list, softmax is the softmax function, h′_i denotes the hidden state, and W and b are parameters.", "Here the hidden state h′_i is obtained by a linear combination with the residual connection, h′_i = h_i^c + e_i (10), where h_i^c is the hidden state at the final layer and e_i is the input embedding of character x_i.", "The last layer of the correction network exploits a softmax function.", "The character that has the largest probability is selected from the list of candidates as the output for character x_i.", "The learning of Soft-Masked BERT is conducted end-to-end, provided that BERT is pre-trained and training data is given which consists of pairs of original sequence and corrected sequence, denoted as D = {(X_1, Y_1), (X_2, Y_2), ..., (X_N, Y_N)}."
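Putting Equations 1-10 together, the forward pass is compact; the following is a minimal PyTorch sketch, not the authors' implementation. It assumes a HuggingFace-style BertModel whose embedding layer and encoder can be invoked separately, ignores padding and attention masks for brevity, and folds in the combined objective of Equations 11-13 described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedBert(nn.Module):
    """Sketch of Soft-Masked BERT (Eqs. 1-10); interfaces are assumptions."""

    def __init__(self, bert, vocab_size, mask_token_id, det_hidden=256):
        super().__init__()
        self.bert = bert                    # HuggingFace-style BertModel assumed
        self.mask_token_id = mask_token_id
        dim = bert.config.hidden_size
        self.detector = nn.GRU(dim, det_hidden, bidirectional=True,
                               batch_first=True)
        self.det_head = nn.Linear(2 * det_hidden, 1)   # W_d, b_d of Eq. 1
        self.corr_head = nn.Linear(dim, vocab_size)    # W, b of Eq. 9

    def forward(self, input_ids):
        e = self.bert.embeddings(input_ids=input_ids)  # e_i: word+pos+segment
        h_d, _ = self.detector(e)                      # Eqs. 2-4 (Bi-GRU)
        p = torch.sigmoid(self.det_head(h_d))          # Eq. 1: p_i, (B, L, 1)
        mask_ids = torch.full_like(input_ids, self.mask_token_id)
        e_mask = self.bert.embeddings.word_embeddings(mask_ids)
        soft = p * e_mask + (1 - p) * e                # Eq. 5: soft masking
        h_c = self.bert.encoder(soft)[0]               # correction network (BERT)
        logits = self.corr_head(h_c + e)               # Eqs. 9-10 (residual)
        return p.squeeze(-1), logits

def soft_masked_loss(p, logits, error_labels, corrected_ids, lam=0.8):
    """Eqs. 11-13: L = lam * L_c + (1 - lam) * L_d."""
    l_d = F.binary_cross_entropy(p, error_labels.float())
    l_c = F.cross_entropy(logits.transpose(1, 2), corrected_ids)
    return lam * l_c + (1 - lam) * l_d
```

With λ = 0.8, the best reported setting, the correction loss dominates while the detection loss still shapes the soft masks.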
, "One way to create the training data is to repeatedly generate a sequence X_i containing errors given a sequence Y_i without errors, using a confusion table, where i = 1, 2, ..., N.", "The learning process is driven by optimizing two objectives, corresponding to error detection and error correction respectively.", "L_d = −Σ_{i=1}^{n} log P_d(g_i | X) (11), L_c = −Σ_{i=1}^{n} log P_c(y_i | X) (12), where L_d is the objective for training the detection network, and L_c is the objective for training the correction network (and also the final decision).", "The two functions are linearly combined as the overall objective in learning: L = λ · L_c + (1 − λ) · L_d (13), where λ ∈ [0, 1] is a coefficient.", "We made use of the SIGHAN dataset, a benchmark for CSC.", "(Following the common practice (Wang et al., 2019), we converted the characters in the dataset from traditional Chinese to simplified Chinese.)", "SIGHAN is a small dataset containing 1,100 texts and 461 types of errors (characters).", "The texts are collected from the essay section of the Test of Chinese as a Foreign Language, and the topics are in a narrow scope.", "We adopted the standard split of training, development, and test data of SIGHAN.", "We also created a much larger dataset for testing and development, referred to as News Title.", "We sampled the titles of news articles at Toutiao, a Chinese news app with a large variety of content in politics, entertainment, sports, education, etc.", "To ensure that the dataset contains a sufficient number of incorrect sentences, we conducted the sampling from lower-quality texts, and thus the error rate of the dataset is higher than usual.", "Three people conducted five rounds of labeling to carefully correct spelling errors in the titles.", "The dataset contains 15,730 texts.", "There are 5,423 texts containing errors, in 3,441 types.", "We divided the data into a test set and a development set, each containing 7,865 texts.", "In addition, we followed the common practice in CSC to automatically generate a dataset for training.", "We first crawled about 5 million news titles at the Chinese news app.", "We also created a confusion table in which each character is associated with a number of homophonous characters as potential errors.", "Next, we randomly replaced 15% of the characters in the texts with other characters to artificially generate errors, where 80% of them are homophonous characters in the table and 20% of them are random characters.", "This is because in practice about 80% of spelling errors in Chinese are homophonous characters, due to people's use of Pinyin-based input methods.", "For comparison, we adopted the following methods as baselines.", "We report the results of the methods from their original papers.", "NTOU is a method using an n-gram model and a rule-based classifier (Tseng et al., 2015).", "NCTU-NTUT is a method utilizing word vectors and conditional random fields (Tseng et al., 2015).", "HanSpeller++ is a unified framework employing a hidden Markov model to generate candidates and a filter to re-rank candidates (Zhang et al., 2015).", "Hybrid uses a BiLSTM-based model trained on a generated dataset (Wang et al., 2018b).", "Confusionset is a Seq2Seq model consisting of a pointer network and a copy mechanism (Wang et al., 2019).", "FASPell adopts a Seq2Seq model for CSC employing BERT as a denoising auto-encoder and a decoder (Hong et al., 2019).", "BERT-Pretrain is the method of using a pre-trained BERT.", "BERT-Finetune is the method of using a fine-tuned BERT."
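The confusion-table data generation described above fits in a few lines; a sketch (the table maps a character to its homophone candidates; the function and argument names are ours):

```python
import random

def corrupt_title(title, confusion_table, charset, rate=0.15,
                  p_homophone=0.8, rng=random):
    """Create an (erroneous X, correct Y) training pair from a clean title.

    Roughly 15% of characters are replaced: 80% of replacements come from the
    homophone confusion table, 20% are random characters, mirroring errors
    made with Pinyin-based input methods.
    """
    noisy = list(title)
    for i, ch in enumerate(title):
        if rng.random() >= rate:
            continue
        homophones = confusion_table.get(ch)
        if homophones and rng.random() < p_homophone:
            noisy[i] = rng.choice(homophones)   # homophonous error
        else:
            noisy[i] = rng.choice(charset)      # random error
    return "".join(noisy), title
```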
, "As evaluation measures, we utilized sentence-level accuracy, precision, recall, and F1 score, as in most of the previous work.", "We evaluated the accuracy of a method in both detection and correction.", "Obviously, correction is more difficult than detection, because the former is dependent on the latter.", "The pre-trained BERT model utilized in the experiments is the one provided at https://github.com/huggingface/transformers.", "In fine-tuning of BERT, we kept the default hyper-parameters and only fine-tuned the parameters using Adam.", "In order to reduce the impact of training tricks, we did not use the dynamic learning rate strategy and maintained a learning rate of 2e-5 in fine-tuning.", "The hidden unit size of the Bi-GRU is 256, and all models use a batch size of 320.", "In the experiments on SIGHAN, for all BERT-based models, we first fine-tuned the model with the 5 million training examples and then continued the fine-tuning with the training examples in SIGHAN.", "We removed the unchanged texts in the training data to improve efficiency.", "In the experiments on News Title, the models were fine-tuned only with the 5 million training examples.", "The development sets were utilized for hyper-parameter tuning for both SIGHAN and News Title.", "The best value of the hyper-parameter λ was chosen for each dataset.", "Table 2 presents the experimental results of all methods on the two test datasets.", "(Table 2: Performances of Different Methods on CSC Test Set, reporting detection-level and correction-level accuracy, precision, recall, and F1.)", "From the table, one can observe that the proposed model Soft-Masked BERT significantly outperforms the baseline methods on both datasets.", "Particularly, on News Title, Soft-Masked BERT performs much better than the baselines in terms of all measures.", "The best correction-level recall on the News Title dataset is greater than 54%, which means more than 54% of errors will be found, and the correction-level precision is better than 55%.", "HanSpeller++ achieves the highest precision on SIGHAN, apparently because it can eliminate false detections with its large number of manually-crafted rules and features.", "Although the use of rules and features is effective, the method has a high development cost and may also have difficulties in generalization and adaptation.", "In some sense, it is not directly comparable with the other learning-based methods, including Soft-Masked BERT.", "The results of all methods except Confusionset are at the sentence level, not at the character level.", "(The results at the character level can look better.)", "Nonetheless, Soft-Masked BERT still performs significantly better.", "The three methods of using BERT, namely Soft-Masked BERT, BERT-Finetune, and FASPell, perform better than the other baselines, while the method of BERT-Pretrain performs fairly poorly.", "The results indicate that BERT without fine-tuning (i.e., BERT-Pretrain) would not work and BERT with fine-tuning (i.e., BERT-Finetune, etc.) can boost the performance remarkably.", "Here we see another successful application of BERT, which can acquire a certain amount of knowledge for language understanding.", "Furthermore, Soft-Masked BERT can beat BERT-Finetune by large margins on both datasets.", "The results suggest that error detection is important for the utilization of BERT in CSC and that soft masking is really an effective means.", "We present the results of Soft-Masked BERT on (the test data of) News Title to illustrate the effect of the hyper-parameter λ and the data size.", "Table 3 shows the results of Soft-Masked BERT as well as BERT-Finetune learned with different sizes of training data."
, "One can find that the best result is obtained for Soft-Masked BERT when the size is 5 million, indicating that the more training data is utilized, the higher the performance that can be achieved.", "One can also observe that Soft-Masked BERT is consistently superior to BERT-Finetune.", "A larger λ means a higher weight on error correction.", "Error detection is an easier task than error correction, because essentially the former is a binary classification problem while the latter is a multi-class classification problem.", "Table 5 presents the results of Soft-Masked BERT under different values of the hyper-parameter λ.", "The highest F1 score is obtained when λ is 0.8.", "That means that a good compromise between detection and correction is reached.", "We carried out an ablation study of Soft-Masked BERT on both datasets.", "Table 4 shows the results on News Title (Table 4: Ablation Study of Soft-Masked BERT on News Title).", "(We omit the results on SIGHAN, which show similar trends, due to space limitations.)", "In Soft-Masked BERT-R, the residual connection in the model is removed.", "In Hard-Masked BERT, if the error probability given by the detection network exceeds a threshold (0.95, 0.9, 0.7), then the embedding of the current character is set to the embedding of the [MASK] token; otherwise the embedding remains unchanged.", "In Rand-Masked BERT, the error probability is randomized with a value between 0 and 1.", "We can see that all the major components of Soft-Masked BERT are necessary for achieving high performance.", "We also tried 'BERT-Finetune + Force', whose performance can be viewed as an upper bound.", "In this method, we let BERT-Finetune make predictions only at the positions where there are errors, selecting a character from the rest of the candidate list.", "The result indicates that there is still large room for improvement for Soft-Masked BERT.", "We observed that Soft-Masked BERT is able to make more effective use of global context information than BERT-Finetune.", "With soft masking, the likely errors are identified, and as a result the model can better leverage the power of BERT to conduct sensible reasoning for error correction by referring not only to the local context but also to the global context.", "For example, there is a typo in a sentence whose English gloss is 'I can speak a little Chinese, but I don't understand man. So I got lost.'", "The word meaning 'man' is incorrect and should be written as the word meaning 'Chinese character'.", "BERT-Finetune cannot rectify the mistake, but Soft-Masked BERT can, because the error correction can only be accurately conducted with global context information.", "We also found that there are two major types of errors in almost all methods, including Soft-Masked BERT, which affect the performance.", "For error statistics, we sampled 100 errors from the test set.", "We found that 67% of the errors require strong reasoning ability, 11% of the errors are due to lack of world knowledge, and the remaining 22% of the errors have no significant type.", "The first type of error is due to lack of inference ability.", "Accurate correction of such typos requires stronger inference ability.", "For example, for a sentence glossed as 'He intentionally took the girl's hand, and was very x, but was pretending to be angry.', where the incorrect word x is not comprehensible, there might be two possible corrections, changing x to a word meaning 'chilled' or changing it to a word meaning 'happy', while the latter is more reasonable for humans."
, "One can see that in order to make more reliable corrections, the models must have stronger inference ability.", "The second type of error is due to lack of world knowledge.", "For example, in a sentence glossed as 'Wuhu: the woman fell into the Qingge River, and people tried hard to rescue her.', the name Qingge River is a typo of Qingyu River.", "Humans can discover the typo because the river in Wuhu, China, is called Qingyu, not Qingge.", "It is still very challenging for the existing models in general AI systems to detect and correct such kinds of errors.", "Various studies have been conducted on spelling error correction so far, which plays an important role in many applications, including search (Gao et al., 2010), optical character recognition (OCR) (Afli et al., 2016), and essay scoring (Burstein and Chodorow, 1999).", "Chinese spelling error correction (CSC) is a special case, but it is more challenging due to its conflation with Chinese word segmentation, and it has received a considerable number of investigations (Yu et al., 2014; Yu and Li, 2014; Tseng et al., 2015; Wang et al., 2019).", "Early work in CSC followed the pipeline of error detection, candidate generation, and final candidate selection.", "Some researchers employed unsupervised methods using language models and rules (Yu and Li, 2014; Tseng et al., 2015), while others viewed it as a sequential labeling problem and employed conditional random fields or hidden Markov models (Tseng et al., 2015; Zhang et al., 2015).", "Recently, deep learning has been applied to spelling error correction (Guo et al., 2019; Wang et al., 2019); for example, a Seq2Seq model with BERT as the encoder was employed (Hong et al., 2019), which transforms the input sentence into a new sentence with spelling errors corrected.", "BERT (Devlin et al., 2018) is a language representation model with a Transformer encoder as its architecture.", "BERT is first pre-trained using a very large corpus in a self-supervised fashion (masked language modeling and next sentence prediction).", "Then, it is fine-tuned using a small amount of labeled data in a downstream task.", "Since its inception, BERT has demonstrated superior performance in almost all language understanding tasks, such as those in the GLUE challenge (Wang et al., 2018a).", "BERT has shown a strong ability to acquire and utilize knowledge for language understanding.", "Recently, other language representation models have also been proposed, such as XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2019).", "In this work, we extend BERT to Soft-Masked BERT for spelling error correction, and as far as we know no similar architecture has been proposed before.", "In this paper, we have proposed a novel neural network architecture for spelling error correction, more specifically Chinese spelling error correction (CSC).", "Our model, called Soft-Masked BERT, is composed of a detection network and a correction network based on BERT.", "The detection network identifies likely incorrect characters in the given sentence and soft-masks the characters.", "The correction network takes the soft-masked characters as input and makes corrections to the characters.", "The technique of soft masking is general and potentially useful in other detection-correction tasks.", "Experimental results on two datasets show that Soft-Masked BERT significantly outperforms the state-of-the-art method of solely utilizing BERT."
, "As future work, we plan to extend Soft-Masked BERT to other problems like grammatical error correction and to explore other possibilities of implementing the detection network." ]
[ "abstain", "method", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective" ]
[ "We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding.", "The key idea is to introspect current mistakes and prioritize adversarial training steps to where the model errs the most.", "Experiments show that TAT can significantly improve accuracy over standard adversarial training on GLUE and attain new state-of-the-art zero-shot results on XNLI.", "Our code will be released at: https://github.com/namisan/mt-dnn.", "Adversarial training has proven effective in improving model generalization and robustness in computer vision (Madry et al., 2017; Goodfellow et al., 2014) and natural language processing (NLP) (Zhu et al., 2019; Jiang et al., 2019; Cheng et al., 2019; Liu et al., 2020a; Pereira et al., 2020; Cheng et al., 2020).", "It works by augmenting the input with a small perturbation to steer the current model prediction away from the correct label, thus forcing subsequent training to make the model more robust and generalizable.", "Aside from some prior work in computer vision (Dong et al., 2018; Tramèr et al., 2017), most adversarial training approaches adopt non-targeted attacks, where the model prediction is not driven towards a specific incorrect label.", "In NLP, the cutting-edge research in adversarial training tends to focus on making adversarial training less expensive (e.g., by reusing backward steps in FreeLB (Zhu et al., 2019)) or regularizing rather than replacing the standard training objective (e.g., in virtual adversarial training (VAT) (Jiang et al., 2019)).", "By contrast, in this paper, we investigate an orthogonal direction by augmenting adversarial training with introspection capability and adopting targeted attacks to focus on where the model errs the most.", "We observe that in many NLP applications, the error patterns are non-uniform.", "For example, on the MNLI development set (in-domain), a standard fine-tuned BERT model tends to misclassify a non-neutral instance as neutral more often than the reverse (Figure 1, top).", "We thus propose Targeted Adversarial Training (TAT), a simple yet effective algorithm for adversarial training.", "For each instance, instead of taking adversarial steps away from the gold label, TAT samples an incorrect label in proportion to how often the current model makes the same error in general, and takes adversarial steps towards the chosen incorrect label.", "To our knowledge, this is the first attempt to apply targeted adversarial training to NLP tasks.", "In our experiments, this leads to significant improvements over standard non-adversarial and adversarial training alike.", "For example, on the MNLI development set, TAT produced an accuracy gain of 1.7 absolute points (Figure 1, bottom).", "On the overall GLUE benchmark, TAT outperforms state-of-the-art non-targeted adversarial training methods such as FreeLB and VAT, and enables the BERT_BASE model to perform comparably to the BERT_LARGE model with standard training.", "The benefit of TAT is particularly pronounced in out-domain settings, such as in zero-shot learning in natural language inference, attaining new state-of-the-art cross-lingual results on XNLI.", "In this paper, we focus on fine-tuning BERT models (Devlin et al., 2018) in our investigation of targeted adversarial training, as this approach has proven very effective for a wide range of NLP tasks.", "The training algorithm seeks to learn a function f(x; θ) : x → C parametrized by θ, where C is the class label set."
, "Given a training dataset D of input-output pairs (x, y) and the loss function l(·, ·) (e.g., cross entropy), the standard training objective would minimize the empirical risk: min_θ E_{(x,y)∼D}[l(f(x; θ), y)].", "By contrast, in adversarial training, as pioneered in computer vision (Goodfellow et al., 2014; Hsieh et al., 2019; Madry et al., 2017; Jin et al., 2019), the input would be augmented with a small perturbation δ that maximizes the adversarial loss: min_θ E_{(x,y)∼D}[max_{‖δ‖≤ε} l(f(x + δ; θ), y)], where the inner maximization can be solved by projected gradient descent (Madry et al., 2017).", "Recently, adversarial training has been successfully applied to NLP as well (Zhu et al., 2019; Jiang et al., 2019; Pereira et al., 2020).", "In particular, FreeLB (Zhu et al., 2019) leverages the free adversarial training idea (Shafahi et al., 2019) by reusing the backward pass in gradient computation to carry out inner ascent and outer descent steps simultaneously.", "SMART (Jiang et al., 2019) instead regularizes the standard training objective using virtual adversarial training (Miyato et al., 2018): min_θ E_{(x,y)∼D}[l(f(x; θ), y) + α max_{‖δ‖≤ε} l(f(x + δ; θ), f(x; θ))] (1).", "Effectively, the adversarial term encourages smoothness in the input neighborhood, and α is a hyperparameter that controls the trade-off between standard errors and adversarial errors.", "In standard adversarial training, the algorithm simply tries to perturb the input x away from the gold label y given the current parameters θ.", "It is agnostic to which incorrect label f(x + δ) might be steered towards.", "By contrast, in Targeted Adversarial Training (TAT), we would explicitly pick a target y_t ≠ y and try to steer the model towards y_t.", "Intuitively, we would like to focus training on where the model currently errs the most.", "We accomplish this by keeping a running tally of e(y, y_t), which is the current expected error of predicting y_t when the gold label is y, and sampling y_t from C \ y = C − {y} in proportion to e(y, y_t).", "See Algorithm 1 for details.", "Algorithm 1 TAT. Input: T: the total number of iterations; X = {(x_1, y_1), ..., (x_n, y_n)}: the dataset; f(x; θ): the machine learning model parametrized by θ; σ²: the variance of the random initialization of perturbation δ; ε: the perturbation bound; K: the number of iterations for perturbation estimation; η: the step size for updating the perturbation; τ: the global learning rate; α: the smoothing proportion of adversarial training in the augmented learning objective; Π: the projection operation; and C: the classes.
1: for t = 1, ..., T do
2:   for (x, y) ∈ X do
3:     δ ∼ N(0, σ² I)
4:     y_t = sample(C \ {y})
5:     for m = 1, ..., K do
6:       g_adv ← ∇_δ l(f(x + δ; θ), y_t)
7:       δ ← Π_{‖δ‖≤ε}(δ − η g_adv)
8:     end for
9:     g ← ∇_θ [l(f(x; θ), y) + α l(f(x; θ), f(x + δ; θ))]
10:    θ ← θ − τ g
11:  end for
12: end for
Output: θ.", "TAT can be applied to the original adversarial training or virtual adversarial training alike."
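A hedged PyTorch sketch of one TAT update for a single example, mirroring Algorithm 1: the perturbation descends the loss with respect to the sampled incorrect target y_t (steering the prediction towards it), and the smoothness term l(f(x; θ), f(x + δ; θ)) is realized as a KL divergence as in VAT/SMART. The L∞ projection, the tally refresh schedule, and the assumption that `model` maps input embeddings to logits are ours.

```python
import torch
import torch.nn.functional as F

def tat_step(model, x_embed, y, err_tally, eps=1e-5, eta=1e-4,
             sigma=1e-5, K=1, alpha=1.0):
    """One targeted adversarial update. x_embed: (1, seq, dim) input
    embeddings (NLP perturbations live in embedding space); y: 0-dim gold
    label; err_tally: (C, C) running counts e(y, y_t), refreshed each epoch.
    Returns the augmented loss to backpropagate through the model weights."""
    # Sample y_t != y in proportion to how often the model confuses y with y_t.
    weights = err_tally[y].clone()
    weights[y] = 0.0
    y_t = torch.multinomial(weights / weights.sum(), 1)

    # Targeted perturbation: K descent steps towards the incorrect target,
    # projected back onto an eps-ball around the clean embeddings.
    delta = sigma * torch.randn_like(x_embed)
    for _ in range(K):
        delta.requires_grad_(True)
        loss_t = F.cross_entropy(model(x_embed + delta), y_t)
        grad, = torch.autograd.grad(loss_t, delta)
        delta = (delta - eta * grad).clamp(-eps, eps).detach()

    # Augmented objective (cf. Eq. 1): standard loss plus a smoothness term
    # between clean and perturbed predictions.
    logits = model(x_embed)
    std_loss = F.cross_entropy(logits, y.view(1))
    adv_term = F.kl_div(F.log_softmax(model(x_embed + delta), dim=-1),
                        F.softmax(logits.detach(), dim=-1),
                        reduction="batchmean")
    return std_loss + alpha * adv_term
```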
58.5 93.6 90.8/89.6 84.2 Standard test 84.6/83.4 71.2/89.2 66.4 90.5 84.8/88.9 52.1 93.5 87.1/85.8 80.0 TAT test 85.8/84.8 72.8/89.6 69.7 92.4 88.2/91.1 59.8 94.5 89.7/89.0 82.8 Table 1: Comparison of standard and adversarial training methods on GLUE.", "original adversarial training or virtual adversarial training alike.", "In this paper, we focus on adapting virtual adversarial training (VAT) (Jiang et al., 2019).", "The two lines in blue color are the only change from VAT.", "We initialize e ( y, y t ) with uniform distribution and update them in each epoch.", "We conducted an oracle experiment where e ( y, y t ) was taken from the confusion matrix from standard training and found that it performed similarly as our online version.", "It is more challenging to apply TAT to regression tasks, as we would need to keep track of a continuous error distribution.", "To address this problem, we quantize the value range into ten bins and apply TAT similarly as in the classification setting (once a bin is chosen, a value is sampled uniformly within).", "We compare targeted adversarial training (TAT) with standard training and state-of-the-art adversarial training methods such as FreeLB (Zhu et al., 2019) and VAT (Miyato et al., 2018; Jiang et al., 2019).", "We use the standard uncased BERTBASE model (Devlin et al., 2018), unless noted otherwise.", "Due to the additional overhead incurred during training, adversarial methods are somewhat slower than standard training.", "Like VAT, TAT requires an additional K adversarial steps compared to standard training.", "In practice, K = 1 suffices for TAT and VAT, so they are just slightly slower (roughly 2 times compared to standard training).", "FreeLB, by contrast, typically requires 2-5 steps to attain good performance, so is significantly slower.", "Our implementation is based on the MT-DNN toolkit (Liu et al., 2020b).", "We follow the default hyperparameters used for fine-tuning the uncased BERT base model (Devlin et al., 2018; Liu et al., 2020b).", "Specifically, we use 0 .", "1 for the dropout rate except 0.2 for MNLI, 0 .", "01 for the weight de-cay rate and the Adamax (Kingma and Ba, 2014) optimizer with the default Lookahead (Zhang et al., 2019) to stabilize training.", "We select the learning rate from { 5e 5 , 1e 4 } for all the models.", "The maximum training epoch is set to 6, and the we follow (Jiang et al., 2019) to set adversarial training hyperparameters: (cid:15) = 1e 5 and = 1e 4 .", "In our experiments, we simply set = 1 in Eq 1.", "We first compare adversarial training methods on the standard GLUE benchmark (Wang et al., 2018).", "See Table 1 for the results 1 .", "TAT consistently outperforms both standard training and the state-of-the-art adversarial training methods of FreeLB and VAT.", "Remarkably, BERTBASE with targeted adversarial training performs on par with BERTLARGE with standard training overall, and outperforms the latter by a large margin on tasks with smaller datasets such as RTE, MRPC and STS-B, which illustrates the benefit of TAT in improving model generalizability.", "1 Due to restriction on the number of submissions by the GLUE organizers, we only compared TAT with the published results from (Devlin et al., 2018) on the test set.", "Next, we compare standard and adversarial training in generalizability to out-domain datasets.", "Specifi-cally, we fine-tune BERTBASE on the MNLI training data and evaluate it on various natural language inference test sets: HANS (McCoy et al., 2019), SNLI (Bowman et al., 2015), SciTail 
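To make Algorithm 1 concrete, the following is a minimal PyTorch sketch of a single TAT update on a batch. It assumes a HuggingFace-style classifier that accepts `inputs_embeds` (perturbations are applied in embedding space, as is standard in NLP adversarial training); the tensor shapes, the L-infinity projection, and all helper names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def tat_step(model, x_embed, y, error_tally, sigma=1e-5, eps=1e-5,
             eta=1e-4, alpha=1.0, K=1):
    """One targeted adversarial training (TAT) update on a batch.

    x_embed:     (batch, seq, dim) input embeddings.
    y:           (batch,) gold labels.
    error_tally: (num_classes, num_classes) tensor; error_tally[y, y_t]
                 is the running expected error of predicting y_t for gold y.
    Returns the loss to backpropagate for the outer parameter update.
    """
    # Line 4 of Algorithm 1: sample a target y_t != y in proportion
    # to the current error tally.
    probs = error_tally[y].clone()              # (batch, num_classes)
    probs.scatter_(1, y.unsqueeze(1), 0.0)      # exclude the gold label
    y_t = torch.multinomial(probs + 1e-12, 1).squeeze(1)

    # Line 3: initialize the perturbation from N(0, sigma^2 I).
    delta = torch.randn_like(x_embed) * sigma

    # Lines 5-8: K descent steps on the targeted loss, steering the
    # perturbed input TOWARD y_t, then project onto the eps-ball.
    for _ in range(K):
        delta.requires_grad_(True)
        loss_adv = F.cross_entropy(
            model(inputs_embeds=x_embed + delta).logits, y_t)
        g = torch.autograd.grad(loss_adv, delta)[0]
        delta = (delta - eta * g).detach().clamp(-eps, eps)

    # Line 9: standard loss plus a VAT-style smoothness term between
    # clean and perturbed predictions.
    logits = model(inputs_embeds=x_embed).logits
    logits_pert = model(inputs_embeds=x_embed + delta).logits
    smooth = F.kl_div(F.log_softmax(logits_pert, dim=-1),
                      F.softmax(logits, dim=-1).detach(),
                      reduction="batchmean")
    return F.cross_entropy(logits, y) + alpha * smooth
```

The returned loss would feed the usual optimizer step (line 10), and the error tally would be refreshed once per epoch from the model's confusion counts, as the text describes.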
"Next, we compare standard and adversarial training in terms of generalizability to out-of-domain datasets.", "Specifically, we fine-tune BERTBASE on the MNLI training data and evaluate it on various natural language inference test sets: HANS (McCoy et al., 2019), SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018), and MedNLI (Romanov and Shivade, 2018).", "See Table 2 for the results.", "TAT substantially outperforms standard training and state-of-the-art adversarial training methods.", "Interestingly, the gains are particularly pronounced on the two hardest datasets, HANS and MedNLI.", "HANS used heuristic rules to identify easy instances for MNLI-trained BERT models and introduced modifications to make them harder.", "MedNLI is from the biomedical domain, which is substantially different from the general domain of MNLI.", "This provides additional evidence that targeted adversarial training is especially effective in enhancing generalizability on out-of-domain data.", "We also conducted a zero-shot evaluation in the cross-lingual setting by comparing standard and adversarial training on XNLI (Conneau et al., 2018).", "Specifically, a cross-lingual language model is fine-tuned on the English NLI dataset and then tested on the datasets of other languages.", "Following Conneau et al. (2019), we use the pre-trained XLM-R large model in our experiments, and compare targeted adversarial training (XLM-R+TAT) with state-of-the-art systems that use standard training (XLM-R) and adversarial training (XLM-R+R3F/R4F) (Aghajanyan et al., 2020), as well as another state-of-the-art language model, InfoXLM (Chi et al., 2020).", "To ensure a fair comparison, we also report the results from our reimplementation of XLM-R (Conneau et al., 2018) (XLM-R Reprod).", "See Table 3 for the results.", "Table 3: Comparison of targeted adversarial training (TAT) and prior state of the art in zero-shot cross-lingual learning on the XNLI test set (accuracy per language: en, fr, es, de, el, bg, ru, tr, ar, vi, th, zh, hi, sw, ur; then Avg). XLM-R: 89.1, 84.1, 85.1, 83.9, 82.9, 84.0, 81.2, 79.6, 79.8, 80.8, 78.1, 80.2, 76.9, 73.9, 73.8; 80.9. XLM-R Reprod: 88.1, 83.6, 84.1, 83.0, 82.6, 83.8, 81.7, 80.7, 80.4, 80.7, 78.9, 80.1, 77.8, 74.2, 74.0; 80.9. XLM-R+R3F: 89.4, 84.2, 85.1, 83.7, 83.6, 84.6, 82.3, 80.7, 80.6, 81.1, 79.4, 80.1, 77.3, 72.6, 74.2; 81.2. XLM-R+R4F: 89.6, 84.7, 85.2, 84.2, 83.6, 84.6, 82.5, 80.3, 80.5, 80.9, 79.2, 80.6, 78.2, 72.7, 73.9; 81.4. InfoXLM: 89.7, 84.5, 85.5, 84.1, 83.4, 84.2, 81.3, 80.9, 80.4, 80.8, 78.9, 80.9, 77.9, 74.8, 73.7; 81.4. XLM-R+TAT: 89.3, 84.2, 85.7, 83.9, 83.7, 85.0, 82.1, 81.0, 80.7, 81.3, 79.7, 81.0, 78.4, 74.1, 75.1; 81.7.", "Targeted adversarial training (TAT) demonstrates a clear advantage in improving zero-shot transfer across languages, especially for the languages most different from English, such as Urdu.", "Overall, TAT produces a new state-of-the-art average accuracy of 81.7% over 15 languages on XNLI.", "As we have seen in Figure 1 earlier, TAT reduces errors across the board on the MNLI development set.", "To understand how TAT improves performance, we conducted a more detailed analysis by subdividing the dataset based on the degree of human agreement.", "Here, there are three label classes and each sample instance has 5 human annotations.", "The samples can be divided into four categories: 5-0-0, 4-1-0, 3-2-0, 3-1-1.", "E.g., 3-1-1 signifies that there are three votes for one label and one for each of the other two labels.", "In Figure 2, we see that TAT outperforms the baseline consistently over all categories, with higher improvement on the more ambiguous samples, especially for out-of-domain samples.", "This suggests that TAT is most helpful for the challenging instances that exhibit higher ambiguity and are more different from the training examples.", "The model trained with TAT also has a wider and flatter loss surface, which generally indicates better generalization (Hochreiter and Schmidhuber, 1997; Hao et al., 2019; Li et al., 2018)",
"We present the first study to apply targeted attacks in adversarial training for natural language understanding.", "understanding.", "Our TAT algorithm is simple yet effective in improving model generalizability for various NLP tasks, especially in zero-shot learning and for out-domain data.", "Future directions include: applying TAT in pretraining and other NLP tasks e.g., sequence labeling, exploring alternative approaches for target sampling.", "We thank Microsoft Research Technology Engineering team for setting up GPU machines.", "We also thank the anonymous reviewers for valuable discussions." ]
[ "result", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "objective", "abstain", "result", "abstain", "other", "other" ]
[ "Many joint entity relation extraction models setup two separated label spaces for the two sub-tasks (i.e., entity detection and relation classification).", "We argue that this setting may hinder the information interaction between entities and relations.", "In this work, we propose to eliminate the different treatment on the two sub-tasks' label spaces.", "The input of our model is a table containing all word pairs from a sentence.", "Entities and relations are represented by squares and rectangles in the table.", "We apply a unified classifier to predict each cell's label, which unifies the learning of two sub-tasks.", "For testing, an effective (yet fast) approximate decoder is proposed for finding squares and rectangles from tables.", "Experiments on three benchmarks (ACE04, ACE05, SciERC) show that, using only half the number of parameters, our model achieves competitive accuracy with the best extractor, and is faster.", "Extracting structured information from plain texts is a long-lasting research topic in NLP.", "Typically, it aims to recognize specific entities and relations for profiling the semantic of sentences.", "An example is shown in Figure 1, where a person entity David Perkins and a geography entity California have a physical location relation PHYS .", "Methods for detecting entities and relations can be categorized into pipeline models or joint models.", "In the pipeline setting, entity models and relation models are independent with disentangled feature spaces and output label spaces.", "In the joint setting, on the other hand, some parameter sharing of feature spaces (Miwa and Bansal, 2016; Katiyar and Equal contribution.", "Cardie, 2017) or decoding interactions (Yang and Cardie, 2013; Sun et al., 2019) are imposed to explore the common structure of the two tasks.", "It was believed that joint models could be better since they can alleviate error propagations among sub-models, have more compact parameter sets, and uniformly encode prior knowledge (e.g., constraints) on both tasks.", "However, Zhong and Chen (2020) recently show that with the help of modern pre-training tools (e.g., BERT), separating the entity and relation model (with independent encoders and pipeline decoding) could surpass existing joint models.", "They argue that, since the output label spaces of entity and relation models are different, comparing with shared encoders, separate encoders could better capture distinct contextual information, avoid potential con-flicts among them, and help decoders making a more accurate prediction, that is, separate label spaces deserve separate encoders .", "In this paper, we pursue a better joint model for entity relation extraction.", "After revisiting existing methods, we find that though entity models and relation models share encoders, usually their label spaces are still separate (even in models with joint decoders).", "Therefore, parallel to (Zhong and Chen, 2020), we would ask whether joint encoders (decoders) deserve joint label spaces ?", "The challenge of developing a unified entity-relation label space is that the two sub-tasks are usually formulated into different learning problems (e.g., entity detection as sequence labeling, relation classification as multi-class classification), and their labels are placed on different things (e.g., words v.s. 
"One prior attempt (Zheng et al., 2017) handles both sub-tasks with one sequence labeling model.", "A compound label set was devised to encode both entities and relations.", "However, the model's expressiveness is sacrificed: it can detect neither overlapping relations (i.e., entities participating in multiple relations) nor isolated entities (i.e., entities not appearing in any relation).", "Our key idea for defining a new unified label space is that, if Zheng et al. (2017)'s solution can be seen as performing relation classification during entity labeling, we can also consider the reverse direction: seeing entity detection as a special case of relation classification.", "Our new input space is a two-dimensional table, with each entry corresponding to a word pair of the sentence (Figure 1).", "The joint model assigns each cell a label from a unified label space (the union of the entity type set and the relation type set).", "Graphically, entities are squares on the diagonal, and relations are rectangles off the diagonal.", "This formulation retains full model expressiveness for existing entity-relation extraction scenarios (e.g., overlapping relations, directed relations, undirected relations).", "It is also different from current table filling settings for entity relation extraction (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017; Wang and Lu, 2020), which still use separate label spaces for entities and relations and treat on-diagonal and off-diagonal entries differently.", "Based on the tabular formulation, our joint entity relation extractor performs two actions, filling and decoding.", "First, filling the table amounts to predicting each word pair's label, similar to the arc prediction task in dependency parsing.", "We adopt the biaffine attention mechanism (Dozat and Manning, 2016) to learn interactions between word pairs.", "We also impose two structural constraints on the table through structural regularizations.", "Next, given the table filled with label logits, we devise an approximate joint decoding algorithm to output the final extracted entities and relations.", "Basically, it efficiently finds split points in the table to identify squares and rectangles (this also differs from existing table filling models, which apply sequential decoding and fill tables incrementally).", "Experimental results on three benchmarks (ACE04, ACE05, SciERC) show that the proposed joint method achieves performance competitive with the current state-of-the-art extractors (Zhong and Chen, 2020): it is better on ACE04 and SciERC, and competitive on ACE05.", "Meanwhile, our new joint model is fast at decoding (about 10x faster than the exact pipeline implementation, and comparable to an approximate pipeline, which attains lower performance).", "It also has a more compact parameter set: the shared encoder uses only half the number of parameters compared with the separate encoders (Zhong and Chen, 2020).", "Given an input sentence s = x_1, x_2, ..., x_{|s|} (x_i is a word), the task is to extract a set of entities E and a set of relations R.", "An entity e is a span (e.span) with a pre-defined type e.type ∈ Y_e (e.g., PER, GPE).",
"The span is a continuous sequence of words.", "A relation r is a triplet (e_1, e_2, l), where e_1 and e_2 are two entities and l ∈ Y_r is a pre-defined relation type describing the semantic relation between the two entities (e.g., the PHYS relation between PER and GPE mentioned before).", "Here Y_e and Y_r denote the sets of possible entity types and relation types, respectively.", "We formulate the task as a table filling task (multi-class classification for each word pair in sentence s), as shown in Figure 1.", "For the sentence s, we maintain a table T of size |s| × |s|.", "For each cell (i, j) in table T, we assign a label y_{i,j} ∈ Y, where Y = Y_e ∪ Y_r ∪ {⊥} (⊥ denotes no entity or relation).", "For each entity e, the labels of the corresponding cells y_{i,j} (x_i ∈ e.span, x_j ∈ e.span) should be filled with e.type.", "For each relation r = (e_1, e_2, l), the labels of the corresponding cells y_{i,j} (x_i ∈ e_1.span, x_j ∈ e_2.span) should be filled with l. (Footnote 2: Assuming no overlapping entities in one sentence.)", "All other cells should be filled with ⊥.", "In the test phase, decoding entities and relations becomes a rectangle finding problem.", "Note that solving this problem is not trivial, and we propose a simple but effective joint decoding algorithm to tackle this challenge.", "In this section, we first introduce our biaffine model for the table filling task based on pre-trained language models (Section 3.1).", "Then we detail the main objective function of the table filling task (Section 3.2) and some constraints imposed on the table during training (Section 3.3).", "Finally, we present the joint decoding algorithm to extract entities and relations (Section 3.4).", "Figure 2 shows an overview of our model architecture. (Footnote 3: We only show three labels of Y in Figure 2 for simplicity and clarity.)",
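To make the tabular formulation concrete, here is a small sketch of how the gold label table could be built from annotated entities and relations; the span/triplet containers and the toy indices are illustrative assumptions, not the paper's code.

```python
from typing import List, Tuple

NO_REL = "⊥"  # the null label: a cell belongs to no entity and no relation

def build_gold_table(n_words: int,
                     entities: List[Tuple[int, int, str]],
                     relations: List[Tuple[int, int, str]]) -> List[List[str]]:
    """Build the |s| x |s| gold label table described in the text.

    entities:  (start, end, type) word-index spans, end inclusive.
    relations: (head_entity_idx, tail_entity_idx, label) into `entities`.
    Entities become squares on the diagonal, relations rectangles off it.
    """
    table = [[NO_REL] * n_words for _ in range(n_words)]

    # Fill entity squares: all (i, j) with both words inside the span.
    for start, end, etype in entities:
        for i in range(start, end + 1):
            for j in range(start, end + 1):
                table[i][j] = etype

    # Fill relation rectangles: rows from e1's span, columns from e2's span.
    for head, tail, label in relations:
        h_start, h_end, _ = entities[head]
        t_start, t_end, _ = entities[tail]
        for i in range(h_start, h_end + 1):
            for j in range(t_start, t_end + 1):
                table[i][j] = label
    return table

# Toy usage: "David Perkins lives in California" (indices are illustrative).
ents = [(0, 1, "PER"), (4, 4, "GPE")]
rels = [(0, 1, "PHYS")]
print(build_gold_table(5, ents, rels)[0])  # a row of the PER entity square
```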
"3.1 Biaffine Model: Given an input sentence s, to obtain the contextual representation h_i for each word, we use a pre-trained language model (PLM) as our sentence encoder (e.g., BERT).", "The output of the encoder is {h_1, ..., h_{|s|}} = PLM({x_1, ..., x_{|s|}}), where x_i is the input representation of each word x_i.", "Taking BERT as an example, x_i sums the corresponding token, segment and position embeddings.", "To capture long-range dependencies, we also employ cross-sentence context following (Zhong and Chen, 2020), which extends the sentence to a fixed window size W (W = 200 in our default settings).", "To better encode the direction information of the words in table T, we use the deep biaffine attention mechanism (Dozat and Manning, 2016), which achieves impressive results in the dependency parsing task.", "Specifically, we employ two dimension-reducing MLPs (multi-layer perceptrons), i.e., a head MLP and a tail MLP, on each h_i: h_i^head = MLP_head(h_i) and h_i^tail = MLP_tail(h_i), where h_i^head ∈ R^d and h_i^tail ∈ R^d are projection representations, allowing the model to identify the head or tail role of each word.", "Next, we calculate the scoring vector g_{i,j} ∈ R^{|Y|} of each word pair with the biaffine model: g_{i,j} = Biaff(h_i^head, h_j^tail), where Biaff(h_1, h_2) = h_1^T U_1 h_2 + U_2 (h_1 ⊕ h_2) + b, U_1 ∈ R^{|Y|×d×d} and U_2 ∈ R^{|Y|×2d} are weight parameters, b ∈ R^{|Y|} is the bias, and ⊕ denotes concatenation.", "After obtaining the scoring vector g_{i,j}, we feed it into the softmax function to predict the corresponding label, yielding a categorical probability distribution over the label space Y: P(y_{i,j} | s) = softmax(g_{i,j}).", "In our experiments, we observe that applying dropout to g_{i,j}, similar to de-noising auto-encoding, can further improve the performance; we refer to this trick as logit dropout. (Footnote 4: We set the dropout rate to p = 0.2.)", "The training objective is to minimize L_entry = −(1/|s|²) Σ_{i=1}^{|s|} Σ_{j=1}^{|s|} log P(y_{i,j} = ŷ_{i,j} | s) (Equation 1), where the gold label ŷ_{i,j} can be read from the annotations, as shown in Figure 1.", "In fact, Equation 1 is based on the assumption that each label is independent.", "This assumption simplifies the training procedure, but ignores some structural constraints.", "For example, entities and relations correspond to squares and rectangles in the table.", "Equation 1 does not encode this constraint explicitly.", "To enhance our model, we propose two intuitive constraints, symmetry and implication, which are detailed in this section.", "Here we introduce a new notation P ∈ R^{|s|×|s|×|Y|}, denoting the stack of P(y_{i,j} | s) for all word pairs in sentence s.",
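A minimal PyTorch sketch of the biaffine scorer described above might look as follows; the module names, the GELU-based MLPs, and the initialization are assumptions consistent with, but not identical to, the paper's implementation.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores every word pair (i, j) over the unified label space Y.

    Implements g_ij = h_i^T U1 h_j + U2 (h_i ⊕ h_j) + b on top of two
    dimension-reducing MLPs, as in the biaffine model described above.
    """

    def __init__(self, enc_dim: int, proj_dim: int, num_labels: int):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, proj_dim), nn.GELU())
        self.tail_mlp = nn.Sequential(nn.Linear(enc_dim, proj_dim), nn.GELU())
        self.U1 = nn.Parameter(torch.randn(num_labels, proj_dim, proj_dim) * 0.01)
        self.U2 = nn.Linear(2 * proj_dim, num_labels, bias=True)  # U2 and b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, enc_dim) contextual embeddings from the PLM
        head = self.head_mlp(h)                      # (b, s, d)
        tail = self.tail_mlp(h)                      # (b, s, d)
        # bilinear term h_i^T U1 h_j for every label: (b, s, s, |Y|)
        bilinear = torch.einsum("bid,ldk,bjk->bijl", head, self.U1, tail)
        # linear term over the concatenation of every (head_i, tail_j) pair
        s = h.size(1)
        pairs = torch.cat(
            [head.unsqueeze(2).expand(-1, -1, s, -1),
             tail.unsqueeze(1).expand(-1, s, -1, -1)], dim=-1)
        return bilinear + self.U2(pairs)   # logits g_ij, shape (b, s, s, |Y|)

# P(y_ij | s) is then the softmax over the last dimension of these logits.
```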
"Symmetry: We make several tag-level observations on the table.", "First, the squares corresponding to entities must be symmetrical about the diagonal.", "Second, for symmetrical relations, the relation triples (e_1, e_2, l) and (e_2, e_1, l) are equivalent, so the rectangles corresponding to the two counterpart relation triples are also symmetrical about the diagonal.", "As shown in Figure 1, the rectangles corresponding to (his, wife, PER-SOC) and (wife, his, PER-SOC) are symmetrical about the diagonal.", "We divide the label set Y into a symmetrical label set Y_sym and an asymmetrical label set Y_asym.", "The matrix P_{:,:,t} should be symmetrical about the diagonal for each label t ∈ Y_sym, and we formulate this tag-level constraint as a symmetry loss L_sym that penalizes any difference between P_{i,j,t} and P_{j,i,t} for each t ∈ Y_sym.", "We list all Y_sym in Table 1 for our adopted datasets.", "Implication: A key intuition is that if a relation exists, then its two argument entities must also exist.", "In other words, it is impossible for a relation to exist without the two corresponding entities.", "From the perspective of probability, this implies that the probability of a relation is not greater than the probability of each argument entity.", "Since we model entity and relation labels in a unified probability space, this idea can easily be applied in our model as the implication constraint.", "We impose this constraint on P: for each word on the diagonal, its maximum probability over the entity type space Y_e must not be lower than the maximum probability of the other words in the same row or column over the relation type space Y_r.", "We formulate this table-level constraint as an implication loss L_imp built on the hinge function [u]_+ = max(u, 0), which penalizes each diagonal position whenever this inequality is violated.", "It is worth noting that we do not add a margin to this loss function; since the value of each term is a probability and might be relatively small, it would be meaningless to set a large margin.", "Finally, we jointly optimize the three objectives in the training stage as L_entry + L_sym + L_imp. (Footnote 6: We directly sum the three losses to avoid introducing more hyper-parameters.)",
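The display equations for L_sym and L_imp do not survive in this text, so the sketch below gives one plausible instantiation consistent with the description (an absolute-difference penalty for symmetry and a hinge penalty for implication); treat it as an assumption-laden illustration rather than the paper's exact formulation.

```python
import torch

def symmetry_loss(P: torch.Tensor, sym_label_ids: list) -> torch.Tensor:
    """Penalize asymmetry of P[:, :, t] about the diagonal for t in Y_sym.

    P: (seq, seq, |Y|) probability tensor for one sentence.
    """
    P_sym = P[:, :, sym_label_ids]
    return (P_sym - P_sym.transpose(0, 1)).abs().mean()

def implication_loss(P: torch.Tensor, ent_ids: list, rel_ids: list) -> torch.Tensor:
    """Hinge penalty whenever some relation probability in row/column i
    exceeds the best entity probability of the diagonal cell (i, i)."""
    seq = P.size(0)
    # best entity probability at each diagonal position
    diag_ent = P[torch.arange(seq), torch.arange(seq)][:, ent_ids].max(dim=-1).values
    # best relation probability anywhere in row i or column i
    rel_cells = P[:, :, rel_ids].max(dim=-1).values          # (seq, seq)
    rel_best = torch.maximum(rel_cells.max(dim=1).values,    # over row i
                             rel_cells.max(dim=0).values)    # over column i
    return torch.clamp(rel_best - diag_ent, min=0.0).mean()  # [u]_+ = max(u, 0)
```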
"In the testing stage, given the probability tensor P ∈ R^{|s|×|s|×|Y|} of the sentence s (Footnote 7: For each symmetrical label t ∈ Y_sym, we set P_{i,j,t} = P_{j,i,t} = (P_{i,j,t} + P_{j,i,t})/2.), how to decode all rectangles (including squares) corresponding to entities or relations remains a non-trivial problem.", "Since brute-force enumeration of all rectangles is intractable, a new joint decoding algorithm is needed.", "We expect our decoder to have: (1) simple implementation and fast decoding, permitting slight decoding accuracy drops for scalability; and (2) strong interactions between entities and relations, so that when decoding entities it takes relation information into account, and vice versa.", "Inspired by the procedure of Sun et al. (2019), we propose a three-step decoding algorithm: first decode spans (entity spans or spans between entities), then decode the entity type of each span, and finally decode the relation type of each entity pair (Figure 3).", "We consider each cell's probability scores over all labels (including entity labels and relation labels) and predict spans according to a threshold.", "Then, we predict the entities and relations with the highest scores.", "Our heuristic decoding algorithm is very efficient.", "Next we detail the entire decoding process; a formal description is given in Appendix A.", "Span Decoding: One crucial observation about a ground-truth table is that, for an arbitrary entity, its corresponding rows (or columns) are exactly the same in the table (e.g., rows 1 and 2 of Figure 1 are identical), not only for the diagonal entries (entities are squares) but also for the off-diagonal entries (if the entity participates in a relation with another entity, all its rows (columns) carry that relation label in the same way).", "In other words, if two adjacent rows (or columns) are different, there must be an entity boundary between them (i.e., one belongs to an entity and the other does not).", "Therefore, if our biaffine model is reasonably trained, we can use this property to find entity-boundary split positions in a model-predicted table.", "As expected, experiments (Figure 4) verify our assumption.", "We adapt this idea to the 3-dimensional probability tensor P.", "Specifically, we flatten P ∈ R^{|s|×|s|×|Y|} into a matrix P^row ∈ R^{|s|×(|s||Y|)} from the row perspective, and then calculate the Euclidean (l2) distances between adjacent rows.", "Similarly, we calculate the Euclidean distances between adjacent columns from the matrix P^col ∈ R^{(|s||Y|)×|s|} built from the column perspective, and then average the two distances to obtain the final distance.", "If the distance is larger than the threshold (1.4 in our default settings), the position is a split position.", "In this way, we can decode all the spans in O(|s|) time.", "Entity Type Decoding: Given a span (i, j) from span decoding (Footnote 8: i and j denote the start and end indices of the span.), we decode the entity type t* according to the corresponding square symmetric about the diagonal: t* = argmax_{t ∈ Y_e ∪ {⊥}} Avg(P_{i:j, i:j, t}).", "If t* ∈ Y_e, we decode an entity; if t* = ⊥, the span (i, j) is not an entity.", "Relation Type Decoding: After entity type decoding, given an entity e_1 with span (i, j) and another entity e_2 with span (m, n), we decode the relation type l* between e_1 and e_2 according to the corresponding rectangle: l* = argmax_{l ∈ Y_r ∪ {⊥}} Avg(P_{i:j, m:n, l}).", "If l* ∈ Y_r, we decode a relation (e_1, e_2, l*); if l* = ⊥, e_1 and e_2 have no relation.",
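The three decoding steps can be tied together in a compact sketch like the following, operating on the probability tensor P of a single sentence; the threshold value 1.4 comes from the text, while the helper structure is an illustrative assumption.

```python
import numpy as np

def decode(P: np.ndarray, ent_ids: list, rel_ids: list, null_id: int,
           threshold: float = 1.4):
    """Approximate joint decoding: spans -> entity types -> relation types.

    P: (seq, seq, |Y|) probabilities; returns entities [(i, j, label_id)]
    and relations [(ent_idx1, ent_idx2, label_id)].
    """
    seq = P.shape[0]
    # 1) Span decoding: split where adjacent rows/columns differ enough.
    rows = P.reshape(seq, -1)                         # (seq, seq*|Y|)
    cols = P.transpose(1, 0, 2).reshape(seq, -1)
    dist = (np.linalg.norm(rows[1:] - rows[:-1], axis=1)
            + np.linalg.norm(cols[1:] - cols[:-1], axis=1)) / 2
    splits = [0] + [i + 1 for i, d in enumerate(dist) if d > threshold] + [seq]
    spans = [(splits[k], splits[k + 1] - 1) for k in range(len(splits) - 1)]

    # 2) Entity type decoding: average over the diagonal square.
    entities = []
    for i, j in spans:
        avg = P[i:j + 1, i:j + 1].mean(axis=(0, 1))   # (|Y|,)
        best = max(ent_ids + [null_id], key=lambda t: avg[t])
        if best != null_id:
            entities.append((i, j, best))

    # 3) Relation type decoding: average over the off-diagonal rectangle.
    relations = []
    for a, (i, j, _) in enumerate(entities):
        for b, (m, n, _) in enumerate(entities):
            if a == b:
                continue
            avg = P[i:j + 1, m:n + 1].mean(axis=(0, 1))
            best = max(rel_ids + [null_id], key=lambda t: avg[t])
            if best != null_id:
                relations.append((a, b, best))
    return entities, relations
```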
"Datasets: We conduct experiments on three entity relation extraction benchmarks: ACE04 (Doddington et al., 2004) (Footnote 9: https://catalog.ldc.upenn.edu/LDC2005T09), ACE05 (Walker et al., 2006) (Footnote 10: https://catalog.ldc.upenn.edu/LDC2006T06), and SciERC (Luan et al., 2018) (Footnote 11: http://nlp.cs.washington.edu/sciIE/).", "Table 2 shows the dataset statistics.", "Besides, we provide detailed dataset specifications in Appendix B.", "Evaluation: Following the suggestions in (Taille et al., 2020), we evaluate Precision (P), Recall (R), and F1 scores with micro-averaging and adopt the Strict Evaluation criterion.", "Specifically, a predicted entity is correct if its type and boundaries are correct, and a predicted relation is correct if its relation type is correct and the boundaries and types of its two argument entities are correct.", "Table 3: Overall evaluation (Entity and Relation reported as P/R/F1 where available, otherwise F1 only). On ACE04: Li and Ji (2014), entity 83.5/76.2/79.7, relation 60.8/36.1/45.3; Miwa and Bansal (2016), LSTM, entity 80.8/82.9/81.8, relation 48.7/48.1/48.4; Katiyar and Cardie (2017), LSTM, entity 81.2/78.1/79.6, relation 46.4/45.3/45.7; Li et al. (2019), BERTLARGE, entity 84.4/82.9/83.6, relation 50.1/48.7/49.4; Wang and Lu (2020), ALBERTXXLARGE, entity F1 88.6, relation F1 59.6; Zhong and Chen (2020), BERTBASE, entity F1 89.2, relation F1 60.1; Zhong and Chen (2020), ALBERTXXLARGE, entity F1 90.3, relation F1 62.2; UNIRE, BERTBASE, entity 87.4/88.0/87.7, relation 62.1/58.0/60.0; UNIRE, ALBERTXXLARGE, entity 88.9/90.0/89.5, relation 67.3/59.3/63.0. On ACE05: Li and Ji (2014), entity 85.2/76.9/80.8, relation 65.4/39.8/49.5; Miwa and Bansal (2016), LSTM, entity 82.9/83.9/83.4, relation 57.2/54.0/55.6; Katiyar and Cardie (2017), LSTM, entity 84.0/81.3/82.6, relation 55.5/51.8/53.6; Sun et al. (2019), LSTM, entity 86.1/82.4/84.2, relation 68.1/52.3/59.1; Li et al. (2019), BERTLARGE, entity 84.7/84.9/84.8, relation 64.8/56.2/60.2; Wang et al. (2020), BERTBASE, entity F1 87.2, relation F1 63.2; Wang and Lu (2020), ALBERTXXLARGE, entity F1 89.5, relation F1 64.3; Zhong and Chen (2020), BERTBASE, entity F1 90.2, relation F1 64.6; Zhong and Chen (2020), ALBERTXXLARGE, entity F1 90.9, relation F1 67.8; UNIRE, BERTBASE, entity 88.8/88.9/88.8, relation 67.1/61.8/64.3; UNIRE, ALBERTXXLARGE, entity 89.9/90.5/90.2, relation 72.3/60.7/66.0. On SciERC: Wang et al. (2020), SciBERT, entity F1 68.0, relation F1 34.6; Zhong and Chen (2020), SciBERT, entity F1 68.2, relation F1 36.7; UNIRE, SciBERT, entity 65.8/71.1/68.4, relation 37.3/36.6/36.9.", "Implementation Details: We tune all hyper-parameters based on the averaged entity F1 and relation F1 on the ACE05 development set, then keep the same settings for ACE04 and SciERC.", "For fair comparison with previous work, we use three pre-trained language models, bert-base-uncased (Devlin et al., 2019), albert-xxlarge-v1 (Lan et al., 2019) and scibert-scivocab-uncased (Beltagy et al., 2019), as the sentence encoder and fine-tune them during training. (Footnote 12: The first two are for ACE04 and ACE05, and the last one is for SciERC.)", "For the MLP layer, we set the hidden size to d = 150 and use GELU as the activation function.", "We use the AdamW optimizer (Loshchilov and Hutter, 2017) with β_1 = 0.9 and β_2 = 0.9, and observe a phenomenon similar to (Dozat and Manning, 2016): setting β_2 from 0.9 to 0.999 causes a significant drop in final performance.", "The batch size is 32, and the learning rate is 5e-5 with weight decay 1e-5.", "We apply a linear warm-up learning rate scheduler with a warm-up ratio of 0.2.", "We train our model for a maximum of 200 epochs (300 epochs for SciERC) and employ an early-stopping strategy.", "Table 3 summarizes previous work and our UNIRE on the three datasets. (Footnote 13: Since (Luan et al., 2019a; Wadden et al., 2019) neglect the argument entity type in relation evaluation and underperform our baseline (Zhang et al., 2020), we do not compare their results here.)", "In general, UNIRE achieves the best performance on ACE04 and SciERC and a comparable result on ACE05.", "Compared with the previous best joint model (Wang and Lu, 2020), our model significantly advances both entity and relation performance, i.e., absolute F1 gains of +0.9 and +0.7 for entities, and +3.4 and +1.7 for relations, on ACE04 and ACE05 respectively.", "Against the best pipeline model (Zhong and Chen, 2020) (the current SOTA), our model achieves superior performance on ACE04 and SciERC and comparable performance on ACE05.", "Compared with ACE04/ACE05, SciERC is much smaller, so entity performance on SciERC drops sharply.", "Since (Zhong and Chen, 2020) is a pipeline method, its relation performance is severely influenced by the poor entity performance.", "Nevertheless, our model is less influenced in this case and achieves better performance.",
"Besides, our model can achieve better relation performance even with worse entity results on ACE04.", "In fact, our base model (BERTBASE) already achieves competitive relation performance, even exceeding prior models based on BERTLARGE (Li et al., 2019) and ALBERTXXLARGE (Wang and Lu, 2020).", "These results confirm that the proposed unified label space is effective for exploiting the interaction between entities and relations.", "Note that all subsequent experimental results on ACE04 and ACE05 are based on BERTBASE for efficiency.", "In this section, we analyze the effects of the components in UNIRE under different settings (Table 4).", "In particular, we implement a naive decoding algorithm for comparison, namely hard decoding, which takes the intermediate table as input.", "The intermediate table is the hard form of the probability tensor P output by the biaffine model, i.e., it chooses the class with the highest probability as the label of each cell.", "To find entity squares on the diagonal, it first tries to judge whether the largest square (|s| × |s|) is an entity.", "The criterion is simply counting the number of different entity labels appearing in the square and choosing the most frequent one.", "If the most frequent label is ⊥, we shrink the square size by 1 and repeat the procedure on the two (|s|−1) × (|s|−1) squares, and so on.", "To avoid entity overlapping, an entity is discarded if it overlaps with already-identified entities.", "To find relations, each entity pair is labeled with the most frequent relation label in the corresponding rectangle.", "From the ablation study, we make the following observations.", "First, removing either structural constraint hurts performance to different degrees (lines 2-3).", "Specifically, the symmetry loss has a significant impact on SciERC (removing it decreases entity and relation performance by 1.1 and 1.4 points, respectively), while removing the implication loss clearly harms relation performance on ACE05 (1.0 point).", "This demonstrates that the structural information incorporated by both losses is useful for this task.", "Compared with the default setting, the performance without logit dropout or without cross-sentence context drops more sharply (lines 4-5).", "Logit dropout prevents the model from overfitting, and cross-sentence context provides more contextual information for this task, especially for small datasets like SciERC.", "Hard decoding has the worst performance (its relation performance is almost half that of the default) (line 6).", "The major reason is that hard decoding decodes entities and relations separately.", "This shows that the proposed decoding algorithm jointly considers entities and relations, which is important for decoding.", "Following (Zhong and Chen, 2020), we evaluate the inference speed of our model (Table 5) on ACE05 and SciERC with the same batch size and pre-trained encoders (BERTBASE for ACE05 and SciBERT for SciERC).", "Compared with the pipeline method of (Zhong and Chen, 2020), we obtain a more than 10x speedup and achieve comparable or even better relation performance with W = 200.", "Compared with their approximate version, our inference speed is still competitive, with better performance.", "If the context window size is set the same as in (Zhong and Chen, 2020) (W = 100), we can further accelerate model inference with slight performance drops.", "[Figure 4: Distributions of adjacent rows' distances for the two categories (Ent-Bound vs. Non-Ent-Bound) with respect to the threshold on the ACE05 dev set.]", "In Figure 4, the distance between adjacent rows not at an entity boundary (Non-Ent-Bound) mainly concentrates at 0, while that at entity boundaries (Ent-Bound) is usually greater than 1.",
"This phenomenon verifies the correctness of our span decoding method.", "We then evaluate performance with regard to the threshold in Figure 5.", "Both span and entity performance decrease sharply when the threshold increases from 1.4 to 1.5, while relation performance starts to decline slowly from 1.5.", "The major reason is that relations are so sparse that many entities do not participate in any relation, so the effective threshold for relations is much higher than that for entities.", "Moreover, we observe a similar phenomenon on ACE04 and SciERC, and 1.4 is the best general setting across the three datasets.", "This shows the stability and generalization ability of our model.", "As shown in Table 4, both cross-sentence context and logit dropout improve entity and relation performance.", "Table 6 shows the effect of different context window sizes W and logit dropout rates p.", "Entity and relation performance improve significantly from W = 100 to W = 200, and drop sharply from W = 200 to W = 300.", "Similarly, we achieve the best entity and relation performance when p = 0.2.", "So we use W = 200 and p = 0.2 in our final model.", "We further analyze the remaining errors for relation extraction and present the distribution of five error types in Figure 6: span splitting error (SSE), entity not found (ENF), entity type error (ETE), relation not found (RNF), and relation type error (RTE).", "The proportion of SSE is relatively small, which proves the effectiveness of our span decoding method.", "Moreover, the proportion of not-found errors is significantly larger than that of type errors for both entities and relations.", "The primary reason is that table filling suffers from a class imbalance issue, i.e., the number of ⊥ labels is much larger than that of the other classes.", "We leave this imbalanced classification problem for future work.", "Finally, we give some concrete examples in Figure 7 to verify the robustness of our decoding algorithm.", "There are some errors in the biaffine model's predictions, such as the cells in the upper left corner (first example) and upper right corner (second example) of the intermediate table.", "However, these errors are corrected after decoding, which demonstrates that our decoding algorithm not only recovers all entities and relations but also corrects errors by leveraging the table structure and the information of neighboring cells.", "Entity relation extraction has been extensively studied over the decades.", "Existing methods can be roughly divided into two categories according to the adopted label space.", "Separate Label Spaces: This category treats the task as two separate sub-tasks, entity recognition and relation classification, defined in two separate label spaces.", "One early paradigm is the pipeline method (Zelenko et al., 2003; Miwa et al., 2009), which uses two independent models for the two sub-tasks respectively.", "Joint methods instead handle the task with an end-to-end model to exploit more interaction between entities and relations.", "The most basic joint paradigm, parameter sharing (Miwa and Bansal, 2016; Katiyar and Cardie, 2017), adopts two independent decoders on top of a shared encoder.", "Recent span-based models (Luan et al., 2019b; Wadden et al., 2019) also use this paradigm.", "To strengthen the connection between the two decoders, many joint decoding algorithms have been proposed, such as the ILP-based joint decoder (Yang and Cardie, 2013), joint MRT (Sun et al., 2018), and GCN-based joint inference (Sun et al., 2019).",
"In fact, the table filling method (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017; Wang et al., 2020) is a special case of parameter sharing in a table structure.", "These joint models all focus on various joint algorithms but overlook the fact that they are still essentially based on separate label spaces.", "Unified Label Space: This family of methods aims to unify the two sub-tasks and tackle the task in a unified label space.", "Entity relation extraction has been converted into a tagging problem (Zheng et al., 2017), a transition-based parsing problem (Wang et al., 2018), and a generation problem within a Seq2Seq framework (Zeng et al., 2018; Nayak and Ng, 2020).", "[Figure 7: Examples showing the robustness of our decoding algorithm, comparing the gold table, the intermediate table, and the decoded table.]", "We follow this trend and propose a new unified label space.", "We introduce a 2D table to tackle the overlapping relation problem of (Zheng et al., 2017).", "Also, our model is more versatile, as it does not rely on complex expertise as (Wang et al., 2018) does, which requires external expert knowledge to design a complex transition system.", "In this work, we extract entities and relations in a unified label space to better mine the interaction between the two sub-tasks.", "We propose a novel tabular formulation that presents entities and relations as squares and rectangles.", "The task can then be performed in two simple steps: filling the table with our biaffine model, and decoding entities and relations with our joint decoding algorithm.", "Experiments on three benchmarks show that the proposed method achieves not only state-of-the-art performance but also promising efficiency.", "The authors wish to thank the reviewers for their helpful comments and suggestions.", "This work was (partially) supported by the National Key Research and Development Program of China (2018AAA0100704), NSFC (61972250, 62076097), STCSM (18ZR1411500), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Fundamental Research Funds for the Central Universities." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "method", "method", "method", "objective", "method", "abstain", "other", "other" ]
[ "Undermining the impact of hateful content with informed and non-aggressive responses, called counter narratives, has emerged as a possible solution for having healthier online communities.", "Thus, some NLP studies have started addressing the task of counter narrative generation.", "Although such studies have made an effort to build hate speech / counter narrative (HS/CN) datasets for neural generation, they fall short in reaching either high-quality and/or high-quantity.", "In this paper, we propose a novel human-in-the-loop data collection methodology in which a generative language model is refined iteratively by using its own data from the previous loops to generate new training samples that experts review and/or post-edit.", "Our experiments comprised several loops including dynamic variations.", "Results show that the methodology is scalable and facilitates diverse, novel, and cost-effective data collection.", "To our knowledge, the resulting dataset is the only expert-based multi-target HS/CN dataset available to the community.", "The proliferation of online hatred has became an alarming issue (Williams, 2019) threatening not only the well-being of target individuals and groups, but also of society as a whole.", "While authorities establish regulations and policies, social media platforms take actions against hate speech mostly through moderation activities, such as content removal, account suspension, or shadow-banning, at the risk of hindering the freedom of expression.", "Meanwhile, Non-Governmental Organizations are qualifying volunteers for responding to online hate to promote human dignity and understanding in society.", "Such responses, i.e., Counter-Narratives (CN), are non-aggressive textual feedback using credible evidence, factual arguments, alternative viewpoints, and are considered as an effective strategy (Benesch, 2014; Schieb and Preuss, 2016) to confront hate speech while respecting the human rights (Kiritchenko et al., 2020).", "However, the vast amount of online hate speech makes an effective manual intervention impossible, which motivates a line of NLP research focusing on semi or fully automatized CN generation solutions 1 .", "In recent years, several CN collection strategies and datasets have been proposed addressing the data-hungry nature of current state of the art generation technologies (Mathew et al., 2018; Qian et al., 2019; Chung et al., 2019).", "Considering the shortcomings of the existing collection strategies (that grant either quality or quantity, but not both), we present an approach to produce high quality CNs for multiple hate targets while reducing the need for expert intervention.", "To this end, we build on top of the previous hybrid data collection strategies, aiming to increase efficiency while maintaining the requirements of data quality, novelty and diversity.", "In particular, we start from the work by Tekiroglu et al. 
"In the present work we propose to further reduce the data collection effort by closing the pipeline and feeding the post-edited output back to the language model, in order to regularly update it and improve the quality of the generated pairs.", "(Footnote 1: In our view the generation process can be fully automatic, but generation systems need human supervision and should not be fully autonomous, at least for delicate tasks such as hate countering on social media platforms.", "For this reason we advocate that generation systems should be used as a suggestion tool for NGO operators, to make their countering work more effective.", "In this way there is always a human moderator taking the final decision (Chung et al., 2019).", "Furthermore, this approach is also in line with de Lima Salge and Berente (2017)'s ethical framework, since the suggestion-tool configuration grants compliance with their rules.)", "Our experiments comprised two sessions, spanning a period of 6 months.", "In the first session we set up a 'simple' human-in-the-loop (henceforth HITL) procedure and iterated it several times, measuring at each loop the performance of the whole framework according to relevant metrics.", "In the second session we ran several additional loops in which we tested different strategies (i.e., author configurations) to improve the data collection according to the given metrics.", "Findings show that the HITL framework is scalable, allowing us to obtain datasets that are adequate in terms of diversity, novelty, and quantity.", "Moreover, this framework improves on previous hybrid data collection strategies, reducing at each loop the post-editing effort of the human reviewers or the number of discarded examples (session one).", "On the other hand, with dynamic adaptation, possible unwanted behaviors or flaws of the data collection can be handled at each loop by simply varying the author configuration (session two).", "The final dataset contains 5000 HS/CN pairs in English, covering multiple hate targets in terms of race, religion, country of origin, sexual orientation, disability, or gender.", "To the best of our knowledge, this is the first multi-target expert-based HS/CN dataset constructed through a semi-automatic mechanism; it can be downloaded at the following link: https://github.com/marcoguerini/CONAN .", "With regard to hate countering, we focus on three research aspects relevant to the present work: (i) publicly available datasets for detection, (ii) publicly available datasets for countering, and (iii) approaches for hybrid data collection.", "Hate detection datasets.", "Several datasets for hate detection have been presented, most of which rely on material collected from SMPs, such as Twitter (Waseem and Hovy, 2016; Waseem, 2016; Ross et al., 2017), Facebook (Kumar et al., 2018), WhatsApp (Sprugnoli et al., 2018), and forums (de Gibert et al., 2018).", "While the above datasets focus on a classification task, Mathew et al. (2020) released a dataset annotated with rationales to improve hate speech interpretability, and Sap et al. (2020) proposed the Social Bias Inference Corpus (SBIC), annotated with descriptions of the biases implicitly present in the language.", "For a more extensive review, we refer the reader to Poletto et al. (2020) and Vidgen and Derczynski (2020).",
"Hate countering datasets.", "While several social studies have shown that counter-narratives are effective in hate countering (Benesch, 2014; Silverman et al., 2016; Schieb and Preuss, 2016; Stroud and Cox, 2018; Mathew et al., 2019), only a few works have focused on data collection for CN generation.", "Mathew et al. (2018) focus on crawling, following the intuition that CNs can be found on SMPs as responses to hateful expressions.", "Qian et al. (2019) propose a crowdsourcing methodology in which crowd-workers (non-experts) are instructed to write responses to hate content collected from SMPs.", "The study by Chung et al. (2019) also relies on outsourcing CN writing, but via nichesourcing, using NGO operators who are experts in CN production.", "Hybrid models for data collection.", "Given the data-hungry nature of current NLP technologies, one line of research has recently focused on advanced hybrid models for data collection.", "Wallace et al. (2019) proposed using model interpretation to guide humans in the creation of adversarial examples for factoid question-answering systems.", "Dinan et al. (2019) and Vidgen et al. (2020) perform HITL data collection for detecting offensive language.", "In both studies, the dynamic procedure is shown to be successful in reducing the model error rate across rounds.", "Vidgen et al. (2020) point out that the HITL approach has multiple advantages over static data collection: design flaws can be addressed during the construction of the dataset, and the annotators' work is optimized, since it is guided by feedback from the model.", "Finally, Tekiroglu et al. (2020) propose a hybrid approach in which an LM is trained on a seed dataset of HS/CN pairs to generate new pairs, which are then validated and post-edited by annotators.", "In Figure 1 we present the pipeline of our methodology.", "Following the idea presented by Tekiroglu et al. (2020), we have an author module built using the GPT-2 language model (Radford et al., 2019) and fine-tuned on a seed dataset of HS/CN pairs.", "The author produces novel HS/CN candidates, while the reviewer(s) filter and, where needed, post-edit them.", "We iterate this data collection several times; at each loop, the reviewed examples are added to the training data and the author is fine-tuned from scratch on all available data.", "In the following sections we describe the main elements used in our procedures.", "To start the process, we built a seed dataset of 880 HS/CN pairs by nichesourcing its collection to 20 experts from two different NGOs.", "We named this dataset V1.", "The methodology for collecting V1 closely replicates the one presented by Chung et al. (2019).",
"In particular, we first created a list of prototypical hate texts with the help of an NGO expert for the following hate targets: DISABLED, JEWS, OVERWEIGHT, LGBT+, MUSLIM, WOMEN, PEOPLE OF COLOR, ROMANI, MIGRANTS.", "We then prepared two online data collection forms: in the first, NGO operators were asked to respond to examples selected from the prototypical hate text list; in the second, they were asked to write their own HS/CN pairs.", "This data collection session lasted roughly one month.", "Our experiments were run in two separate and subsequent sessions, meant to explore different aspects of the HITL approach.", "In the first session, after using V1 for the initial fine-tuning of GPT-2, we iterated the data collection 4 times, keeping the author-reviewer configuration as close as possible to the original one presented by Tekiroglu et al. (2020).", "Loops are numbered sequentially as V2 ... Vn.", "At each loop, we acquired 500 examples of accepted, and possibly post-edited, HS/CN pairs. (Footnote 2: The only exception is V2, which accounts for 620 pairs in order to reach the round number of 1500 examples.)", "To obtain a new set of 500 pairs (Vi), we fine-tuned GPT-2 each time from scratch using V1 ... Vi-1 as training data, and administered the generated samples to the reviewers until the target number was reached.", "In total we iterated the procedure 4 times, reaching V5 for a total of 3000 pairs.", "In the second session, we tested several alternative author configurations to ameliorate some unwanted behaviors/trends that emerged during the first session.", "We ran 4 additional data collection loops, this time in parallel (i.e., all starting from the V5 dataset) rather than as an iteration.", "For each loop, denoted V6,{config name}, we collected 500 HS/CN pairs, reaching a total of 5000 examples.", "In our experiments all models are variants of the author (GPT-2), obtained by changing the way it is fine-tuned or conditioned.", "For consistency, each model is trained using the same hyperparameter configuration.", "In particular, we used the GPT-2 medium model, fine-tuned for 3 epochs with a batch size of 1024 tokens and a learning rate of 2e-5.", "Each pair is represented as <|startofhs|>HS<|endofhs|> <|startofcn|>CN<|endofcn|> for training.", "At generation time, nucleus sampling (Holtzman et al., 2019) is used with a p value of 0.9.", "For the standard configurations we use only <|startofhs|> for conditioning.", "Given an HS tag, the models produce a chunk of text, which is a list of HS/CN pairs.", "These pairs are then stripped of the special tokens and administered to the reviewers for evaluation and possible post-editing.",
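As an illustration of the author configuration just described, here is a minimal sketch of the data serialization and conditioned generation with HuggingFace Transformers; the special-token scheme, the 3-epoch/2e-5 settings, and nucleus sampling with p = 0.9 come from the text above, while the helper names and the commented fine-tuning step are assumptions.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# Special tokens delimiting each HS/CN pair, as described above.
special = ["<|startofhs|>", "<|endofhs|>", "<|startofcn|>", "<|endofcn|>"]
tokenizer.add_special_tokens({"additional_special_tokens": special})
model.resize_token_embeddings(len(tokenizer))

def serialize(pairs):
    """Turn (hs, cn) tuples into fine-tuning text in the paper's format."""
    return "\n".join(
        f"<|startofhs|>{hs}<|endofhs|> <|startofcn|>{cn}<|endofcn|>"
        for hs, cn in pairs
    )

# (Fine-tuning on the serialized V1..Vi-1 data would go here, e.g. with the
# Trainer API; the text reports 3 epochs, batches of 1024 tokens, lr 2e-5.)

def generate_candidates(num_chunks: int = 1):
    """Condition only on <|startofhs|> and sample candidate HS/CN pairs."""
    prompt = tokenizer("<|startofhs|>", return_tensors="pt")
    out = model.generate(
        **prompt,
        do_sample=True,
        top_p=0.9,              # nucleus sampling, p = 0.9
        max_length=256,
        num_return_sequences=num_chunks,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(seq, skip_special_tokens=False) for seq in out]
```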
"We recruited 3 annotators from a pool of internship students as reviewers, over a period of 18 weeks, to filter and post-edit the generated pairs after an extensive training procedure.", "Training.", "Annotators underwent a two-week training, so that they became experts in HS/CN post-editing.", "The training included:", "(i) reading and discussing NGO guidelines and public documentation describing the activity of CN writing for hate countering,", "(ii) reading all V1 pairs to better comprehend the attributes of counter narratives,", "(iii) reading a sample of 100 HS/CN pairs that had been post-edited by an expert, to see concrete examples of the post-editing activity,", "(iv) performing a practice session of CN post-editing and discussing it with an expert NGO operator.", "Instructions.", "We adapted the reviewing instructions from Tekiroglu et al. (2020).", "In particular, for each pair, we asked the operators:", "(a) to approve it without any modifications if it was a valid pair,", "(b) if the pair was not perfect, but easily amendable, to modify it,", "(c) if the CN was completely irrelevant, or did not follow the NGO's guidelines, to discard the pair regardless of HS quality,", "(d) whenever there were facts or statistics in the CN, to check the veracity of the information in order to avoid possible LM hallucination effects.", "We further instructed the annotators to provide a hate target label for each accepted pair.", "The labels were useful both for the analysis and for the subsequent label-based generation strategies present in V6.", "In Table 1 we give an example of GPT-2 output and its post-edited version.", "Table 1 (an HS/CN example generated by GPT-2 and the post-edited version, with hate target annotation). HS: 'Transgenders should rape our children.' CN: 'This is not true. Maybe they are worried because of the rise in hate crimes, incidents of which are down to 28 percent, since 2014.' HS_pe: 'Transgenders want to rape our children.' CN_pe: 'This is not true. Maybe you should be worried about the rise in hate crimes against queers, incidents of which have almost doubled since 2014.' TARGET: LGBT+.", "Mitigation procedure.", "We applied an adapted version of the guidelines by Vidgen et al. (2019) to safeguard the annotators' well-being against the risk of harmful consequences of working with abusive content (present in the HSs and possibly in generated, not well-formed CNs).", "To this end, we first made sure that the annotators understood the pro-social aspects of the research and explained to them the purpose of their annotation activity in detail.", "Then we instructed the annotators to work no more than 2-3 hours per day and to take regular breaks, adjusting their workload as needed.", "Finally, we held weekly meetings and collected feedback from the annotators, to let possible problems or distress emerge.", "This procedure was repeated throughout the whole data collection campaign.", "Imbalance Degree measures the difference between a perfectly balanced distribution of the hate target categories and the actual unbalanced dataset; we use Imbalance Degree (ID) since it is specifically devoted to the multi-class scenario (Ortigosa-Hernandez et al., 2017).", "Datasets that are balanced over multiple hate targets could allow building more representative CN generation models.", "Acceptance Rate is the percentage of pairs accepted by the reviewers (either untouched or post-edited) over the total number they scrutinised.", "It represents an overall estimate of the ability of the framework to produce reasonable-quality material.", "HTER is originally a measure of post-editing effort for sentence-level translations (Specia and Farzindar, 2010).", "We adapted it to measure the reviewers' effort in terms of the average number of edits over the accepted pairs.", "An upper-bound threshold value of 0.4 is used to account for easily post-editable pairs (Turchi et al., 2013).", "Novelty measures how different two collections of texts are from each other, and is grounded in Jaccard similarity.", "We utilized it to compute the originality present in Vi with respect to the training data collected in the previous loops (Dziri et al., 2019; Wang and Wan, 2018).", "Repetition Rate measures intra-corpus quality in terms of language diversity by considering the rate of non-singleton n-gram types a corpus contains (Cettolo et al., 2014; Bertoldi et al., 2013).", "We use it to measure the ability of the framework to provide diverse and varied examples.", "Repetition Rate (RR) has the advantage of being independent of corpus size, so it can be used to directly compare different versions of our dataset.", "Vocabulary Expansion is a measure we introduce to serve two main objectives:", "(i) quantifying the contribution of the author and the reviewers, by focusing on the new tokens appearing at each loop (e.g., the term 'peace' was introduced for the first time by annotators in V2),", "(ii) quantifying the presence of cross-fertilization, i.e., tokens that appear for the first time in version Vn for a particular target, but were present in a version antecedent to Vn for the other targets (e.g., the term 'peace' for the target JEWS appears at V4 but was already present for the target MUSLIM in V2).",
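To make the text-level metrics concrete, here is a small sketch of a Jaccard-based novelty score and a simplified repetition rate; the exact formulations in the cited papers differ (e.g., RR is normally computed over fixed-size sliding windows), so this is an approximation for illustration only.

```python
from collections import Counter

def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def novelty(new_texts, old_texts) -> float:
    """Average dissimilarity (1 - max Jaccard) of each new text against
    the most similar text collected in the previous loops."""
    old_sets = [set(t.lower().split()) for t in old_texts]
    if not old_sets:
        return 1.0
    scores = []
    for t in new_texts:
        toks = set(t.lower().split())
        scores.append(1.0 - max(jaccard(toks, o) for o in old_sets))
    return sum(scores) / len(scores)

def repetition_rate(texts, max_n: int = 4) -> float:
    """Geometric mean over n = 1..max_n of the rate of non-singleton
    n-gram types, as a rough proxy for lexical repetitiveness."""
    tokens = " ".join(texts).lower().split()
    rates = []
    for n in range(1, max_n + 1):
        ngrams = Counter(tuple(tokens[i:i + n])
                         for i in range(len(tokens) - n + 1))
        repeated = sum(1 for c in ngrams.values() if c > 1)
        rates.append(repeated / max(len(ngrams), 1))
    prod = 1.0
    for r in rates:
        prod *= max(r, 1e-12)
    return prod ** (1.0 / len(rates))
```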
contains (Cettolo et al., 2014; Bertoldi et al., 2013).", "We use it to measure the ability of the framework to provide diverse and varied examples.", "Repetition Rate (RR) has the advantage of being independent of corpus size, so it can be used to directly compare different versions of our dataset.", "Vocabulary Expansion is a measure we introduce to serve two main objectives:", "(i) quantifying the contribution of the author and the reviewers, by focusing on new tokens that appeared at each loop (e.g., the term 'peace' was introduced for the first time by annotators in V2),", "(ii) quantifying the presence of cross-fertilization, i.e., tokens that appear for the first time in version Vn for a particular target, but were present in a version antecedent to Vn for the other targets (e.g., the term 'peace' for the target JEWS appears at V4 but was already present for the target MUSLIM in V2).", "In session one, all the versions of the dataset V2...V5 are generated using GPT-2_Vi, where the fine-tuning is performed on all previous versions of the dataset V1...Vi-1 as explained earlier.", "To produce HS/CN pairs, the author conditioning is performed using only the <|startofhs|> tag and collecting all the generated material, provided that each pair is encapsulated with the proper tags.", "For the analysis, we computed the metrics described in Section 4 on the HS/CN pairs obtained in each loop using micro-averaging (in Appendix A.4, Table 5 we report all results in detail).", "To isolate the possible effect of target-class imbalance, macro averages were also calculated; similarly, to account for element-wise differences we calculated micro averages for the HS and CN sets separately.", "Discussion.", "Considering our objective of collecting quality material in an efficient way, we first focus on the ratio of accepted pairs and the post-editing effort in each loop.", "As shown in Figure 2, the percentage of accepted pairs tends to increase across the loops, both for the pairs that are post-edited (modified), from 35.8 in V2 to 50.1 in V5, and for the ones accepted without post-editing (untouched), from 1.5 in V2 to 10.9 in V5.", "At the same time, the average post-editing effort of the reviewers tends to decrease across the versions, as depicted in Figure 3.", "(The macro-averaged and per-set results are in line with the ones shown in the paper and do not change the discussion; they are reported in Appendix A.4, Table 6.)", "[Figure 3: On the left: evolution of the post-editing effort in terms of HTER across loops, both for all pairs and for modified pairs only.]", "To ensure that the decrease in HTER is not due to the increasing ratio of untouched pairs to the total number of accepted pairs, we computed the HTER for the modified pairs alone.", "Consistently with the overall trend, HTER for modified pairs also declines, indicating that the data collection loops succeeded not only in reducing the reviewer effort, but also in improving the quality of the generated material to be post-edited.", "Notably, after V3 the HTER falls below the 0.4 acceptability threshold as defined by Turchi et al. (2013) for the AMT scenario (Figure 3).", "In view of this analysis, we can conclude that the efficiency of data collection is increased by HITL as compared to a static approach that does not retrain the author module (which can be represented by V2).", "Regarding the evaluations with the quality metric Repetition Rate (Figure 3), it increases from V2 onward, signifying a decrease in the lexical diversity of the generated data.", "Moreover, we
observed a consistent trend for the scores of the second quality metric, i.e., Novelty (Figure 4).", "Similar to the diversity, the novelty of the collected data also decreases across the versions, regardless of the dataset against which the novelty is computed.", "Particularly, the change in the cumulative novelty represents how the vocabulary becomes less and less enrichable as the loop number increases, indicating a possible saturation point where novel material is highly difficult to obtain.", "Finally, the distribution of hate targets shows a worsening also in terms of ID, which increases from a score of 2.2 in V1 to 4.5 in V5 (see Figure 2), with some targets becoming predominant while others slowly disappear.", "More details on each target distribution per loop are given in Appendix A.2, Figure 11.", "As for pair length, throughout the loops we found that untouched pairs are usually shorter (30.7 tokens on average) than the other accepted pairs (37.3 tokens on average before post-editing).", "During the discussion sessions, annotators reported that the untouched pairs are not only shorter but also somewhat stereotypical, with little novelty added to the overall dataset (e.g., 'you cannot say this about an entire religion', 'It's unfair to say this about an entire religion').", "Given the problems that emerged during the loops of the first session (i.e., higher efficiency but lower quality at each loop), we organized an additional session to test several parallel methodologies to ameliorate them.", "The descriptions of the V6 configurations are as follows: V6,SBF: The model GPT-2_V5 is conditioned with novel offensive speeches extracted from the SBIC corpus (Sap et al., 2020).", "We chose this resource since:", "(i) it contains several thousand social media posts containing biases and stereotypes spanning the same target categories as our study,", "(ii) for each post it provides an 'implied statement' that closely resembles a 'prototypical hate speech' on which we trained our system.", "We sampled the same number of 'implied statements' for each target that maps to our labels, among the ones annotated with 'the intent behind the statement was to offend' and/or 'the post could be offensive to someone'.", "We provide the statements as conditions by appending them to <|startofhs|>.", "V6,LAB: The model is conditioned by specifying which hate target it should focus on.", "In this configuration, we trained a variant of GPT-2_V5 that takes into account the target label, and modified the original representation of our training data accordingly.", "In particular, we accommodate hate target information within the starting token: <|startofhs:target label|>.", "(In Table 4 in the Appendix we provide the target mapping we used.)", "V6,ARG: We fine-tuned GPT-2 on a dataset of argumentative pairs collected from Kialo (www.kialo.com), an online debate platform for constructive and rational discussions among peers that has recently been exploited by the NLP community (Durmus et al., 2019a,b; Scialom et al., 2020).", "Each discussion in Kialo is represented as a tree of arguments in which a child node is connected to its parent via a pro or con relation.", "Extracting all the claims connected by a con relation, we obtained a dataset of 128,178 argument pairs covering a broader domain as compared to the HS/CN pairs.", "We then fine-tuned GPT-2 for 1 epoch over the argumentation dataset with the standard hyperparameters.", "Preliminary experiments showed that the best strategy was to represent these pairs with the same
format as ours to facilitate transfer of task characteristics and argumentative knowledge.", "Then this model was again fine-tuned using the standard V1...V5 data.", "At inference time, conditioning has been performed using lists of unique HSs from the V1...V5 data.", "V6,MIX: The last model is obtained by blending the three previous versions together, i.e., first fine-tuning on the Kialo dataset, then fine-tuning using the target label notation on V1...V5 data, and conditioning using SBIC offensive speeches.", "Bearing in mind the problems that emerged during Session One, our first goal in Session Two was to balance the dataset with respect to the hate targets (i.e., reducing the ID score).", "To this end the conditioning always takes into account the hate target label (with respect to 7 targets: JEWS, LGBT+, MUSLIM, WOMEN, DISABLED, PEOPLE OF COLOR, MIGRANTS), either explicitly as in V6,LAB or V6,MIX, or implicitly as in V6,SBF and V6,ARG.", "In addition, to better balance the number of pairs for each target, we administered only the first 5 pairs of each generated chunk to the reviewers.", "Discussion.", "All the applied methodologies allow for a better balancing of data in terms of hate targets, yielding an average ID score of 2.3 for the V6 configurations in comparison to the ID score of 4.5 for V5 (in the Appendix, Table 3, we provide the target distribution over the final dataset).", "As shown in Figure 5, left, all V6 configurations have a slightly higher acceptance rate than V5.", "(In order to estimate the trend of each metric after V5, we calculated also V6,PREDICTED, shown as a dashed line in the plots, using a linear regression model over V1...V5.)", "Thus, introducing novel material or data representation in the fine-tuning stages has no strong perturbation effect.", "Second, and more interestingly, we observe a significant variation in the ratio of untouched and modified pairs to all the reviewed samples: for all V6 approaches, while there is a strong decrease in the ratio of untouched pairs (Figure 5, right), there is a significant increase in those modified (see Figure 5, left).", "In other words, these models were able to produce a higher amount of suitable, albeit not perfect, pairs.", "In particular, comparing V6 configurations we can observe that for the untouched pairs the highest acceptance rate is achieved via V6,ARG with 6.37% accepted pairs, whereas for the modified pairs V6,MIX yields the highest percentage, with 66.15% of the pairs accepted.", "Concerning the reviewers' effort, we see that the overall HTER increases for all the V6 approaches (Figure 6, left).", "Considering that we had a lower number of untouched and a higher number of modified pairs this was expected, and if we turn to the HTER of modified pairs alone we see that there is a smaller difference between the V5 and V6 HTER.", "Even more interestingly, the HTER scores of all V6 configurations, even if higher than V5, are still below the acceptability threshold value of 0.4 defined earlier.", "Going into details, amongst the V6 configurations, HTER reaches its lowest value in V6,ARG, for both the modified and untouched pairs: since it was conditioned using gold HS material, this result is expected.", "As opposed to the other models, V6,LAB is conditioned only with a label representation and not with actual HSs.", "This negatively affected the post-editing effort, as we can notice a higher HTER for this configuration.", "Moreover, V6,LAB has a smaller amount of untouched pairs, so we expected HTER to spike up.", "With
regard to data quality (see Figure 7), we see that all V6 strategies succeed in increasing the novelty,", "both with respect to V5 and the expected V6 (the dashed line), except for V6,ARG, possibly due to its conditioning with HSs from V1...", "V5.", "Therefore, we also computed the novelty for the CN set alone to discard the effect of HS on the metric.", "In this setting, all V6 configurations reach a novelty between 0.741 and 0.745, as compared to a CN novelty in V5 of 0.737 (as in Appendix A.3).", "The effect of gold HS conditioning in V6,ARG can also be spotted in the lowest HTER results in Figure 6.", "The highest increase in novelty is recorded for V6,MIX, reaching a score of 0.76; novelty scores computed with respect to V5 and V1 also confirm the result.", "All V6 configurations succeeded in reaching an RR lower than both V5 and the expected V6 (the dashed line).", "It is interesting that V6,LAB has the highest RR among the V6 configurations, possibly because it was not built using any external knowledge, but only with a different label representation.", "On the other hand, the V6,ARG configuration, for which an initial argumentation fine-tuning has been performed, has the lowest RR (5.474).", "From this analysis we can conclude that V6 configurations are better at producing sub-optimal material but worse at producing perfect material.", "Still, the general quality of the pairs (in terms of novelty and RR) in Session Two is much higher than before, exhibiting the desired behavior for which these strategies were introduced.", "We report vocabulary expansion findings in Figure 8.", "For each loop V2...V5, the average percentage of new words injected into the dataset by the author model (GPT-2) is higher than the average percentage of new words inserted by the three reviewers during post-editing.", "Both trend lines, even if slightly decreasing, are not converging, implying that fine-tuned GPT-2 is not reaching a saturation point and is continuously adding new material.", "This trend is in line with the decrease in novelty.", "On the other hand, the instructions asked for minimal post-editing, so the reviewers had less opportunity to inject new material than the author, and the decrease is consistent with the decreasing HTER.", "As for the percentage of words generated by the author model pertaining to the same target, we see an increasing trend throughout the generations due to the cumulative nature of the metric.", "Still, the presence of words first observed in other targets throughout the loops shows that the cross-fertilization phenomenon generated by GPT-2 persists.", "This desired feature shows the ability of GPT-2 to learn the 'domain' vocabulary of other targets and to use it effectively for the target under analysis, in a way that is approved by the human reviewers.", "Finally, we can remark that V6 versions are able to increase both the injection of novel words from GPT-2 and also its cross-fertilization ability.", "During our exploratory experiments and the discussion sessions with the annotators, several", "(Even though we opted for a distinction in terms of source (same target, other target) for the 'not-new' words generated by the author module, we chose not to have the same distinction for the reviewers' 'not-new' words, since we cannot assess whether the reviewer was aware of the presence of a word in previous versions of the dataset.)", "HS: All homosexuals are trying to undermine the family
by trying to impose their sexual preferences.", "CN: Thus far, people of various sexual orientations have attempted suicide twice as often as heterosexuals. Among youth, those that seriously contemplate suicide do so at almost three times the rate of heterosexuals. LGB youth are almost five times as likely to have attempted suicide compared to heterosexual youth. Tell me honestly: who do you think is imposing sexual orientation on whom?", "interesting subjects have emerged, which can initiate future work.", "Argumentation and Counter Narratives.", "In order to obtain even more novelty in the produced pairs, the V6,ARG model could be used without fine-tuning on the HS/CN dataset, under the assumption that a counter argument is the same as a counter narrative.", "Still, the ability to argue on a variety of topics is not enough to provide a meaningful CN when prompted with an HS.", "A CN also presupposes values, so, for example, a logically valid argument is not necessarily an acceptable CN, as the first example in Table 2 shows (produced by GPT-2 fine-tuned only on Kialo arguments).", "New arguments or new paraphrases.", "One question that emerged is whether GPT-2 is able to produce novel arguments or whether it is just a very sophisticated paraphrasing tool.", "During the discussion sessions with annotators and also by manual analysis, we could find CNs that contained genuinely novel arguments, which were not present in the training data but produced by GPT-2.", "In the second example in Table 2, the novel argument is about capsizing the 'imposing the homosexual agenda' argument by providing data on suicide attempts among homosexual youth.", "Novel hate targets and general knowledge.", "GPT-2 proved to be able to generate HS/CN pairs also for unseen targets, including intersectional ones (e.g., black women).", "Still, the lack of commonsense knowledge can produce funny results that are beyond the scope of hallucination (Zellers et al., 2019; Solaiman et al., 2019), such as the third example in Table 2, where GPT-2 addresses muggleborns (a target of hate in the Harry Potter books).", "In this paper we presented a novel HITL methodology for data collection based on an author-reviewer framework.", "This methodology puts together an LM and a set of human reviewers, where the LM is refined iteratively, using data from previous loops that have been validated by experts.", "Experiments show that as loops are iterated, efficiency in data collection increases (acceptance rate and HTER metrics) while the dataset quality decreases in terms of novelty and diversity metrics.", "For this reason we experimented with additional dynamic loop adaptations that are able to increase the overall quality of the dataset without hindering the efficiency significantly.", "This work was partly supported by the HATEMETER project within the EU Rights, Equality and Citizenship Programme 2014-2020.", "We are deeply grateful to Stop Hate UK and its volunteers for their help and effort in preparing the seed dataset (version V1) necessary for this work." ]
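The Novelty metric in the section above is described only in prose as "grounded on Jaccard similarity". As a rough illustration of how such a score can be computed between a new loop and the data from previous loops, here is a minimal sketch; the whitespace tokenization, the max-over-references aggregation, and all function names are our assumptions rather than the paper's exact formulation (cf. Dziri et al., 2019; Wang and Wan, 2018).

```python
# Minimal sketch of a Jaccard-grounded novelty metric. The exact
# formulation used in the paper may differ; tokenization and
# aggregation here are illustrative only.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two token sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def novelty(new_texts: list, old_texts: list) -> float:
    """Average dissimilarity of each new text from its closest old text:
    higher means the new loop adds more original material."""
    old_token_sets = [set(t.lower().split()) for t in old_texts]
    scores = []
    for text in new_texts:
        tokens = set(text.lower().split())
        closest = max(jaccard(tokens, old) for old in old_token_sets)
        scores.append(1.0 - closest)  # 1 - max similarity = novelty
    return sum(scores) / len(scores)

# Example: novelty of loop V_i against the union of V_1..V_{i-1}
v_prev = ["you cannot say this about an entire religion"]
v_new = ["hate crimes against this community have risen sharply"]
print(novelty(v_new, v_prev))
```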
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "other", "other" ]
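The author-side generation step described above (GPT-2 medium fine-tuned on pairs serialized with the startofhs/endofcn-style tags, then conditioned on <|startofhs|> and decoded with nucleus sampling at p = 0.9) can be sketched with the Hugging Face transformers API. This is an illustrative sketch, not the authors' code: the fine-tuned checkpoint path is a placeholder, and it assumes the special tags were introduced during fine-tuning.

```python
# Sketch of conditioned generation with a fine-tuned GPT-2 author model.
# "path/to/finetuned-gpt2" is a placeholder for a checkpoint trained on
# pairs serialized as <|startofhs|>HS<|endofhs|> <|startofcn|>CN<|endofcn|>.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("path/to/finetuned-gpt2")  # placeholder

prompt = "<|startofhs|>"  # standard-configuration conditioning
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,          # nucleus sampling as in Holtzman et al. (2019)
    max_length=256,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    chunk = tokenizer.decode(seq, skip_special_tokens=False)
    # Each generated chunk is a list of HS/CN pairs; it is then split on
    # the pair tags and stripped of special tokens before review.
    print(chunk)
```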
[ "We propose the task of updated headline generation, in which a system generates a headline for an updated article, considering both the previous article and headline.", "The system must identify the novel information in the article update, and modify the existing headline accordingly.", "We create data for this task using the NewsEdits corpus (Spangher and May, 2021) by automatically identifying contiguous article versions that are likely to require a substantive headline update.", "We find that models conditioned on the prior headline and body revisions produce headlines judged by humans to be as factual as gold headlines while making fewer unnecessary edits compared to a standard headline generation model.", "Our experiments establish benchmarks for this new contextual summarization task.", "Automatic text summarization condenses the most important and salient information from a large quantity of text.", "The task takes many different forms depending on the type of information being summarized, the modality of the information, the type of summary desired, and the needs of the end user.", "Examples include news headline generation (Banko et al., 2000; Zajic et al., 2002; Dorr et al., 2003; Takase et al., 2016; Matsumaru et al., 2020), summarization of social media (Liu et al., 2012; Ding and Jiang, 2015; Kim et al., 2019), and medical documents (Schulze and Neves, 2016; Liang et al., 2019; Adams et al., 2021).", "News stories are revised as events unfold (Tannier and Moriceau, 2013), social media streams evolve as people post content (Tarnpradab et al., 2021), and biomedical texts are revised as clinical trial results emerge (uptodate, 2021).", "In such dynamic settings, existing summaries should be updated as new information becomes available.", "To address this, we could in principle leverage static summarization systems for generating a summary of the underlying content at any given point in time.", "However, a more natural approach would be to produce a new summary based on what the reader already knows and what content changed.", "Consider the case of a news article being updated as events unfold (Figure 1).", "The article first reports that a man is charged with stealing an ice cream van, and the article is later updated when the man admits to the crime.", "By the time the article is updated, the reader already knows what was stolen, who was charged, and where it happened.", "At this point, the reader is most interested in what changed, namely the admission of guilt.", "In the case of news articles, the new headline must both convey critical new information and provide a holistic overview for readers unfamiliar with the story.", "Updating a summary instead of wholesale replacement falls outside the scope of static summarization systems.", "To address these shortcomings, we envision a summarization system that combines an existing summary with information updates.", "More concretely, following prior work using headlines as article summaries (Graff et al., 2003), we consider the task of news headline generation.", "We instead propose updated headline generation, which entails updating headlines based on changes to the content of the article.", "In this work, we make the following contributions: [Figure 1: Example of a news story where both the body and headline are revised after publication.]", "Evaluate the contribution of different types of information (previous headline, edits to the article body) to a model that makes updates to an existing news headline.", "Conduct a
human evaluation demonstrating that leveraging this additional context leads to headlines which are as factual as standard headline generation models, while applying fewer unnecessary edits.", "Perform an error analysis to determine which types of headline updates are addressed by our model, and what challenges remain.", "A news article consists of a body (B) and a headline (H).", "Headline generation (Banko et al., 2000; Zajic et al., 2002; Dorr et al., 2003; Takase et al., 2016; Matsumaru et al., 2020) asks a system to consider B and produce H.", "We propose updated headline generation as a modification of this task.", "A system receives an existing article (B1, H1) and an updated version of the article body (B2).", "The goal is to update H1 to produce a new headline (H2) that reflects important new information in B2.", "This task introduces several challenges.", "First, a system must identify the most critical new information in B2.", "Changes to the article can be small or very significant, and it must determine which of these changes, if any, should be reflected in the headline. (Our data is available at: https://github.com/panthap2/updated-headline-generation.)", "Second, it needs to consider how to modify H1.", "Oftentimes a revision to an article will preserve most of the structure of H1, even if a completely rewritten headline might convey the same information.", "New information should be reflected in an updated headline with minimal edits, for the sake of continuity and minimizing cognitive load on a reader who is following an evolving story.", "Third, there are different types of updated stories that each require a different style of headline update.", "Stories can be updated as the underlying event progresses (e.g., criminal investigations, natural disasters, voting on legislation or appointments, live events), as new or corrected information becomes available (e.g., the number of people injured following an accident), or as public figures react to the event (e.g., a political figure commenting on a situation).", "See Table 1 for examples.", "The NewsEdits (Spangher and May, 2021) corpus contains articles with revision histories derived from 22 wires: 5 from News Sniffer (https://www.newssniffer.co.uk/) and the remainder from Twitter accounts powered by DiffEngine (https://github.com/DocNow/diffengine).", "It consists of over one million articles with 4.6 million revisions.", "In this work, we focus on the 5 English language wires from News Sniffer (Washington Post, NY Times, Independent, BBC, Guardian), as we found them to have cleaner revision histories.", "From the revision history of a given article, we extract body-headline pairs by examining consecutive versions, (Bk, Hk), (Bk+1, Hk+1), resulting in examples of the form {(B1, H1), (B2, H2)}.", "[Table residue: example article versions with columns B1, H1, B2, H2; e.g., 'Nearly a million people in southern Vietnam face evacuation from the path of a deadly tropical storm ...']", "We exclude cases without a change in the body, and group examples into two different classes: positive examples, where the headline is updated (i.e., H1 ≠ H2), and negative, where the headline remains unchanged (i.e., H1 = H2).", "We observed that the headline change associated with a particular body change sometimes occurred in the subsequent revision (not contemporaneous).", "So, we also include positive examples which have the following property across three consecutive revisions: only the body is changed between the first and second versions and only the headline is changed
between the second and third versions, i.e., (B1, H1) → (B2, H1) → (B2, H2).", "We do not include (B1, H1) → (B2, H1) as a negative example for such cases.", "To avoid spurious positive examples, we tried removing versions that were incorrectly paired together, or where the headline change was trivial.", "(Incorrectly paired versions arise because B1 and B2 are sometimes completely unrelated, likely due to an error in the News Sniffer collection.", "We removed examples in which B2 was published more than a week after B1, and we exclude articles that yield more than 8 version pairs (95th percentile).", "Trivial headline changes included modifications limited to spacing and punctuation, as well as simple rephrasing, i.e., changes to stopwords or the surface form of a lemma.)", "This process produced a dataset of 144,218 positive and 794,372 negative examples.", "Even after filtering by heuristic, we found that many of the remaining headline changes still do not reflect a substantive update to the article.", "These include purely stylistic changes, embellishments, and rephrasings.", "To filter such cases, we develop a classifier which is trained to determine whether H1 needs to be updated based on the changes between B1 and B2.", "The classifier achieves 51.9 F1, indicating that this is a challenging problem; the training and evaluation data are silver-labeled and noisy.", "We filter the remaining positive examples with this classifier.", "Empirically, we find that training on this filtered subset leads to improved performance.", "We provide a complete description of the classifier and attendant experiments in Appendix A. 3.1 The HREN Dataset: After data cleaning and filtering, we obtain the Headline Revision for Evolving News dataset (HREN), which contains 69,243 examples with meaningful headline edits.", "Descriptive statistics for each fold are listed in Table 2. Average numbers of tokens per document are broken down by source text type, with B_edits and B_edits (change only) described in Section 4.2.", "We partition the data into 80/10/10 training, validation, and test splits.", "While constructing the data, we took care to ensure that the underlying articles from which examples are drawn are disjoint across partitions, and that the timestamps corresponding to examples in the training set strictly precede those in the validation set, which in turn precede those in the test set.", "This ensures that we train on strictly historical data (similar time-based partitioning was done for the classifier; see Appendix B for date cutoffs for each fold).", "Our main experiments use HREN, though we include negative examples and filtered positive examples in some later experiments.", "COPY H1: Updated headlines usually copy parts of the original headline and the overall structure.", "For instance, in Figure 1, 8 of the 9 tokens in the updated headline come from the original one.", "So, we consider copying H1 as the prediction.", "LEAD-1: Newsroom style guides dictate that the most significant information should appear first (Siegal and Connolly, 1999).", "Consequently, the lead sentence typically includes information that is mentioned in the headline, as shown in Figure 1. This baseline uses the lead sentence of B2 as the prediction for H2.", "SUBSTITUTION: Many headlines can be correctly updated by a simple token replacement, reflecting an analogous replacement in the body.", "Table 1: H1 is \"At least 19 Hurt in Tractor-Trailer and Bus crash on I-64 in Virginia\", and a sentence in B1 (\"At least 19 people...\") is updated to \"At least 24 people\" in B2, prompting a similar change in the headline H2.
So, if a single token (t1 = 19) appearing in both H1 and B1 is replaced with a new token (t2 = 24) in B2, we form H2 by substituting t1 with t2 in H1.", "We only consider single-token replacements and copy H1 if a substitution cannot be made.", "Note that this is a high-precision baseline, with 10.8% of headlines able to be updated by this heuristic.", "4.2 Context Representations: We study various configurations for representing the input context for training the models.", "H1: Many headline updates follow a natural progression of events (e.g., \"Lori Loughlin Expected to Plead Guilty via Zoom in College Admissions Case\" → \"Lori Loughlin Pleads Guilty via Zoom in College Admissions Case\").", "In these cases, knowing the old headline may be sufficient to predict the subsequent headline.", "Therefore, we consider providing only H1 to a statistically trained model.", "H1 + B2: We provide both H1 and B2.", "Faithfulness to the article body is paramount for automatic headline generation (Matsumaru et al., 2020), and leveraging the original headline removes some of the burden of generating a headline from scratch.", "H1 + B2 + B1: We provide all available context to the model, so that the model can compare story versions and consider the old headline during decoding.", "H1 + B_edits: Asking the model to compare two full articles may be unrealistic.", "Instead, we provide the sequence of edits between B1 and B2: <KEEP> A 22-year old man has <KEEP_END> <REPLACE_OLD> been charged after <REPLACE_NEW> admitted stealing <REPLACE_END> <KEEP> an ice cream...", "This sequence consists of edit actions (insert, delete, replace, and keep), represented in the format proposed by Panthaplackel et al.
(2020).", "We study whether providing explicit body edits helps a model learn to apply analogous headline edits.", "H1 + B_edits (change only): Rather than feeding in the full edit sequence, we discard keep spans.", "While this removes information about where the edits are made, it significantly reduces the amount of context a model must reason about (Table 2).", "We evaluate two encoder-decoder models that utilize each of the representations described in Section 4.2.", "Note that we first preprocess all representations using the Penn Treebank tokenizer (https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/process/PTBTokenizer.html) to tokenize and split text into sentences and words, prior to model-specific preprocessing.", "Pointer Networks consist of separate LSTM encoders for body and headline text, and these are concatenated to form the initial states for an LSTM decoder, equipped with attention (Vinyals et al., 2015; Wang et al., 2016).", "The hidden states are concatenated for both attention and copy mechanisms.", "We posit that this model might be effective at headline updating, as this task benefits from copying tokens from the input context (especially H1).", "We initialize embeddings for the model with GloVe (Pennington et al., 2014).", "BART (Lewis et al., 2020) is a pretrained transformer network considered state-of-the-art for summarization.", "Because we focus on the news domain, we consider a version of BART already fine-tuned for summarization on news articles from CNN-Daily Mail (Hermann et al., 2015) (https://huggingface.co/facebook/bart-large-cnn).", "We further fine-tune on our data, by concatenating inputs into a single sequence, separated by special tokens (e.g., <OLD_HEADLINE>, <NEW_BODY>).", "We evaluate all context representations with both of these architectures, with the exception of H1 + B2 + B1 for BART, due to limitations in fitting the entire input context within BART's 1024-token limit.", "We use beam search with a beam size of 20 to decode for all models, along with bigram blocking (Paulus et al., 2018); we chose bigram instead of the more typical trigram blocking as headlines tend to be short.", "These decoding hyperparameters were found to work well across models during preliminary experimentation based on an unweighted average across automated metrics.", "We evaluate with common text-generation metrics: METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin and Och, 2004), and BLEU-4 (Papineni et al., 2002).", "Given the editing nature of our task, we also use two edit-specific metrics: GLEU (Napoles et al., 2015) and SARI (Xu et al.,
2016).", "SARI measures the average n-gram F1 scores corresponding to edit operations (add, delete, and keep).", "GLEU closely follows BLEU except that it places more importance on n-grams which have been correctly changed.", "We compute statistical significance at the p < 0.05 level using bootstrap tests (Berg-Kirkpatrick et al., 2012).", "Rule-Based Baselines: Our results (Table 3) show that rule-based baselines achieve relatively high performance, even beating the headline generation setting (B2) for the pointer network and BART in some cases.", "Due to the high lexical overlap between H1 and H2, the COPY H1 baseline can perform well on automated metrics, specifically the three text-generation metrics.", "The SUBSTITUTION baseline performs slightly better than simply copying H1 by making simple substitutions in 10.8% of examples, demonstrating improvements in the two edit-based metrics.", "The LEAD-1 baseline performs lower than the other baselines on most metrics due to the discrepancy between the structure and style of the lead sentence and headlines, with the average lead sentence length being 36.7 tokens.", "However, the SARI score is substantially higher.", "(SARI is calculated as the average of n-gram F1 scores of add, delete, and keep edit operations; because the lead sentence will not contain many n-grams in H1 that should be deleted, and will contain some n-grams that should be inserted into H2, the SARI score is high for this baseline.)", "Using H1: For both the pointer network and BART, providing only H1 results in lower performance than COPY H1 for most metrics, except for SARI, which is designed to evaluate edits.", "Higher SARI suggests that these models are able to make the necessary edits in some cases by guessing the natural progression of events, without the news body, such as forecasting the order of events following a police investigation (e.g., a suspect is arrested, charged, and then appears in court).", "However, the SARI score is still much lower than if only B2 is provided, as in standard headline generation.", "This highlights the importance of the latest version of the article body in updated headline generation.", "Nonetheless, by comparing the performance of B2 and H1 + B2 across both architectures, we see the extent to which H1 can guide headline generation models in selecting important content and determining structure for the output.", "This demonstrates the inadequacy of framing this as a static headline generation task.", "The improvements on edit metrics are more limited because a model which has access", "Using Body Edits: To investigate whether providing body edits can further improve performance by helping a model learn to correlate them with H1 and apply analogous updates, we consider different ways of incorporating B1.", "First, in the pointer network, we evaluate performance when just feeding it in as another input (H1 + B2 + B1).", "We observe no improvement in performance, suggesting that the model fails to implicitly learn the edits.", "Next, we consider collapsing B1 and B2 into a sequence of edits (H1 + B_edits), with which we see a slight improvement in performance over H1 + B2 for the pointer network but a reduction in performance for BART.", "We believe this is because BART struggles to model longer input sequences.", "When we reduce the context length and provide only the changes in the edit sequence (H1 + B_edits (change only)), we see an improvement in BART.", "Note that the performance of H1 +
B_edits (change only) is lower for the pointer network across most metrics.", "This may be due to a lack of pretraining, whereas BART is already equipped with a strong language model.", "Pointer Network vs. BART: While both model classes perform well, BART models tend to perform better overall, demonstrating the value of BART's larger transformer-based architecture and pretraining.", "Nonetheless, the benefits of using H1 and body edits generalize across both architectures.", "We expect that the performance of more recent summarization models such as PEGASUS (Zhang et al., 2020) or SimCLS (Liu and Liu, 2021) will exhibit a similar trend as BART, but we welcome evaluation of other large pretrained summarization models.", "[Table 4 residue: columns Factual, Focus, Minimal Edits, Headlinese, Grammatical; first row Copy H1.]", "Design: We conduct a human evaluation of the (more performant) BART models with the following configurations: B2, H1 + B2, H1 + B_edits (change only).", "As points of reference, we also evaluate the gold headline (H2) and the output of the COPY H1 baseline.", "Annotators were presented with a visual diff between B1 and B2 along with H1, and were asked to judge a candidate updated headline according to five dimensions on a Likert scale: whether the updated headline was factual, grammatical, appears to be written in headlinese, focuses on important changes/information in the updated body (similar to the relevance criterion commonly used to evaluate natural language generation models (Sai et al., 2020)), and makes only minimal edits to the original headline.", "We introduce the last dimension since we frame our task as an editing task.", "The underlying idea behind editing is that change should only be made to the parts that warrant it; all other parts that do not need to be changed should be preserved, which is consistent with how humans edit text (Panthaplackel et al., 2020).", "Additionally, this is consistent with the task motivation, in which we expect a reader to interpret the important changes in a minimally edited headline with less cognitive effort.", "We sampled 200 test examples, 143 from HREN and 57 from the unfiltered sample (results on the unfiltered examples are in Appendix A.3), resulting in 806 unique annotation tasks.", "Each task was independently annotated by three paid annotators who were trained on this task: native English speakers, two of whom were journalism majors.", "See Appendices C and D for more details on the annotation procedure.", "Results: We present average annotator ratings for each dimension in Table 4. Following the human evaluation analyses in Reiter and Belz (2009) and Wiseman et al. (2021), we compute statistical significance using multi-way ANOVA tests, followed by Tukey's post hoc HSD tests for pairwise statistical significance (at the p <
0.05 level).", "For headlinese and grammatical, we find no significant difference between the approaches; all achieve relatively high scores.", "With respect to factual and focus, all approaches perform similarly except for COPY H1, which significantly underperforms the others, by inaccurately reflecting the state of matters after the story is updated and failing to highlight important changes in B2.", "On the other hand, COPY H1 achieves the best performance on minimal edits by definition (i.e., H1 has minimal edits with respect to itself).", "As expected, without access to H1, the headline generation model (B2) achieves the lowest performance on this dimension.", "Overall, we find that the two BART models which also include H1 as context performed better, even beating gold headlines on this dimension.", "This is unsurprising as gold headlines often undergo stylistic rewrites, in addition to reflecting changes to the facts of evolving news stories.", "For example, \"Byron Burger Menu 'Reassured' Allergy Death Owen Carey\" is rewritten as \"Byron Burger Death: Owen Carey's Family Demand Law Change\": the form of the headline changes in addition to the release of new information.", "Although H1 + B_edits (change only) performs slightly better than H1 + B2 on automated metrics in Table 3, we find that they perform similarly on the five dimensions.", "In summary, incorporating H1 leads to predictions which make fewer unnecessary edits to the original headline, while simultaneously maintaining the performance of headline generation models with respect to factuality, focus, headlinese, and grammaticality (on par with gold headlines).", "Case Study: Table 5 presents BART predictions for the example in Figure 1 under different context representations (additional examples are provided in Appendix F).", "Given only H1, the model predicts an updated headline by speculating about what might follow a person being charged with a crime.", "Using only B2, the model generates a headline which reflects that a person has admitted to the crime, but it deviates from the form of the original headline by inserting the name of the person and altering terminology.", "These aspects of the story have not changed, and should not be changed in the headline.", "With H1 + B2, the prediction captures the major change in the article and better retains the form of H1, but it still makes an unnecessary change by inserting the person's age into the headline.", "Given H1 + B_edits, the model learns to only edit the part which is relevant to the body changes, but the terminology used to perform the edit (i.e., 'pleads guilty') varies from the article ('admits' to the crime). In contrast, H1 + B_edits (change only) is able to simultaneously perform minimal edits and correlate edits between the article and headline.", "Performance by Edit Level: Headlines require more extensive edits when there are more substantial changes to the article.", "We perform a fine-grained analysis to better understand how various context representations fare for these different types of examples.", "We group examples based on
the Jaccard similarity between H1 and the gold H2; low similarity means significant edits, high similarity means minimal edits.", "[Figure 2: Absolute difference in GLEU between various BART models and standard headline generation (B2) for each Jaccard headline similarity bucket.]", "For the BART-based model we calculate the GLEU score for each bucket because we find that it is better suited for simultaneously evaluating whether appropriate edits were made along with generation quality.", "Figure 2 shows the change in performance attributed to each of the context representations relative to headline generation (B2).", "For low similarity values, none of the specialized context representations outperform standard headline generation.", "This suggests that when more substantial edits are needed, starting from scratch may be best.", "As the similarity increases, models which utilize H1 perform substantially better.", "For moderate similarity, having B2 instead of body edits performs marginally better, but this changes as the similarity score increases, with B_edits (change only) leading to drastic improvements.", "Analyzing Attention: To better understand how models explicitly make use of the old headline, we analyze how the H1 + B_edits (change only) BART model's decoder attends to H1.", "For this, we follow the methodology of Vig and Belinkov (2019).", "Namely, we label each context token by which span it belongs to: H1, one of the edit spans, or a span delimiter token (Other).", "We compute the average attention paid to each context token class by the BART decoder across all examples and layers.", "We find that even though only 1.5% of the context tokens are from H1, they attract over 17% of the BART decoder's attention.", "Interestingly, the attention paid to added content and headline tokens increases in later layers at the expense of Other tokens (Figure 3).", "We posit that this is because the initial layers need to attend to special tag tokens in order to understand which type of span each enclosed token belongs to.", "This may also arise from the fact that initially the decoder attends to all tokens relatively uniformly (49.9% of tokens are Other on average).", "However, even in the initial layers, the tokens in H1 are attended to more than would be expected by a uniform attention distribution, likely because H1 text always appears near the start of the context.", "Because of this, locating the H1 tokens is less dependent on identifying enclosing tags: absolute position also helps.", "We also find that the decoder attends to tokens in H1 more often than would be expected under a uniform attention model, until it needs to refer to a new piece of information that was added to the article body.", "Figure 4 displays the relative attention paid to each token type for a decoded headline exemplifying this phenomenon.", "See Appendix E for additional detail on the decoder attention analysis.", "Error Cases: Finally, we inspected cases where annotators assigned very low or high scores.", "We observe that with B2 alone, the headline generation model makes factual errors by mixing up important details when two similar types of entities are discussed in the article (e.g., mixing up the victim and suspect of a crime, mixing up locations and dates).", "Additionally, it makes factual errors by omitting something important, which drastically changes the meaning (e.g., missing a letter in the acronym for an organization).", "On the other hand, because H1 often includes important
background that can be directly copied, we find fewer such factual errors caused by omission for the H1 + B2 and H1 + B_edits (change only) models.", "Having H1 also helps in maintaining important details (e.g., event location) and specifying the level of detail that is needed.", "In general, H1 is most useful when there is high lexical overlap with the lead sentences of B2.", "If the content is significantly different (e.g., the focus of the article changes), it becomes less useful and can even hurt performance in some cases, since H2 is likely very different from H1.", "Body edits are most useful when there are few edits and these edits can be easily grounded in H1.", "For H1 + B_edits (change only), we also noticed errors where the model incorrectly correlates body edits with H1, resulting in it erroneously inserting edited body tokens into the headline.", "[Figure 4: Difference between average attention placed on each span type during decoding of H2 and that expected under a model that attended uniformly to all tokens. X-axis: words in the decoded headline; Y-axis: types of context tokens. Red cells indicate that a token type is being attended to more than would be expected under uniform attention, whereas blue cells indicate the opposite. H1: 'White House to Ask for $12 Billion Down Payment for Harvey Relief'; Source: https://www.newssniffer.co.uk/articles/1447256.]", "Existing research in summarization spans several subtopics relevant to our task.", "For instance, multidocument summarization (Barzilay and McKeown, 2005) pertains to generating a unified summary by synthesizing non-redundant content from multiple related documents.", "In our setting, we consider multiple documents (i.e., the old and new versions of an article) as well, but we also have an existing summary, and our task requires reasoning about how the non-redundant content from the newer version of the article affects this existing summary.", "With update summarization (Dang et al., 2008), there is an older set of documents as well as a newer set of documents, and the goal is to generate a summary which captures only added and changed information.", "In contrast, our task aims to incorporate these changes into an already existing holistic summary.", "Natural language edits: Our work focuses on learning from edits in news articles to apply updates to an existing headline.", "Prior work studies the nature of edits in various texts including news (Faigley and Witte, 1981; Tamori et al., 2017) and Wikipedia (Yang et al., 2017; Faruqui et al., 2018).", "There has also been extensive work on generating edits for tasks such as grammatical error correction (Bryant et al., 2019), sentence simplification (Zhu et al., 2010), style transfer (Fu et al., 2018), fact-based sentence editing (Shah et al., 2020; Iso et al., 2020), text improvement (Tanaka et al., 2009), and comment updating based on source code changes (Panthaplackel et al., 2020).", "In this work, we show that headline generation models can benefit from access to the past state of the article.", "Our proposed model, H1 + B_edits (change only), can generate headline predictions that are statistically tied with gold headlines in terms of factuality, while making fewer unnecessary edits.", "By releasing the HREN dataset, we hope to encourage the community to produce higher quality tools for aiding journalists, as well as encourage research in NLP over dynamic texts.", "Sheena Panthaplackel receives support from Bloomberg's Data Science Ph.D.", "Fellowship Program.", "We
would like to thank Alex Spangher for sharing an early version of the NewsEdits corpus.", "We would also like to thank the Bloomberg AI group and Lina Vourgidou for early feedback on this project, and illuminating conversations on the application of machine learning and natural language processing to the newsroom.", "Finally, we thank the reviewers for their constructive feedback." ]
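The edit-sequence context representation quoted in the task description above (<KEEP> ... <REPLACE_OLD> ... <REPLACE_NEW> ..., following the format of Panthaplackel et al. (2020)) can be approximated with a word-level diff. The sketch below is our own minimal reconstruction: the span tags mirror the example in the text, while the use of Python's difflib and plain whitespace tokenization (rather than the paper's PTB preprocessing) are simplifying assumptions.

```python
# Minimal sketch of the body-edit representation: a word-level diff
# between B1 and B2 rendered as <KEEP>/<REPLACE_...>/<INSERT>/<DELETE>
# spans. Dropping keep spans yields the "change only" variant.
import difflib

def edit_sequence(b1: str, b2: str, keep_spans: bool = True) -> str:
    old, new = b1.split(), b2.split()
    ops = difflib.SequenceMatcher(a=old, b=new).get_opcodes()
    out = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "equal":
            if keep_spans:  # discarded in the "change only" variant
                out.append("<KEEP> " + " ".join(old[i1:i2]) + " <KEEP_END>")
        elif tag == "replace":
            out.append("<REPLACE_OLD> " + " ".join(old[i1:i2])
                       + " <REPLACE_NEW> " + " ".join(new[j1:j2])
                       + " <REPLACE_END>")
        elif tag == "delete":
            out.append("<DELETE> " + " ".join(old[i1:i2]) + " <DELETE_END>")
        elif tag == "insert":
            out.append("<INSERT> " + " ".join(new[j1:j2]) + " <INSERT_END>")
    return " ".join(out)

b1 = "A 22-year old man has been charged after an ice cream van was stolen"
b2 = "A 22-year old man has admitted stealing an ice cream van"
print(edit_sequence(b1, b2))
print(edit_sequence(b1, b2, keep_spans=False))  # B_edits (change only)
```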
[ "objective", "abstain", "method", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "abstain", "other", "objective", "other", "objective", "method", "other", "other", "result", "objective", "method", "other", "other", "other", "other", "other" ]
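The BART configuration described above concatenates the inputs into one sequence with special separator tokens, then decodes with beam search (beam size 20) and bigram blocking. A minimal sketch with the Hugging Face checkpoint named in the text (facebook/bart-large-cnn) follows; the fine-tuning on HREN itself is omitted, and treating <OLD_HEADLINE>/<NEW_BODY> as tokens added to the tokenizer is our assumption about the implementation, not a detail confirmed by the paper.

```python
# Sketch of the H1 + B2 input construction and decoding setup described
# above. A real reproduction would first fine-tune this model on HREN;
# here we only show input formatting and beam decoding.
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large-cnn")
tokenizer.add_tokens(["<OLD_HEADLINE>", "<NEW_BODY>"])  # task separators
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
model.resize_token_embeddings(len(tokenizer))  # account for added tokens

h1 = "Man Charged With Stealing Ice Cream Van"
b2 = "A 22-year old man has admitted stealing an ice cream van ..."
source = f"<OLD_HEADLINE> {h1} <NEW_BODY> {b2}"  # H1 + B2 configuration

inputs = tokenizer(source, return_tensors="pt",
                   truncation=True, max_length=1024)  # BART's input limit
outputs = model.generate(
    **inputs,
    num_beams=20,             # beam size reported in the text
    no_repeat_ngram_size=2,   # bigram blocking (Paulus et al., 2018)
    max_length=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```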
[ "While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task.", "However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task.", "To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations.", "We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer.", "We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best.", "We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution.", "However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks.", "We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.", "Unsupervised pretraining, e.g., BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019b), has recently pushed the state of the art on many natural language understanding tasks.", "One method of further improving pretrained models that has been shown to be broadly helpful is to first fine-tune a pretrained model on an intermediate task, before fine-tuning again on the target task of interest (Phang et al., 2018; Wang et al., 2019a; Clark et al., 2019a; Sap et al., 2019), also referred to as", "STILTs.", "However, this approach does not always improve target task performance, and it is unclear under what conditions it does.", "This paper offers a large-scale empirical study aimed at addressing this open question.", "We perform a broad survey of intermediate and target task pairs, following an experimental pipeline similar to Phang et al. (2018) and Wang et al.
(2019a).", "This differs from previous work in that we use a larger and more diverse set of intermediate and target tasks, introduce additional analysis-oriented probing tasks, and use a better-performing base model, RoBERTa (Liu et al., 2019b).", "We aim to answer the following specific questions: What kind of tasks tend to make good intermediate tasks across a wide variety of target tasks?", "Which linguistic skills does a model learn from intermediate-task training?", "Which skills learned from intermediate tasks help the model succeed on which target tasks?", "The first question is the most straightforward: it can be answered by a sufficiently exhaustive search over possible intermediate-target task pairs.", "The second and third questions address the why rather than the when, and differ in a crucial detail: A model might learn skills by training on an intermediate task, but those skills might not help it to succeed on a target task.", "Our search for intermediate tasks focuses on natural language understanding tasks in English.", "In particular, we run our experiments on 11 intermediate tasks and 10 target tasks, which results in a total of 110 intermediate-target task pairs.", "We use 25 probing tasks (tasks that each target a narrowly defined model behavior or linguistic phenomenon) to shed light on which skills are learned from each intermediate task.", "Our findings include the following:", "(i) Natural language inference tasks as well as QA tasks which involve commonsense reasoning are generally useful as intermediate tasks.", "(ii) SocialIQA and QQP as intermediate tasks are not helpful as a means to teach the skills captured by our probing tasks, while fine-tuning first on MNLI or CosmosQA results in an increase in all skills.", "(iii) While a model's ability to learn skills relating to input-noising correlates with target task performance, low-level skills, such as knowledge of a sentence's raw content and the ability to detect various attributes of input sentences such as the tense of the main verb and sentence length, are less correlated with target task performance.", "This suggests that a model's ability to do well on the masked language modelling (MLM) task is important for downstream performance.", "Furthermore, we conjecture that a portion of our analysis is affected by catastrophic forgetting of knowledge learned during pretraining.", "Our experimental pipeline (Figure 1) consists of two steps, starting with a pretrained model: intermediate-task training, and fine-tuning on a target or probing task.", "Intermediate Task Training: We fine-tune RoBERTa on each intermediate task.", "The training procedure follows the standard procedure of fine-tuning a pretrained model on a target task, as described in Devlin et al. (2019).", "We opt for single intermediate-task training as opposed to multi-task training (cf.
"Target and Probing Task Fine-Tuning We then fine-tune the resulting model on each target and probing task individually.", "Target tasks are tasks of interest to the general community, spanning various facets of natural language, domains, and sources.", "Probing tasks, while potentially similar in data source to target tasks such as with CoLA, are designed to isolate the presence of particular linguistic capabilities or skills.", "For instance, solving the target task BoolQ (Clark et al., 2019a) may require various skills including coreference and commonsense reasoning, while probing tasks like the SentEval probing suite (Conneau et al., 2018) target specific syntactic and metadata-level phenomena such as subject-verb agreement and sentence length detection.", "We curate a diverse set of tasks that either represent an especially large annotation effort or that have been shown to yield positive transfer in prior work.", "The resulting set of tasks covers question answering, commonsense reasoning, and natural language inference.", "QAMR The Question-Answer Meaning Representations dataset (Michael et al., 2018) is a crowdsourced QA task consisting of question-answer pairs that correspond to predicate-argument relationships.", "It is derived from Wikinews and Wikipedia sentences.", "For example, if the sentence is 'Ada Lovelace was a computer scientist.', a potential question is 'What is Ada's last name?', with the answer being 'Lovelace'.", "CommonsenseQA CommonsenseQA (Talmor et al., 2019) is a multiple-choice QA task derived from ConceptNet (Speer et al., 2017) with the help of crowdworkers, designed to test a range of commonsense knowledge.", "SciTail SciTail (Khot et al., 2018) is a textual entailment task built from multiple-choice science questions from 4th grade and 8th grade exams, as well as crowdsourced questions (Welbl et al., 2017).", "The task is to determine whether a hypothesis, which is constructed from a science question and its corresponding answer, is entailed or not (neutral) by the premise.", "Cosmos QA Cosmos QA is a commonsense-based reading comprehension task formulated as multiple-choice questions (Huang et al., 2019).", "[Table: dataset statistics with columns Name, |Train|, |Dev|, task, metrics, and genre/source; e.g., CommonsenseQA: 9,741 train, 1,221 dev, question answering, acc.]", "The questions concern the causes or effects of events that require reasoning not only based on the exact text spans in the context, but also wide-ranging abstractive commonsense reasoning.", "It differs from CommonsenseQA in that it focuses on causal and deductive commonsense reasoning and that it requires reading comprehension over an auxiliary passage, rather than simply answering a freestanding question.", "SocialIQA SocialIQA (Sap et al., 2019) is a multiple-choice QA task.", "It tests for reasoning surrounding emotional and social intelligence in everyday situations.", "CCG CCGbank (Hockenmaier and Steedman, 2007) is a translation of the Penn Treebank into a corpus of Combinatory Categorial Grammar (CCG) derivations.", "We use the CCG supertagging task, which is the task of assigning tags to individual word tokens that jointly determine the parse of the sentence.", "HellaSwag HellaSwag (Zellers et al., 2019) is a commonsense reasoning task that tests a model's ability to choose the most plausible continuation of a story.", "It is built using adversarial filtering (Zellers et al., 2018) with BERT to create challenging negative examples.", "QA-SRL The question-answer driven semantic role labeling dataset (QA-SRL; He et al., 2015) is a QA task derived from a semantic role labeling task.",
"Each example, which consists of a set of questions and answers, corresponds to a predicate-argument relationship in the sentence it is derived from.", "Unlike QAMR, which focuses on all words in the sentence, QA-SRL is specifically focused on verbs.", "SST-2 The Stanford Sentiment Treebank (Socher et al., 2013) is a sentiment classification task based on movie reviews.", "We use the binary sentence classification version of the task.", "QQP The Quora Question Pairs dataset (http://data.quora.com/First-Quora-DatasetRelease-Question-Pairs) is constructed based on questions posted on the community question-answering website Quora.", "The task is to determine if two questions are semantically equivalent.", "MNLI The Multi-Genre Natural Language Inference dataset (Williams et al., 2018) is a crowdsourced collection of sentence pairs with textual entailment annotations across a variety of genres.", "We use ten target tasks, eight of which are drawn from the SuperGLUE benchmark (Wang et al., 2019b).", "The tasks in the SuperGLUE benchmark cover question answering, entailment, word sense disambiguation, and coreference resolution, and have been shown to be easy for humans but difficult for models like BERT.", "Although we offer a brief description of the tasks below, we refer readers to the SuperGLUE paper for a more detailed description.", "CommitmentBank ( CB ; de Marneffe et al., 2019) is a three-class entailment task that consists of texts and an embedded clause that appears in each text, in which models must determine whether that embedded clause is entailed by the text.", "Choice of Plausible Alternatives ( COPA ; Roemmele et al., 2011) is a classification task that consists of premises and a question that asks for the cause or effect of each premise, in which models must correctly pick between two possible choices.", "Winograd Schema Challenge ( WSC ; Levesque et al., 2012) is a sentence-level commonsense reasoning task that consists of texts, a pronoun from each text, and a list of possible noun phrases from each text.", "The dataset has been designed such that world knowledge is required to determine which of the possible noun phrases is the correct referent to the pronoun.", "We use the SuperGLUE binary classification cast of the task, where each example consists of a text, a pronoun, and a noun phrase from the text, which models must classify as being coreferent to the pronoun or not.", "Recognizing Textual Entailment ( RTE ; Dagan et al., 2005, et seq.) is a textual entailment task.", "Multi-Sentence Reading Comprehension ( MultiRC ; Khashabi et al., 2018) is a multi-hop QA task that consists of paragraphs, a question on each paragraph, and a list of possible answers, in which models must distinguish which of the possible answers are true and which are false.", "Word-in-Context ( WiC ; Pilehvar and Camacho-Collados, 2019) is a binary classification word sense disambiguation task.", "Examples consist of two text snippets, with a polysemous word that appears in both.", "Models must determine whether the same sense of the word is used in both contexts.", "BoolQ (Clark et al., 2019a) is a QA task that consists of passages and a yes/no question associated with each passage.", "Reading Comprehension with Commonsense Reasoning ( ReCoRD ; Zhang et al., 2018) is a multiple-choice QA task that consists of news articles.", "For each article, models are given a question about the article with one entity masked out and a list of possible entities from the article, and the goal is to correctly identify the masked entity out of the list.",
"Additionally, we use CommonsenseQA and Cosmos QA from our set of intermediate tasks as target tasks, due to their unique combination of small dataset size and high level of difficulty for high-performing models like BERT.", "We use well-established datasets for our probing tasks, including the edge-probing suite from Tenney et al. (2019b), function-word-oriented tasks from Kim et al. (2019), and sentence-level probing datasets (SentEval; Conneau et al., 2018).", "Acceptability Judgment Tasks This set of binary classification tasks was designed to investigate if a model can judge the grammatical acceptability of a sentence.", "We use the following five datasets: AJ-CoLA is a task that tests for a model's understanding of general grammaticality using the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019b), which is drawn from 22 theoretical linguistics publications.", "The other tasks concern the behaviors of specific classes of function words, using the dataset by Kim et al. (2019): AJ-WH is a task that tests a model's ability to detect if a wh-word in a sentence has been swapped with another wh-word, which tests a model's ability to identify the antecedent associated with the wh-word.", "AJ-Def is a task that tests a model's ability to detect if the definite/indefinite articles in a given sentence have been swapped.", "AJ-Coord is a task that tests a model's ability to detect if a coordinating conjunction has been swapped, which tests a model's ability to understand how ideas in the various clauses relate to each other.", "AJ-EOS is a task that tests a model's ability to identify grammatical sentences without indicators such as punctuation marks and capitalization, and consists of grammatical text stripped of punctuation.", "Edge-Probing Tasks The edge probing (EP) tasks are a set of core NLP labeling tasks, collected by Tenney et al. (2019b) and cast into Boolean classification.", "These tasks focus on the syntactic and semantic relations between spans in a sentence.", "The first four tasks use the OntoNotes corpus (Hovy et al., 2006): Part-of-Speech tagging ( EP-POS ) is a task that tests a model's ability to predict the syntactic category (noun, verb, adjective, etc.) for each word in the sentence.", "Named entity recognition ( EP-NER ) is a task that tests a model's ability to predict the category of an entity in a given span.", "Semantic Role Labeling ( EP-SRL ) is a task that tests a model's ability to assign a label to a given span of words that indicates its semantic role (agent, goal, etc.) in the sentence.",
in the sentence.", "Coreference ( EP-Coref ) is a task that tests a model's ability to classify if two spans of tokens refer to the same entity/event.", "The other datasets can be broken down into both syntactic and semantic probing tasks.", "Constituent labeling ( EP-Const ) is a task that tests a model's ability to classify a non-terminal label for a span of tokens (e.g., noun phrase, verb phrase, etc.).", "Dependency labeling ( EP-UD ) is a task that tests a model on the functional relationship of one token relative to another.", "We use the English Web Treebank portion of Universal Dependencies 2.2 release (Silveira et al., 2014) for this task.", "Semantic Proto-Role labeling is a task that tests a model's ability to predict the fine-grained non-exclusive semantic attributes of a given span.", "Edge probing uses two datasets for SPR: SPR1 ( EP-SPR1 ) (Teichert et al., 2017), derived from the Penn Treebank, and SPR2 ( EP-SPR2 ) (Rudinger et al., 2018), derived from the English Web Treebank.", "Relation classification ( EP-Rel ) is a task that tests a model's ability to predict the relation between two entities.", "We use the SemEval 2010 Task 8 dataset (Hendrickx et al., 2009) for this task.", "For example, the relation between Yeri and Korea in Yeri is from Ko-rea is ENTITY-ORIGIN.", "The Definite Pronoun Resolution dataset (Rahman and Ng, 2012) ( EP-DPR ) is a task that tests a model's ability to handle coreference, and differs from OntoNotes in that it focuses on difficult cases of definite pronouns.", "SentEval Tasks The SentEval probing tasks (SE) (Conneau et al., 2018) are cast in the form of single-sentence classification.", "Sentence Length ( SE-SentLen ) is a task that tests a model's ability to classify the length of a sentence.", "Word Content ( SE-WC ) is a task that tests a model's ability to identify which of a set of 1,000 potential words appear in a given sentence.", "Tree Depth ( SE-TreeDepth ) is a task that tests a model's ability to estimate the maximum depth of the constituency parse tree of the sentence.", "Top Constituents ( SE-TopConst ) is a task that tests a model's ability to identify the high-level syntactic structure of the sentence by choosing among 20 constituent sequences (the 19 most common, plus an other cat-egory).", "Bigram Shift ( SE-BShift ) is a task that tests a model's ability to classify if two consecutive tokens in the same sentence have been reordered.", "Coordination Inversion ( SE-CoordInv ) is a task that tests a model's ability to identify if two coordinating clausal conjoints are swapped ( ex: he knew it, and he deserved no answer.).", "Past-Present ( SE-Tense ) is a task that tests a model's ability to classify the tense of the main verb of the sentence.", "Subject Number ( SE-SubjNum ) and Object Number ( SE-ObjNum ) are tasks that test a model's ability to classify whether the subject or direct object of the main clause is singular or plural.", "Odd-Man-Out ( SE-SOMO ) is a task that tests the model's ability to predict whether a sentence has had one of its content words randomly replaced with another word of the same part of speech.", "Training and Optimization We use the large-scale pretrained model RoBERTa Large in all experiments.", "For each intermediate, target, and probing task, we perform a hyperparameter sweep, varying the peak learning rate { 2 10 5 , 1 10 5 , 5 10 6 , 3 10 6 } and the dropout rate { 0 .", "2 , 0 .", "1 } .", "After choosing the best learning rate and dropout rate, we apply the best configuration for each 
task for all runs.", "For each task, we use the batch size that maximizes GPU usage, and use a maximum sequence length of 256.", "Aside from these details, we follow the RoBERTa paper for all other training hyperparameters.", "We use NVIDIA P40 GPUs for our experiments.", "A complete pipeline with one intermediate task works as follows: First, we fine-tune RoBERTa on the intermediate task.", "We then fine-tune copies of the resulting model separately on each of the 10 target tasks and 25 probing tasks and test on their respective validation sets.", "We run the same pipeline three times for the 11 intermediate tasks, plus a set of baseline runs without intermediate training.", "This gives us 35 12 3 = 1260 observations.", "We train our models using the Adam optimizer (Kingma and Ba, 2015) with linear decay and early stopping.", "We run training for a maximum of 10 epochs when more than 1,500 training examples are available, and 40 epochs otherwise to ensure models are sufficiently trained on small datasets.", "We use the jiant (Wang et al., 2019c) NLP toolkit, based on PyTorch (Paszke et al., 2019), Hugging Face Transformers (Wolf et al., 2019), and AllenNLP (Gardner et al., 2017), for all of our QAMR CSenseQA SciTail CosmosQASocialIQA CCG HellaSwag QA-SRL SST-2 QQP MNLICB COPA WSC RTE MultiRC WiC BoolQ CSenseQA CosmosQA ReCoRD Avg.", "Figure 2 shows the differences in target and probing task performances (deltas) between the baselines and models trained with intermediate-task training, each averaged across three restarts.", "A positive delta indicates successful transfer.", "Target Task Performance We define good intermediate tasks as ones that lead to positive transfer in target task performance.", "We observe that tasks that require complex reasoning and inference tend to make good intermediate tasks.", "These include MNLI and commonsense-oriented tasks such as CommonsenseQA, HellaSWAG, and Cosmos QA (with our poor performance with the similar SocialIQA serving as a suprising exception).", "SocialIQA, CCG, and QQP as intermediate tasks lead to negative transfer on all target tasks and the majority of probing tasks.", "We investigate the role of dataset size in the intermediate tasks with downstream task performance by additionally running a set of experiments on varying amounts of data on five intermediate tasks, which is shown in the Appendix.", "We do not find differences in intermediate-task dataset size to have any substantial consistent impact on downstream target task performance.", "In addition, we find that smaller target tasks such as RTE, BoolQ, MultiRC, WiC, WSC benefit the most from intermediate-task training.", "2 There are no instances of positive transfer to CommitmentBank, since our baseline model achieves 100% accuracy.", "2 The deltas for experiments with the same intermediate and target tasks are not 0 as may be expected.", "This is because we perform both intermediate and target training phases in these cases, with reset optimizer states and stopping criteria in between intermediate and target training.", "on low-level syntactic probing tasks uniformly across intermediate tasks; we observe little to no improvement for the SentEval probing tasks and higher improvement for acceptability judgment probing tasks, except for AJ-CoLA.", "This is also consistent with Phang et al. 
"Variation across Intermediate Tasks There is variable performance across higher-level syntactic or semantic tasks such as the Edge-Probing and SentEval tasks.", "SocialIQA and QQP have negative transfer for most of the Edge-Probing tasks, while CosmosQA and QA-SRL see drops in performance only for EP-Rel.", "While we do see that intermediate-task trained models improve performance on EP-SRL and EP-DPR across the board, there is little to no gain in SentEval probing tasks from any intermediate tasks.", "Additionally, tasks that increase performance on the largest number of probing tasks perform well as intermediate tasks.", "Degenerate Runs We find that the model may not exceed chance performance in some training runs.", "This mostly affects the baseline (no intermediate training) runs on the acceptability judgment probing tasks, excluding AJ-CoLA, which all have very small training sets.", "We include these degenerate runs in our analysis to reflect this phenomenon.", "Consistent with Phang et al. (2018), we find that intermediate-task training reduces the likelihood of degenerate runs, leading to ostensibly positive transfer results on those four acceptability judgment tasks across most intermediate tasks.", "On the other hand, extremely negative transfer from intermediate-task training can also result in a higher frequency of degenerate runs in downstream tasks, as we observe in the cases of using QQP and SocialIQA as intermediate tasks.", "We also observe a number of degenerate runs on the EP-SRL task as well as the EP-Rel task.", "These degenerate runs decrease positive transfer in probing tasks, such as with SocialIQA and QQP probing performance, and also decrease the average amount of positive transfer we see in target task performance.", "Next, we investigate the relationship between target and probing tasks in an attempt to understand why certain intermediate-task models perform better on certain target tasks.", "We use probing task performance as an indicator of the acquisition of particular language skills.", "We compute the Spearman correlation between probing-task and target-task performances across training on different intermediate tasks and multiple restarts, as shown in Figure 3.", "We test for statistical significance at p = 0.05 and apply Holm-Bonferroni correction for multiple testing.", "We omit correlations that are not statistically significant.", "We opt for Spearman and not Pearson correlation because of the wide variety of metrics used for the different tasks.", "We find that acceptability judgment probing task performance is generally uncorrelated with the target task performance, except for AJ-CoLA.", "Similarly, many of the SentEval tasks do not correlate with the target tasks, except for Bigram Shift (SE-BShift), Odd-Man-Out (SE-SOMO) and Coordination Inversion (SE-CoordInv).", "These three tasks are input-noising tasks (tasks where a model has to predict if a given input sentence has been randomly modified), which are, by far, the most similar tasks we study to the masked language modeling task that is used for training RoBERTa.", "This may explain the strong correlation with the performance of the target tasks.", "We also find that some of these strong correlations, such as with SE-SOMO and SE-CoordInv, are almost entirely driven by variation in the degree of negative transfer, rather than any positive transfer.", "Intuitively, fine-tuning RoBERTa on an intermediate task can cause the model to forget some of its ability to perform the MLM task.",
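The correlation analysis above can be reproduced with standard statistics libraries. The sketch below is a minimal illustration, assuming score matrices collected across intermediate tasks and restarts; it is not the authors' analysis code.

```python
# Minimal sketch of the probing/target correlation analysis:
# Spearman correlations with Holm-Bonferroni correction at p = 0.05.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def correlate(probing_scores, target_scores, alpha=0.05):
    """probing_scores: (runs, n_probing); target_scores: (runs, n_target).
    Each row holds one (intermediate task, restart) run's performances."""
    n_probe, n_target = probing_scores.shape[1], target_scores.shape[1]
    rhos = np.zeros((n_probe, n_target))
    pvals = np.zeros((n_probe, n_target))
    for i in range(n_probe):
        for j in range(n_target):
            rhos[i, j], pvals[i, j] = spearmanr(
                probing_scores[:, i], target_scores[:, j])
    # Holm-Bonferroni correction over all probing-target pairs.
    reject, _, _, _ = multipletests(pvals.ravel(), alpha=alpha, method="holm")
    rhos[~reject.reshape(rhos.shape)] = np.nan  # omit non-significant cells
    return rhos
```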
"Thus, a promising direction for improving intermediate-task training may be to integrate the MLM objective into intermediate-task training or to bound network parameter changes to reduce catastrophic forgetting (Kirkpatrick et al., 2016; Chen et al., 2019).", "Interestingly, while intermediate tasks such as SocialIQA, CCG and QQP, which show negative transfer on target tasks, tend to have negative transfer on these three probing tasks, the intermediate tasks with positive transfer, such as the commonsense QA tasks and MNLI, do not appear to adversely affect the performance on these probing tasks.", "This asymmetric impact may indicate that, beyond the similarity of intermediate and target tasks, avoiding catastrophic forgetting of pretraining knowledge matters for positive transfer.", "(Full correlation tables across all target and probing tasks with both Spearman and Pearson correlations can be found in the Appendix.)", "The remaining SentEval probing tasks have similar delta values (Figure 2), which may indicate that there is insufficient variation among transfer performance to derive significant correlations.", "Among the edge-probing tasks, the more semantic tasks such as coreference (EP-Coref and EP-DPR), semantic proto-role labeling (EP-SPR1 and EP-SPR2), and relation classification (EP-Rel) show the highest correlations with our target tasks.", "As our set of target tasks is also oriented towards semantics and reasoning, this is to be expected.", "On the other hand, among the target tasks, we find that ReCoRD, CommonsenseQA, and Cosmos QA (all commonsense-oriented tasks) exhibit both high correlations with each other as well as a similar set of correlations with the probing tasks.", "Similarly, BoolQ, MultiRC, and RTE correlate strongly with each other and have similar patterns of probing-task performance.", "Within the paradigm of training large pretrained Transformer language representations via intermediate-stage training before fine-tuning on a target task, positive transfer has been shown in both sequential task-to-task (Phang et al., 2018) and multi-task-to-task (Liu et al., 2019a; Raffel et al., 2019) formats.", "Wang et al. (2019a) perform an extensive study on transfer with BERT, finding language modeling and NLI tasks to be among the most beneficial tasks for improving target-task performance.",
"Talmor and Berant (2019) perform a similar cross-task transfer study on reading comprehension datasets, finding similar positive transfer in most cases, with the biggest gains stemming from a combination of multiple QA datasets.", "Our work consists of a larger, more diverse set of intermediate-target task pairs.", "We also use probing tasks to shed light on the skills learned from the intermediate tasks.", "Among the prior work on predicting transfer performance, Bingel and Søgaard (2017) is the most similar to ours.", "They do a regression analysis that predicts target-task performance on the basis of various features of the source and target tasks and task pairs.", "They focus on a multi-task training setting without self-supervised pretraining, as opposed to our single-intermediate-task, three-step procedure.", "Similar work (Lin et al., 2019b) has been done on cross-lingual transfer, the analogous challenge of transferring learned knowledge from a high-resource to a low-resource language.", "Many recent works have attempted to understand the knowledge and linguistic skills BERT learns, for instance by analyzing the language model surprisal for subject-verb agreement (Goldberg, 2018), identifying specific knowledge or phenomena encapsulated in the representations learned by BERT using probing tasks (Tenney et al., 2019b,a; Warstadt et al., 2019a; Lin et al., 2019a; Hewitt and Manning, 2019; Jawahar et al., 2019), analyzing the attention heads of BERT (Clark et al., 2019b; Coenen et al., 2019; Lin et al., 2019a; Htut et al., 2019), and testing the linguistic generalizations of BERT across runs (McCoy et al., 2019).", "However, relatively little work has been done to analyze fine-tuned BERT-style models (Wang et al., 2019a; Warstadt et al., 2019a).", "This paper presents a large-scale study on when and why intermediate-task training works with pretrained models.", "We perform experiments on RoBERTa with a total of 110 pairs of intermediate and target tasks, and perform an analysis using 25 probing tasks, covering different semantic and syntactic phenomena.", "Most directly, we observe that tasks like Cosmos QA and HellaSwag, which require complex reasoning and inference, tend to work best as intermediate tasks.", "Looking to our probing analysis, intermediate tasks that help RoBERTa improve across the board show the most positive transfer in downstream tasks.", "However, it is difficult to draw definite conclusions about the specific skills that drive positive transfer.", "Intermediate-task training may help improve the handling of syntax, but there is little to no correlation between target-task and probing-task performance for these skills.", "Probes for higher-level semantic abilities tend to have a higher correlation with the target-task performance, but these results are too diffuse to yield more specific conclusions.", "Future work in this area would benefit greatly from improvements to both the breadth and depth of available probing tasks.", "We also observe a worryingly high correlation between target-task performance and the two probing tasks which most closely resemble RoBERTa's masked language modeling pretraining objective.", "Thus, the results of our intermediate-task training analysis may be driven in part by forgetting of knowledge acquired during pretraining.", "Our results therefore suggest a need for further work on efficient transfer learning mechanisms.", "This project has benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project 'Improving Deep Learning using Latent Structure'), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU)." ]
[ "abstain", "abstain", "objective", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method", "result", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "abstain", "method", "other", "other", "method", "other", "other", "other", "method", "method", "abstain", "abstain", "other", "other", "other", "other", "objective", "abstain", "abstain", "other" ]
[ "Knowledge graph embedding (KGE) models represent each entity and relation of a knowledge graph (KG) with low-dimensional embedding vectors.", "These methods have recently been applied to KG link prediction and question answering over incomplete KGs (KGQA).", "KGEs typically create an embedding for each entity in the graph, which results in large model sizes on real-world graphs with millions of entities.", "For downstream tasks these atomic entity representations often need to be integrated into a multi stage pipeline, limiting their utility.", "We show that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model obtaining state-of-the-art results for KG link prediction and incomplete KG question answering.", "We achieve this by posing KG link prediction as a sequence-to-sequence task and exchange the triple scoring approach taken by prior KGE methods with autoregressive decoding.", "Such a simple but powerful method reduces the model size up to 98% compared to conventional KGE models while keeping inference time tractable.", "After finetuning this model on the task of KGQA over incomplete KGs, our approach outperforms baselines on multiple large-scale datasets without extensive hyperparameter tuning.", "1 1 Introduction A knowledge graph (KG) is a multi-relational graph where the nodes are entities from the real world (e.g. Barack Obama, United States ) and the named edges represent the relationships between them (e.g. Barack Obama born in United States ).", "KGs can be either domain-specific such as WikiMovies (Miller et al., 2016) or public, cross-domain KGs encoding common knowledge such as Wikidata and DBpedia (Heist et al., 2020).", "These graph-structured databases play an important role 1 Resources are available at https://github.com/ apoorvumang/kgt5 in knowledge-intensive applications including web search, question answering and recommendation systems (Ji et al., 2020).", "Most real-world knowledge graphs are incomplete.", "However, some missing facts can be inferred using existing facts in the KG (Bordes et al., 2013).", "This task termed knowledge graph completion (KGC) 2 has become a popular area of research in recent years (Wang et al., 2017) and is often approached using knowledge graph embedding (KGE) models.", "KGE models represent each entity and relation of the KG by a dense vector embedding.", "Using these embeddings the model is trained to distinguish correct from incorrect facts.", "One of the main downstream applications of KGEs is question answering over incomplete KGs (KGQA) (Choudhary et al., 2021).", "Taking into account the large size of real world KGs (Wikidata contains 90M entities) and the applicability to downstream tasks, KGE models should fulfill the following desiderata:", "(i) scalability i.e. 
"(ii) quality: reach good empirical performance;", "(iii) versatility: be applicable to multiple tasks such as KGC and QA; and", "(iv) simplicity: consist of a single module with a standard architecture and training pipeline.", "Traditional KGE models fulfill quality and simplicity.", "They build upon a simple architecture and reach a high quality in terms of KGC.", "However, as they create a unique embedding per entity/relation, they scale linearly with the number of entities in the graph, both in model size and inference time, and offer limited versatility.", "Methods such as DKRL (Xie et al., 2016a) and KEPLER (Wang et al., 2021) attempt to tackle the scalability issue using compositional embeddings.", "However, they fail to achieve quality comparable to conventional KGEs.", "KG-BERT (Yao et al., 2019) utilizes pretrained BERT for link prediction and holds potential in terms of versatility, as it is applicable to downstream NLP tasks.", "However, it is not scalable due to its underlying cross-encoder: Shen et al. (2020) estimate it would take KG-BERT 3 days for an evaluation run on a KG with just 40k entities.", "QA methods which leverage KGEs outperform traditional KGQA approaches on incomplete KGs, but combining KGEs with the QA pipeline is a non-trivial task; models that attempt to do this often work on only limited query types (Huang et al. 2019; Sun et al. 2021; Saxena et al. 2020) or require multi-stage training and inference pipelines (Ren et al., 2021).", "Here, in order to achieve quality, these models have sacrificed versatility and simplicity.", "A comparison of approaches in terms of these desiderata is summarized in Tab. 9 in the appendix.", "Our paper shows that all of these desiderata can be fulfilled by a simple sequence-to-sequence (seq2seq) model.", "To this end, we pose KG link prediction as a seq2seq task and train an encoder-decoder Transformer model (Vaswani et al., 2017) on this task.", "We then use this model pretrained for link prediction and further finetune it for question answering; while finetuning for QA, we regularize with the link prediction objective.", "This simple but powerful approach, which we call KGT5, is visualised in Fig. 1.", "With such a unified seq2seq approach we achieve", "(i) scalability: by using compositional entity representations and autoregressive decoding (rather than scoring all entities) for inference;", "(ii) quality: we obtain state-of-the-art performance on two tasks;", "(iii) versatility: the same model can be used for both KGC and KGQA on multiple datasets; and", "(iv) simplicity: we obtain all results using an off-the-shelf model with no task- or dataset-specific hyperparameter tuning.", "In summary, we make the following contributions: We show that KG link prediction and question answering can be treated as sequence-to-sequence tasks and tackled successfully with a single encoder-decoder Transformer (with the same architecture as T5-small (Raffel et al., 2020)).", "With this simple but powerful approach, called KGT5, we reduce model size for KG link prediction by up to 98% while outperforming conventional KGEs on a dataset with 90M entities.",
"We show the versatility of this approach through the task of KGQA over incomplete graphs.", "By pretraining on KG link prediction and finetuning on QA, KGT5 performs similarly to or better than much more complex methods on multiple large-scale KGQA benchmarks.", "Given a set of entities E and a set of relations R, a knowledge graph K ⊆ E × R × E is a collection of subject-predicate-object (s, p, o) triples.", "Link prediction is the task of predicting missing triples in K by answering queries of the form (s, p, ?) and (?, p, o).", "This is typically accomplished using knowledge graph embedding (KGE) models.", "Conventional KGEs assign an embedding vector to each entity and relation in the KG.", "They model the plausibility of (s, p, o) triples via model-specific scoring functions f(e_s, e_p, e_o) using the subject (e_s), predicate (e_p) and object (e_o) specific embeddings.", "Once trained, these embeddings are used for downstream tasks such as question answering.", "Knowledge graph question answering (KGQA) is the task of answering a natural language question using a KG as source of knowledge.", "The questions can be either simple factual questions that require single fact retrieval (e.g., 'Which languages are spoken in India?'), or they can be complex questions that require reasoning over multiple facts in the KG (e.g., 'What are the genres of movies in which Leonardo DiCaprio was the leading actor?').", "KGEs can be utilized to perform KGQA when the background KGs are incomplete.", "Conventional KGE models differ mainly in the form of their scoring function f(e_s, e_p, e_o).", "A comprehensive survey of these models, their scoring functions, training regime and link prediction performance can be found in Wang et al. (2017) and Ruffinelli et al. (2020).", "It is important to note that although these models obtain superior performance in the link prediction task, they suffer from a linear scaling in model size with the number of entities in the KG, and applying them to question answering necessitates separate KGE and QA modules.", "Compositional KGE models.", "To combat the linear scaling of the model size with the number of entities in a KG, entity embeddings can be composed of token embeddings.", "DKRL (Xie et al., 2016b) embeds entities by combining word embeddings of entity descriptions with a CNN encoder, followed by the TransE scoring function.", "KEPLER (Wang et al., 2021) uses a Transformer-based encoder and combines the typical KGE training objective with a masked language modeling objective.", "Both of these approaches encode entities and relations separately, which limits the transferability of these models to downstream tasks such as question answering.", "MLMLM (Clouatre et al., 2021) encodes the whole query with a RoBERTa-based model and uses [MASK] tokens to generate predictions.", "However, it performs significantly worse than atomic KGE models on link prediction on large KGs, and is yet to be applied to downstream text-based tasks.",
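To make the scoring-function abstraction above concrete, the following is a minimal sketch of one common instantiation, DistMult (Yang et al., 2015), whose score is a trilinear product of the three embeddings. It illustrates the generic f(e_s, e_p, e_o) interface and the all-entity scoring at inference that KGT5's autoregressive decoding avoids; it is an illustration, not code from any of the cited systems.

```python
# Minimal sketch of a conventional KGE scoring function (DistMult):
# f(e_s, e_p, e_o) = <e_s, e_p, e_o>, a trilinear dot product.
# Illustrates the atomic-embedding design that scales with |E|.
import torch
import torch.nn as nn

class DistMult(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # one vector per entity
        self.rel = nn.Embedding(num_relations, dim)  # one vector per relation

    def score(self, s, p, o):
        # s, p, o: LongTensors of ids; returns plausibility scores.
        return (self.ent(s) * self.rel(p) * self.ent(o)).sum(dim=-1)

    def answer_tail_query(self, s, p):
        # (s, p, ?): score *all* entities -- the inference cost that
        # KGT5's sampling-based decoding sidesteps.
        q = self.ent(s) * self.rel(p)                # (batch, dim)
        return q @ self.ent.weight.t()               # (batch, num_entities)
```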
"Knowledge Graph Question Answering (KGQA) has been traditionally solved using semantic parsing (Berant et al. 2013; Bast and Haussmann 2015; Das et al. 2021a), where a natural language (NL) question is converted to a symbolic query over the KG.", "This is problematic for incomplete KGs, where a single missing link can cause the query to fail.", "Recent work has focused on KGQA over incomplete KGs, which is also the focus of our work.", "These methods attempt to overcome KG incompleteness using KG embeddings (Huang et al. 2019; Saxena et al. 2020; Sun et al. 2021; Ren et al. 2021).", "In order to use KGEs for KGQA, these methods first train a KGE model on the background KG, and then integrate the learned entity and relation embeddings into the QA pipeline.", "This fragmented approach brings several disadvantages; for example, Huang et al. (2019)'s method only works for single-fact question answering, while EmQL (Sun et al., 2021) requires prior knowledge of the NL question's query structure.", "EmbedKGQA (Saxena et al., 2020) is capable of multi-hop question answering but is unable to deal with questions involving more than one entity.", "Hence, these methods are lacking in versatility.", "LEGO (Ren et al., 2021) can theoretically answer all first-order-logic questions but requires multiple dataset-dependent components, including entity linking, relation pruning and branch pruning modules; here, to obtain versatility, LEGO has sacrificed simplicity.", "We pose both knowledge graph link prediction and question answering as sequence-to-sequence (seq2seq) tasks.", "We then train, on these tasks, a simple encoder-decoder Transformer that has the same architecture as T5-small (Raffel et al., 2020) but without the pretrained weights.", "While training for question answering, we regularize with the link prediction objective.", "This method, which we call KGT5, results in a scalable KG link prediction model with vastly fewer parameters than conventional KGE models for large KGs.", "This approach also confers simplicity and versatility to the model, whereby it can be easily adapted to KGQA on any dataset regardless of question complexity.", "Posing KG link prediction as a seq2seq task requires textual representations of entities and relations, and a verbalization scheme to convert link prediction queries to textual queries; these are detailed in 3.1.", "The link prediction training procedure is explained in 3.2 and inference in 3.3.", "The KGQA finetuning and inference pipeline is explained in 3.4.", "Text mapping.", "For link prediction we require a one-to-one mapping between an entity/relation and its textual representation.", "For Wikidata-based KGs, we use canonical mentions of entities and relations as their textual representation, followed by a disambiguation scheme that appends descriptions and unique ids to the name (please see Appendix A for details on textual representations).", "For datasets used for QA only, we do not enforce a one-to-one mapping as, in this case, unnecessary disambiguation can even harm model performance; this is because QA systems consider surface forms during evaluation, not entity IDs.", "For example, it will be better to have the same mention for both the single and album version of a song rather than append a unique number to their mentions.", "Verbalization.", "We convert (s, p, ?) query answering to a sequence-to-sequence task by verbalizing the query (s, p, ?) to a textual representation.", "This is similar to the verbalization performed by Petroni et al. (2019), except there is no relation-specific template.", "For example, given a query (barack obama, born in, ?), we first obtain the textual mentions of the entity and relation and then verbalize it as 'predict tail: barack obama | born in'.",
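Verbalization itself is a few lines of string manipulation. Below is a minimal sketch assuming hypothetical id-to-mention dictionaries; the 'predict tail:' prefix and '|' separator follow the example above, while the 'predict head:' prefix for (?, p, o) queries is an assumption.

```python
# Minimal sketch of query verbalization for KGT5-style link prediction.
# `ent_mention` and `rel_mention` are hypothetical id -> mention lookups.
def verbalize(entity_id, relation_id, direction, ent_mention, rel_mention):
    """Turn a (s, p, ?) or (?, p, o) query into a textual input sequence."""
    prefix = "predict tail" if direction == "tail" else "predict head"
    return f"{prefix}: {ent_mention[entity_id]} | {rel_mention[relation_id]}"

ent_mention = {"Q76": "barack obama", "Q30": "united states"}
rel_mention = {"P19": "born in"}

# (barack obama, born in, ?) -> "predict tail: barack obama | born in"
print(verbalize("Q76", "P19", "tail", ent_mention, rel_mention))
# The training target for this query is the answer mention: "united states".
```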
"This sequence is input to the model, and the output sequence is expected to be the answer to this query, 'united states', which is the unique mention of the entity United States.", "To train KGT5, we need a set of (input, output) sequences.", "For each triple (s, p, o) in the training graph, we verbalize the queries (s, p, ?) and (?, p, o) according to 3.1 to obtain two input sequences.", "The corresponding output sequences are the text mentions of o and s respectively.", "KGT5 is trained with teacher forcing (Williams and Zipser, 1989) and cross entropy loss (more details about training are available in Appendix B).", "One thing to note is that, unlike standard KGE models, we train without explicit negative sampling.", "At each step of decoding, the model produces a probability distribution over possible next tokens.", "While training, this distribution is penalised for being different from the 'true' distribution (i.e., a probability of 1 for the true next token, 0 for all other tokens) using cross entropy loss.", "Hence, this training procedure is most similar to the 1vsAll + CE loss in Ruffinelli et al. (2020), except instead of scoring the true entity against all other entities, we are scoring the true token against all other tokens at each step, and the process is repeated as many times as the length of the tokenized true entity.", "This avoids the need for many negatives, and is independent of the number of entities.", "In conventional KGE models, we answer a query (s, p, ?) by computing the score f(s, p, o) for all o ∈ E, where f is the model-specific scoring function.", "The entities o are then ranked according to the scores.", "In our approach, given a query (s, p, ?), we first verbalize it (3.1) before feeding it to KGT5.", "We then sample a fixed number of sequences from the decoder, which are then mapped to their entity ids.", "(The decoded sequence may or may not be an entity mention; we experimented with constrained decoding (Cao et al., 2021) to force the decoder to output only entity mentions, but found this unnecessary, since the model almost always outputs an entity mention, and increasing the number of samples was enough to solve the issue.)", "By using such a generative model, we are able to approximate (with high confidence) the top-m model predictions without having to score all entities in the KG, as is done by conventional KGE models.", "For each decoded entity we assign a score equal to the (log) probability of decoding its sequence.", "This gives us a set of (entity, score) pairs.", "To calculate the final ranking metrics comparable to traditional KGE models, we assign a score of -∞ to all entities not encountered during the sampling procedure.", "A comparison of the inference strategies of conventional KGE models and KGT5 is shown in Figure 2.",
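The sampling-based inference described above can be sketched with the HuggingFace generation API (the paper's implementation builds on the HuggingFace library; Wolf et al., 2019). The sketch below is an approximation, assuming a trained `model`, its `tokenizer`, and a hypothetical `mention_to_id` lookup; it is not the authors' exact code.

```python
# Minimal sketch of KGT5-style inference for a (s, p, ?) query:
# sample sequences, map them to entities, score each by its sequence
# log-probability, and give un-sampled entities a score of -inf.
import torch

def rank_entities(model, tokenizer, query_text, mention_to_id,
                  num_entities, num_samples=500):
    inputs = tokenizer(query_text, return_tensors="pt")
    out = model.generate(**inputs,
                         do_sample=True,
                         num_return_sequences=num_samples,
                         output_scores=True,
                         return_dict_in_generate=True)
    # Per-sequence log-probability: sum of per-token transition scores.
    seq_logprobs = model.compute_transition_scores(
        out.sequences, out.scores, normalize_logits=True).sum(dim=-1)

    scores = torch.full((num_entities,), float("-inf"))
    for seq, lp in zip(out.sequences, seq_logprobs):
        mention = tokenizer.decode(seq, skip_special_tokens=True)
        ent = mention_to_id.get(mention)   # decoded text may not be a mention
        if ent is not None:
            scores[ent] = torch.maximum(scores[ent], lp)
    return scores  # higher = more plausible; rank as with conventional KGEs
```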
"KGQA Training and Inference For KGQA, we pretrain the model on the background KG using the link prediction task (3.2).", "This pretraining strategy is analogous to the 'KGE module training' used in other KGQA works (Sun et al. 2021; Ren et al. 2021).", "The same model is then finetuned for question answering.", "Hereby, we employ the same strategy as Roberts et al. (2020): we concatenate a new task prefix ('predict answer:') with the input question and define the mention string of the answer entity as output.", "This unified approach allows us to apply KGT5 to any KGQA dataset regardless of question complexity, and without the need for sub-modules such as entity linking.", "To combat overfitting during QA finetuning (especially on tasks with small KGs) we devise a regularisation scheme: we add link prediction sequences sampled randomly from the background KG to each batch such that a batch consists of an equal number of QA and link prediction sequences.", "For inference, we use beam search followed by neighbourhood-based reranking (4.3) to obtain the model's prediction, which is a single answer.",
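The mixed-batch regularisation scheme above reduces to interleaving two example streams. A minimal sketch follows, assuming hypothetical `qa_examples` and `kg_examples` lists of (input, output) sequence pairs; it is not the authors' data-loading code.

```python
# Minimal sketch of the mixed-batch regularisation for QA finetuning:
# each batch is half QA pairs and half link-prediction pairs sampled
# at random from the background KG.
import random

def mixed_batches(qa_examples, kg_examples, batch_size):
    """Yield batches of (input, output) pairs, 50% QA / 50% link prediction."""
    assert batch_size % 2 == 0
    half = batch_size // 2
    random.shuffle(qa_examples)
    for i in range(0, len(qa_examples) - half + 1, half):
        qa_half = qa_examples[i:i + half]
        kg_half = random.sample(kg_examples, half)  # fresh KG sample per batch
        batch = qa_half + kg_half
        random.shuffle(batch)
        yield batch
```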
"We investigate whether KGT5, i.e., a simple seq2seq Transformer model, can be jointly trained to perform both knowledge graph link prediction as well as question answering.", "Hereby, we first describe the used datasets (4.1), the baselines we compared to (4.2) and the experimental setup (4.3).", "The results of our experiments are analysed in 4.4-4.8.", "Before going into detail, we summarize our key findings: 1. For link prediction on large KGs, the text-based approach of KGT5 reduces model size by up to 98% compared to conventional KGE models and reaches or outperforms the current state-of-the-art.", "2. On the task of KGQA over incomplete KGs, our simple seq2seq approach obtains better results than the current state-of-the-art across multiple datasets.", "3. KG link prediction training might be more beneficial than language modeling pretraining on knowledge-intensive tasks such as KGQA.", "4. Although KGT5 is good at generalizing to unseen facts, it is rather poor at memorizing facts.", "This problem can be alleviated, if needed, by using an ensemble of KGT5 and conventional link prediction or KGQA systems.", "We evaluate the link prediction capability of KGT5 on Wikidata5M (Wang et al., 2021) and WikiKG90Mv2 (Hu et al., 2021), two of the largest publicly available benchmark KGs.", "Although KGT5 is designed for large problems, we evaluate on the smaller benchmark KGs FB15k-237 (Toutanova and Chen, 2015), WN18RR (Dettmers et al., 2018) and YAGO3-10 (Dettmers et al., 2018) for comparability.", "We evaluate the QA capabilities of KGT5 on three large-scale KGQA benchmark datasets: MetaQA (Zhang et al., 2018), WebQuestionsSP (WQSP) (Yih et al., 2016) and ComplexWebQuestions (CWQ) (Talmor and Berant, 2018).", "Questions in MetaQA span from 1-hop to 3-hop questions requiring path-based reasoning on a KG based on WikiMovies (Miller et al., 2016).", "WQSP contains both 1-hop and 2-hop path-based questions, while CWQ contains questions requiring steps such as compositional, conjunctive, comparative and superlative reasoning.", "Both WQSP and CWQ can be answered using Freebase (Google, 2015) as the background KG.", "We create subsets of Freebase using the scheme proposed by Ren et al. (2021), which results in KGs that are much smaller than Freebase but can still be used to answer all questions.",
Table 2: Link prediction results on Wikidata5M.
  Model                            | MRR   | Hits@1 | Hits@3 | Hits@10 | Params
  TransE (Bordes et al., 2013)     | 0.253 | 0.170  | 0.311  | 0.392   | 2,400M
  DistMult (Yang et al., 2015)     | 0.253 | 0.209  | 0.278  | 0.334   | 2,400M
  SimplE (Kazemi and Poole, 2018)  | 0.296 | 0.252  | 0.317  | 0.377   | 2,400M
  RotatE (Sun et al., 2019b)       | 0.290 | 0.234  | 0.322  | 0.390   | 2,400M
  QuatE (Zhang et al., 2019)       | 0.276 | 0.227  | 0.301  | 0.359   | 2,400M
  ComplEx (Trouillon et al., 2016) | 0.308 | 0.255  | -      | 0.398   | 614M
  KGT5 (our method)                | 0.300 | 0.267  | 0.318  | 0.365   | 60M
  ComplEx 14-dim                   | 0.201 | 0.161  | 0.211  | 0.275   | 67M
  ComplEx 26-dim                   | 0.239 | 0.187  | 0.261  | 0.342   | 125M
  KEPLER (Wang et al., 2021)       | 0.210 | 0.173  | 0.224  | 0.277   | 125M
  DKRL (Xie et al., 2016a)         | 0.160 | 0.120  | 0.181  | 0.229   | 20M
  MLMLM (Clouatre et al., 2021)    | 0.223 | 0.201  | 0.232  | 0.264   | 355M
  KGT5-ComplEx Ensemble            | 0.336 | 0.286  | 0.362  | 0.426   | 674M
"Following prior work (Sun et al., 2019a) we randomly drop 50% of edges from all KGs to simulate KG incompleteness.", "This stochasticity causes different works to have different KGs, making it hard to compare results without re-implementing methods.", "Ren et al. (2021) implemented all comparison methods using their own KG splits, which they have not yet published (through private communication with the authors we were able to obtain the same KG split for WQSP).", "Our KG split is available along with our implementation, and we encourage further studies to use it.", "We do not re-implement comparison methods but instead report the numbers for our methods and baselines separately.", "We also report the accuracy obtained by executing the ground truth SPARQL queries (GT query) for test questions.", "GT query serves as an estimate of the hardness of a KG split and helps us compare model performance across KG splits.", "Note that for training all models, we only use (NL question, answer entity) pairs; no ground truth query information is used for training.", "[Table 1: KG statistics with columns Dataset, Entities, Rels, Edges, Token.]", "Statistics of the KGs used in our experiments are shown in Tab. 1.",
"Statistics of the QA datasets are shown in Tab. 11.", "For KG completion on Wikidata5M, we compared with several standard KGE models that have been shown to achieve good performance across multiple datasets (Ruffinelli et al., 2020) but with a large number of parameters.", "Among low-parameter models, we compared to the text-based approaches KEPLER (Wang et al., 2021), DKRL (Xie et al., 2016a) and MLMLM (Clouatre et al., 2021).", "We also consider low-dimensional versions of the state-of-the-art method ComplEx.", "For the small benchmark KGs we compared with the currently best performing model NBFNet (Zhu et al., 2021).", "For KGQA, we compared against several methods that have been shown to achieve SOTA on QA over incomplete KGs.", "These include PullNet (Sun et al., 2019a), EmQL (Sun et al., 2021), EmbedKGQA (Saxena et al., 2020) and LEGO (Ren et al., 2021).", "Additionally, for the MetaQA datasets, we compared with a relation-path finding baseline, which we call PathPred.", "This simple method maps a NL question to a relation path using distantly supervised data obtained from QA pairs in the training set (please see Appendix D for details of PathPred).",
Table 4: Hits@1 (gain vs GT query) on ComplexWebQuestions (CWQ) and WebQuestionsSP (WQSP) in the 50% KG setting.
  Model     | CWQ          | WQSP
  GT query  | 25.2         | 56.9
  PullNet   | 26.8 (+1.6)  | 47.4 (-9.5)
  EmbedKGQA | -            | 42.5 (-14.4)
  LEGO      | 29.4 (+4.2)  | 48.5 (-8.4)
  GT query  | 24.5         | 56.9
  KGT5      | 34.5 (+10.0) | 50.5 (-6.4)
"Experimental Setup In all our main experiments we used a model with the same architecture as T5-small (~60M parameters) but without the pretrained weights.", "For tokenizing sequences, we trained a BPE tokenizer using the SentencePiece (Kudo and Richardson, 2018) library on the verbalised KGs (see Tab. 1 for tokenizer statistics).", "We used AdaFactor (Shazeer and Stern, 2018) with a learning rate warmup schedule for link prediction training, batch size 320 and 10% dropout.",
"We adopted the same procedure as Roberts et al. (2020) for QA finetuning: we halved the batch size and fixed the learning rate to 0.001.", "All experiments were performed using 4 Nvidia 1080Ti GPUs, and models were implemented using the HuggingFace library (Wolf et al., 2019).", "We performed no dataset-specific hyperparameter tuning for KGT5 and used the same architecture, batch size, dropout and learning rate schedule throughout all experiments (the one exception is the vocabulary size, which is 10k for MetaQA compared to 30k for the other datasets; this was necessary in order to train a BPE tokenizer on such a small KG).", "All models were trained until validation accuracy did not significantly increase for 10k steps (5M steps for the large KGs WD5M and W90M, 500k steps for smaller KGs, and 30k steps for QA finetuning).", "For inference, we used sampling size = 500 for link prediction and beam size = 4 for KGQA.", "We further performed a neighbourhood-based reranking for KGQA: given a question q with topic entity e, we add a constant α to the score of each candidate answer that lies in N(e), where α is a constant hyperparameter and N(e) is the n-hop neighbourhood of the topic entity (n = 1, 2 or 3).", "Re-ranking was only done on datasets where topic entity annotation is available as part of test questions.",
Table 5: Hits@1 (gain vs GT query) on the MetaQA datasets.
  Model        | 1-hop        | 2-hop        | 3-hop
  GT query     | 63.3         | 45.8         | 45.3
  PullNet      | 65.1 (+1.8)  | 52.1 (+6.3)  | 59.7 (+14.4)
  EmbedKGQA    | 70.6 (+7.3)  | 54.3 (+8.5)  | 53.5 (+8.2)
  EmQL         | 63.8 (+0.5)  | 47.6 (+1.8)  | 48.1 (+2.8)
  LEGO         | 69.3 (+6.0)  | 57.8 (+12.0) | 63.8 (+18.5)
  GT query     | 67.7         | 48.7         | 44.4
  PathPred     | 67.7 (+0.0)  | 48.7 (+0.0)  | 44.4 (+0.0)
  KGT5         | 75.0 (+7.3)  | 36.2 (-8.2)  | 64.4 (+20.0)
  KGT5-PP-Ens. |              |              |
"Tab. 3 shows link prediction performance on WikiKG90Mv2, one of the largest benchmark KGs available.", "Here we compare against TransE, ComplEx and their variants.", "*-MPNet and *-concat methods use text embeddings as part of entity representations, and operate on the same textual data as KGT5.", "KGT5 achieves the highest MRR on the validation set while having 98% fewer parameters than the next best performing model on the leaderboard.", "(The authors of OGB-LSC did not provide us with scores on the hidden test set because we used the entity mentions that were provided with the dataset; these entity mentions have now been removed, and we provide them for reproducibility on our resource website.)", "Tab. 2 shows link prediction performance on Wikidata5M, a smaller but better studied KG.", "We see that KGT5 outperformed all low-parameter-count models on all metrics.", "When compared to the large ComplEx model, there is a drop of 0.008 points in MRR and a gain of 0.012 points in hits@1.", "We performed a more fine-grained analysis of model predictions according to the type of query for Wikidata5M (Tab. 13 in the appendix).",
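The neighbourhood-based reranking described in the setup above amounts to a constant score bonus for candidates near the topic entity. A minimal sketch follows, assuming hypothetical `scores` from beam search, a `neighbourhood` helper, and a tuned constant `alpha`.

```python
# Minimal sketch of neighbourhood-based reranking for KGQA:
# add a constant alpha to the score of candidates within the n-hop
# neighbourhood of the question's topic entity, then pick the top one.
def rerank(candidates, scores, topic_entity, neighbourhood, alpha, n=2):
    """candidates: entity ids; scores: their (log-)probabilities."""
    nbhd = neighbourhood(topic_entity, hops=n)  # set of entity ids
    reranked = [(s + alpha if c in nbhd else s, c)
                for s, c in zip(scores, candidates)]
    return max(reranked, key=lambda t: t[0])[1]  # single best answer
```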
13 in the appendix).", "We found that KGT5 excelled at answering queries which have none or only a few correct answers in the train set; performance dropped when several entities can be correct for a query.", "This could be due to the nature of sampling: low probability sequences are harder to sample and also harder to rank correctly.", "Additionally, the limited sampling (3.3) may not even provide the correct answer if there exist more known positives than sampled answers.", "Based on these observations we created an ensemble of ComplEx and KGT5 which answers queries as follows: if the query does not have answers in the train KG, use KGT5; otherwise use ComplEx (614M).", "As shown in Tab.", "2, the ensemble created by this simple rule outperformed all other single models and achieved the state-of-the-art on Wikidata5M.", "14 , 15 Such an ensemble neither achieves the goal of scalability nor versatility but instead serves as an ablation to point out weak spots of KGT5.", "Tab.", "10 in the appendix shows link prediction performance on KGs with 150k entities.", "Here KGT5 sometimes falls behind the baselines; Transformer models are known to struggle when data is scarce, and this could be the reason for poor performance on these small datasets.", "Due to the lack of public KG splits, we compared KGQA methods using gain over ground truth query model , which is available for both the comparison methods (from Ren et al. 2021) as well as our methods.", "16 Tab.", "4 shows hits@1 performance on Freebase-based datasets ComplexWebQuestions and WebQuestionsSP.", "On both datasets, KGT5 outperformed all baselines.", "The gains were the largest on ComplexWebQuestions which is the hardest dataset in terms of complexity and KG size.", "Tab.", "5 shows hits@1 performance on the MetaQA datasets.", "On MetaQA 1and 3-hop, KGT5 was either equal or better than all baselines (in terms of gain).", "On MetaQA 2-hop however, the performance was significantly worse compared to 14 In this ensemble KGT5 was used to answer 42% of the queries; the rest were answered by ComplEx 15 To the best of our knowledge current state-of-the-art on Wikidata5M is ComplEx published with Kochsiek and Gemulla (2021) presented in Tab.", "the baselines, and even worse than ground truth querying.", "We did a more fine-grained analysis of the performance of KGT5 on different question types (Tab. 15-16 in the appendix).", "We found that KGT5 performance suffered most on questions where the head and answer entity were of the same type (for e.g. actor movie actor questions).", "These question types are absent in the 1-hop and 3-hop datasets.", "When head and answer entities had different types (for e.g. director movie language questions), KGT5 was able to answer them better than GT query.", "To remedy this issue and create a model more faithful towards the knowledge present in the incomplete KG, we devised an ensemble of KGT5 with the PathPred baseline.", "The ensemble works as follows: Given a question q , try to answer it using PathPred.", "If this returns an empty set, use KGT5.", "This ensemble outperformed all single models on all MetaQA datasets, often by large margins (Tab. 5).", "Additionally, we performed an ablation to study the effect of neighbourhood reranking on KGQA performance (Tab. 6).", "We found that reranking gave small but consistent gains on all datasets.", "Knowledge probing works such as LAMA (Petroni et al., 2019) aim to answer the following question: can models (e.g. 
"Knowledge probing works such as LAMA (Petroni et al., 2019) aim to answer the following question: can models (e.g. BERT) which are pretrained on generic text corpora with a language modeling objective be used as knowledge bases?", "In our case, the model has been explicitly trained with the link prediction objective, and a knowledge probing experiment would be akin to checking the train set performance of link prediction (which is discussed in Section 4.8).", "Furthermore, we do not claim that KGT5 is as general purpose as large LMs, or that it contains generic world knowledge.", "Hence we do not perform knowledge probing experiments on datasets such as T-REx or Google-RE (Petroni et al., 2019).", "We analyzed how generic-corpora pretraining compares with KG link prediction training for the task of KGQA.", "We compared with T5-small (Raffel et al., 2020), which has the same architecture as KGT5 but is pretrained on a mixture of tasks, most notably language modeling on web text.", "From Tab. 7 we see that KGT5 vastly outperformed T5-small.", "This is not surprising: the data for KGT5 pretraining was tailored towards the task performed (KGQA), which was not the case for T5-small.", "Nevertheless, this shows that it is the link prediction pretraining that is responsible for the excellent KGQA performance of KGT5.", "Full-KG Question Answering.", "Tab. 7 shows hits@1 performance in the full-KG setting.", "KGT5 performance improves only marginally when pretrained on the full KG compared to the 50% KG, and it lags far behind both EmbedKGQA (a ComplEx-based method) and CBR-KGQA (a semantic parsing method that uses (NL-query, SPARQL-query) parallel data).", "This indicates that although KGT5 excels at generalizing to unseen facts, it may not be good at memorizing facts.", "This is further supported by the train set link prediction performance of KGT5 (Tab. 8); although ComplEx and KGT5 have comparable test MRR, the train MRR of ComplEx is significantly better.", "One possible explanation could be that the reduced model capacity of KGT5, which has only 60M parameters, does not allow it to memorize facts seen during pretraining, leading to poor train MRR and full-KG KGQA performance.", "Hence we recommend against using KGT5 as a standalone KGQA method; it should be used only when query parsing does not yield good results.", "Use of textual mentions.", "Since KGT5 requires textual representations for every entity, it cannot be directly applied to all KGs, and it is especially unsuitable for KGs that contain CVT nodes as entities (e.g. full Freebase).", "Also, care must be taken when comparing models that make use of entity names/descriptions with those that do not.", "In our experiments, we noticed that a significant proportion of the validation triples in WikiKG90Mv2 required just text processing (e.g. <Giovanni Bensi, family name, Bensi>), and we found a few cases of potential data leakage when definitions are used in WN18RR (e.g. <hylidae the amphibian family of tree frogs, hypernym, amphibian family>).",
"However, from a practical perspective, models which can leverage text data can be more advantageous, and one must assess the pros and cons of a technique before applying it.", "We have shown that KG link prediction and question answering can be treated as seq2seq tasks and tackled successfully with a single encoder-decoder Transformer model.", "We did this by training a Transformer model with the same architecture as T5-small on the link prediction task, and then finetuning it on the QA task.", "This simple but powerful approach, which we call KGT5, performed competitively with state-of-the-art methods for KG completion on large KGs while using up to 98% fewer parameters.", "On the task of KGQA over incomplete KGs, we found that our unified approach outperformed baselines on multiple large-scale benchmark datasets.", "Additionally, we compared language modeling pretraining with KG link prediction training and found that for knowledge-intensive tasks such as KGQA, link prediction training could be more beneficial.", "One promising direction for future exploration would be to see whether KG link prediction training could serve as an additional pretraining objective when training large seq2seq models.", "Furthermore, the impact of model size, and whether larger Transformer models can indeed store more relational information, should be investigated." ]
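As a concrete illustration of treating link prediction as a seq2seq task, here is a minimal sketch of verbalizing a triple into an input/output text pair; the prompt format is an assumption for exposition and may differ from the exact format KGT5 uses:

```python
# Sketch of casting link prediction as seq2seq; the prompt format is
# an assumed example, not necessarily the exact format used by KGT5.
def verbalize_query(head_mention: str, relation_mention: str) -> str:
    return f"predict tail: {head_mention} | {relation_mention}"

# One training pair for the triple <Gangster No. 1, has genre, Crime>:
src = verbalize_query("Gangster No. 1", "has genre")
tgt = "Crime"  # the target sequence is the tail entity's textual mention
print(src, "->", tgt)
```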
[ "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "method", "result", "result", "abstain", "abstain" ]
[ "Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process.", "However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing.", "To alleviate this problem, we propose a novel semi-autoregressive model RecoverSAT in this work, which generates a translation as a sequence of segments.", "The segments are generated simultaneously while each segment is predicted token-by-token.", "By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors.", "Experimental results on three widely-used benchmark datasets show that our proposed model achieves more than 4 speedup while maintaining comparable performance compared with the corresponding autoregressive model.", "Although neural machine translation (NMT) has achieved state-of-the-art performance in recent years (Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), most NMT models still suffer from the slow decoding speed problem due to their autoregressive property: the generation of a target token depends on all the previously generated target tokens, making the decoding process intrinsically nonparallelizable.", "Recently, non-autoregressive neural machine translation (NAT) models (Gu et al., 2018; Li et al., 2019; Wang et al., 2019; Guo et al., 2019a; Wei et al., 2019) have been investigated to mitigate the indicates equal contribution indicates corresponding author Src.", "slow decoding speed problem by generating all target tokens independently in parallel, speeding up the decoding process significantly.", "Unfortunately, these models suffer from the multi-modality problem (Gu et al., 2018), resulting in inferior translation quality compared with autoregressive NMT.", "To be specific, a source sentence may have multiple feasible translations, and each target token may be generated with respect to different feasible translations since NAT models discard the dependency among target tokens.", "This generally manifests as repetitive or missing tokens in the translations.", "Table 1 shows an example.", "The German phrase viele Farmer can be translated as either lots of farmers or a lot of farmers .", "In the first translation (Trans. 1), lots of are translated w.r.t. lots of farmers while of farmers are translated w.r.t. a lot of farmers such that two of are generated.", "Similarly, of is missing in the second translation (Trans. 
2).", "Intuitively, the multi-modality problem has a significant negative effect on the translation quality of NAT.", "Intensive efforts have been devoted to alleviate the above problem, which can be roughly divided into two lines.", "The first line of work leverages the iterative decoding framework to break the independence assumption, which first generates an initial translation and then refines the translation BOS there there are are EOS BOS lots lots of of farmers BOS lots lots of BOS doing doing this this today Decoder of DEL farmers EOS Encoder es gibt Ansatz today EOS t =1 t =1 t =1 t =1 t =2 t =2 t =2 t =2 t =3 t =3 t =3 t =3 t =4 t =4 Segment 1 Segment 2 Segment 3 Segment 4 Final translation: there are lots of farmers doing this today Post-process Figure 1: An overview of our RecoverSAT model.", "iteratively by taking both the source sentence and the translation of last iteration as input (Lee et al., 2018; Ghazvininejad et al., 2019).", "Nevertheless, it requires to refine the translations for multiple times in order to achieve better translation quality, which hurts decoding speed significantly.", "The other line of work tries to improve the vanilla NAT model to better capture target-side dependency by leveraging extra autoregressive layers in the decoder (Shao et al., 2019a; Wang et al., 2018), introducing latent variables and/or more powerful probabilistic frameworks to model more complex distributions (Kaiser et al., 2018; Akoury et al., 2019; Shu et al., 2019; Ma et al., 2019), guiding the training process with an autoregressive model (Li et al., 2019; Wei et al., 2019), etc.", "However, these models cannot alter a target token once it has been generated, which means these models are not able to recover from an error caused by the multi-modality problem.", "To alleviate the multi-modality problem while maintaining a reasonable decoding speedup, we propose a novel semi-autoregressive model named RecoverSAT in this work.", "RecoverSAT features in three aspects: (1) To improve decoding speed, we assume that a translation can be divided into several segments which can be generated simultaneously.", "(2) To better capture target-side dependency, the tokens inside a segment is autoregressively generated conditioned not only on the previously generated tokens in this segment but also on those in other segments.", "On one hand, we observe that repetitive tokens are more likely to occur within a short context.", "Therefore, autoregressively generating a segment is beneficial for reducing repetitive tokens.", "On the other hand, by conditioning on previously generated tokens in other segments, the model is capable of guessing what feasible translation candidates have been chosen by each segment and adapts accordingly, e.g., recovering from missing token errors.", "As a result, our model captures more target-side dependency such that the multi-modality problem can be alleviated naturally.", "(3) To make the model capable of recovering from repetitive token errors, we introduce a segment deletion mechanism into our model.", "Informally speaking, our model will mark a segment to be deleted once it finds the content has been translated in other segments.", "We conduct experiments on three benchmark datasets for machine translation to evaluate the proposed method.", "The experimental results show that RecoverSAT is able to decode over 4 faster than the autoregressive counterpart while maintaining comparable performance.", "The source code of this work is released on https://github.com/ 
"Autoregressive neural machine translation (AT) generates the translation token-by-token, conditioned on the translation history.", "Denoting a source sentence as x = {x_i}_{i=1}^{T'} and a target sentence as y = {y_j}_{j=1}^{T}, AT models the joint probability as P(y|x) = \prod_{t=1}^{T} P(y_t | y_{<t}, x), (1) where y_{<t} denotes the generated tokens before y_t.", "During decoding, the dependency on translation history makes the AT model predict each token only after all previous tokens have been generated, which makes the decoding process time-consuming.", "Non-autoregressive neural machine translation (NAT) (Gu et al., 2018) aims to accelerate the decoding process; it discards the dependency on translation history and models P(y|x) as a product of conditionally independent per-token probabilities: P(y|x) = \prod_{t=1}^{T} P(y_t | x). (2)", "The conditional independence enables NAT models to generate all target tokens in parallel.", "However, independently predicting all target tokens is challenging, as natural language often exhibits strong correlation across context.", "Since the model knows little about the surrounding target tokens, it may consider different possible translations when predicting different target tokens.", "This problem is known as the multi-modality problem (Gu et al., 2018) and significantly degrades the performance of NAT models.", "RecoverSAT extends the original Transformer (Vaswani et al., 2017) to enable the decoder to perform generation autoregressively locally and non-autoregressively globally.", "An overview of the architecture of our RecoverSAT model is shown in Figure 1.", "As illustrated in the figure, RecoverSAT simultaneously predicts all segments: there are EOS, lots of farmers EOS, a lot DEL and doing this today EOS.", "At each time step, it generates a token for each incomplete segment.", "The special token DEL denotes that the segment should be deleted, and EOS denotes the end of a segment.", "Combining all the segments, we obtain the final translation there are lots of farmers doing this today.", "Formally, we assume a translation y is generated as K segments S^1, S^2, ..., S^K, where S^i is a subsequence of the translation.", "For simplicity of description, we assume that all the segments have the same length.", "(Note that, by fixing the segment length (the token number of each segment) instead, the segment number K can be changed dynamically according to the sentence length; in other words, we can predict the target sentence length to determine the segment number during inference, in which case our model can also decode in constant time.)", "RecoverSAT predicts a token for each segment conditioned on all previously generated tokens at each generation step, which can be formulated as P(y|x) = \prod_{t=1}^{L} \prod_{i=1}^{K} P(S_t^i | S_{<t}^1, ..., S_{<t}^K; x), (3) where S_t^i denotes the t-th token in the i-th segment, S_{<t}^i = {S_1^i, ..., S_{t-1}^i} denotes the translation history in the i-th segment, and L is the segment length.", "Here, two natural problems arise for the decoding process: how to determine the length of a segment, and how to decide whether a segment should be deleted?", "We address the two problems in a uniform way in this work.", "Suppose the original token vocabulary is V; we extend it with two extra tokens, EOS and DEL.", "Then for the segment S^i, the most probable token S_t^i at time step t, S_t^i = argmax_{S_t^i \in V \cup {EOS, DEL}} P(S_t^i | S_{<t}^1, ..., S_{<t}^K; x), (4) has three possibilities: (1) S_t^i \in V: the segment S^i is incomplete and the decoding process for it should continue; (2) S_t^i = EOS: the segment S^i is complete and the decoding process for it should terminate; (3) S_t^i = DEL: the segment S^i is repetitive and should be deleted, and accordingly the decoding process for it should terminate.",
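A minimal sketch of this decoding loop (Eq. 3-4); the predict_next callable, which returns the argmax token over V plus {EOS, DEL} conditioned on all segments decoded so far, is a hypothetical stand-in for the model:

```python
# Sketch of RecoverSAT decoding: all K segments advance one token per
# step; each token is conditioned on every segment's history (Eq. 3-4).
EOS, DEL = "<eos>", "<del>"

def decode(predict_next, K, max_len):
    segments = [[] for _ in range(K)]
    finished = [False] * K
    for _ in range(max_len):
        for i in range(K):
            if not finished[i]:
                token = predict_next(segments, i)  # argmax over V + {EOS, DEL}
                segments[i].append(token)
                finished[i] = token in (EOS, DEL)
        if all(finished):
            break
    # DEL-marked segments are removed afterwards via post-processing.
    return segments
```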
terminate.", "The entire decoding process terminates when all the segments meet EOS / DEL or reach the maximum token number.", "It should be noticed that we do not explicitly delete a segment when DEL is encountered but do it via post-processing.", "In other words, the model is trained to ignore the segment to be deleted implicitly.", "As there is little target-side information available in the early stage of the decoding process, the errors caused by the multi-modality problem is inevitable.", "In this work, instead of reducing such errors directly, we propose two training mechanisms to teach our RecoverSAT model to recover dynamically according to the sentence length.", "In other words, we can predict the target sentence length to determine the segment number during inference.", "In this case, our model can also decode in constant time.", "from errors: (1) Dynamic Termination Mechanism: learning to determine segment length according to target-side context; (2) Segment Deletion Mechanism: learning to delete repetitive segments.", "As shown in Section 3.1, instead of pre-specifying the lengths of segments, we let the model determine the lengths by emitting the EOS token.", "This strategy helps our model recover from multi-modality related errors in two ways:", "1. The choice of the first few tokens is more flexible.", "Taking Figure 1 as an example, if the decoder decides the first token of the second segment is of instead of lots (i.e., lots is not generated in the second segment), it only needs to generate lots before EOS in the first segment in order to recover from missing token errors.", "In contrast, if the decoder decides the first token is are , it can avoid repetitive token error by not generating are in the first segment;", "2. As shown in Eq.", "3, a token is generated conditioned on all the previously generated tokens in all the segments .", "Therefore, the decoder has richer target-side information to detect and recover from such errors.", "However, it is non-trivial to train the model to learn such behaviour while maintaining a reasonable speedup.", "On one hand, as the decoding time of our RecoverSAT model is proportional to the maximum length of the segments, we should divide the target sentences of training instances into equal-length segments to encourage the model to generate segments with identical length.", "On the other hand, the model should be exposed to the multi-modality related errors to enhance its ability of recovering from such errors, which suggests that the target sentences of training instances should be divided randomly to simulate these errors.", "To alleviate the problem, we propose a mixed annealing dividing strategy.", "To be specific, we randomly decide whether to divide a target sentence equally or randomly at each training step and gradually anneal to the equally-dividing method at the end of training.", "Formally, given the target sentence y and the segment number K , we define the segment dividing indice set r as follows: s Bernoulli( p ) , (5) r = (cid:40) EQUAL( T, K 1) s = 0 RAND( T, K 1) s = 1 , (6) where Bernoulli( p ) is the Bernoulli distribution with parameter p , EQUAL( n, m ) = (cid:8) (cid:100) nm +1 (cid:101) , (cid:100) 2 nm +1 (cid:101) , , (cid:100) mnm +1 (cid:101) (cid:9) , RAND( n, m ) sampling m non-duplicate indices from [1 , n ] .", "A larger value of p leads to better error recovering ability while a smaller one encourages the model to generate segments with similar lengths (in other words, better speedup).", "To balance the 
"Although the dynamic termination mechanism makes the model capable of recovering from missing token errors and reducing repetitive tokens, the model still cannot recover once token repetitions have already occurred.", "We find that the major errors of our model occur when generating the first token of each segment, since at that point it cannot see any history or future.", "In this situation, two repetitive segments will be generated.", "To alleviate this problem, we propose a segment-wise deletion strategy, which uses a special token DEL to indicate that a segment is repetitive and should be deleted.", "(It would be more flexible to employ a token-wise deletion strategy, which could handle more complex cases; we will explore this in future work.)", "A straightforward way to train the model to learn to delete a segment is to inject pseudo repetitive segments into the training data; a sketch of this procedure is given at the end of this subsection.", "The following is an example: for the target sentence there are lots of farmers doing this today, the segments with an injected pseudo repetitive segment are there are, lots of farmers, lots of DEL, doing this today.", "That is, given the target sentence, we first divide it into 3 segments there are, lots of farmers and doing this today.", "Then we copy the first two tokens of the second segment and append the special token DEL to the end to construct the pseudo repetitive segment lots of DEL.", "Finally, we insert the repetitive segment to the right of the chosen segment, resulting in 4 segments.", "Formally, given the expected segment number K and the target sentence y, we first divide y into K-1 segments S^1, S^2, ..., S^{K-1} and then build a pseudo repetitive segment S_rep^i by copying the first m tokens of a randomly chosen segment S^i and appending DEL to the end, where m is uniformly sampled from [1, |S^i|].", "Finally, S_rep^i is inserted at the right side of S^i.", "The final K segments are S^1, S^2, ..., S^i, S_rep^i, S^{i+1}, ..., S^{K-1}.", "However, injecting such pseudo repetitive segments into all training instances would mislead the model into believing that generating and then deleting a repetitive segment is required behaviour, which is not desired.", "Therefore, we inject a pseudo repetitive segment into a training instance with probability q in this work.", "We conduct experiments on three widely-used machine translation datasets: IWSLT16 En-De (196k pairs), WMT14 En-De (4.5M pairs) and WMT16 En-Ro (610k pairs).", "For fair comparison, we use the preprocessed datasets of Lee et al. (2018), in which sentences are tokenized and segmented into subwords using byte-pair encoding (BPE) (Sennrich et al., 2016) to restrict the vocabulary size.", "We use a shared vocabulary of 40k subwords for both source and target languages.", "For the WMT14 En-De dataset, we use newstest-2013 and newstest-2014 as validation and test sets respectively.", "For the WMT16 En-Ro dataset, we employ newsdev-2016 and newstest-2016 as validation and test sets respectively.", "For the IWSLT16 En-De dataset, we use test2013 as the validation set.", "For model hyperparameters, we follow most of the settings in (Gu et al., 2018; Lee et al., 2018; Wei et al., 2019).", "For the IWSLT16 En-De dataset, we use a small Transformer model (d_model = 278, d_hidden = 507, n_layer = 5, n_head = 2, p_dropout = 0.1).",
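Returning to the segment deletion mechanism above, here is a sketch of pseudo repetitive segment injection; the division into K-1 segments is assumed to have been done already:

```python
# Sketch of pseudo repetitive segment injection: with probability q,
# copy a random prefix of a randomly chosen segment, append DEL, and
# insert the copy immediately to the right of that segment.
import random

DEL = "<del>"

def inject_pseudo_repeat(segments, q=0.5):
    if random.random() >= q:
        return segments                          # leave instance unchanged
    i = random.randrange(len(segments))          # choose segment S^i
    m = random.randint(1, len(segments[i]))      # prefix length in [1, |S^i|]
    pseudo = segments[i][:m] + [DEL]             # e.g. ["lots", "of", DEL]
    return segments[:i + 1] + [pseudo] + segments[i + 1:]

segs = [["there", "are"], ["lots", "of", "farmers"], ["doing", "this", "today"]]
print(inject_pseudo_repeat(segs, q=1.0))
```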
1 ).", "For the WMT14 En-De and WMT16 En-Ro datasets, we use a larger Transformer model ( d model = 512 , d hidden = 512 , n layer = 6 , n head = 8 , p dropout = 0 . 1 ).", "We linearly anneal the learning rate from 3 10 4 to 10 5 as in Lee et al. (2018) for the IWSLT16 En-De dataset, while employing the warm-up learning rate schedule (Vaswani et al., 2017) with t warmup = 4000 for the WMT14 En-De and WMT16 En-Ro datasets.", "We also use label smoothing of value (cid:15) ls = 0 .", "15 for all datasets.", "We utilize the sequence-level distillation (Kim and Rush, 2016), which replaces the target sentences in the training dataset with sentences generated by an autoregressive model, and set the beam size of the technique to 4 .", "We use the encoder of the corresponding autoregressive model to initialize the encoder of RecoverSAT, and share the parameters of source and target token embedding layers and the pre-softmax linear layer.", "We measure the speedup of model inference in each task on a single NVIDIA P40 GPU with the batch size 1 .", "We use the Transformer (Vaswani et al., 2017) as our AT baseline and fifteen latest strong NAT models as NAT baselines, including: (1) fertility-based model: NAT-FT (Gu et al., 2018); (2) iterative decoding based models: NAT-IR (Lee et al., 2018) and CMLM (Ghazvininejad et al., 2019); (3) models learning from AT teachers: imitate-NAT (Wei et al., 2019), NART (Li et al., 2019) and FCL-NAT (Guo et al., 2019b); (4) latent variable framework based models: LV NAR (Shu et al., 2019) and FlowSeq (Ma et al., 2019); (5) regularization framework based model: NAT-REG (Wang et al., 2019); (6) models introducing extra target-side dependencies: SAT (Wang et al., 2018), SynST (Ak-oury et al., 2019), NAT-FS (Shao et al., 2019a), PNAT (Bao et al., 2019), NART-DCRF (Sun et al., 2019) and ReorderNAT (Ran et al., 2019).", "The performance of our RecoverSAT model and the baselines is shown in Table", "2. Due to the space limitation, we only show the results corresponding to the settings of the best BLEU scores for the baselines 3 .", "From Table 2, we can observe that: (1) Our RecoverSAT model achieves comparable performance with the AT baseline (Transformer) while keeping significant speedup.", "When K = 2 , the BLEU score gap is moderate (from 0 . 06 to 0 . 4 , even better than Transformer on the WMT16 En Ro and Ro En tasks) and the speedup is about 2 .", "When K = 10 , the BLEU scores drop less than 5% relatively, and the speedup is considerably good (over 4 ).", "(2) Our RecoverSAT model outperforms all the strong NAT baselines except CMLM (on the WMT16 En Ro and Ro En tasks).", "However, the performance gap is negligible ( 0 . 16 and 0 . 
"(3) As K grows, the BLEU scores drop moderately and the speedup grows significantly, indicating that our RecoverSAT model has good generalizability.", "For example, the BLEU scores drop less than 0.45 when K grows from 2 to 5, and drop no more than 0.90 (except on the WMT14 De-En task) when K further grows to 10.", "Meanwhile, the speedup for K = 10 is larger than 4x, which is considerably good.", "(4) There are only 7 baselines (SynST, imitate-NAT+LPD, LV NAR, NART+LPD, FCL-NAT+NPD, ReorderNAT and NART-DCRF+LPD) achieving better speedup than our RecoverSAT model when K = 10.", "However, only ReorderNAT and NART-DCRF+LPD achieve BLEU scores comparable with our model.", "The improvements of both ReorderNAT and NART-DCRF are complementary to our method.", "It is an interesting direction for future work to join these approaches together.", "As discussed in Section 3.2.1, the dynamic termination mechanism is used to train our RecoverSAT model to learn to determine segment length dynamically conditioned on target-side context, such that it can recover from multi-modality related errors.", "In this section, we investigate the effect of this mechanism; the results are shown in Table 3.", "As multi-modality related errors generally manifest as repetitive or missing tokens in the translation, we propose two quantitative metrics, Rep and Mis, to measure these two phenomena respectively.", "Rep is defined as the relative increment of the repetitive token ratio w.r.t. a reference AT model, and Mis is defined as the relative increment of the missing token ratio, computed against the references, w.r.t. a reference AT model.", "Formally, given the translations Y = {y^1, ..., y^k} produced by the model to be evaluated and the translations Y_auto = {y^1_auto, ..., y^k_auto} produced by the reference AT model, Rep is defined as Rep = (r(Y) - r(Y_auto)) / r(Y_auto), (7) with r(Y) = (\sum_k \sum_{j=2}^{|y^k|} 1(\sum_{i=1}^{9} 1(y^k_j = y^k_{j-i}) >= 1)) / \sum_k |y^k|, (8) where 1(cond) = 1 if the condition cond holds and 0 otherwise, and y^k_j is the j-th token of the translation sentence y^k.", "Analogously, denoting the references as Y* = {y*^1, ..., y*^k}, Mis is defined as Mis = (m(Y, Y*) - m(Y_auto, Y*)) / m(Y_auto, Y*), (9) where m(., .) computes the missing token ratio and is defined as follows: c_w(y^k, y*^k) = max(c(y*^k, w) - c(y^k, w), 0), m(Y, Y*) = (\sum_k \sum_{w \in y*^k} c_w(y^k, y*^k)) / \sum_k |y*^k|, (10) where c(y, w) is the number of occurrences of the token w in the sentence y.", "From Table 3, we can observe that: (1) By using the dynamic termination mechanism (p = 0.5, 1.0, or annealed 1→0, where p is the parameter of the Bernoulli distribution (Eq. 5)), both repetitive and missing token errors are reduced (Rep & Mis), and the BLEU scores are increased, indicating the effectiveness of the mechanism.", "(2) As p grows larger, the average number of decoding steps (Step) increases significantly.", "The reason is that with smaller p more target sentences are divided into equal segments during training, so the model is biased towards generating segments with similar lengths.", "However, if the model is not exposed to randomly divided segments (p = 0.0), it fails to learn to recover from multi-modality related errors and the BLEU score drops significantly.",
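A sketch of the Rep statistic (Eq. 7-8); Mis follows the same relative-increment pattern with the missing-token ratio of Eq. 10:

```python
# Sketch of the Rep metric (Eq. 7-8): the fraction of tokens that also
# occur within the 9 preceding tokens, reported as a relative increment
# over a reference AT model's translations.
def repeat_ratio(translations, window=9):
    repeats = total = 0
    for sent in translations:
        total += len(sent)
        for j in range(1, len(sent)):
            if sent[j] in sent[max(0, j - window):j]:
                repeats += 1
    return repeats / total

def rep(model_out, at_out):
    r_model, r_at = repeat_ratio(model_out), repeat_ratio(at_out)
    return (r_model - r_at) / r_at  # relative increment w.r.t. the AT model

print(repeat_ratio([["lots", "of", "of", "farmers"]]))  # 0.25
```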
"(3) By using the annealing dividing strategy (p annealed from 1 to 0, see Section 3.2.1), we achieve a good balance between decoding speed and translation quality.", "Therefore, we use it as the default setting in this paper.", "In this section, we investigate the effect of the segment deletion mechanism; the results are shown in Table 4, where q is the probability of injecting pseudo repetitive segments into each training instance.", "Table 4 (effect of the segment deletion mechanism; columns: BLEU, Rep, Step): NAT: 24.57, 50.09, 1; RecoverSAT (K = 10) with q = 0.0: 28.56, 26.24, 4.4; q = 0.1: 29.73, 5.11, 4.7; q = 0.3: 29.61, 7.71, 5.1; q = 0.5: 29.90, 7.09, 5.1; q = 0.7: 29.76, 11.47, 5.2; q = 0.9: 29.25, 21.38, 5.3; q = 1.0: 29.13, 20.55, 5.2.", "From the results we can observe that: (1) Without the segment deletion mechanism (q = 0), the BLEU score drops significantly and the repetitive token errors (Rep) increase drastically, indicating that the mechanism is effective for recovering from repetitive token errors.", "(2) As q grows larger, the average number of decoding steps (Step) increases steadily, because the model is misled into treating generate-then-delete of a repetitive segment as expected behaviour.", "Thus, q should not be too large.", "(3) The repetitive token errors (Rep) increase drastically when q > 0.7.", "We believe the reason is that the pseudo repetitive segments are constructed randomly, making it hard to learn the underlying mapping.", "(4) The model achieves the best performance with q = 0.5.", "Therefore, we set q = 0.5 in our experiments.", "Figure 2 shows the translation quality of the Transformer, our RecoverSAT model with K = 10, and NAT on the IWSLT16 En-De validation set, bucketed by source sentence length.", "From the figure, we can observe that RecoverSAT surpasses NAT significantly and achieves comparable performance to the Transformer on all length buckets, which indicates the effectiveness of our model.", "We present translation examples of NAT and our RecoverSAT model on the WMT14 De-En validation set in Table 5.", "From the table, we can observe that: (1) The multi-modality problem (repetitive and missing tokens) is severe in the sentences generated by NAT, while it is effectively alleviated by RecoverSAT (see translations A to D); (2) RecoverSAT can leverage target contexts to dynamically determine the segment length to reduce repetitive token errors (see translation B) or to recover from missing token errors (see translations C and D); (3) RecoverSAT is capable of detecting and deleting repetitive segments, even if there are multiple such segments (see translation D).", "There has been a variety of work investigating how to accelerate the decoding process of sequence generation models (Kalchbrenner et al., 2018; Gu et al., 2018).", "In the field of neural machine translation, which is the focus of this work, Gu et al. (2018) first propose non-autoregressive machine translation (NAT), which generates all target tokens simultaneously.", "Although it accelerates the decoding process significantly, NAT suffers from the multi-modality problem (Gu et al., 2018), which generally manifests as repetitive or missing tokens in translations.", "Therefore, intensive efforts have been devoted to alleviating the multi-modality problem in NAT.", "Wang et al. (2019) regularize the decoder hidden states of neighboring tokens to reduce repetitive tokens; Sun et al. (2019) utilize a conditional random field to model target-side positional contexts;
 Shao et al. (2019a) and Shao et al. (2019b) introduce target-side information via specially designed training losses, while Guo et al. (2019a) enhance the input of the decoder with target-side information; Kaiser et al. (2018), Akoury et al. (2019), Shu et al. (2019) and Ma et al. (2019) incorporate latent variables to guide generation; Li et al. (2019), Wei et al. (2019) and Guo et al. (2019b) use autoregressive models to guide the training process of NAT; Ran et al. (2019) and Bao et al. (2019) consider reordering information in decoding.", "Wang et al. (2018) further propose a semi-autoregressive Transformer method, which generates segments autoregressively and predicts the tokens in a segment non-autoregressively.", "However, none of the above methods explicitly considers recovering from multi-modality related errors.", "Recently, multi-step NAT models have also been investigated to address this issue.", "Lee et al. (2018) and Ghazvininejad et al. (2019) adopt iterative decoding methods which have the potential to recover from generation errors.", "Besides, Stern et al. (2019) and Gu et al. (2019) propose to use dynamic insertion/deletion to alleviate generation repetition/missing.", "Different from these works, our model changes one-step NAT into a semi-autoregressive form, which maintains considerable speedup and enables the model to see the local history and future to avoid repetitive/missing words in decoding.", "Our approach can also replace the one-step NAT component of such methods to improve their performance.", "In this work, we propose a novel semi-autoregressive model, RecoverSAT, to alleviate the multi-modality problem; it performs translation by generating segments non-autoregressively and predicting the tokens within a segment autoregressively.", "By determining segment length dynamically, RecoverSAT is capable of recovering from missing token errors and reducing repetitive token errors.", "By explicitly detecting and deleting repetitive segments, RecoverSAT is able to recover from repetitive token errors.", "Experiments on three widely-used benchmark datasets show that our RecoverSAT model maintains comparable performance with more than 4x decoding speedup compared with the AT model.", "We would like to thank all anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "objective", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other" ]
[ "Word sense disambiguation (WSD) is a crucial problem in the natural language processing (NLP) community.", "Current methods achieve decent performance by utilizing supervised learning and large pre-trained language models.", "However, the imbalanced training dataset leads to poor performance on rare senses and zero-shot senses.", "There are more training instances and senses for words with top frequency ranks than those with low frequency ranks in the training dataset.", "We investigate the statistical relation between word frequency rank and word sense number distribution.", "Based on the relation, we propose a Z-reweighting method on the word level to adjust the training on the imbalanced dataset.", "The experiments show that the Z-reweighting strategy achieves performance gain on the standard English all words WSD benchmark.", "Moreover, the strategy can help models generalize better on rare and zero-shot senses.", "Word sense disambiguation (WSD) has been a longstanding problem in natural language processing community.", "The task can benefit many downstream applications (Navigli, 2009), including but not limited to machine translation (Vickrey et al., 2005; Pu et al., 2018) and information retrieval (Stokoe et al., 2003; Zhong and Ng, 2012).", "The goal of the WSD task is to disambiguate word senses given contexts.", "For example, the word lift in the context Lift a load and The detective carefully lifted some fingerprints from the table has different meanings.", "The former one means raise from a lower to a higher position and the latter one means remove from a surface.", "From semantic recognition of human being, the former sense is easier to disambiguate as it is the most common sense of the word while the latter one is a relatively rare one.", "A skewed distribution exists in SemCor (Miller et al., 1993), a commonly used human-labeled dataset for the WSD task, where most common senses have many training examples while rare senses have much fewer examples.", "A large coverage of senses are not accompanied with training examples, which are called zero-shot senses.", "Many deep neural-networks-based methods are affected by this imbalanced training corpora (Luo et al., 2018; Huang et al., 2019b).", "Previous approaches attempt to address this problem by designing a new dataset or task specifically for the rare senses and zero-shot senses (Holla et al., 2020; Blevins et al., 2021; Barba et al., 2021) or enriching the sense embeddings by incorporating external lexical knowledge (Kumar et al., 2019; Scar-lini et al., 2020; Blevins and Zettlemoyer, 2020).", "Different from these methods, we address the unbalanced training issue from the perspective of adjusting the learning process.", "An interesting human language phenomenon is that it follows a statistical distribution described by Zipf's law (Zipf, 1949), which also exists in many corpora including SemCor.", "From the linguistic perspective, an explanation for Zipf's law is that people tend to use more common words to minimize the communication effort (Zipf, 1949).", "Inspired by this, we consider a word with top rank in frequency should be assigned high training weight.", "From the statistical perspective, two laws have been proposed to explain Zipf's law in word frequency, namely the meaning-frequency law (Zipf, 1945) and Zipf's law of abbreviation (Florence, 1950; Grzybek, 2006).", "The meaning-frequency law proposes that more frequent words have larger number of word senses, which we also denote as larger word #sense.", "Based 
"Based on this, we calculate the word #sense distribution in SemCor and use a mathematical function to fit the relation between word rank and word #sense.", "Based on this relation, we design the Z-reweighting strategy on the word level to help models generalize better to rare and zero-shot senses.", "To the best of our knowledge, we are the first to leverage this linguistic distribution to address the training bias in the WSD task.", "Our method improves the generalization ability of deep neural models on rare senses and zero-shot senses.", "Results on the English all-words WSD evaluation benchmarks show that our system achieves improvements on rare and zero-shot senses of 2.1% and 3.6% in F1 score.", "Furthermore, our strategy outperforms the system without any reweighting strategy and achieves a performance gain in F1 score on all senses.", "We open-source our code at https://github.com/suytingwan/WSD-Z-reweighting.", "Word sense disambiguation is to distinguish the sense of a specific word given a context sentence.", "Current methods can be broadly classified into two streams: supervised-learning-based and knowledge-based.", "Supervised-learning-based approaches view the WSD task as a classification problem.", "For example, Zhong and Ng (2010) learn classifiers independently for each word.", "Knowledge-based methods, such as (Banerjee et al., 2003; Basile et al., 2014), mainly exploit two kinds of knowledge: (1) the gloss, usually in the form of a sentence defining the word sense; and (2) the graph structure of lexical resources.", "Recent research integrates supervised learning and knowledge into unified systems, achieving better performance than systems relying on knowledge only.", "For utilizing glosses, GlossBERT (Huang et al., 2019b) constructs context-gloss pairs and conducts sentence-pair classification training.", "The biencoder model (Blevins and Zettlemoyer, 2020) proposes an end-to-end learning system to train the embedding space of context words and senses together.", "For utilizing structural properties, EWISE (Kumar et al., 2019) injects gloss and knowledge graph embeddings into sense embeddings.", "EWISER (Bevilacqua and Navigli, 2020) further injects relational knowledge as additional supervision.", "Different from the previous approaches, we focus on addressing the training bias caused by the imbalanced distribution in the training dataset.", "In this paper, we analyze the form of the distribution and propose the Z-reweighting method to improve performance on rare and unseen senses.", "Power-law distributions widely exist in human language, where word frequency can be described by Zipf's law (Zipf, 1949).", "Previous works show that this linguistic law holds in many corpora, including SemCor (Miller et al., 1993), CHILDES (MacWhinney, 2000), and Wikipedia (Grefenstette, 2016).", "SemCor is also one of the largest training datasets for the WSD task, alongside OntoNotes (Marcus et al., 2011) and OMSTI (Taghipour and Ng, 2015).", "Manin (2008) argues from the semantic view, proposing that word semantics are shaped by the expansion of word meanings and that competition among synonyms results in the law.", "Zipf (1945) proposes that word frequency is related to word #sense, in that more frequent words have larger word #sense.", "Recently, Casas et al. (2019) investigate the law from the perspective of both word #sense and word length.",
"Similarly, our work takes the word #sense distribution into consideration and utilizes it for balanced training on the WSD task.", "There are many approaches to addressing the influence on learning brought by imbalanced training data under a supervised setting.", "Most of the algorithms belong to re-weighting (Huang et al., 2016, 2019a) or re-sampling (Buda et al., 2018; Cui et al., 2019).", "Re-weighting methods adjust the weights of different classes.", "Re-sampling methods balance the learning by over-sampling the minority classes or under-sampling the frequent classes.", "Another line of work incorporates the idea of an angular margin, aiming to enlarge the inter-class margin (Liu et al., 2016; Wang et al., 2018; Cao et al., 2019).", "Our work follows the line of re-weighting.", "We take the word #sense distribution into consideration and propose the Z-reweighting method for the WSD task, which is quite different from previous reweighting methods.", "In this section, we first show the overall word and sense distribution in SemCor (Miller et al., 1993) (training data from http://lcl.uniroma1.it/wsdeval/training-data).", "Since we propose to utilize the word #sense distribution as the basis for the Z-reweighting strategy, we then investigate the relation between word frequency rank and word #sense.", "Table 1 (statistics of SemCor): Instances: 226,036 in total (166,361 for MCS, 59,675 for LCS); Senses: 33,316 in total (22,320 for MCS, 10,996 for LCS); Avg. instances per sense: 7.45 for MCS, 5.43 for LCS.", "As mentioned in (Kilgarriff, 2004), a Zipfian distribution exists in the word senses of human language.", "In this part, we investigate the details of the distribution in the training data of SemCor on both the word level and the sense level.", "Senses in WordNet are generally ordered from most to least frequently used.", "The most common sense of a word is ranked first and denoted as MCS.", "We denote the other senses of a word as least common senses (LCS).", "Following this definition, we calculate the distribution of training data in the SemCor corpus; the resulting distribution is shown in Table 1.", "SemCor contains 226,036 training instances, where each instance is a sentence with a labeled sense of one word.", "Among all the instances, 73.5% are training instances for MCS, belonging to 22,320 words, and the rest are for LCS.", "LCS have 5.43 training instances per sense on average, much lower than MCS, which have 7.45.", "We further investigate the word #sense distribution of training words labeled with MCS and LCS respectively.", "The word #sense defined in WordNet is used to calculate the distribution.", "The average word #sense for training words labeled with LCS is 4.77, much greater than that for MCS.", "This shows that words labeled with LCS have a larger set of senses to distinguish.", "The words with LCS in the SemCor training data thus have higher word #sense but fewer training instances.", "Therefore, disambiguating LCS is much more challenging than disambiguating MCS in the WSD task.", "To investigate the details of the Zipfian distribution in SemCor, we calculate the number of training instances and the word #sense for each word and sort the words by frequency in descending order.", "We apply a binning technique to reduce noise and get a better view of Zipf's law on the word distribution.", "Specifically, every 300 adjacent words belong to one bin for the analysis in this part.", "The distribution of instance number over sorted word rank by decreasing frequency is shown in Figure 1.",
"As we can see, top-ranked words have many more training instances than low-ranked words, both for training words labeled with MCS and for those labeled with LCS.", "To get a deeper understanding of the statistical law in word frequency, we further analyze the relation between word #sense and sorted word rank.", "Similar to the treatment of training instances, we calculate the average #sense of every 300 words in a bin and obtain the distribution of word #sense over the sorted word rank by decreasing frequency.", "As shown in Figure 2, words with top ranks have larger #sense than words with low ranks.", "This shows that words with top frequency ranks have more senses to disambiguate.", "Moreover, words with LCS mostly have top ranks.", "In this section, we first introduce the terminology for the WSD task.", "Then we illustrate our Z-reweighting strategy for adjusting the training loss on the imbalanced training dataset.", "The WSD task is to disambiguate the meanings of a set of words w = {w_1, w_2, ..., w_n} given a context sentence S.", "Each context word w_i, i in [1, n], in a sentence S has several candidate senses {s_1, s_2, ..., s_m}.", "Each sense is described by a definition sentence, also called a gloss in WordNet (Miller, 1998).", "The candidate senses have a corresponding gloss set {g_1, g_2, ..., g_m}.", "To alleviate the influence brought by the imbalanced training dataset, we propose the Z-reweighting strategy to balance the learning between MCS and LCS during training, resulting in a stronger capability of the model in disambiguating rare and zero-shot senses while maintaining comparable performance on MCS.", "Training words in SemCor are denoted as W = {W_1, W_2, ..., W_N} in descending order of frequency.", "The #sense of a word is the number of senses belonging to the word, and P = {p_1, p_2, ..., p_N} is the #sense array of the words.", "Averaging over bins of K adjacent ranks gives \bar{p}_o = \sum_{d=(o-1)K+1}^{oK} p_d / K, o in [1, N/K], d in [1, N], (1) and the binned array \bar{P} = {\bar{p}_1, ..., \bar{p}_{N/K}}. (2)", "As analyzed in (Casas et al., 2019), a power law holds between word frequency and #sense in the corpus CHILDES (MacWhinney, 2000).", "Similarly, we utilize a function f(x) = a ln(x + b) + c to fit the relation between the binned word #sense \bar{P} and the word rank o = 1, 2, ..., N/K in SemCor, where a, b and c are parameters.", "The fitted function is monotonically decreasing with word rank.", "An example of the fitting curve and the original word #sense distribution at K = 300 is shown in the corresponding figure.", "With the same word ranks, a smoothed word #sense array can then be read off the fitting curve.", "This discrete fitted word #sense array is normalized for further processing.", "Since the number of words is too large to assign each word its own weight, the N/K bins of words are further split into M groups.", "For a word in the k-th bin, k in [1, N/K], belonging to group j in [1, M], the regularized #sense p^r_k satisfies p^t_{j+1} <= p^r_k < p^t_j, (5) where P^t = {p^t_1, ..., p^t_M} is the threshold array used to split the groups.", "Assume the predicted output probabilities from a model over the candidate sense set are z = [z_1, z_2, ..., z_m]; the standard cross-entropy loss given the true word sense label y is loss(w_i, y) = -log(exp(z_y) / \sum_{l=1}^{m} exp(z_l)). (7)", "In the Z-reweighting strategy, a weight \lambda_j is used to adjust the training at the word level.", "The new weighted training loss is loss(w_i, j, y) = -\lambda_j log(exp(z_y) / \sum_{l=1}^{m} exp(z_l)), (8) where i in [1, N] and j in [1, M], meaning that a word with rank i in group j has training weight \lambda_j.",
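A compact sketch of this pipeline follows. The smoothing/normalization equations are not fully shown above, so normalization by the maximum and the threshold-to-weight rule (upper group threshold raised to an exponent, written gamma here) are inferred from the worked example in the experiments section and should be read as our interpretation:

```python
# Sketch of Z-reweighting weight construction: bin word #sense by
# frequency rank (Eq. 1-2), fit f(x) = a*ln(x+b) + c, normalize, and
# map each bin's smoothed value to a group weight. The normalization
# and the weight rule (upper threshold ** gamma, rounded, floored at
# 0.1) follow the worked example in Section 4.4 and are our reading.
import math
import numpy as np
from scipy.optimize import curve_fit

def smoothed_norm_senses(sense_counts, K=300):
    n_bins = len(sense_counts) // K
    binned = np.asarray(sense_counts[:n_bins * K], float).reshape(n_bins, K).mean(1)
    f = lambda x, a, b, c: a * np.log(x + b) + c
    params, _ = curve_fit(f, np.arange(1, n_bins + 1), binned, p0=(1.0, 1.0, 1.0))
    smoothed = f(np.arange(1, n_bins + 1), *params)
    return smoothed / smoothed.max()          # regularized values in (0, 1]

def group_weight(p_r, gamma=2):
    # 0.1-wide groups: p_r in [0.3, 0.4) -> upper threshold 0.4, so the
    # weight is 0.4 ** 2 = 0.16, rounded to 0.2 and floored at 0.1.
    upper = min((math.floor(p_r * 10) + 1) / 10, 1.0)
    return max(round(upper ** gamma, 1), 0.1)
```

The resulting weight then scales the cross-entropy loss of every training instance whose target word falls in group j, as in Eq. 8.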
"In this section, we first introduce the training dataset and evaluation metrics.", "Then we describe the different baseline methods.", "Finally, details of the training process are presented.", "SemCor 3.0 is used as the training dataset.", "Five standard WSD datasets from the Senseval and SemEval competitions are used as evaluation sets.", "Among them, SemEval-2007 (Pradhan et al., 2007) is used as the development dataset for selecting the best model.", "The other four datasets, Senseval-2 (Palmer et al., 2001), Senseval-3 (Snyder and Palmer, 2004), SemEval-2013 (Navigli et al., 2013) and SemEval-2015 (Moro and Navigli, 2015), are used as test datasets.", "We select F1 as the evaluation metric.", "We also follow previous works (Raganato et al., 2017) in reporting the overall performance on all datasets.", "For further analysis, F1 scores on MCS, LCS, and zero-shot senses are also calculated.", "The BEM framework (Blevins and Zettlemoyer, 2020) without any balancing strategy is one baseline system.", "In addition, different balanced training methods applied to the BEM framework are used as three more baseline systems.", "The balancing methods fall into two levels, namely the sense level (balanced reweighting and the margin-based method LDAM (Cao et al., 2019)) and the word level (balanced resampling).", "Biencoder Model (BEM).", "The model utilizes the gloss knowledge from WordNet.", "The two encoders are initialized with the same pre-trained language model.", "The encoders take a context sentence and glosses as input, generating representations for the word w_i and the corresponding gloss set {g_1, g_2, ..., g_m} as E_i and {G_1, G_2, ..., G_m} respectively.", "Based on the representations, the similarity score between the word and each gloss is calculated as z_j = E_i \cdot G_j, j in [1, m].", "A standard cross-entropy loss is used in training, as in Equation 7.", "Balanced Reweighting Method (B-reweighting).", "The B-reweighting strategy is applied on the sense level.", "For each word, the weight of each sense is proportional to the inverse of its number of training instances.", "Balanced Resampling Method (B-resampling).", "The B-resampling method is applied on the word level.", "First, each word is sampled with the same probability.", "Then the training cases of the selected word are sampled randomly.", "A standard cross-entropy loss is used in this method.", "LDAM.", "This margin-based method adjusts the training on the sense level.", "The goal of LDAM is to address the class-imbalance problem by utilizing a label-distribution-aware margin loss.", "We apply the LDAM loss on the sense level as another baseline.", "The smoothed relaxation of LDAM in the cross-entropy loss with enhanced margins is as follows: loss_margin(w_i, y) = -log(e^{z_y - Delta_y} / (e^{z_y - Delta_y} + \sum_{l != y} e^{z_l})), where Delta_l = C / n_l^{1/4} for l in {1, ..., m}, and C is a constant.",
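A sketch of this loss in PyTorch, assuming integer sense labels per instance; subtracting the margin from the true-class logit before the softmax cross-entropy reproduces the formula above:

```python
# Sketch of the LDAM loss above (Cao et al., 2019): subtract a
# per-class margin C / n_l**0.25 from the true-class logit before
# the softmax cross-entropy.
import torch
import torch.nn.functional as F

def ldam_loss(logits: torch.Tensor, target: torch.Tensor,
              class_counts: torch.Tensor, C: float = 0.5) -> torch.Tensor:
    margins = C / class_counts.float() ** 0.25          # Delta_l
    adjusted = logits.clone()
    rows = torch.arange(logits.size(0))
    adjusted[rows, target] -= margins[target]           # z_y - Delta_y
    return F.cross_entropy(adjusted, target)

logits = torch.randn(4, 3)              # 4 instances, 3 candidate senses
target = torch.tensor([0, 2, 1, 0])
counts = torch.tensor([100, 10, 3])     # training instances per sense
print(ldam_loss(logits, target, counts))
```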
"n_l is the number of training instances of sense l for the word w_i.", "Standard LDAM training proceeds in two stages: first the label-distribution-aware margin loss is applied to train the model for three epochs, and then the B-reweighting loss is used for further training.", "In both stages, the learning rate is fixed at 1e-5.", "For the first training stage, C is set to 0.5.", "Baseline Systems.", "Each system is trained for 20 epochs.", "AdamW (Kingma and Ba, 2015) is selected as the optimization algorithm.", "The learning rate is fixed at 1e-5 during training.", "The encoders in the biencoder framework are both initialized with BERT-base (110M parameters) or BERT-large (336M parameters) (Kenton and Toutanova, 2019).", "The experiments in which the encoders are initialized with BERT-base are run on RTX 2080 GPUs, and those with BERT-large are run on RTX 3090 GPUs.", "The average running time is 30 hours for BERT-base and 40 hours for BERT-large.", "Z-reweighting.", "To simplify the mathematical function fitting of the word #sense distribution, we first split the words into bins of a fixed size K.", "For the second, grouping stage, to simplify the reweighting strategy on the word level, we use thresholds to group the smoothed values calculated from the fitting curve given the word rank.", "In the experiments, we use weights of one decimal place as thresholds.", "The defined threshold array is
WordNet S1 uses the most common sense in WordNet and MFS uses the most frequent sense in the training dataset.", "Both baselines achieve much lower performance than previous learning-based systems, including BERT-base (Kenton and Toutanova, 2019), GlossBERT (Huang et al., 2019b) and BEM (see footnote 5).", "The sense embeddings in EWISE are fixed during training, which explains its much lower performance than GlossBERT and BEM.", "The F1 scores on MCS, LCS and zero-shot senses in the ALL test set, with 4,603, 2,650, and 1,139 test instances respectively, are reported in Table", "3. Comparing BEM with the systems using balancing strategies, only Z-reweighting achieves a performance gain on LCS and zero-shot senses while maintaining comparable overall performance.", "The details show that though Z-reweighting drops slightly on MCS, performance on LCS and zero-shot senses increases by 2.1% and 3.6%, respectively.", "Besides the Z-reweighting method, B-reweighting and LDAM also show performance improvements on LCS and zero-shot senses compared with the BEM baseline.", "However, these balanced strategies deteriorate the system's ability in (footnote 5: we use the original open-source code of Blevins and Zettlemoyer (2020): https://github.com/facebookresearch/wsd-biencoders)", "(Figure 3: F1(%) score on LCS on the ALL test dataset; x-axis: word rank.", "1-0.3k means words with rank from 1 to 0.3k belong to the first group.", "oov means the group of words not appearing in SemCor.)", "disambiguating MCS, resulting in a drop of the F1 score on the ALL dataset.", "Among all the balanced training strategies, B-resampling performs the worst.", "Sampling the words equally leads to insufficient training of the top-ranked words, which results in poor performance.", "Our Z-reweighting strategy outperforms all the other balanced training strategies, indicating that our method is effective in improving the generalization ability of the model.", "Among all the balanced strategies, only Z-reweighting outperforms the baseline system BEM on the F1 score of the ALL dataset.", "To look into how the Z-reweighting strategy works, we analyze the performance details for the word groups.", "Note that according to the Z-reweighting strategy, words in each group are assigned the same weight.", "The hyper-parameters are K = 300 and exponent 2 for our results.", "Under this setting, there are six groups of words from the training dataset.", "These six groups of words are sorted in decreasing frequency order.", "The remaining words belong to an oov group, in which words do not appear in the training dataset.", "We calculate the F1 scores of LCS and zero-shot senses on the ALL test set and plot the results in Figure 3 and Figure 4, respectively.", "In Figure 3, our system outperforms BEM on group one, in which the words have the highest frequency.", "The Z-reweighting strategy assigns the largest weight to this group and the F1 score improves by 3.4%.", "For groups 4 to 7, our algorithm also shows consistent improvements.", "The performance gain drops to between 1% and 0%", "in group", "2. The reason behind this is that we aim to improve the performance of all words on the WSD task and use SemEval-2007 as the development set for model selection.", "For zero-shot senses, shown in Figure 4, our system achieves improvements in five out of seven word groups, at most 24.3% for group five.", "The results show that the Z-reweighting strategy enables the model to generalize better to unseen senses.", "For words in group seven, the performance on zero-shot senses also improves, which shows that our method can further generalize to senses of unseen words.", "Each different bin number K results in a set of distinct weights for the training words.", "In our experiments, we set K = 50, 100, 200, 300, 400 and the exponent to 1 or 2.", "The performance of our system under different hyper-parameter settings is shown in Figure 5 and Table 4.", "In Figure 5, we can see that an exponent of 2 achieves higher performance than an exponent of 1 in most settings.", "This shows that training weights with a larger disparity between top- and low-ranked words result in a higher overall score.", "The best performance is achieved at K = 300 with exponent 2.", "It is interesting to see that with different hyper-parameters, the system has varying overall scores.", "When K = 400, the overall score is lowest for both exponent values.", "This indicates that a large K eliminates the weight distinctness between words during training, leading to a drop in overall performance.", "Under different hyper-parameters, we further show the F1 scores of MCS, LCS, and zero-shot senses in Table 4.", "The accuracy of MCS varies from 92.8% to 93.3% under different settings.", "For most of the groups, MCS achieves higher performance with exponent 1 than with exponent 2.", "When the gap between weights becomes larger, as the exponent changes from 1 to 2, the influences on LCS and zero-shot senses are greater than those on MCS.", "LCS achieves a best score of 54.3%, 1.4% higher than the lowest score.", "Zero-shot senses achieve a best score of 72.3%, 1.4% higher than the lowest score.", "For all the combinations, we can see improvements on LCS and zero-shot senses compared to the BEM baseline, demonstrating the effectiveness of our strategy.", "In this section, we show the influence of the backbone models in BEM.", "The encoders of BEM are initialized with BERT-base and BERT-large models, respectively, for comparison.", "For the Z-reweighting strategy, we use K = 300 and exponent 2.", "Training parameter settings are the same for the two backbone models.", "We experiment with the different balanced training strategies.", "The results are presented in Figure 6.", "From the figure we can see that BERT-large achieves better performance for the BEM and LDAM systems.", "For the B-reweighting and Z-reweighting systems, the overall scores remain almost the same.", "However, for the B-resampling strategy, the performance drops by 1%.", "Since the performance with BERT-large is nearly the same as or even worse than with BERT-base, we use BERT-base as the backbone model for training efficiency.", "In this paper, we address the problem of learning from an imbalanced training dataset on the WSD task.", "Words with a top frequency rank have more senses to disambiguate, for both MCS and LCS.", "We assume these words should be assigned larger weights during training.", "Specifically, we use a mathematical function to fit the relation between word rank and word #sense, and utilize the smoothed #sense to design the Z-reweighting strategy for the all-words English WSD task.", "The strategy leads to improved performance on LCS and zero-shot senses on standard English WSD
evaluation benchmarks.", "Furthermore, our method achieves a performance gain on the F1 score over all senses.", "The results demonstrate the effectiveness of our methods.", "This research was partially supported by the NSFC Fund (U20B2053) from the NSFC of China, the RIF (R6020-19 and R6021-20) and the GRF (16211520) from the RGC of Hong Kong, the MHKJFS (MHP/001/19) from the ITC of Hong Kong (with special thanks to HKMAAC and CUSBLT), and the Jiangsu Province Science and Technology Collaboration Fund (BZ2021065).", "We thank our colleague Tianqing Fang for providing insightful discussions and help in the research." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "objective", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "objective", "other", "other" ]
[ "Abstract Recent works have shown that supervised models often exploit data artifacts to achieve good test scores while their performance severely degrades on samples outside their training distribution.", "Contrast sets (Gardner et al., 2020) quantify this phenomenon by perturbing test samples in a minimal way such that the output label is modified.", "While most contrast sets were created manually, requiring intensive annotation effort, we present a novel method which leverages rich semantic input representation to automatically generate contrast sets for the visual question answering task.", "Our method computes the answer of perturbed questions, thus vastly reducing annotation cost and enabling thorough evaluation of models' performance on various semantic aspects (e.g., spatial or relational reasoning).", "We demonstrate the effectiveness of our approach on the popular GQA dataset (Hudson and Manning, 2019) and its semantic scene graph image representation.", "We find that, despite GQA's composi-tionality and carefully balanced label distribution, two strong models drop 1317% in accuracy on our automatically-constructed contrast set compared to the original validation set.", "Finally, we show that our method can be applied to the training set to mitigate the degradation in performance, opening the door to more robust models.", "1 1 Introduction NLP benchmarks typically evaluate in-distribution generalization, where test sets are drawn i.i.d from a distribution similar to the training set.", "Recent works showed that high performance on test sets sampled in this manner is often achieved by exploiting systematic gaps, annotation artifacts, lexical cues and other heuristics, rather than learning meaningful task-related signal.", "As a result, 1 Our contrast sets and code are available at https://github.com/yonatanbitton/AutoGenOfContrastSetsFromSceneGraphs .", "the out-of-domain performance of these models is often severely deteriorated (Jia and Liang, 2017; Ribeiro et al., 2018; Gururangan et al., 2018; Geva et al., 2019; McCoy et al., 2019; Feng et al., 2019; Stanovsky et al., 2019).", "Recently, Kaushik et al. (2019) and Gardner et al. (2020) introduced the contrast sets approach to probe out-of-domain generalization.", "Contrast sets are constructed via minimal modifications to test inputs, such that their label is modified.", "For example, in Fig. 1, replacing a fence with a wall, changes the answer from Yes to No.", "Since such perturbations introduce minimal additional semantic complexity, robust models are expected to perform similarly on the test and contrast sets.", "However, a range of NLP models severely degrade in performance on contrast sets, hinting that they do not generalize well (Gardner et al., 2020).", "Except two recent exceptions for textual datasets (Li et al., 2020; Rosen-man et al., 2020), contrast sets have so far been built manually, requiring extensive human effort and expertise.", "In this work, we propose a method for automatic generation of large contrast sets for visual question answering (VQA).", "We experiment with the GQA dataset (Hudson and Manning, 2019).", "GQA includes semantic scene graphs (Krishna et al., 2017) representing the spatial relations between objects in the image, as exemplified in Fig. 
1.", "The scene graphs, along with functional programs that represent the questions, are used to balance the dataset, thus aiming to mitigate spurious dataset correlations.", "We leverage the GQA scene graphs to create contrast sets, by automatically computing the answers to question perturbations, e.g., verifying that there is no wall near the puddle in Fig. 1.", "We create automatic contrast sets for 29K samples or 22% of the validation set.", "We manually verify the correctness of 1,106 of these samples on Mechanical Turk.", "Following, we evaluate two leading models, LXMERT (Tan and Bansal, 2019) and MAC (Hudson and Manning, 2019) on our contrast sets, and find a 1317% reduction in performance compared to the original validation set.", "Finally, we show that our automatic method for contrast set construction can be used to improve performance by employing it during training .", "We augment the GQA training set with automatically constructed training contrast sets (adding 80K samples to the existing 943K in GQA), and observe that when trained with it, both LXMERT and MAC improve by about 14% on the contrast sets, while maintaining their original validation performance.", "Our key contributions are: (1) We present an automatic method for creating contrast sets for VQA datasets with structured input representations; (2) We automatically create contrast sets for GQA, and find that for two strong models, performance on the contrast sets is lower than on the original validation set; and (3) We apply our method to augment the training data, improving both models' performance on the contrast sets.", "To construct automatic contrast sets for GQA we first identify a large subset of questions requiring specific reasoning skills (2.1).", "Using the scene graph representation, we perturb each question in a manner which changes its gold answer (2.2).", "Finally, we validate the automatic process via crowdsourcing (2.3).", "The questions in the GQA dataset present a diverse set of modelling challenges, as exemplified in Table 1, including object identification and grounding, spatial reasoning and color identification.", "Following the contrast set approach, we create perturbations testing whether models are capable of solving questions which require this skill set, but that diverge from their training distribution.", "To achieve this, we identify commonly recurring question templates which specifically require such skills.", "For example, to answer the question Are there any cats near the boat ? a model needs to identify objects in the image ( cats , boat ), link them to the question, and identify their relative position.", "We identify six question templates, testing various skills (Table 1).", "We abstract each question template with a regular expression which identifies the question types as well as the physical objects, their attributes (e.g., colors), and spatial relations.", "Overall, these regular expressions match 29K questions in the validation set ( 22%), and 80K questions in the training set ( 8%).", "We design a perturbation method which guarantees a change in the gold answer for each question template.", "For example, looking at Fig. 2, for the question template are there X near the Y?", "(e.g., Is there any fence near the players?), we replace either X or Y with a probable distractor (e.g. 
replace fence with trees).", "We use the scene graph to ensure that the answer to the question is indeed changed.", "In our example, this would entail grounding players in the question to the scene graph (either via exact match or several other heuristics such as hard-coded lists of synonyms or co-hyponyms), locating its neighbors, and verifying that none of them are trees.", "We then apply heuristics to fix syntax (e.g., changing from a singular to a plural determiner; see Appendix A.3), and verify that the perturbed sample (Table 1 columns: Question template, Tested attributes, Example; e.g., On which side is the X?)", "does not already exist in GQA.", "The specific perturbation is performed per question template.", "In question templates with two objects (X and Y), we replace X with X', such that X' is correlated with Y in other GQA scene graphs.", "In question templates with a single object X, we replace X with a textually similar X'.", "For example, in the first row of Table 1, we replace dishwasher with dishes.", "Our perturbation code is publicly available.", "This process may yield an arbitrarily large number of contrasting samples per question, as there are many candidates for replacing objects participating in questions.", "We report experiments with up to 1, 3 and 5 contrasting samples per question.", "Illustrating the perturbation process.", "Looking at Fig. 1, we see the scene-graph information: objects have bounding boxes around them in the image (e.g., zebra); objects have attributes (wood is an attribute of the fence object); and there are relationships between the objects (the puddle is to the right of the zebra, and it is near the fence).", "The original (question, answer) pair is (Is there a fence near the puddle?, Yes).", "We first identify the question template by regular expressions: Is there X near the Y, and isolate X = fence, Y = puddle.", "The answer is Yes, so we know that X is indeed near Y. We then use the existing information given in the scene graph.", "We search for an X' that is not near Y. To achieve this, we sample a random object (wall) and verify that it does not exist in the set of scene-graph objects.", "This results in a perturbed example, Is there a wall near the puddle?, and now the ground truth is computed to be No.", "Consider a different example: (Is the puddle to the left of the zebra?, Yes).", "We identify the question template Is the X Rel the Y, where X = puddle, Rel = to the left, Y = zebra.", "The answer is Yes.", "Now we can easily set Rel' = to the right, resulting in the (question, answer) pair (Is the puddle to the right of the zebra?, No).", "We highlight the following: (1) This process is done entirely automatically (we validate it in Section 2.3); (2) The answer is deterministic given the information in the scene graph; (3) We do not produce unanswerable questions.", "If we cannot find an alternative atom for which the presuppositions hold, we do not create the perturbed (question, answer) pair; (4) Grounding objects from the question to the scene graph can be tricky.", "It can involve exact match, number match (dogs in the question, and dog in the scene graph), hyponyms (animal in the question, and dog in the scene graph), and synonyms (motorbike in the question, and motorcycle in the scene graph).", "The details are in the published code; (5) The only difference between the original and the perturbed instance is a single atom: an object, relationship, or attribute.", "To verify the correctness of our automatic process, we sampled 553 images, each one with an original and a perturbed QA pair, for a total of 1,106 instances (~4% of the validation contrast pairs).", "The (image, question) pairs were answered independently by human annotators on Amazon Mechanical Turk (see Fig. 3 in Appendix A.4), oblivious to whether the question originated from GQA or from our automatic contrast set.", "We found that the workers were able to correctly answer 72.3% of the perturbed questions, slightly lower than their performance on the original questions (76.6%; see footnote 2).", "We observed high agreement between annotators (κ = 0.679).", "Our analysis shows that the human performance difference between the perturbed questions and the original questions can be attributed to the scene (footnote 2: the GQA paper reports higher human accuracy, around 90%, on their original questions.", "We attribute this difference to the selection of a subset of questions that match our templates, which are potentially more ambiguous than average GQA questions; see Section 3.)", "(Figure 2 examples: The bat the batter is holding has what color?", "Brown; The helmet has what color?", "Blue; Is there any fence near the players?)", "graph annotation errors in the GQA dataset: 3.5% of the 4% difference is caused by a discrepancy between the image and the scene graph (objects appearing in the image and not in the graph, and vice versa).", "Examples are available in Fig.
5 in Appendix A.5.", "We experiment with two top-performing GQA models, MAC (Hudson and Manning, 2018) and LXMERT (Tan and Bansal, 2019) (see footnote 3), to test their generalization on our automatic contrast sets, leading to several key observations.", "Models struggle with our contrast set.", "Table 2 shows that despite GQA's emphasis on dataset balance and compositionality, both MAC and LXMERT degraded on the contrast set: MAC 64.9% → 51.5% and LXMERT 83.9% → 67.2%, compared to only a 4% degradation in human performance.", "A full breakdown of the results by template is shown in Table 3.", "As expected, question templates that reference two objects (X and Y) result in a larger performance drop compared to those containing a single object (X).", "Questions about colors (footnote 3: MAC and LXMERT are the top two models on the GQA leaderboard with a public implementation as of the time of submission: https://github.com/airsplay/lxmert and https://github.com/stanfordnlp/mac-network/)", "had the smallest performance drop, potentially because the models' performance on such multi-class, subjective questions is relatively low to begin with.", "Training data augmentation improves models.", "Previous works tried to mitigate spurious dataset biases by explicitly balancing labels during dataset construction (Goyal et al., 2017; Zhu et al., 2016; Zhang et al., 2016) or using adversarial filtering (Zellers et al., 2018, 2019).", "In this work, we take an inoculation approach (Liu et al., 2019) and augment the original GQA training set with contrast training data, resulting in a total of 1,023,607 training samples.", "We retrain both models on the augmented training data, and observe in Table 2 that their performance on the contrast set almost matches that of the original validation set, with no loss (MAC) or only a minor loss (LXMERT) to the original validation accuracy (see footnote 4).", "These results indicate that the perturbed training set is a valuable signal, which helps the models recognize more patterns.", "Contrast Consistency.", "Our method can be used to generate many augmented questions by simply sampling more items for replacement (Section 2).", "(Footnote 4: To verify that this is not the result of training on more data, we repeated this experiment, removing the same amount of original training instances (so the final dataset size is the same as the original one), and observed very similar results.)", "This allows us to measure the contrast consistency (Gardner et al., 2020) of our contrast set, defined as the percentage of the contrast sets for which a model's predictions are correct for all examples in the set (including the original example).", "For example, in Fig. 1 the set size is 4, and only 2/4 predictions are correct.", "We experiment with 1, 3, and 5 augmentations per question with the LXMERT model trained on the original GQA training set.", "Our results (Table", "4) show that sampling more objects leads to similar accuracy levels for the LXMERT model, indicating that the quality of our contrast sets does not depend on the specific selection of replacements.", "However, we observe that consistency drops fast as the size of the contrast sets per QA instance grows, indicating that model success on a specific instance does not mean it can generalize robustly to perturbations.", "Our results suggest that both MAC and LXMERT underperform when tested out of distribution.", "A remaining question is whether this is due to model architecture or dataset design.", "Bogin et al. (2020) claim that both of these models are prone to fail at compositional generalization because they do not decompose the problem into smaller sub-tasks.", "Our results support this claim.", "On the other hand, it is possible that a different dataset could prevent these models from finding shortcuts.", "Is there a dataset that can prevent all shortcuts?", "Our automatic method for creating contrast sets allows us to ask such questions, and we believe that future work on better training mechanisms, as suggested in Bogin et al. (2020) and Jin et al. (2020), could help in building more robust models.", "We proposed an automatic method for creating contrast sets for VQA datasets that use annotated scene graphs.", "We created contrast sets for the GQA dataset, which is designed to be compositional, balanced, and robust against statistical biases.", "We observed a large performance drop between the original and augmented sets.", "As our contrast sets can be generated cheaply, we further augmented the GQA training data with additional perturbed questions, and showed that this improves models' performance on the contrast set.", "Our proposed method can be extended to other VQA datasets.", "We thank the reviewers for the helpful comments and feedback.", "We thank the authors of GQA for building the dataset, and the authors of LXMERT and MAC for sharing their code and making it usable.", "This work was supported in part by the Center for Interdisciplinary Data Science Research at the Hebrew University of Jerusalem, and research gifts from the Allen Institute for AI." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "objective", "other", "other", "other" ]
[ "This paper describes a language-independent model for fully unsupervised morphological analysis that exploits a universal framework leveraging morphological typology.", "By modeling morphological processes including suffixation, prefixation, infixation, and full and partial reduplication with constrained stem change rules, our system effectively constrains the search space and offers a wide coverage in terms of morphological typology.", "The system is tested on nine typologically and genetically diverse languages, and shows superior performance over leading systems.", "We also investigate the effect of an oracle that provides only a handful of bits per language to signal morphological type.", "Morphological analysis aims to identify languages' word-internal structures.", "Early approaches to the computational analysis of morphology modeled the structure of each language with hand-built rules, (e.g. Sproat, 1992).", "Such systems require a significant amount of work from domain experts, and while they tend to be very accurate, they also suffer from low coverage.", "Supervised and semi-supervised machine learning approaches require expert input and will suffer from out-of-vocabulary problems.", "This paper focuses primarily on fully unsupervised morphological learning, which offers the most flexibility and can be deployed for new languages with no data annotation.", "Concatenation-based morphological learning systems aim to identify morphemes or morpheme boundaries within words (Virpioja et al., 2013; Goldwater and Johnson, 2004; Creutz and Lagus, 2005, 2007; Lignos, 2010; Poon et al., 2009; Snyder and Barzilay, 2008).", "The Morpho-Challenge tasks 1 provide a set of morphologically annotated 1", "data for testing concatenation.", "However, systems designed directly for identifying morpheme boundaries are limited in that non-linear structures such as infixation cannot be well captured.", "Another approach exploits morphological relations between word pairs.", "Related words form morphological chains through processes of derivation.", "There are many such processes including affixation at the edges or middle of a word, reduplication, stem transformations, and so on.", "Of these, only edge-affixation is available to concatenation-based models, so leveraging derivation directly allows for wider cross-linguistic coverage (Schone and Jurafsky, 2001; Narasimhan et al., 2015; Soricut and Och, 2015; Luo et al., 2017; Xu et al., 2018).", "A more holistic line of work builds learning on the concept of morphological paradigms (Parkes et al., 1998; Goldsmith, 2001; Chan, 2006; Xu et al., 2018).", "Paradigms can be defined as sets of morphological processes applicable to homogeneous groups of words.", "For example, the paradigm ( NULL, -er, -est, -ly ) in English can be applied to adjectives (e.g., high, higher, highest, highly ), while ( NULL, -ing, -ed, -s, -er ) is defined over verbs (e.g, walk, walking, walked, walks, walker ).", "Paradigms have several merits.", "First, they provide a principled strategy for tackling the data sparsity problem.", "In morphologically rich languages, a single word can derive hundreds of forms most of which will be unattested in real data.", "This can be addressed by taking paradigms into account because if a word appears in part of the paradigm, it likely can appear in the rest too.", "The recent SIGMORPHON shared tasks in paradigm filling are along this line (Cot-terell et al., 2016, 2017, 2018).", "Second, paradigms can be used to identify spurious morphological 
analyses.", "For example, the words within, without, wither might be analyzed as applying suffixes -in, -out, -er to the word with , however, the paradigm ( -in, -out, -er ) is not reliable since it only applies to one single word, i.e. with .", "One thread common in previous work is the lack of consideration for characteristics of language-specific morphological typology.", "In this paper, we propose a new framework that incorporates typological awareness by explicitly modeling different morphological patterns including suffixation, prefixation, infixation, and reduplication.", "These patterns have covered most common morphological processes of the languages in the world, with the exception of templatic morphology which is not represented in the LDC-provided test sets.", "By building such universal linguistic knowledge, the model will benefit from both constraining the search space (without generating a large amount of spurious analyses) and providing a wider coverage especially for the non-linear morphological structures.", "The Morpho-Challenge tasks held between 2005 and 2010 motivated a large amount of work on unsupervised morphology learning including the Morfessor family of models.", "The Morfessor baseline system (Creutz and Lagus, 2002; Virpioja et al., 2013), an MDL model, is one of the most popular unsupervised systems for automatic morphological segmentation.", "Creutz and Lagus (2005, 2007) extend the model with the maximum a posteriori (MAP) on both observed data and the model.", "These systems only require word lists as input, which is an advantage for low-resource languages where there is no large corpus for training complex models.", "Various work has explored the idea of paradigms.", "Parkes et al. (1998) try to learn inflectional paradigms on English verbs, Goldsmith (2001, 2006) exploits the MDL principle to learn paradigms (referred to as signatures ) with a greedy search strategy, and Dreyer and Eisner (2011) adopt a semi-supervised log-linear model to identify paradigms, which requires a number of seed paradigms for training.", "However, in morphologically rich languages such as Turkish where a single paradigm can be extremely large, this method requires considerable human annotation effort.", "Ahlberg et al. (2014) use a semi-supervised approach to learn abstract paradigms from a given inflection table.", "However, the task is different from what we discuss here, which discovers inflection tables as an intermediate step.", "Xu et al. (2018) create paradigms from the results of a probabilistic model and use the reliable paradigms to prune unreliable ones and achieve promising results.", "Xu et al. (2018)'s model only deals with suffixation.", "The framework that we develop in this paper is most directly inspired by Xu et al. (2018).", "Schone and Jurafsky (2001) use semantic information to identify real morphological pairs from a set of orthographically similar word pairs.", "Similarly, Soricut and Och (2015) use orthographic information to generate candidate morphological rules, e.g., prefix : $ : in , and then use word embeddings to evaluate the qualities of the rules.", "Narasimhan et al. 
(2015) create morphological chains, e.g., (play, playful, playfully), using both orthographic information and distributional semantics by maximizing the likelihood through a log-linear model.", "One drawback of using distributional information is that it requires large text corpora to train reliable semantic vectors.", "This is a major hurdle for applying such a system to low-resource languages.", "Based on the output of Narasimhan et al. (2015)'s model, Luo et al. (2017) adopt integer linear programming (ILP) to find globally optimal paradigms, which they call morphological forests, and achieve improved performance.", "This section surveys the morphological phenomena frequently observed among the world's languages that our system is able to account for.", "Affixation is the appending of a bound morpheme, or affix, onto either end of a word and is the most common kind of morphological operation (Dryer, 2013).", "Affixes postpended to a word are called suffixes, such as -ed, -ing, -ness, or -est in English, while prefixes are prepended, such as pre- or un-, and infixes find their way into the middle of a root.", "Infixes are rarer cross-linguistically, but they do surface around the world, notably in languages like Tagalog (Malayo-Polynesian): dulot → d-in-ulot or graduate → gr-um-aduate.", "Many languages stack or nest affixes.", "English derivational morphology does this occasionally, as in anti-dis-establish-ment-ari-an-ism, or in Shona (S. Bantu) inflectional morphology, for example, ha-mu-cha-mbo-nyatso-ndi-rov-es-i=wo 'You will not cause me to be beaten' (Mugari, 2013).", "A given affix may never appear on the edge of a word since it can be obligatorily followed or preceded by more affixes.", "This can be seen in Bantu verbs, which necessarily end with a so-called final vowel morpheme (here, -a).", "Most other suffixes have to appear before the final vowel, so they are never themselves suffixes in the string sense.", "For example, given the Shona ku-pig-a 'to strike', one could form ku-pig-an-a 'to strike one another' or ku-pig-w-a 'to be stricken', but not *ku-pig-w or *ku-pig-an.", "We will refer to the disconnect between morphological suffixation and string suffixation as the final vowel problem.", "Reduplication, the doubling of all or a part of a word, is productive in many languages, especially outside modern Europe (Rubino, 2013).", "Full reduplication can indicate plural number, repeated actions, or progressive aspect in Austronesian languages such as Indonesian and Tagalog.", "In Indonesian, sometimes a whole word including its affixes is reduplicated (bangun-an–bangun-an), while other times it is only the root (deg-deg-an or berbondong-bondong).", "Partial reduplication is exemplified in Pangasinan, an Austronesian relative of Tagalog, which has more productive partial reduplication for plurals.", "It can surface on the left (plato → papláto), or it may be infixed (amigo → amimígo) (Rubino, 2001).", "Some morphology is expressed through stem changes rather than string concatenation.", "English often expresses past tense, past participles, and plurals with changes to stem vowels, sometimes in conjunction with affixation (sing–sang–sung, freeze–froze–froz-en, and goose–geese).", "Consonants can alternate as well, for example in Finnish luku → luvu-t and etsintä → etsinnä-t.", "Some changes are morphophonological because they are related to the phonology of the language and thus are somewhat predictable.", "For example, the Latin root scrib- becomes scrip-t-us in the past participle because /b/ is devoiced before /t/.", "These contrast with alternations like goose–geese, which are arbitrary: there is no moose–*meese.", "Vowel harmony is a kind of pervasive, global morphophonological pattern which forces the vowels in a word to share certain features.", "In the simplest case, this often results in affix allomorphy, where each affix has alternate forms that agree with the features in the root, or the root must agree with the affixes.", "Finnish presents a classic example of front-back vowel harmony: a word may contain front vowels (ä, ö, y) or back vowels (a, o, u) but not both.", "Suffixes have front and back allomorphs in order to agree with the stem.", "For example, contrast the front-containing suffixes after the front-containing root in liity-nt-öjä with the same suffixes after a back-containing root in liiku-nt-oja.", "In this section, we describe our framework for modeling language morphologies, including prefixation, suffixation, infixation, and full and partial reduplication.", "We also model stem changes, which typically occur at word boundaries, except for vowel changes.", "Many theories of morphology, such as paradigm-based morphology, e.g., Paradigm Function Morphology (Stump, 2001), cast morphology as a relation between word pairs.", "We adopt this perspective as the basis of our framework, except that we do not differentiate derivational morphology from inflection.", "In detail, the framework assumes morphology to be an operation that is applied to a word (root) to form another word and effects a change in meaning along some dimension, e.g., adding information such as case, number, gender, tense, or aspect.", "We denote such a morphological process with a function f.", "The function takes a root word r as input and forms a new word w, i.e.,
f(r) = w.", "Thus the task of morphology learning can be defined as searching, given a word w, for a function f and another word r such that f(r) = w.", "Here, we describe how we incorporate prefixation, suffixation, infixation, and full and partial reduplication to constrain the morphological function space.", "This improves over naive methods focusing on edit distance, which can be used to evaluate how good a morphological function is locally.", "Globally, a morphological function can be evaluated by observing its overall frequency, namely its corpus productivity in a language.", "Such a simple system would tend to hallucinate many spurious yet frequent morphological functions, which may not be possible morphologically from a richer linguistic perspective.", "Morphological patterns allow us to represent the derivation of complex words from root words.", "A prefixation pattern can be defined as <prefix>_x, where <prefix> is a specific prefix in a language, and x stands for the root.", "For example, the pattern <un->_x describes how the word unfold can be derived from fold with a prefix.", "A suffixation pattern can be defined as x_<suffix>, and an infixation pattern can be defined similarly as bx_<infix>_ex, where bx and ex are the beginning and ending parts of the root word x, and x = bx + ex.", "Reduplication functions can be defined in the same way.", "A full reduplication pattern is defined as x_x.", "A partial reduplication can be defined as bx_x (bx ≠ x) with the partial copy of x on the left, or x_ex (ex ≠ x) with the partial copy on the right.", "Table 1 shows all the morphological patterns associated with examples from different languages.", "Here, we define the stem change rules, motivated by morphophonological observations across languages, which we denote with the function g.", "We extend the capabilities of previous systems (Narasimhan et al., 2015; Xu et al., 2018) and model six transformation rules as follows: Insertion (INS) of a letter at the end of the root.", "E.g., the Spanish word quiera can be analyzed as (quer, -a, INS-i).", "Deletion (DEL) of the end letter of the root.", "E.g., using can be analyzed as (use, -ing, DEL-e).", "Gemination (GEM) of the end letter of the root.", "E.g., stopped can be analyzed as (stop, -ed, GEM-p).", "Degemination (DEG) of the end letter of the root if it is geminate.", "E.g., the Finnish word katot can be analyzed as (katto, -t, DEG-t).", "Substitution (SUB) of the end letter of the root with another.", "E.g., the word carries can be analyzed as (carry, -es, SUB-y-i).", "VowelChange (VOW) of the rightmost or leftmost vowel of the root with another.", "A morphological function is defined in two parts, the morphological pattern and the corresponding stem change, f = [<stem_change>, <morph_pat>], where <stem_change> is first applied to the root, with the output fed into the <morph_pat> to generate the derived word.", "A detailed definition can be denoted as f(r) = [g(x), <prefix>_x](r), where r is the root word to which this rule can be applied to derive another word, and g is a stem change function.", "For example, a prefixation function f(r) = [$(x), <un->_x](r) (where $(x) means no stem change applies) can be applied to the verb fold to generate the verb unfold.", "Similarly, a suffixation function f(r) = [SUB-y-i(x), x_<-ed>](r) can be applied to the verb carry to generate the verb carri-ed.", "We can define an infixation function f(r) = [($(bx), $(ex)), bx_<-um->_ex](r); when applied to the word kakain, it can generate the verb k-um-akain.", "A full reduplication function can be defined as f(r) = [($(x), $(x)), x_x](r); when applied to the word kyerɛ, it can generate the verb kyerɛ-kyerɛ.", "A partial reduplication function f(r) = [($(bx), $(x)), bx_x](r), when applied to the word kain, can generate the verb ka-kain.", "The central phase of learning involves generating potential morphological functions.", "During this phase, no stem changes are allowed, in order to limit spurious functions.", "Learning is done by comparing each word pair and postulating a function f that can explain the pair, where the function f is constrained through morphological typology as described in Section 3.", "For example, given the word pair (fold, unfold), we can postulate a prefixation function f(r) = [$(x), <un->_x](r); given the word pair (kain, kakain), we can postulate a left partial reduplication function f(r) = [($(bx), $(x)), bx_x](r).", "For affixation, including prefixation, infixation, and suffixation, a set of candidate affixes is needed before generating morphological functions.", "This can be done by comparing all possible word pairs, a method similar to that used by previous studies (e.g.,
Narasimhan et al., 2015; Xu et al., 2018).", "For prefixes, if w = s + w', where w and w' are both attested words in the word list, then s is a candidate prefix.", "We use the cardinality of the set {(w, w') : w = s + w'} to evaluate how good the candidate prefix s is.", "Similarly, for suffixes, if w = w' + s, then s is a candidate suffix.", "For infixes, if w = bw' + s + ew', where w and w' = bw' + ew' are both attested words in the word list, then s is a candidate infix.", "Finally, only the top N most frequent candidates for each affix type are selected.", "After generating all morphological functions reflecting each morphological type, searching for candidate analyses for individual words is conceptually straightforward.", "For a given word w, we find all possible morphological functions {f : f = [g, m]} associated with a root word r, such that w = f(r).", "For example, the word reread can be analyzed as <re->_x, bx_<re->_ex, and bx_x.", "This is somewhat complicated by the need to find possible morphophonological (stem change) rules on the root words.", "The basic idea is that when checking a possible prefixation pattern, for example w = s + w', rather than assuming w' is an attested word, we assume that if there is an attested word w'' and a potential stem change rule g such that w' = g(w''), then <s>_x is a potential prefixation pattern for w.", "We can easily create an index based on the attested words to accelerate the searching process.", "Searching for suffixation and infixation can be done in a similar way.", "For reduplication, we use a similar strategy.", "If w = bw' + w', i.e., a word w can be decomposed into another word w' plus a string prefix of w' on the left, then we postulate a partial reduplication pattern for the word w, i.e., bx_x.", "If w = w' + ew', then x_ex can be generated.", "For example, given that the word reread = re + read and read is itself a word, we can hypothesize that the word follows the pattern bx_x.", "For full reduplication, if a word w = w' + w', where w' is another word, then a morphological pattern x_x can be generated for w.", "For more complicated cases, we extend the search for reduplication of individual words with possible stem change rules.", "For partial reduplication, if a word w = s + w', and there is a stem change rule g such that s = g(bw'), then we can also postulate a partial reduplication pattern for w, with a stem change rule on bw'.", "Similarly, if a word w = s + s', and there is a stem change function g and an attested word w' such that s' = g(w') and s = bw', then we can also postulate a partial reduplication pattern for w with a stem change rule on w'.", "For full reduplication, if a word w = s + s', and there are (up to) two stem change functions g and g' and a word w' such that s = g(w') and s' = g'(w'), then we can postulate a full reduplication pattern for w.", "A large number of spurious candidate analyses will be generated once we allow stem change rules.", "However, some candidate analyses can be ruled out given other candidates.", "For example, the word say-ing can be analyzed as (say, $, x_<-ing>), but also as (says, DEL-<s>, x_<-ing>); the latter is unnecessary given the former and a heuristic that prefers analyses without stem changes over those with stem changes.", "So, to further decrease the search space, we employ a set of heuristics to eliminate some of the candidate analyses before the next step (see footnote 2).", "They follow a principle of parsimony: namely, once a simpler analysis is generated, the more complicated related ones will be excluded.", "5 Disambiguation with a Probabilistic Model. After generating all candidate analyses for a given word, we evaluate how good each candidate is so we can choose the best one as the final analysis.", "We compute the conditional probability of a candidate analysis [g, m](r) given a word w (with w = [g, m](r)), namely P(r, g, m | w).", "P(r, g, m | w) = 0 if [g, m](r) ≠ w.", "Otherwise, we use the following formula to calculate this probability.", "The probabilities in this model can be estimated using EM, initialized by counting all the candidate analyses of all words in the word list and assuming that each candidate has the same probability.", "(Footnote 2: The details will be given in a separate document with the code, which will be made publicly available before the conference.)", "We extend Xu et al. (2018)'s work and use statistically reliable paradigms for filtering unreliable ones.", "In detail, a paradigm is defined by Xu et al. (2018) over a set of suffixes.", "Here, we extend this definition to a mixture of different types of morphological processes, i.e., M = {m}, that can be applied to the same set of roots R = {r} to form a paradigm.", "Formally, a paradigm is defined as p = R × M.", "Finally, the paradigms of size at least 2 × 2 are selected as reliable ones, namely at least two morphological patterns supported by at least two roots.", "Similar to Xu et al.
(2018), stem changes are not part of the paradigm since they are generally independent processes.", "After finding possible paradigms, we use the same method for pruning unreliable paradigms.", "Given an unreliable paradigm p = R × M, the intersection of the morphological pattern set M and the set M_i of each reliable paradigm p_i is computed, i.e., M'_i = M ∩ M_i, and the one with the best score, e.g., M'_k, is chosen as the pruned result, i.e., p' = R × M'_k.", "Finally, the score of an intersection M'_i is the sum of the frequencies of all the morphological patterns in the intersection, as shown in Equation 3.", "After the one-step roots of all the words are found, morphological derivations (e.g., sterile, sterilize, sterilizing) are automatically generated iteratively by our system, as well as final segmentations (e.g., steril-iz-ing).", "As described in the next section, because evaluation will be based on morpheme boundary identification, generating such a segmentation is necessary.", "We compare our model with Morfessor (Virpioja et al., 2013), the most popular baseline, Morpho-Chain (MC) (Narasimhan et al., 2015) and its improved version, Morph-Forest (MF) (Luo et al., 2017), and ParaMA (PMA) (Xu et al., 2018).", "We evaluate the models with segmentation points (boundaries of morphemes), the same metric used by Narasimhan et al. (2015) and Xu et al. (2018).", "We run our model in two different settings.", "In (Table 2: Number of word types for training and testing, corpus size for training word vectors (only for the Morpho-Chain and Morph-Forest systems), and the morphological features for each language; pref: prefixation, suf: suffixation, inf: infixation, red: full reduplication, lred: left reduplication, fv: final vowel. Lang / Train / Test / Corpus / Morphology: Aka 74K / 2K / 3M / pref, suf, red; Hin 487K / 2K / 28M / pref, suf, red; Hun 4,390K / 2K / 574M / pref, suf; Ind 525K / 2K / 19M / pref, suf, inf, red; Rus 1,485K / 2K / 1,068M / pref, suf; Spa 564K / 2K / 24M / pref, suf; Swa 224K / 2K / 4M / pref, suf, red, fv; Tag 13K / 2K / 5M / pref, suf, inf, lred, red; Tam 2,363K / 2K / 47M / pref, suf, red.)", "the primary experiment, we run it as a fully unsupervised model (FU), assuming all possible typological features.", "In a secondary experiment, each language's morphological typology is provided by an oracle so that the model searches only the relevant patterns per language (U+T).", "A vowel inventory is also provided so that our system can discover the vowel change rules described in Section 4.3.", "MC and MF are run in two different configurations, one with semantic vectors (+v) and the other without vectors (more comparable to Morfessor, ParaMA, and our system).", "We conduct the experiments with a data set containing 9 languages from diverse language families (Mott et al., 2020).", "The details of the data sets, including the typological features for each language and the size of the corpus used for training word vectors, are shown in Table 2.", "The word lists used for training are extracted from the language pack created under the DARPA LORELEI (LOw REsource Languages and Emergent Incidents) program.", "The gold standard data, soon to be released by LDC, is annotated only with morpheme segmentations, and no data annotation was used in training.", "The languages with non-Latin scripts were romanized with the tools provided in the package.", "Results are presented in Figure 1.", "The details are shown in Table 3 and Table 4.", "Both our fully unsupervised model (FU) and the model with given typology (U+T) achieve higher average F1 than previous work by a large margin, the highest on five of nine languages, and competitive results overall on the other four.", "Of the two systems, the typology feature oracle provided only slightly better average performance than the fully unsupervised one.", "As expected, (Figure 1: Comparison of the different systems, Morfessor, MC, MF, MC+vec, MF+vec, ParaMA, U+TYPL, and FU, in F1 scores on the nine languages and their average.)", "given the very low-resource setting, the vector configuration harms the performance of both MC and MF in languages such as Akan, Spanish, Swahili and Tagalog.", "Even though Russian has a larger corpus, the vectors still harm performance, which we believe is due to its complicated morphology, which demands many examples to train reliable vectors.", "While having separate patterns for each morphology type seems to improve numbers, oracle information improves results only slightly, mostly on Hindi, Tagalog, and Tamil.", "Interestingly, the performance on Swahili is noticeably decreased.", "Based on detailed observation, this is due to our infixation search providing an unexpected benefit for Swahili, a language with no linguistic infixation but with the final vowel pattern, by allowing us to capture string-internal linguistic suffixes, as (Figure 2: The performance of our model with oracle typological features (U+T) and with only prefixation and suffixation (Pref+Suf); F1 per language, U+T vs. Pref+Suf: Aka 0.68/0.683, Hin 0.432/0.411, Hun 0.554/0.554, Ind 0.686/0.67, Rus 0.493/0.493, Spa 0.473/0.473, Swa 0.512/0.512, Tag 0.587/0.522, Tam 0.446/0.445, Avg 0.541/0.529.)", "in the passive suffix -w- extracted from the verb kunyang'anywa, here as bx_<w>_ex.", "In all, the performance of our model in either mode is better than that of the other systems we tested.", "To test the contribution of morphological patterns other than prefixation and suffixation, we perform an ablation study, running the system with only prefixation and suffixation enabled.", "The results are shown in Figure 2.", "First, most of the performance for most languages is due to prefixation and suffixation, since these are predominant for most languages.", "However, performance decreases measurably for Tagalog, Indonesian and Hindi due to the presence of more complex morphological patterns.", "This shows that modeling morphological features other than prefixation and suffixation has important benefits for languages with complicated morphology.", "Our system, in both its configurations, achieves the highest average performance among those tested.", "It has other advantages as well.", "Firstly, although our model is evaluated in terms of morpheme boundaries, it produces much richer structures than that.", "It determines how a complex word is derived from another one through a particular morphological process such as prefixation, suffixation, infixation, or full or partial reduplication.", "In comparison, other systems including Morpho-Chain, Morph-Forest, and ParaMA only deal with prefixes and suffixes.", "Our experiments, as shown in Figure 2, indicate that modeling morphological patterns/processes other than prefixation and suffixation is useful.", "Systems that directly find morpheme boundaries, such as Morfessor, are not aware of the particular morphological processes that a word's derivation goes through.", "So for infixed words, for example, even if the morpheme boundaries are correctly identified by such systems, they will
"By modeling different types of morphological structures, our system can be used to study the productivity of each morphological process and can thus support quantitative analyses for theoretical morphological studies in linguistics.", "Figure 3 shows the number of instances of each type of morphological process generated by our fully unsupervised model.", "Figure 3: Normalized distribution of morphological patterns discovered by our unsupervised model for each language (top) and zoomed in on the less frequent patterns (bottom).", "Suffixation and prefixation are the most common processes.", "Most of our test languages exhibit more suffixation than prefixation, but Swahili has more prefixation than suffixation, as expected for a Bantu language.", "Figure 3 also shows that reduplication is rarer than other affixation.", "However, our model does discover full and left-partial reduplication successfully in languages that exhibit it.", "For example, about 1% of Akan words and fewer than 1% of Indonesian, Swahili, and Tagalog words were analyzed with full or partial reduplication.", "Infixation is challenging to identify correctly because infixes can appear in almost any position inside a word and therefore generate a large search space.", "Our unsupervised system uses infixation to represent both true morphological infixation, as in Tagalog, and word-internal agglutinative suffixation, as in Swahili, Hindi, and Tamil.", "This hurts the performance for Hindi and Tamil, but provides a benefit for Swahili, as discussed above.", "Finally, our system is fast, typically completing in several minutes, similar to ParaMA.", "Other systems, including Morfessor, MC, and MF, typically require several hours, or even days on longer word lists.", "In this paper, we develop a model for morphological analysis that exploits typological features to achieve the best performance on a wide range of languages.", "The tool is publicly available here: https://github.com/xuhongzhi/ParaMA2.", "This unsupervised model can be quickly and easily extended to novel languages without data annotation or expert input.", "Combined with the ability to process infixation and reduplication, our system improves access for geographically diverse low-resource languages.", "Although the evaluation is based on segmentation points, our model outputs much richer structure.", "It can also tell us the productivity of each morphological process and can thus provide much deeper knowledge of the morphological structures of languages.", "Our next step will be to automate the determination of language typology, yielding somewhat better performance with a system that requires no human intervention per language at all.", "Future work will aim to extend the current model to capture particularly challenging morphological patterns such as templatic non-concatenative morphology and polysynthetic composition.", "We thank the rest of the University of Pennsylvania's LORELEI research team for the helpful discussions.", "We also thank the anonymous reviewers for their valuable and constructive comments for improving our paper.", "This research was funded by the DARPA LORELEI program under
Agreement No.", "HR0011-15-2-0023." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "other", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages.", "However, the current multilingual translation paradigm often makes the model tend to preserve the general knowledge, but ignore the language-specific knowledge.", "Some previous works try to solve this problem by adding various kinds of language-specific modules to the model, but they suffer from the parameter explosion problem and require specialized manual design.", "To solve these problems, we propose to divide the model neurons into general and language-specific parts based on their importance across languages.", "The general part is responsible for preserving the general knowledge and participating in the translation of all the languages, while the language-specific part is responsible for preserving the language-specific knowledge and participating in the translation of some specific languages.", "Experimental results on several language pairs, covering IWSLT and Europarl corpus datasets, demonstrate the effectiveness and universality of the proposed method.", "Neural machine translation(NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bah-danau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) has shown its superiority and drawn much attention in recent years.", "Although the NMT model can achieve promising results for high-resource language pairs, it is unaffordable to train separate models for all the language pairs since there are thousands of languages in the world (Tan et al., 2019; Aharoni et al., 2019; Arivazhagan et al., 2019).", "A typical solution to reduce the model size Corresponding author: Yang Feng.", "and the training cost is to handle multiple languages in a single multilingual neural machine translation (MNMT) model (Ha et al., 2016; Firat et al., 2016; Johnson et al., 2017; Gu et al., 2018).", "The standard paradigm of MNMT proposed by Johnson et al. 
(2017) contains a language-shared encoder and decoder with a special language indicator in the input sentence to determine the target language.", "Because different languages share all of the model parameters in the standard MNMT model, the model tends to converge to a region with low errors for all the languages.", "Therefore, the MNMT model trained on the combined data generally captures the general knowledge but ignores the language-specific knowledge, rendering itself sub-optimal for the translation of a specific language (Sachan and Neubig, 2018; Blackwood et al., 2018; Wang et al., 2020b).", "To retain the language-specific knowledge, some research turns to augmenting the NMT model with language-specific modules, e.g., the language-specific attention module (Blackwood et al., 2018), decoupled multilingual encoders and/or decoders (Vazquez et al., 2019; Escolano et al., 2020), and the lightweight language adapters (Bapna and Firat, 2019).", "However, these methods suffer from the parameter increment problem, because the number of parameters increases linearly with the number of languages.", "Besides, the structure, size, and location of the modules have a large influence on the final performance, which requires specialized manual design.", "As a result, these problems often prevent the application of these methods in some scenarios.", "Based on the above, we aim to propose a method that can retain the general and language-specific knowledge and keep a stable model size as the number of language pairs increases, without introducing any specialized module.", "To achieve this, we propose to divide the model neurons into two parts based on their importance: the general neurons, which are used to retain the general knowledge of all the languages, and the language-specific neurons, which are used to retain the language-specific knowledge.", "Specifically, we first pre-train a standard MNMT model on all language data and then evaluate the importance of each neuron in each language pair.", "According to their importance, we divide the neurons into the general neurons and the language-specific neurons.", "After that, we fine-tune the translation model on all language pairs.", "In this process, only the general neurons and the corresponding language-specific neurons for the current language pair participate in training.", "Experimental results on different languages show that the proposed method outperforms several strong baselines.", "Our contributions can be summarized as follows: We propose a method that can improve the translation performance of the MNMT model without introducing any specialized modules or adding new parameters.", "We show that similar languages share some common features that can be captured by some specific neurons of the MNMT model.", "We show that some modules tend to capture the general knowledge while other modules are more essential for capturing the language-specific knowledge.", "In this section, we will give a brief introduction to the Transformer model (Vaswani et al., 2017) and to multilingual translation.", "We denote the input sequence of symbols as x' = (x_1, . . . , x_J), the ground-truth sequence as y = (y_1, . . . , y_K), and the translation as ŷ = (ŷ_1, . . . , ŷ_K).", "Transformer is a stacked network with N identical layers containing two or three basic blocks in each layer.", "For a single layer in the encoder, it consists of a multi-head self-attention and a position-wise feed-forward network.", "For a single decoder layer, besides the above two basic blocks, a multi-head cross-attention follows the multi-head self-attention.", "The input sequence x will first be converted to a sequence of vectors and fed into the encoder.", "Then the output of the N-th encoder layer will be taken as the source hidden states and fed into the decoder.", "The final output of the N-th decoder layer gives the target hidden states used to translate the target sentences.", "In the standard paradigm of MNMT, all parameters are shared across languages and the model is jointly trained on multiple language pairs.", "We follow Johnson et al. (2017) to reuse standard bilingual NMT models for multilingual translation by altering the source input with a language token ⟨lang⟩, i.e., changing x' to x = (⟨lang⟩, x_1, . . . , x_J).",
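As a minimal illustration of this paradigm, the snippet below prepends a language token to a tokenized source sentence before it enters the shared encoder; the exact token format (<2en> etc.) is an assumption for illustration.

```python
def add_language_token(source_tokens, target_lang):
    """Turn x' = (x_1, ..., x_J) into x = (<lang>, x_1, ..., x_J) so a
    single shared encoder/decoder knows which target language to produce."""
    return [f"<2{target_lang}>"] + source_tokens

# e.g. add_language_token(["Guten", "Morgen"], "en") -> ["<2en>", "Guten", "Morgen"]
```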
"Our goal is to build a unified model which can achieve good performance on all language pairs.", "The main idea of our method is that different neurons have different importance for the translation of different languages.", "Based on this, we divide them into general and language-specific ones and make the general neurons participate in the translation of all the languages, while the language-specific neurons focus on some specific languages.", "Specifically, the proposed approach involves the following steps, shown in Figure 1.", "Figure 1: The whole training process of the proposed method.", "First, we pre-train the model on the combined data of all the language pairs, following the normal paradigm of Johnson et al. (2017).", "Second, we evaluate the importance of different neurons on these language pairs and allocate them into general neurons and language-specific neurons.", "Last, we fine-tune the translation model on the combined data again.", "It should be noted that, for a specific language pair, only the general neurons and the language-specific neurons for this language pair participate in the forward and backward computation when the model is trained on this language pair.", "Other neurons are zeroed out during both training and inference.", "The basic idea of importance evaluation is to determine which neurons are essential to all languages and which neurons are responsible for some specific languages.", "For a neuron i, its average importance I across language pairs is defined as follows: I(i) = (1/M) Σ_{m=1}^{M} Θ_m(i), (1) where Θ(·) denotes the importance evaluation function and M denotes the number of language pairs.", "This value correlates positively with how important the neuron is to all languages.", "For the importance evaluation function Θ(·), we adopt two schemes: one is based on the Taylor expansion and the other is based on the absolute value.", "Taylor Expansion: We adopt a criterion based on the Taylor expansion (Molchanov et al., 2017), where we directly approximate the change in loss when removing a particular neuron.", "Let h_i be the output produced by neuron i and let H represent the set of other neurons.", "Assuming the independence of each neuron in the model, the change of loss when removing a certain neuron can be represented as: |ΔL(h_i)| = |L(H, h_i = 0) − L(H, h_i)|, (2) where L(H, h_i = 0) is the loss value if neuron i is pruned and L(H, h_i) is the loss if it is not pruned.", "For the function L(H, h_i), its Taylor expansion at the point h_i = a is: L(H, h_i) = Σ_{n=0}^{N} (L^(n)(H, a) / n!) (h_i − a)^n + R_N(h_i), (3) where L^(n)(H, a) is the n-th derivative of L(H, h_i) evaluated at the point a and R_N(h_i) is the N-th remainder.", "Then, approximating L(H, h_i = 0) with a first-order Taylor polynomial where h_i equals zero: L(H, h_i = 0) = L(H, h_i) − (∂L(H, h_i)/∂h_i) · h_i − R_1(h_i), (4) where the remainder R_1(h_i) can be written in the Lagrange form with some ξ ∈ (0, 1).", "Considering the use of the ReLU activation function (Glorot et al., 2011) in the model, the first derivative of the loss function tends to be constant, so the second-order term tends to be zero at the end of training.", "Thus, we can ignore the remainder and get the importance evaluation function as follows: Θ_TE(i) = |ΔL(h_i)| = |(∂L(H, h_i)/∂h_i) · h_i|.", "In practice, we need to accumulate the product of the activation and the gradient of the objective function w.r.t. the activation, which is easily computed during back-propagation.", "Finally, the evaluation function is given as: Θ_TE^m(i_l) = (1/T_m) Σ_t |(∂L(H, h_i^l)/∂h_i^l) · h_i^l|, (7) where h_i^l is the activation value of the i-th neuron of the l-th layer and T_m is the number of training examples of language pair m.", "The criterion is computed on the data of language pair m and averaged over T_m.", "Absolute Value: We adopt the magnitude-based neuron importance evaluation scheme (See et al., 2016), where the absolute value of each neuron's activation is treated as its importance: Θ_AV^m(i_l) = (1/T_m) Σ_t |h_i^l|. (8)", "The notations in the above equation are the same as those in Equation 7.",
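A sketch of how both importance criteria could be accumulated in PyTorch; the hook-based bookkeeping, module selection, and shape conventions are our own assumptions rather than the authors' released implementation.

```python
import torch

def accumulate_importance(model, batches, loss_fn, modules):
    """Accumulate Θ_TE (|∂L/∂h · h|, equation 7) and Θ_AV (|h|, equation 8)
    for every neuron of the given modules over the examples of one language
    pair. `modules` maps names to sub-modules whose outputs define the
    neurons (e.g. the inner FFN layer of each block)."""
    te, av, acts, n = {}, {}, {}, 0

    def make_hook(name):
        def hook(module, inputs, output):
            output.retain_grad()   # keep ∂L/∂h for the Taylor criterion
            acts[name] = output
        return hook

    handles = [m.register_forward_hook(make_hook(k)) for k, m in modules.items()]
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, h in acts.items():
            dims = tuple(range(h.dim() - 1))  # sum over batch/positions
            te[name] = te.get(name, 0) + (h.grad * h).abs().sum(dims)
            av[name] = av.get(name, 0) + h.abs().sum(dims)
        n += x.size(0)
    for handle in handles:
        handle.remove()
    return ({k: v / n for k, v in te.items()},
            {k: v / n for k, v in av.items()})
```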
"After the importance of each neuron is evaluated on the combined data, we need to determine the role of each neuron in the fine-tuning step, following the method in the next section.", "In this step, we should determine which neurons are shared across all the language pairs and which neurons are shared only by some specific language pairs.", "General Neurons: According to the overall importance I(i) in Equation 1, the value correlates positively with how important the neuron is to all languages.", "Therefore, we rank the neurons in each layer based on their importance and take the top percentage as general neurons, which are responsible for capturing the general knowledge.", "Language-specific Neurons: Next, we regard the neurons other than the general neurons as the language-specific neurons and determine which language pairs to assign them to.", "To achieve this, we compute an importance threshold for each neuron: Θ(i) = k · max(Θ_m(i)), m ∈ {1, . . . , M}, k ∈ [0, 1], (9) where max(Θ_m(i)) denotes the maximum importance of this neuron over all language pairs and k is a hyper-parameter.", "The neuron will be assigned to the language pairs whose importance is larger than the threshold.", "When the importance of the neurons is determined, the number of language pairs associated with each neuron can be adjusted according to k.", "The smaller the k, the more language pairs will be associated with the specific neurons.", "In this way, we flexibly determine the language pairs assigned to each neuron according to its importance in different languages.", "Note that the neuron allocation is based on the importance per language pair.", "We have also tried other allocation variants, e.g., based on the source language or the target language, and find that the language-pair-based method is the best among these methods.", "The detailed results are listed in Appendix A.", "After this step, the model is continually fine-tuned on the combined multilingual data.", "If the training data is from a specific language pair, only the general neurons and the language-specific neurons for this language pair will participate in the forward computation, and only the parameters associated with them will be updated during the backward propagation.",
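Once the per-language-pair importance matrix is available, the allocation reduces to simple thresholding; the following sketch reflects our reading of the procedure, using the paper's reported settings (90% general neurons, k = 0.7) as defaults and an importance tensor of shape (num_pairs, num_neurons).

```python
import torch

def allocate_neurons(importance, general_ratio=0.9, k=0.7):
    """Split the neurons of one layer into general and language-specific ones.
    `importance[m, i]` holds Θ_m(i) for language pair m and neuron i.
    The top `general_ratio` of neurons by average importance (equation 1)
    become general; every remaining neuron is assigned to the language
    pairs whose importance reaches k * max_m Θ_m(i) (equation 9)."""
    avg = importance.mean(dim=0)                      # I(i), equation 1
    n_general = int(general_ratio * importance.size(1))
    is_general = torch.zeros(importance.size(1), dtype=torch.bool)
    is_general[torch.topk(avg, n_general).indices] = True

    threshold = k * importance.max(dim=0).values      # Θ(i), equation 9
    assigned = importance >= threshold                # (num_pairs, num_neurons)
    masks = assigned | is_general                     # per-pair binary masks
    return is_general, masks.float()
```

During fine-tuning on a batch from language pair m, masks[m] would be multiplied onto the layer's activations so that only the permitted neurons take part in the forward and backward computation, with all other neurons zeroed out.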
"In this section, we describe the datasets used in our experiments on many-to-many and one-to-many multilingual translation scenarios.", "Many-to-Many: For this translation scenario, we test our approach on the IWSLT-17 translation datasets (https://sites.google.com/site/iwsltevaluation2017), including English, Italian, Romanian, and Dutch (briefly, En, It, Ro, Nl).", "We experimented in eight directions, covering the pairs It↔En, Ro↔En, Nl↔En, and It↔Ro, with 231.6k, 220.5k, 237.2k, and 217.5k sentence pairs per language pair.", "We choose test2016 and test2017 as our development and test set, respectively.", "Sentences of all languages were tokenized with the Moses scripts (http://www.statmt.org/moses/) and further segmented into subword symbols using Byte-Pair Encoding (BPE) rules (Sennrich et al., 2016) with 40K merge operations learned jointly for all languages.", "One-to-Many: We evaluate the quality of our multilingual translation models using training data from the Europarl Corpus (http://www.statmt.org/europarl/), Release V7.", "Our experiments focus on English to twelve primary languages: Czech, Finnish, Greek, Hungarian, Lithuanian, Latvian, Polish, Portuguese, Slovak, Slovene, Swedish, and Spanish (briefly, Cs, Fi, El, Hu, Lt, Lv, Pl, Pt, Sk, Sl, Sv, Es).", "For each language pair, we randomly sampled 0.6M parallel sentences as the training corpus (7.2M in all).", "The Europarl evaluation data set dev2006 is used as our validation set, while devtest2006 is our test set.", "For language pairs without an available development and test set, we randomly split 1K unseen sentence pairs from the corresponding training set as the development and test data, respectively.", "We tokenize and true-case the sentences with the Moses scripts and apply a jointly learned set of 90k BPE merges obtained from the merged source and target sides of the training data for all twelve language pairs.", "To make the evaluation convincing, we reimplement and compare our method with four baseline systems, which can be divided into two categories with respect to the number of models.", "The multiple-model approach requires maintaining a dedicated NMT model for each language.", "Table 1 (excerpt): BLEU scores on the many-to-many task for the directions It→En, En→It, Ro→En, En→Ro, Nl→En, En→Nl, It→Ro, and Ro→It, with the average (AVE) and the parameter count (Para). Individual: 34.99, 31.22, 28.58, 23.19, 30.21, 27.69, 19.52, 20.95, 27.04, 466.4M. Multilingual: 37.55, 32.62, 31.58, 24.64, 31.13, 28.86, 20.82, 23.79, 28.87, 64.69M. +TS: 38.11, 33.46, 31.82, 24.96, 32.04, 30.06, 21.43, 23.59, 29.43, +0.",
"Individual: An NMT model is trained for each language pair.", "Therefore, there are N different models for N language pairs.", "The unified-model-based methods handle multiple languages within a single unified NMT model.", "Multilingual (Johnson et al., 2017): Handling multiple languages in a single Transformer model, which contains one encoder and one decoder, with a special language indicator ⟨lang⟩ added to the input sentence.", "+TS (Blackwood et al., 2018): This method assigns language-specific attention modules to each language pair.", "We implement the target-specific attention mechanism because of its excellent performance in the original paper.", "+Adapter (Bapna and Firat, 2019): This method injects tiny adapter layers for specific language pairs into the original MNMT model.", "We set the dimension of the projection layer to 128 and train the model from scratch.", "Our Method-AV: Our model is trained just as the Approach section describes.", "In this system, we adopt the absolute-value-based method to evaluate the importance of neurons across languages.", "Our Method-TE: This system is implemented the same as Our Method-AV, except that we adopt the Taylor-expansion-based evaluation method shown in Equation 7.", "+Expansion: To make a fair comparison, we set the size of the feed-forward network to 3000 to expand the model capacity up to the level of the other baselines, and then apply our Taylor-expansion-based method to this model.", "For fair comparisons, we implement the proposed method and the other contrast methods on the advanced Transformer model using the open-source toolkit Fairseq-py (Ott et al., 2019).", "We follow Vaswani et al. (2017) to set the configurations of the NMT model, which consists of 6 stacked encoder/decoder layers with a layer size of 512.", "All the models were trained on 4 NVIDIA 2080Ti GPUs, where each GPU was allocated a batch size of 4,096 tokens for the one-to-many scenario and 2,048 tokens for the many-to-many scenario.", "We train the baseline model using the Adam optimizer (Kingma and Ba, 2015) with β_1 = 0.9, β_2 = 0.98, and ε = 10^-9.", "The proposed models are further trained with the corresponding parameters initialized by the pre-trained baseline model.", "We vary the hyper-parameter that controls the proportion of general neurons in each module from 80% to 95% and set it to 90% in our main experiments according to the performance.", "The detailed results about this hyper-parameter are listed in Appendix B.", "We set the hyper-parameter k to 0.7 and do more analysis on it in Section 5.3.",
We set the hyper-parameter k to 0 .", "7 and do more analysis on it in Section 5.3.", "For evaluation, we use beam search with a beam size of 4 and length penalty = 0 .", "6 .", "The final translation is detokenized and then the quality is evaluated using the 4 -gram case-sensitive", "BLEU (Papineni et al., 2002) with the SacreBLEU tool (Post, 2018).", "4 Many-to-Many The results are given in Table 1.", "We can see that the improvements brought by +TS and +Adapter methods are not large.", "For the +TS method, attention module may be not essential to capture language-specific knowledge, and thus it is difficult to converge to good optima.", "For the +Adapter method, adding an adapter module to the end of each layer may be not appropriate for some languages and hence has a loose capture to the specific features.", "In all language pairs, our method based on Taylor Expansion outperforms all the baselines in the datasets.", "Moreover, the parameters in our model are the same as the Multilingual system and less than other baselines.", "One-to-Many The results are given in Table 2, our method exceeds the multilingual baseline in all language pairs and outperforms other baselines in most language pairs without capacity increment.", "When we expand the model capacity to the level of +Adapter, our approach can achieve better translation performance, which demonstrates the effectiveness of our method.", "Another finding is that the results of the individual baseline are worse than other baselines.", "The reason may be the training data is not big enough, individual baseline can not get a good enough optimization on 0.6M sentences, while the MNMT model can be well trained with a total of 7.2M data.", "In our method, we allocate neurons based on their importance for different languages.", "The rationality behind this mechanism is that different neurons should have distinct importance values so that these neurons can find their relevant language pairs.", "Therefore, we show the importance of neurons computed by Taylor Expansion in different modules for the one-to-many (O2M) and many-to-many (M2M) translation tasks.", "For clarity and convenience, we only show the importance values of three language pairs in the sixth layer of encoder and decoder.", "are Spanish and Portuguese, both of which belong to the Western Romance, the Romance branch of the Indo-European family, while the last one is Finnish, a member of the Finnish-Ugric branch of the Ural family.", "As we can see, the importance of Spanish and Portuguese are always similar in most neurons, but there is no obvious correlation between Finnish and the other two languages.", "It indicates that similar languages are also similar in the distribution of the neuron importance, which implies that the common features in similar languages can be captured by the same neurons.", "The results of M2M are shown in Figure", "2(c) and Figure", "2(d), and the language pairs are It En, Ro It, and En Ro, whose BLEU scores are 0.67, 1, and 1.7 higher than the multilingual baseline, respectively.", "In most neurons, the highest importance value is twice as high as the lowest and this high variance of importance provides the theoretical basis for later neuron allocation.", "Moreover, we can see a lot of importance peaks of the two language pairs: Ro It and En Ro, which means that these neurons are especially important for generating the translation for these language pairs.", "However, the fluctuation of It En is flat with almost no peaks, which means only a few 
neurons are specific to this language pair.", "This may be the reason why some language pairs see higher improvements while others see lower ones.", "Except for the general neurons shared by all the language pairs, our method allocates the other neurons to different language pairs based on their importance.", "These language-specific neurons are important for preserving the language-specific knowledge.", "To better understand the effectiveness of our method, we show how these specific neurons are distributed in the model.", "To evaluate the proportion of language-specific neurons for different language pairs at each layer, we introduce a new metric, LScore, formulated as: LScore(l, m) = I_l^m / I_l, m ∈ {1, . . . , M}, (10) where I_l^m denotes the number of neurons allocated to language pair m in the l-th layer, and I_l denotes the total number of language-specific neurons in the l-th layer.", "The larger the LScore, the more neurons are allocated to the language pair m.", "We also introduce a metric to evaluate the average proportion of language-specific neurons of each language pair in different modules, which is formulated as: MScore(l, f) = (1/M) Σ_{m=1}^{M} I_{l,f}^m / I_{l,f}, (11) where I_{l,f}^m denotes the number of specific neurons for language pair m in module f of the l-th layer and M denotes the total number of language pairs.", "The larger the MScore is, the more specific neurons are allocated to different language pairs in this module.",
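The two metrics are straightforward to compute once the allocation is known; in this sketch we assume a layer's language-specific neurons are given as a mapping from neuron id to the set of language pairs it was assigned to (a hypothetical representation, chosen for clarity).

```python
def l_score(specific_neurons, pair):
    """LScore(l, m), equation 10: the fraction of the layer's
    language-specific neurons allocated to language pair `pair`.
    `specific_neurons` maps each specific neuron id to its set of pairs."""
    return (sum(pair in pairs for pairs in specific_neurons.values())
            / len(specific_neurons))

def m_score(specific_neurons, num_pairs):
    """MScore(l, f), equation 11: the per-pair allocation share of one
    module's specific neurons, averaged over all language pairs."""
    return (sum(l_score(specific_neurons, m) for m in range(num_pairs))
            / num_pairs)
```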
"As shown in Figure 3(a) and Figure 3(b), the language pairs have low LScores at the top and bottom layers and high LScores at the middle layers of both the encoder and the decoder.", "The highest LScore appears at the third or fourth layer, which indicates that the neuron importance of different language pairs is similar there and that the neurons of the middle layers are shared by more languages.", "In contrast, the bottom and top layers are more specialized for different language pairs.", "Next, from Figure 3(c) and Figure 3(d), we can see that the MScores of the attention modules are almost at 1.0, which means the neurons in self-attention and cross-attention are almost fully shared across all language pairs.", "However, the MScores of the feed-forward network (FFN) gradually decrease as the layer depth increases, which shows that the higher FFN layers are more essential for capturing the language-specific knowledge.", "When the importance of the neurons for the different languages is determined, the number of language pairs associated with each neuron can be adjusted according to k.", "Figure 4: The average BLEU over the Multilingual baseline with different hyper-parameters k on the many-to-many translation task.", "Figure 5: BLEU relative to the best performance when randomly erasing the general or language-specific neurons on the many-to-many translation task.", "When k = 1.0, the threshold is max(Θ_m(i)) as computed by Equation 9, so each neuron will only be allocated to the language pair with the highest importance; when k = 0, the threshold is 0, so the neurons will be shared across all language pairs, just like the Multilingual baseline.", "To better show the overall impact of the hyper-parameter k, we vary it from 0 to 1; the results are shown in Figure 4.", "As we can see, the translation performance of the two proposed approaches increases with the increment of k and reaches the best performance when k equals 0.7.", "As k continues to increase, the performance deteriorates, which indicates that the over-specific neurons are bad at capturing the common features shared by similar languages and will lead to performance degradation.", "The main idea of our method is to let the general knowledge and the language-specific knowledge be captured by different neurons.", "To verify whether this goal has been achieved, we conduct the following experiments.", "For the general knowledge, we randomly erase 20% of the general neurons of the best checkpoint of our method, which means we mask the output values of these neurons to 0, and then generate translations with it.", "For the language-specific knowledge, we randomly erase 50% of the specific neurons and then generate translations.", "As shown in Figure 5, when the general neurons are erased, the BLEU scores of all the language pairs drop a lot (about 15 to 20 BLEU), which indicates that the general neurons do capture the general knowledge across languages.", "For the specific neurons, we show three language pairs for the sake of convenience.", "We can see that when the neurons associated with the current language pair are erased, the performance of this language pair decreases greatly.", "However, the performance of the other language pairs only declines slightly, because the specific knowledge captured by these specific neurons is not so important for the other languages.", "Our work closely relates to language-specific modeling for MNMT and to model pruning, and we recap both here.", "Early MNMT studies focus on improving the sharing capability of individual bilingual models to handle multiple languages, which includes sharing encoders (Dong et al., 2015), sharing decoders (Zoph et al., 2016), and sharing sublayers (Firat et al., 2016).", "Later, Ha et al. (2016) and Johnson et al. (2017) propose a universal MNMT model with a target language token to indicate the translation direction.", "While this paradigm fully explores the general knowledge shared between languages, it can hardly obtain the specific knowledge of each language (Tan et al., 2019; Aharoni et al., 2019), so subsequent research resorts to language-specific modeling, trying to find a better trade-off between shared and specific knowledge.", "Such approaches involve inserting conditional language-specific routing layers (Zhang et al., 2021) or specific attention networks (Blackwood et al., 2018; Sachan and Neubig, 2018), adding task adapters (Bapna and Firat, 2019), training models with different language clusters (Tan et al., 2019), and so on.", "However, these methods increase the capacity of the model, which makes the model bloated.", "Moreover, our method is also related to model pruning, which usually aims to reduce the model size or improve the inference efficiency.", "Model pruning has been widely investigated for both computer vision (CV) (Luo et al., 2017) and natural language processing (NLP) tasks.", "For example, See et al. (2016) examine three magnitude-based pruning schemes, Zhu and Gupta (2018) demonstrate that large-sparse models outperform comparably-sized small-dense models, and Wang et al. (2020a) improve the utilization efficiency of parameters by introducing a rejuvenation approach.", "Besides, Lan et al.
(2020) present two parameter reduction techniques to lower the memory consumption and increase the training speed of BERT.", "The current standard models of multilingual neural machine translation fail to capture the characteristics of specific languages, while the latest research pursues specific knowledge at the cost of increasing the capacity of the model and requiring fine manual design.", "To solve this problem, we propose an importance-based neuron allocation method.", "We divide the neurons into general neurons and language-specific neurons to retain the general knowledge and capture the language-specific knowledge, without any increase in model capacity or specialized design.", "The experiments show that our method obtains superior translation results with better general and language-specific knowledge.", "We thank all the anonymous reviewers for their insightful and valuable comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE0192900)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "objective", "method", "result", "other", "other" ]
[ "Existing approaches to active learning maximize the system performance by sampling unlabeled instances for annotation that yield the most efficient training.", "However, when active learning is integrated with an end-user application, this can lead to frustration for participating users, as they spend time labeling instances that they would not otherwise be interested in reading.", "In this paper, we propose a new active learning approach that jointly optimizes the seemingly counteracting objectives of the active learning system (training efficiently) and the user (receiving useful in-stances).", "We study our approach in an educational application, which particularly bene-fits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user, while the users should receive only exercises that match their skills.", "We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users.", "1 1 Introduction State-of-the-art machine learning approaches require huge amounts of training data.", "But for many NLP applications, there is little to no training data available.", "Interactive NLP systems are a viable solution to alleviate the cost of creating large training datasets before a new application can be used.", "Such systems start with no or few labeled instances and acquire additional training data based on user feedback for their predictions.", "Active learning (Set-tles, 2012) is a frequently used technique to quickly maximize the prediction performance, as the system acquires user feedback in each iteration for 1 Our code and simulated learner models are available on Github: https://github.com/UKPLab/ acl2020-empowering-active-learning those instances that likely yield the highest performance improvement (e.g., because the system is yet uncertain about them).", "Active learning has been shown to reduce the amount of user feedback required while improving system performance for interactive NLP systems (P.V.S and Meyer, 2017; Gao et al., 2018) and to reduce the annotation costs in crowdsourcing scenarios (Fang et al., 2014).", "However, outside the typical annotation setup, it can be boring or frustrating for users to provide feedback on ill-predicted instances that hardly solve their needs.", "Consider a newly launched web application for learning a foreign language, which aims at suggesting exercises that match the user's proficiency according to Vygotsky's Zone of proximal development (Vygotsky, 1978).", "The underlying machine learning system starts without any data, but employs active learning to select an exercise the system cannot confidently predict.", "Then, it adjusts its model interactively based on the user's feedback.", "While the system is still uncertain, the users often receive inappropriate (e.g., too hard or too easy) exercises.", "Thus, they get the impression that the system does not work properly, which is especially harmful during the inception phase of an application, as the community opinion largely defines its success.", "In this paper, we distinguish the system objective of maximizing the prediction performance with minimal labeled instances and the user objective of providing useful instances for the user's current needs.", "For the first time, we propose an active learning approach that jointly optimizes these seemingly counteracting objectives and 
thus trades off the demands of system and user.", "The users of educational applications particularly benefit from this, as they learn most if they receive appropriate learning material, while the underlying system requires considerable training to reach acceptable performance.", "Figure 1: Overview of our interactive approach.", "We employ our new approach in a language learning platform for C-tests (i.e., cloze tests in which the second half of every second word is replaced by a gap).", "Our system successfully learns how to predict the difficulty of a C-test gap (system objective) and how to provide a C-test that is neither too easy for the current user, which would cause boredom, nor too hard, which would create frustration (user objective).", "Predicting the difficulty of an exercise and correspondingly selecting exercises that match a user's proficiency are important steps towards self-directed language learning and massive open online courses (MOOCs) on language learning.", "Though we focus on this educational use case in this paper, our approach may also yield new insights for other problems that suffer from seemingly counteracting system and user objectives, for example, interactively trained recommender systems for books, movies, or restaurants.", "Active learning.", "Active learning aims to reduce the amount of training data by intelligently sampling instances that benefit the model most (Settles, 2012).", "A distinct characteristic of active learning is that labels for sampled instances are unknown and provided by an oracle after sampling.", "Various works investigate the use of active learning for crowdsourcing, where the oracles (i.e., the crowdworkers) may provide noisy labels (Snow et al., 2008; Laws et al., 2011).", "Within the educational domain, active learning research is scarce.", "One example is the work by Rastogi et al. (2018), who propose a threshold-based sampling strategy utilizing the prediction probability and achieve a considerable speed-up without any significant performance drop.", "Hastings et al.
(2018) find that active learning can be used to efficiently train a system for providing feedback on student essays, using teachers as oracles.", "(Note that, in education, active learning often refers to a teaching paradigm which is unrelated to active learning in machine learning.)", "Horbach and Palmer (2016) report mixed results for employing active learning in short-answer grading.", "While all of these works focus on improvements of the proposed system, users only benefit after training.", "In contrast, our work explicitly models the user objective, such that users already benefit while labeling training instances.", "Adaptive learning.", "Many systems provide user adaptation, and research has shifted from pre-defined sets of adaptation rules to data-driven approaches.", "Several works investigate adaptive methods to provide exercises which are neither too hard nor too boring.", "For instance, Missura and Gärtner (2011) model learning in a game-theoretic sense, where the goal is to adjust the difficulty to be neither too easy nor too hard.", "Other works investigate adaptation in the context of testing (Zheng and Chang, 2015; Wang et al., 2016; Chaimongkol et al., 2016) and propose methods for an adaptive selection of appropriate tests for better assessing a student's proficiency.", "In a large survey, Truong (2016) discusses how to integrate different learning styles, modeling categorical student behavior, into an adaptive learning environment and emphasizes the need for more sophisticated methods.", "Despite much research on adaptive and active learning, none of the previous works consider jointly modeling and optimizing both the system and user objectives, which may retain a user's motivation and keep them from leaving the platform due to boredom or frustration.", "Figure 1 shows our proposed interactive learning setup.", "The active learning component iteratively samples instances from a pool of unlabeled data and asks the user for a label that can be used to train the machine learning system.", "Previous work on active learning focused on optimizing the system objective (blue).", "That is, only the system provides feedback to the active learning component (e.g., how certain it is about the predicted label of an instance).", "In our work, we first model the user objective (green) and propose sampling strategies that maximize the user satisfaction based on the user's feedback (e.g., the user's label for an instance).", "Finally, we study our novel joint optimization strategies (gold) that trade off the demands of the system and the users.", "Whereas we distinguish between the user's feedback (exercise-level) and labeled instances (gap-level) in our work, our proposed approach can easily be adapted to more specific cases where the (implicit) user feedback and the provided label are the same.", "In the remainder of this section, we introduce sampling strategies that select which instance should be presented to the user next.", "We use the following notation: let X be the pool of unlabeled instances.", "In every iteration of the application (e.g., when a user requests a new exercise), the sampling strategy s(v) returns an instance x ∈ X for user v.", "The user then provides a label y for instance x, potentially with additional feedback on the user's satisfaction.", "The active learning component finally removes x from its pool X and adds (x, y) to the set of labeled instances, before the system is retrained with the increased labeled training set.",
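A minimal sketch of this loop, assuming a list-based pool and placeholder `sampler`, `train`, and `user.label` callables; none of these names come from the released code.

```python
def interactive_loop(pool, user, sampler, train, iterations):
    """One pass of the interactive setup: sample an instance x for user v,
    obtain the label y from the user, move (x, y) to the labeled set,
    and retrain the system on the grown set."""
    labeled, model = [], None
    for _ in range(iterations):
        x = sampler(pool, user, model)   # s(v): pick the next instance
        y = user.label(x)                # user feedback becomes a label
        pool.remove(x)
        labeled.append((x, y))
        model = train(labeled)           # retrain on the increased set
    return model, labeled
```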
"The simplest sampling strategy that we use as a baseline is random sampling s_rand(v), which selects an x ∈ X uniformly at random, regardless of the user.", "In the following subsections, we discuss more advanced strategies that optimize the system or user objective, as well as our new joint optimization strategies.", "To optimize the system objective, we consider uncertainty sampling (Lewis and Gale, 1994).", "Uncertainty sampling assumes that the instances for which the model is least certain during prediction provide the most information for the model once their labels are known.", "The sampled instance is thus s_unc(v) = argmax_{x ∈ X} U(x), (1) where U: x ↦ [0, 1] returns the uncertainty of predicting a label for instance x.", "(Note that from a single answer, which is either correct or wrong, we cannot deduce a fine-grained gap label; to obtain such labels in a real-world setting, one may assume querying groups of users or asking them for an explicit label.)", "Like random sampling, s_unc(v) is independent of the current user v.", "A model's uncertainty can be measured in multiple different ways, for example, by the prediction probability of the predicted label (Lewis and Gale, 1994), as the difference in probabilities between the first and second most probable labels (Scheffer et al., 2001), or based on the Shannon entropy (Shannon, 1948), which considers all possible labels (Settles and Craven, 2008).", "We instantiate U for our educational application in section 4.", "The objective of users is to receive instances that meet their demands.", "We therefore define a new user-oriented sampling strategy as s_usr(v) = argmax_{x ∈ X} A(x, v), (2) where A: (x, v) ↦ [0, 1] returns the degree of appropriateness of instance x for the user v.", "In our educational application, we consider an exercise appropriate if it is neither too easy nor too difficult, as this maximizes the user's learning gain.", "To quantify A, we measure the error between the predicted label f(x) and the user's demand ρ(v) as A(x, v) = 1 − err[f(x), ρ(v)], (3) with an error function err ∈ [0, 1] (cf. section 4).", "Our first joint strategy combines uncertainty sampling and user-oriented sampling by preferring appropriate instances for user v (as in s_usr), but among them returns the one the system is most uncertain about (as in s_unc).", "For our second strategy, we aggregate both objectives into a single function, s_tos(v) = argmax_{x ∈ X} {(1 − λ) A(x, v) + λ U(x)}, (5) which is the weighted sum of user-oriented and uncertainty sampling.", "The weight parameter λ ∈ [0, 1] can be used to adjust the learning towards the system objective or the user objective.",
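Under the definitions above, the strategies differ only in their scoring function; the sketch below assumes callables U(x) and A(x, v) for uncertainty and appropriateness, with illustrative signatures.

```python
import random

def s_rand(pool, user, U, A):
    """Baseline: uniform random sampling, independent of the user."""
    return random.choice(pool)

def s_unc(pool, user, U, A):
    """Uncertainty sampling (equation 1): the most uncertain instance."""
    return max(pool, key=U)

def s_usr(pool, user, U, A):
    """User-oriented sampling (equations 2-3): the most appropriate
    instance, i.e. the one minimizing err[f(x), rho(v)]."""
    return max(pool, key=lambda x: A(x, user))

def s_tos(pool, user, U, A, lam=0.5):
    """Trade-off sampling (equation 5): a weighted sum of both objectives;
    the weight lam in [0, 1] shifts between user and system objective."""
    return max(pool, key=lambda x: (1 - lam) * A(x, user) + lam * U(x))
```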
"In their proposed gap scheme, every second word is turned into a gap by removing the latter half of its characters.", "In contrast to cloze tests, C-tests do not require any distractors, since the first half of the word remains as a hint.", "Solving C-tests requires orthographic, morphologic, syntactic, and semantic competencies as well as general vocabulary knowledge (Chapelle, 1994).", "C-tests can be easily created automatically by choosing an arbitrary text and introducing the gaps as described above.", "Because of the context and the kept word prefixes, C-test gaps typically only allow for a single solution (given by the original text) and therefore do not require manual correction.", "The biggest challenge, however, lies in controlling the difficulty of the text and the derived C-test with its gaps as we have shown in previous work (Lee et al., 2019).", "System objective.", "Given a large pool X of C-tests x X with n gaps g i x , 1 i n , the system objective is to learn a classifier d ( g ) LD to judge the gap difficulty of gaps g x with minimal training data.", "As the difficulty classes LD , we use the four labels very easy , easy , hard , and very hard proposed by Beinborn (2016).", "These four classes are based on the mean error rates e ( g ) of a gap g observed across all users.", "Figure 2 shows the mapping between the mean error rates e ( g ) and the four gap difficulty classes LD .", "Data.", "For our experiments, we obtained 3,408 solutions to English C-tests from our university's language center.", "Each participant solved five C-very easy easy hard very hard [0, 0.25[ [0.25, 0.5[ [0.5, 0.75[ [0.75, 1] Figure 2: Gap difficulty classes and error rate ranges tests with 20 gaps each (i.e., 100 gaps per solution).", "The five C-tests vary across the participants based on a set of 74 different C-tests in total.", "We filter out answers from 22 participants who either did not provide any correct answer or only filled out the first of the five C-tests.", "Based on this dataset, we derive the ground-truth labels for the gap difficulty classification d ( g ) based on figure 2.", "Aggregated instances.", "In contrast to Beinborn's (2016) work, a particular challenge of our setup is the need to aggregate instances .", "The active learning strategies s ( v ) always sample entire C-tests x X and judge their appropriateness for a user v based on A ( x, v ) .", "The underlying classifier d ( g ) , however, operates at the level of gaps g x within a C-test.", "Similarly complex setups can be found in multiple other real-world tasks, including educational applications (e.g., providing reading recommendations at book or chapter level, but estimating appropriateness at word or sentence level) and product recommendation tasks (e.g., training a classifier for cast, plot, and action aspects, but recommending entire movies).", "For our instantiation, we measure the classifier's uncertainty using the Shannon entropy H ( g ) = X LDP ( | g ) log P ( | g ) (6) across the four difficulty classes LD of a gap g .", "P ( | g ) denotes the probability of the classifier d to assign the difficulty class to gap g .", "We then aggregate the resulting scores similar to the total token entropy proposed by Settles and Craven (2008): U ent ( x ) = 1 n n X i =1 H ( g i ) H max (7) where H max is the maximum achievable Shannon entropy, which serves as a normalization term.", "H max can be pre-computed as: H max = | LD | X i =1 1 | LD | log 1 | LD | (8) User objective.", "{ 1 , 2 , 3 , 4 , 5 } based on the users' 
"User objective.", "We model the users' proficiency as one of five levels L_P = {1, 2, 3, 4, 5} based on the users' ability to solve C-tests.", "The user representation ρ(v) ∈ L_P of user v thus returns a proficiency level between 1 and 5, with 5 indicating the highest proficiency.", "In our experiments, we use the C-test dataset introduced above to obtain ρ(v).", "Note that in this dataset, each user solved exactly five C-tests.", "We therefore map their score (i.e., the percentage of correctly filled gaps) to a proficiency level that roughly corresponds to the language courses offered by the university language center.", "Table 1 shows the five levels with their corresponding score ranges and the number of users in the dataset.", "To judge the appropriateness of a C-test x, we estimate the proficiency level it requires as f(x) = φ((1/n) Σ_{i=1}^{n} c(g_i)), (9) where c: g ↦ {0, 1} is an indicator function to predict whether gap g_i will be correctly (1) or incorrectly (0) answered, and φ maps the percentage of correct answers to the corresponding proficiency level according to Table 1.", "For our experiments, we define c(g) = 1 if k < j and 0 otherwise, (10) where k ∼ U(1/|L_P|, θ/|L_P|) and j ∼ U(0, 1) are uniformly sampled random variables and θ = d(g).", "Based on our estimation f(x) ∈ L_P, we can now define the error function err as the normalized distance of f(x) from the required proficiency: err[f(x), ρ(v)] = (1/|L_P|) · |f(x) − ρ(v)|. (11)", "5 Experimental Setup: System setup.", "We initialize our system with an empty set of labeled instances.", "In every iteration, we sample a C-test consisting of 20 gaps from the pool of unlabeled instances X, using one of the sampling strategies introduced in the previous section.", "Then, we obtain labels based on how the user solved the test, which contributes (1) to the overall difficulty prediction for each gap and (2) to the representation of the current user's proficiency.", "Our approach can be used with any underlying classifier d(g).", "In this paper, we train a multilayer perceptron (MLP) to predict the four difficulty classes of a C-test gap.", "To represent the input of the MLP, we use the 59 features previously proposed by Beinborn (2016).", "We furthermore introduce two novel features computed from BERT (Devlin et al., 2019): we hypothesize that the masking objective of BERT, which masks individual words during training, is very similar to a gap filling exercise, and thus a model trained in such a way may provide useful signals for assessing the difficulty of a gap.", "For each gap, we generate a sentence where only the gap is replaced by the masking token and fetch its predictions from the BERT model.", "From these predictions, we take the prediction probability of the solution as the first feature and the entropy of the prediction probabilities of the top-50 predicted words as the second feature, in concordance with findings by Felice and Buttery (2019), who show that entropy strongly correlates with the gap difficulty.", "Adding both features to the 59 features proposed by Beinborn (2016) increases the accuracy of our MLP from 0.33 to 0.37.", "While Beinborn successfully used support vector machines (SVM) in her work, we find that MLPs perform on par with SVMs (for the old and new features) and that they are more robust regarding the choice of the first sampled instance.", "Moreover, in our initial experiments with little training data, SVMs and logistic regression classifiers were only able to predict the majority class.", "Our MLP has a single hidden layer consisting of 61 hidden units.", "We train the neural network for 250 epochs with early stopping after 20 epochs without any improvement and use Adam (Kingma and Ba, 2015) as our optimizer.",
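The simulation and the error function can be sketched as follows; note that the uniform bounds for k follow our reconstruction of equation 10 and, like the score-to-level map phi, should be treated as assumptions.

```python
import random

NUM_LEVELS = 5  # |L_P|, the five proficiency levels

def simulate_answer(theta):
    """c(g), equation 10: simulate whether a gap is answered correctly (1)
    or not (0). theta = d(g) is the predicted difficulty class; the
    bounds for k are an assumption for illustration."""
    k = random.uniform(1 / NUM_LEVELS, theta / NUM_LEVELS)
    j = random.uniform(0, 1)
    return 1 if k < j else 0

def estimate_proficiency(difficulties, phi):
    """f(x), equation 9: the share of simulated correct answers over the
    n gaps, mapped to a proficiency level by phi (the Table 1 mapping)."""
    correct = sum(simulate_answer(theta) for theta in difficulties)
    return phi(correct / len(difficulties))

def err(fx, rho_v):
    """Equation 11: normalized distance between the C-test's estimated
    required level f(x) and the user's proficiency rho(v)."""
    return abs(fx - rho_v) / NUM_LEVELS
```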
"Note that our main interest is in the analysis of the novel active learning approach, which is why we do not systematically study the underlying classifier, but use a setup comparable to the state-of-the-art results reported by Beinborn (2016).", "We run experiments for each of our sampling strategy.", "We select five C-tests without any overlap between users, texts, and their corresponding user answers to create an independent test set and put the remaining 69 C-tests into the pool of unlabeled data.", "In the first iteration, we use the randomly initialized weights of our neural network to select 4 The results are averaged across ten runs with different random initializations.", "the starting example.", "To provide comparable results between different runs, we keep the parameter initialization of our neural network fixed when comparing different sampling strategies.", "We limit each experimental run to 8 5 = 40 iterations, as the five proficiency levels are not evenly distributed with the smallest class having only eight C-tests.", "At each iteration, we train our model on 80% of the already labeled data and use the remaining 20% as our validation set (split randomly).", "We use the best-performing model on the validation set for testing and store it as our model initialization for the next iteration.", "On an Intel Core i5-4590 , a single run with 40 iterations takes less than four minutes.", "Learner behavior.", "To study the benefit of our approach for different types of learners, 5 we derive four prototypical learner behaviors from our C-test dataset.", "To prepare this, we first compile a probabilistic model for the learners of each proficiency group as described in Table 1 to obtain learner-specific gap error rates e ( g, v ) .", "The learner-specific gap error rates are computed by binning all learners into the specific groups and then computing the error rate by averaging for each gap.", "If there is no error rate for a given gap and learner in our dataset, we use the averaged gap error rate of the corresponding proficiency group to simulate an answer.", "In contrast to Equation (10), we do not sample k , but use the learner-specific error rates e ( g, v ) for gap g i from the proficiency level ( v ) .", "Again, j U (0 , 1) is a uniformly sampled random variable.", "For a language learning platform, it is likely that motivated learners who continually practice improve their proficiency over time.", "Less motivated learners or learners who suffer from distractions, interruptions, or frustration, however, may show different paces in their learning speed or even deteriorate in their proficiency.", "Therefore, we study four prototypical types of learner behavior: Static learners ( STAT ) do not improve their skills over the course of our experiments.", "same, pre-defined proficiency level.", "This models learners with a slow progress or with little motivation overall.", "Motivated learners ( MOT ) continually improve their language proficiency throughout our experiments with a fixed step size of t 1 C-tests.", "That is, we simulate that their proficiency level ( v ) increases by one every t 1 iterations.", "Interrupted learners ( INT ) experience a drop in their proficiency during our experiments.", "Such cases occur, for example, if a learner has to interrupt their learning process for a longer time.", "For our simulation, we start with the motivated learner setup, constantly increasing the proficiency every t 1 iterations.", "However, this learner experiences a sudden increase ( t 
"After recovering from the drop (t_4), the proficiency will again increase according to the motivated learner setup (t_5).", "Artificially decreasing learners (DEC).", "Finally, our last group of simulated learners displays a constant drop in their proficiency during our simulation.", "Although such cases rarely occur in the real world, we use this learner to evaluate all sampling strategies under a constant drop in proficiency.", "Similar to the motivated learner, we start with the highest possible proficiency and decrease it by one every t_1 iterations.", "For our experiments, we assume a static learner that remains at proficiency level π(v) = 3.", "For motivated learners, we set the initial proficiency level to 1 and use a step size of t_1 = 8, so that they traverse all proficiency levels throughout a single run.", "For interrupted learners, we also use t_1 = 8 with an additional increase after t_2 = 12, a drop after t_3 = 16, and a recovery (increase) after t_4 = 20.", "Starting from t_5 = 24, interrupted learners behave the same as motivated learners.", "Like Beinborn (2016), we cannot publish the C-test data due to data privacy reasons, but we provide our code and simulated learner models on GitHub.", "Experiments.", "We present and discuss our results for U_ent and A as defined in Section 4.", "We use ten different weight initializations and report the averaged scores.", "For random sampling, we do ten runs with different random seeds for each weight initialization to provide more stable results.", "We set λ = 0.5 for our trade-off sampling strategy.", "As our system and user objectives have different scopes (gap-level vs. exercise-level), we quantify both differently.", "To measure the system objective, we report the accuracy of our model for predicting the individual gap difficulties of the test data after each iteration.", "As our training data increases by 20 gaps after each iteration, we provide plots for all experiments from the first to the last (40th) iteration.", "For quantifying the user objective, we evaluate all sampling strategies across all 40 iterations, i.e., how well they were able to satisfy the user's needs after the whole set of exercises.", "Instead of accuracy, we use the distance-based metric mean absolute error (MAE).", "As users explicitly query a C-test of a specific proficiency level at each iteration, suggesting a C-test which deviates by two levels from the requested proficiency has a worse impact on the user's learning experience than a C-test which only deviates by one level.", "For better interpretability, we do not normalize the MAE as we do for our error function err, i.e., an MAE of 1 means that, on average, the difficulty of the sampled instances was off by a whole proficiency level from the queried ones.", "Since the interrupted learner experiences both a drop and an increase in proficiency in a less constant manner than the motivated or decreasing learners, we conduct further analysis of our sampling strategies for the interrupted learner.", "System objective.", "Figure 3 shows the system objective for U_ent after each iteration.", "Vertical blue lines indicate increases in the learner's proficiency, whereas the vertical yellow line indicates a drop.", "We observe that although random sampling performs rather well in the early iterations, all our proposed strategies as well as the uncertainty sampling baseline are able to outperform it in the later iterations.",
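For reference, the learner behaviors and the answer simulation described above can be encoded in a few lines of Python; the exact bookkeeping around the interruption window (t_2 to t_4) is our reading of the description and may differ from the released simulation code.

```python
import random

T1, T2, T3, T4 = 8, 12, 16, 20  # step sizes from the setup above

def proficiency(learner, i):
    """pi(v) in {1, ..., 5} at iteration i for STAT, MOT, INT, and DEC."""
    if learner == "STAT":
        return 3
    if learner == "MOT":   # +1 level every T1 iterations, starting at 1
        return min(5, 1 + i // T1)
    if learner == "DEC":   # starts at 5, -1 every T1 iterations
        return max(1, 5 - i // T1)
    if learner == "INT":   # motivated, plus an extra increase, a drop, a recovery
        level = 1 + i // T1
        level += 1 if i >= T2 else 0        # sudden increase after t_2
        level -= 1 if T3 <= i < T4 else 0   # drop after t_3, recovered after t_4
        return max(1, min(5, level))

def simulate_answer(error_rate):
    """c(g, v): the gap counts as solved iff e(g, v) < j with j ~ U(0, 1)."""
    return 1 if error_rate < random.random() else 0
```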
"Moreover, all proposed strategies perform similarly to uncertainty sampling.", "This is surprising, especially for the user-oriented sampling strategy, as it inherently does not optimize the system objective.", "[Figure 3: Accuracy on the test data for U_ent for the interrupted learner, comparing the comb, tos, rand, unc, and usr strategies across 40 iterations.]", "One reason for this may be the similarity of the user-oriented sampling strategy to curriculum learning (Bengio et al., 2009), which aims to organize model training in a meaningful way.", "As we sample instances the model is most confident in (i.e., those with the highest prediction confidence), we obtain instances which are easier to learn, which may be especially helpful in low-data scenarios.", "To better quantify our results, we compare the averaged accuracy scores across all iterations, shown in Table 2, and conduct Wilcoxon signed-rank tests (Wilcoxon, 1992) on the active learning curves for the system and user objectives to test for statistical significance.", "We can observe that for the static, motivated, and interrupted learners, both our joint sampling strategies outperform all baselines significantly (p < 0.05), but show no significant difference between each other.", "Only for the decreasing learner do all strategies show no significant difference at all.", "In concordance with our observations for the user-oriented sampling, which may benefit from first sampling easy-to-learn instances, jointly optimizing the system and user objectives seems to benefit from both the curriculum learning and active learning paradigms.", "User objective.", "Table 3 shows the MAE for all strategies using U_ent.", "We can observe that all strategies which consider a separate user objective sample instances that fit the current user proficiency significantly better.", "(The system performance of random sampling remains the same for all learner types, as it is averaged across all runs.)", "Furthermore, the combined sampling approach, which puts more emphasis on the user objective, outperforms our trade-off sampling for all learner behaviors and even manages to outperform the user-oriented sampling strategy for the decreasing learner.", "We further investigate how well our approaches react to changes in the user objective by plotting the mean difficulty of the sampled instances after each step for all our strategies modeling the user objective.", "As Figure 4 shows, all sampling strategies are able to match the queried C-test difficulties well, as they do not deviate much from the queried difficulty (in black).", "Adaptive choice of λ.", "We furthermore investigate how the choice of λ affects our trade-off sampling strategy.", "As the system predictions may not be very accurate in early iterations, it is reasonable to put more emphasis on the system objective in the beginning, but focus on providing suited C-tests (user objective) in later iterations.", "We thus define λ as an adaptive function λ = f(i) = 1/√i = i^{−0.5}, which highly emphasizes the system objective in early stages and anneals with an increasing number of iterations.", "(Statistical testing was again conducted using a Wilcoxon signed-rank test for p < 0.05.)", "Figure 5 shows the system performance of our trade-off sampling strategy averaged across ten different runs.", "The colored areas show the corresponding upper and lower quartiles.",
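How the two objectives are mixed is not spelled out in this excerpt; the sketch below assumes a convex combination of the normalized uncertainty U_ent(x) and the appropriateness A(x, v), with the annealed weight λ = i^{−0.5} described above.

```python
def adaptive_lambda(i):
    """Annealed trade-off weight: f(i) = 1 / sqrt(i) = i**-0.5, for i >= 1."""
    return i ** -0.5

def tradeoff_score(x, v, i, u_ent, appropriateness):
    """Score a candidate C-test x for user v at iteration i; the convex
    combination of system and user objectives is an assumption."""
    lam = adaptive_lambda(i)
    return lam * u_ent(x) + (1.0 - lam) * appropriateness(x, v)

# Sampling step: pick the pool element with the highest combined score, e.g.
# best = max(pool, key=lambda x: tradeoff_score(x, v, iteration, u_ent, A))
```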
"As shown in Table 4, our annealed λ leads to considerable improvements for both the system and the user objective: a significant increase in average accuracy from 0.339 to 0.347 and a decrease in the MAE from 0.93 to 0.48 for the interrupted learner, outperforming all other sampling strategies.", "Further findings.", "We observe similar results for the system and user objectives for the other learner types.", "Investigating the stability of all sampling approaches furthermore shows that our joint optimization strategies perform better and more stably in early iterations.", "Due to averaging, U_ent cannot distinguish between C-tests with only a few highly uncertain gaps and C-tests which have a higher number of less uncertain gaps.", "However, in preliminary experiments with a different aggregation function which is more robust to C-tests with only a few highly uncertain gaps, we come to similar findings across all sampling strategies and learner types.", "Detailed results for our other learner behaviors, the stability of our sampling strategies, and the results of our preliminary experiments with a different aggregation function are provided in the paper's appendix.", "Limitations.", "Although our setup with simulated learners may seem artificial compared to an evaluation study with real-world learners, conducting such a study ethically requires ensuring that participants are not harmed in their learning process.", "Thus, strategies which can be evaluated in user studies are limited to those which consider the user objective.", "In contrast, the use of simulated learners allows us to compare our proposed strategies against common active learning strategies which do not consider the user objective at all.", "Another limitation is how to estimate a learner's current proficiency, given that we do not know the true difficulty of a C-test.", "This raises the general question of using relative or absolute difficulties for the selection of suited exercises.", "In this work, we assumed absolute proficiency levels and implemented corresponding learner behaviors to provide a more controlled environment for our experiments.", "In the absence of any absolute (true) difficulty estimates for C-tests, we see several directions for future work:", "a) As a simple baseline, a normalized version of f(x) may be applied to a learner's previously filled-out C-tests.", "However, this assumes that all C-tests are equally difficult, which may lead to unsuited C-tests.", "b) Training an additional model that assesses a learner's proficiency from their results on a C-test, with the gap-difficulty predictions from our model serving as additional input.", "c) Instead of using the absolute difficulty, one may define an optimal error margin as a zone of proximal development (Vygotsky, 1978).", "This requires an adaptation of the user objective to the relative difficulties of exercises for individual learners, but may be an important step in achieving highly personalized user models without any absolute labels.", "In this work, we investigated how we can incorporate user feedback into existing active learning approaches without compromising the user's actual needs.", "We formalize both the system (active learning) and user objectives and propose two novel sampling strategies which aim to maximize both objectives jointly.", "We evaluate our sampling strategies for the task of selecting suited C-tests, a type of fill-the-gap exercise, that fit the current proficiency of a human learner.", "We create simulated learners for five different proficiency levels from real-world data and use them to define different learning behaviors.", "Our experiments show that both our novel sampling strategies successfully select instances which lead to better model training while not hurting a learner's progress by selecting too easy or too difficult C-tests.",
"Although the system and user objectives at first seem to be at odds, our experiments indicate that they complement each other, as jointly optimizing them outperforms optimizing only one of the goals.", "Additional experiments with an adaptive λ for our trade-off sampling strategy show that properly balancing the system and user objectives can lead to considerable improvements in performance for both.", "Our findings open up new opportunities for training models in low-resource scenarios with implicitly collected user feedback while jointly serving the user's actual needs.", "Additional use cases like the training of personalized recommendation models as well as the use of reinforcement learning to find a good trade-off between system and user objective remain to be investigated in future work.", "This work has been supported by the German Research Foundation with the ArguAna project (GU 798/20-1) and the Evidence project (GU 798/27-1).", "We thank the anonymous reviewers for their detailed and helpful comments as well as Edwin Simpson and Yevgeniy Puzikov for the insightful discussions about our work.", "We especially thank the language center of the Technische Universität Darmstadt for providing us with the data and Dr. Lisa Beinborn for providing us with the code to extract her proposed features." ]
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other" ]
[ "This work takes a first step toward movie content analysis by tackling the novel task of movie overview generation.", "Overviews are natural language texts that give a first impression of a movie, describing aspects such as its genre, plot, mood, or artistic style.", "We create a dataset that consists of movie scripts, attribute-value pairs for the movies' aspects, as well as overviews, which we extract from an online database.", "We present a novel end-to-end model for overview generation, consisting of a multi-label encoder for identifying screenplay attributes, and an LSTM decoder to generate natural language sentences conditioned on the identified attributes.", "Automatic and human evaluation show that the encoder is able to reliably assign good labels for the movie's attributes, and the overviews provide descriptions of the movie's content which are informative and faithful.", "Movie summarization is the task of automatically summarizing a screenplay in order to gain a general impression of its content.", "This may include describing the movie's main characters and plot, its genre, artistic style, and so on.", "As more and more movies are being produced every year 1 , there is an ever growing need to facilitate this task.", "Potential applications include producing shorter versions of scripts to help with the decision making process in a production company, enhancing movie search by generating descriptions of what the movie is about, and notably, supporting movie recommendation engines by abstracting over specific keywords to more general concepts.", "Figure 1 gives an example of the type of movie content analysis we would like to obtain automatically.", "The information is taken from Jinni, a 1 According to http://www.boxofficemojo.com/ during 20092016, movie releases went up from 536 to 729.", "The Silence of the Lambs can be described as tense, captivating, and suspenseful.", "The plot revolves around special agents, mind games, and a psychopath.", "The main genres are thriller and crime.", "In terms of style, The Silence of the Lambs stars a strong female character.", "In approach, it is serious and realistic.", "It is located in Maryland and Virginia.", "The Silence of the Lambs takes place in the 1990s.", "It is based on a book.", "The movie has received attention for being a modern classic, an Oscar winner, and a blockbuster.", "Note that The Silence of the Lambs involves brief nudity and sexual content.", "large database (and movie recommendation engine) which indexes movies based on attributes and their values 2 (see the top half of Figure 1) and further aggregates these into a comprehensive overview (see the second half of Figure 1).", "Jinni's movie attributes were created by film professionals based on analysis of user reviews and metadata.", "There are hundreds, and they aim to describe aspects such as mood, style, plot, and setting for any released movie or TV show.", "Although some of these attributes could not be possibly ascribed without information from external sources (e.g., Praise , or Based on ), others could be inferred by watching the movie or reading the 2 Throughout this paper attributes are in italic font and their values in sans serif.", "screenplay (e.g., Genre, Plot, Flag, Mood, Place ).", "This work takes a step toward automatic script summarization by jointly modeling the tasks of movie attribute identification and overview generation.", "Specifically, we propose a novel neural network architecture which draws insights from encoder-decoder models 
"Our model takes the screenplay as input and generates an overview for it.", "Rather than representing the script as a sequence, we employ feed-forward neural networks (Zhang and Zhou, 2006; Kurata et al., 2016) to encode the screenplay into various attributes (e.g., Plot, Genre) and their labels (e.g., thriller, romance), viewing movie content analysis as a multi-label classification problem.", "Our decoder generates movie overviews using a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997), a type of recurrent neural network with a more complex computational unit, which is semantically conditioned (Wen et al., 2015, 2016) on this attribute-specific representation.", "Our model is trained end-to-end using screenplays and movie overviews as the supervision signal.", "In both automatic and human-based evaluations, our neural network architecture outperforms competitive baselines and generates movie overviews which are well-received by human judges.", "To the best of our knowledge, this is the first work to automatically analyze and summarize the content of screenplays.", "Recent years have seen increased interest in the computational analysis of movie screenplays.", "Ye and Baldwin (2008) create animated storyboards using the action descriptions of movie scripts.", "Danescu-Niculescu-Mizil and Lee (2011) use screenplays to study the coordination of linguistic styles in dialog.", "Bamman et al. (2013) induce personas of film characters from movie plot summaries.", "Agarwal et al. (2014a; 2014b; 2015) extract social networks from scripts, create xkcd movie narrative charts, and automate the Bechdel test, which is designed to assess the presence of women in movies.", "Gorinski and Lapata (2015) summarize screenplays by selecting important scenes.", "Our work joins this line of research in an attempt to automatically induce information pertaining to a movie's content such as its genre and plot elements.", "There has been a surge of interest recently in repurposing sequence transduction neural network architectures for various generation tasks such as machine translation (Sutskever et al., 2014), sentence compression (Chopra et al., 2016), and simplification (Zhang and Lapata, 2017).", "Central to these approaches is an encoder-decoder architecture modeled by recurrent neural networks.", "The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence.", "Previously proposed architectures are not directly applicable to our task for at least two reasons:", "(a) the correspondence between screenplays and overviews is very loose, and", "(b) the screenplay is not strictly speaking a sequence (a screenplay is more like a book consisting of thousands of sentences), and cannot be easily compressed into a vector-based representation from which to generate the overview.", "Rather than attempting to decode the overview directly from the screenplay, we encode the latter into attribute-value pairs which we then decode into overviews.", "We conceptualize the generation task as a joint problem of multi-label categorization, where each screenplay is assigned to one or more categories, and content-sensitive natural language generation.", "Many machine learning techniques have been proposed for building automatic text categorization systems (see Sebastiani, 2002 and Dalal and Zaveri, 2011 for overviews), including neural networks (Belanger and McCallum, 2016; Kurata et al., 2016).",
"Our encoder is a feed-forward neural network, which, however, is able to capture label interactions which are important for our content analysis task.", "Our decoder employs an enhanced LSTM architecture which directly maximizes the probability of the overview given the screenplay's attribute values.", "Conditional LSTMs have been applied to various related tasks, including image description generation (Vinyals et al., 2015), the verbalization of database records (Mei et al., 2016; Lebret et al., 2016), and the generation of dialogue acts (Wen et al., 2015, 2016).", "Our dataset builds on ScriptBase (Gorinski and Lapata, 2015), a corpus of movie screenplays obtained by automatically crawling websites such as imsdb.com.", "We crawled Jinni in order to obtain attributes and overviews (see Figure 1) for each movie in ScriptBase.", "(Dated on 2015/04/18 and available from http://www.)", "As mentioned earlier, attributes have values which are essentially labels/tags describing the movie's content, whereas overviews are short summaries giving a first impression of the movie.", "The crawl resulted in 917 movies which Jinni and ScriptBase had in common.", "We further split these into training, development and test sets, with 617, 200, and 100 instances, respectively.", "We concentrate on the six types of attributes shown in Table 1, whose values we hypothesize can be inferred from analyzing the movie's screenplay.", "Table 1 provides an overview of the number of labels used in our experiments.", "Jinni contains a wealth of attribute values, varying from nine for Flag to more than 400 for Plot.", "Additionally, value names for some attributes are synonyms or near-synonyms (e.g., Nudity and Brief Nudity for Flag).", "We reduced the set of attribute values to those that occurred most frequently (column Frequent in the table) and merged synonyms into a common label (column Merged).", "We could approach the movie overview generation task using an attention-based encoder-decoder model (Bahdanau et al., 2015).", "The encoder would transform the screenplay into a sequence of hidden states with an LSTM (Hochreiter and Schmidhuber, 1997) or another type of computational unit (Cho et al., 2014).", "The decoder would use another recurrent neural network to generate the overview one word at a time, conditioning on all previously generated words and the representation of the input, while an attention mechanism would revisit the input sequence, dynamically highlighting pieces of information relevant for the generation task.", "As mentioned earlier, viewing screenplays as a sequence of sentences is problematic both computationally and conceptually.", "Even if we used a hierarchical encoder (Tang et al., 2015; Yang et al., 2016) by first building representations of sentences and then aggregating those into a representation of a screenplay, it is doubtful whether a fixed-length vector could encode the content of the movie in its entirety or whether the attention mechanism would effectively isolate the parts of the input relevant for generation.", "We therefore propose an architecture that consists of two stacked neural network models for the tasks of movie attribute identification and overview generation.", "Figure 2 illustrates our model.", "We use simple feed-forward neural networks to impose some structure on the input by identifying the labels that most likely apply to the screenplay.", "We subsequently employ a semantically conditioned LSTM (Wen et al., 2015, 2016) to select the content for which to generate sentences.",
"This architecture is advantageous for a number of reasons.", "Firstly, by imposing structure on the screenplays, the generation network is faced with a more compact and informative representation.", "This allows us to make use of a content selection LSTM similar to Wen et al. (2015; 2016), generating fluent and label-specific outputs.", "Secondly, it enables us to train the screenplay encoder (aka classification network) and the decoder jointly, in an end-to-end fashion.", "As shown in Figure 1, the overview highlights various aspects of the movie, essentially devoting a sentence to each attribute.", "This observation motivates us to encode the screenplay as a set of attributes (with their values) and then decode these into a sentence one by one.", "We treat attribute encoding as a multi-label classification problem: an attribute (e.g., Genre or Plot) will typically have multiple values (aka labels) which are suitable for the movie and should occur in the generated sentence.", "Furthermore, these labels naturally influence each other.", "For example, a movie whose Genre is Crime is also likely to be a Thriller, while it is less likely to be a Parody.", "In traditional multi-label classification, such interactions are either ignored (Read et al., 2011; Tsoumakas and Katakis, 2006; Godbole and Sarawagi, 2004; Zhang and Zhou, 2005), or represented by label combinations (Tsoumakas and Vlahavas, 2007; Read et al., 2008).", "A few approaches assume or impose an existing structure on the label space (Schwing and Urtasun, 2015; Chen et al., 2015; Huang et al., 2015; Jaderberg et al., 2014; Stoyanov et al., 2011; Hershey et al., 2014; Zheng et al., 2015).", "We employ a neural network approach with the aim of abstracting the screenplay into a set of meaningful labels whose correlations are discovered automatically, during training.", "As shown on top of Figure 2, our encoder is a feed-forward neural network where individual neurons represent the labels to be classified.", "The input to the network is a feature vector x representing the screenplay (we discuss the specific features we use in more detail shortly): h_n = σ(W_n x_n) (1).", "The input is split into k segments by feature type, and the feature segments are fed into k separate fully connected hidden layers.", "The hidden layer outputs are then combined using simple element-wise addition: f = h_1 ⊕ h_2 ⊕ ... ⊕ h_k (2).", "The combined feature layer is used to compute an l-sized output layer, where l corresponds to the size of the classification label set.", "The final activation of the output units is obtained by applying the sigmoid function to the output layer: O = σ(W_o f) (3).",
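A compact PyTorch sketch of the encoder in Equations (1)-(3); the activation symbol in Equation (1) did not survive extraction cleanly, so a sigmoid is assumed, and the layer sizes below are illustrative.

```python
import torch
import torch.nn as nn

class MultiLabelEncoder(nn.Module):
    """Eqs. (1)-(3): per-segment hidden layers, element-wise addition,
    and a sigmoid output unit per label."""
    def __init__(self, segment_sizes, hidden_size, num_labels):
        super().__init__()
        self.hidden = nn.ModuleList([nn.Linear(s, hidden_size) for s in segment_sizes])
        self.out = nn.Linear(hidden_size, num_labels)

    def forward(self, segments):
        hs = [torch.sigmoid(layer(x)) for layer, x in zip(self.hidden, segments)]  # Eq. 1
        f = torch.stack(hs, dim=0).sum(dim=0)   # element-wise addition, Eq. 2
        return torch.sigmoid(self.out(f))       # O = sigmoid(W_o f), Eq. 3

# Example with k = 3 feature segments (lexical, graph, interaction):
enc = MultiLabelEncoder([7500, 16, 8], hidden_size=128, num_labels=40)
outputs = enc([torch.rand(7500), torch.rand(16), torch.rand(8)])
```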
(2016).", "In this approach, instead of initializing the model's output weights W o from a uniform distribution, the first p rows of the weight matrix are initialized according to patterns observed in the data.", "To this end, we initialize the n th row of W o with pattern n (equa-tion (4)), which is a vector corresponding to the n th label-assignment observed in the training data: W no = i ( n ) (4) The initialization weight i for unit l of pattern n is set to 0 if the corresponding label is not present in the given instance; or to the upper bound UB of the normalized initialization weights of hidden layer h and output layer o , scaled by the number of times c the pattern occurs in the data: i ( ln ) = ( c UB if ln = 1 0 otherwise (5) UB = 6 p | h | + | o | (6) We follow Glorot and Bengio (2010) in using 6 as normalization factor for UB, and limit the number of patterns to the most frequently observed label assignments.", "Our model uses three types of features representing the screenplay's lexical make up, its underlying character relations, and interactions.", "Lexical Features An obvious feature class is the language of the movie.", "Comedies will be characterized by a different vocabulary compared to thrillers or historical drama.", "We thus represent each script as a vector of 7,500 dimensions corresponding to the most frequent words in the training corpus.", "Vector components were set to the 1773 words' tf-idf values.", "Words in scripts were further annotated with their sentiment values using the AFINN lexicon (Nielsen, 2011), a list of words scored with sentiment strength within the range [ 5 , + 5 ] .", "We extracted several features based on these sentiment values such as the sentiment score of the entire movie, the number of scenes with positive/negative sentiment, the ratio of positive to negative scenes, and the minimum and maximum scene sentiment.", "From scene headings, we were also able to extrapolate the number of internal and external locations per script.", "Graph-based Features Our graph-based features are similar to those described in Gorinski and Lapata (2015).", "Specifically, we view screenplays as weighted, undirected graphs, where vertices correspond to movie characters and edges denote character-to-character interactions (essen-tially the number of times two characters talk to each other or are involved in a common action).", "From the graph we extract features corresponding to the number of main and supporting characters, which we identify by measuring their centrality in the movie network (e.g., the number of edges terminating in a given node).", "We also estimate character polarity by summing the sentiment of each character's utterances as well as the ratio of positive to negative characters in a given script.", "Interaction-based Features We extract features based on how often any two characters interact, i.e., whether they are engaged in a conversation or in the same event (e.g., if a character kills another).", "We identify interactions as described in Gorinski and Lapata (2015) and measure the number of interactions per scene and movie, the number of positive and negative interactions, and their ratio.", "Our decoder generates a movie overview from the multi-label encoding described above.", "For this, we adapt the LSTM architecture of Wen et al. 
"Our decoder generates a movie overview from the multi-label encoding described above.", "For this, we adapt the LSTM architecture of Wen et al. (2015; 2016), which was originally designed for dialogue act generation (e.g., given the input inform(type=hotel, count=182, dogsallowed=dontcare), the network outputs there are 182 hotels if you do not care whether dogs are allowed).", "The network performs content selection, i.e., decides which attribute labels to talk about, while generating the sentences describing them.", "It combines a content selection cell with a traditional LSTM cell to generate a natural language surface form.", "At each timestep t, the output word w_t is drawn from an output distribution conditioned on the previous hidden layer h_{t−1} as well as the previous content vector p_{t−1}.", "The content selection cell effectively acts as a sentence planner, retaining or omitting information from the original vector p_0 at every time step t to guide the sentence-generating LSTM cell.", "Our LSTM architecture is defined by the following equations: i_t = σ(W_wi w_t + W_hi h_{t−1}) (7), f_t = σ(W_wf w_t + W_hf h_{t−1}) (8), o_t = σ(W_wo w_t + W_ho h_{t−1}) (9), ĉ_t = tanh(W_wc w_t + W_hc h_{t−1}) (10), r_t = σ(W_wr w_t + W_hr h_{t−1}) (11), p_t = r_t ⊙ p_{t−1} (12), c_t = f_t ⊙ c_{t−1} + i_t ⊙ ĉ_t + tanh(W_pc p_t) (13), where σ is the sigmoid function, i_t, f_t, o_t, r_t ∈ [0, 1]^n are the input, forget, output, and reading gates respectively, and ĉ_t and c_t are the proposed cell value and true cell value at time t.", "In the original paper, the input p_0 to the LSTM is a 1-hot representation of the information that should be included in the natural language output.", "In our setup, we relax this constraint such that each element of p_0 ∈ [0, 1], i.e., we directly use the output of the multi-label encoder just described.", "The proposed architecture is trained jointly in an end-to-end fashion, minimizing an objective that combines the word-level cross-entropy between the observed and predicted word distributions y_t and ŷ_t with two regularization terms weighted by scalar training constants (14).", "Here, p_T is the content vector at the final time index T and p_0 is the initial content vector as given by the encoder network.", "The second term in the objective penalizes the network for generating output without realizing all required labels, while the third term deters the network from utilizing more than one label at any given time step.", "The model is trained on pairs of scripts and sentences extracted from Jinni.", "To give a concrete example, a training instance for the Plot sentence from Figure 1 would consist of the features representing the movie's screenplay and the overview's Plot sentence The plot revolves around special agents, mind games, and a psychopath.",
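One step of the semantically conditioned LSTM can be written directly from Equations (7)-(13); bias terms are omitted for brevity, and the final hidden-state update h_t = o_t ⊙ tanh(c_t), which is standard for LSTMs but not shown in the excerpt, is assumed.

```python
import torch

def sc_lstm_step(w_t, h_prev, c_prev, p_prev, W):
    """Semantically conditioned LSTM step (Eqs. 7-13). `W` maps names such
    as 'wi' or 'pc' to weight matrices; `p_prev` is the content vector."""
    i = torch.sigmoid(W["wi"] @ w_t + W["hi"] @ h_prev)    # input gate, Eq. 7
    f = torch.sigmoid(W["wf"] @ w_t + W["hf"] @ h_prev)    # forget gate, Eq. 8
    o = torch.sigmoid(W["wo"] @ w_t + W["ho"] @ h_prev)    # output gate, Eq. 9
    c_hat = torch.tanh(W["wc"] @ w_t + W["hc"] @ h_prev)   # proposed cell, Eq. 10
    r = torch.sigmoid(W["wr"] @ w_t + W["hr"] @ h_prev)    # reading gate, Eq. 11
    p = r * p_prev                                         # content update, Eq. 12
    c = f * c_prev + i * c_hat + torch.tanh(W["pc"] @ p)   # cell value, Eq. 13
    h = o * torch.tanh(c)                                  # assumed standard update
    return h, c, p
```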
.", "The Plot multi-label network encodes the script into content vector p 0 , and the LSTM learns which la-bels represented in p 0 to talk about while its training objective discourages to leave too many labels unmentioned.", "The observed output error is back-propagated through the LSTM and the embedding network using stochastic gradient descent (Bottou, 1991) with decaying learning rate.", "In this section we report our evaluation experiments.", "We begin by assessing how good our encoder is at capturing screenplay content and then proceed to evaluated the generated overviews themselves.", "In order to assess encoder's ability to induce structure over screenplays, we focus solely on the top part of the architecture in Figure 2.", "Specifically, we trained stand-alone models for the six attributes shown in Table 1 on the gold data provided in the Jinni dataset.", "All networks used the same features introduced earlier and were initialized using the pattern-based method of Kurata et al. (2016).", "To better capture the fact that we are dealing with multi-label assignments, we used the global error function described in Zhang and Zhou (2006).", "Given the network output vector y for input x , the true bag of label assignments y and its complement y , the error observed for each instance is computed as: E = 1 | y || y | ( k , l ) y y exp ( ( y k y l )) (15) The networks were trained with stochastic gradient descent during back propagation, using the same method as for the full model.", "We compared our multi-label encoders (MLE) against several baselines.", "These include assigning the most frequent attribute labels to each movie based on the attributes' mean distribution (ZeroR), Naive Bayes (NB), Decision Stump (DS), LibLinear (Lib; Fan et al., 2008) and Support Vector Machines (SVMs; Chang and Lin, 2011).", "For each comparison system, we trained a binary classifier per attribute label using features identical to the ones used for the MLE.", "Table 2 shows F1 performance on the training data for MLE and comparison systems, averaged over 10 folds.", "As can be seen, MLE performs best, followed by LibLinear.", "Table 3 compares MLE, ZeroR, and Lib, the strongest baseline, on the test set using F1 and the best parameters found for each system during cross-validation.", "As can be seen, MLE outperforms Lib across attributes, and is superior to ZeroR by a large margin.", "F1 differences between MLE and LibLinear are significant ( p < 0 . 
"F1 differences between MLE and LibLinear are significant (p < 0.01), using approximate randomization testing (Noreen, 1989).", "Overall, the results in Tables 2 and 3 indicate that the classification task is hard.", "This is especially true for Plot, which has the largest number of labels.", "Nevertheless, the multi-label encoders introduced here achieve good performance on their own, indicating that they are able to capture the content of the screenplay, albeit approximately.", "We next evaluate the performance of the jointly trained system, which we call MORGAN as a shorthand for Movie OveRview GenerAtioN model.", "MORGAN is trained on pairs of screenplays and their corresponding verbalizations in the Jinni dataset.", "Unfortunately, our dataset is relatively small for neural network training; it contains 617 movies only, i.e., there are 617 sentences for each attribute.", "To alleviate this problem, we augmented the data as follows.", "We extracted sentence templates from the training set (209 in total), examples of which are shown in Table 5.", "[Example Mood templates: T can be described as M1 and M2; The mood of T is M1.]", "We replaced the title and attribute values with variables (shown as capital letters in the table).", "We then used the templates to generate additional data for each movie by substituting attribute variables in template sentences with permutations of the movie's gold-standard labels.", "We thereby obtained a total of 31,000 training instances.", "The model was trained with a learning rate of 0.5, using a decay of 0.01 over 50 epochs, fixing it for subsequent epochs.", "The two training constants in Equation (14) were set to 10^4 and 100, respectively.", "At test time, we used screenplay features as input and generated one sentence per attribute.", "We arranged these into an overview following the ordering Mood ≫ Plot ≫ Genre ≫ Attitude ≫ Place ≫ Flag, which is fixed and attested in all overviews in our dataset.", "We compared MORGAN against several systems: (1) a random baseline, selecting for each movie and attribute type a random sentence from the training set; (2) a nearest-neighbor baseline (NN) which uses the same screenplay features as MORGAN (and cosine similarity) to identify the closest matching script in the training data, and rehashes its overview as output; (3) an attention-based LSTM (Bahdanau et al., 2015) trained on script sentence pairs (31,000 in total); and (4) six attention-based LSTMs, one per attribute type, trained on script sentence pairs (on average 5,200 per LSTM).", "The attention LSTMs were trained on the same screenplay features as MORGAN, with the attention mechanism at each timestep t focusing on parts of the input.", "Example overviews generated by each system are shown in Table 6.", "We evaluated system output with multi-reference BLEU (Papineni et al., 2002), using sentences from the extended gold standard as references.", "(We use NLTK's (http://www.nltk.org/) implementation of BLEU and report the interpolation of BLEU-1 through BLEU-4.)", "Table 5 (first column) summarizes our results.", "[Table 5: BLEU scores and mean coherence and grammaticality ratings for movie overviews. Model: BLEU / Coherence / Grammaticality. Random: 38.0 / 2.42 / 3.83. NN: 40.4 / 3.45 / 3.93. Attn: 23.0 / 2.93 / 3.91. typAttn: 37.9 / 3.20 / 3.80. MORGAN: 42.0 / 3.72 / 4.08. Jinni: n/a / 4.27 / 4.22.]", "As can be seen, MORGAN outperforms the attention-based models, the nearest-neighbor system, and the random baseline.", "The attention-based models cannot succinctly capture the movie's content in order to render it into meaningful sentences.", "Although the generated sentences are more or less grammatical on their own (see Table 6), the generated overview lacks coherence, and is fairly repetitive.",
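Per the footnote above, the multi-reference BLEU evaluation can be reproduced with NLTK along the following lines; the reference and candidate sentences here are invented for illustration.

```python
from nltk.translate.bleu_score import sentence_bleu

references = [
    "the plot revolves around special agents , mind games , and a psychopath".split(),
    "the plot concerns mind games and a psychopath".split(),
]
candidate = "the plot revolves around mind games and a psychopath".split()

# Uniform weights interpolate BLEU-1 through BLEU-4
print(sentence_bleu(references, candidate, weights=(0.25, 0.25, 0.25, 0.25)))
```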
"The model does not reliably learn what type of information to focus on for the generation task.", "For MORGAN this problem is alleviated during the encoding step, which performs content distillation prior to generating overview sentences.", "In addition to evaluating system output automatically, we are also interested in how it is perceived by humans.", "To this end, we ran two judgment elicitation studies on Amazon Mechanical Turk.", "Both experiments were conducted on 12 movies.", "In a pre-test we asked 20 workers whether they had seen the movies in our test set and chose the three most popular ones from each of the genres Action, Comedy, Drama, and Romance.", "In our first experiment Turkers were presented with an overview taken from the Jinni gold standard, MORGAN or one of the comparison systems and asked to rate its coherence (i.e., whether it was readily comprehensible or difficult to follow) on a scale from 1 (incoherent) to 5 (coherent).", "Subsequently, they had to rate the grammaticality of 4 We use NLTK's (http://www.nltk.org/) implementation of BLEU, and report the interpolation of BLEU 1 through 4.", "each overview sentence, again on a scale from 1 (ungrammatical) to 5 (grammatical) and decide whether it appropriately described aspects of the movie's content (Yes, No, Unsure).", "We elicited five responses for each overview across six systems (Jinni, typAttn, Attn, Random, NN, and MORGAN ) and 12 movies.", "Finally, participants had to answer a question relating to the movie's content, to make sure that they had actually seen the movie.", "We discarded responses with wrong answers to the content question.", "Examples of the overviews participants judged are given in Table", "6. Table 5 (columns 2 and 3) summarizes the results of our first judgment elicitation study.", "All systems perform well with regards to grammati-Model Mood Plot Genre Attitude Place Flag All Random 37.7 39.6 34.0 43.4 35.8 50.9 19.0 NN 78.6 67.9 71.4 66.1 58.9 91.1 58.9 Attn 38.2 38.2 38.2 41.8 51.0 34.5 40.0 typAttn 60.0 60.0 53.3 57.8 66.7 64.4 40.0 MORGAN 89.5 73.7 80.7 71.9 63.2 89.5 82.5 Jinni 91.1 89.3 92.9 82.1 67.9 75.0 91.1 Table 7: Proportion of sentences and overviews (All) which describe the movie accurately.", "cality.", "This is not surprising for Random and NN which do not perform any generation.", "Attn and typAttn also perform well with MORGAN achiev-1777 ing highest scores for grammaticality amongst automatic systems.", "Grammaticality differences between the various systems in Table 5 and the Jinni gold standard are not statistically significant (us-ing a one-way ANOVA with post-hoc Tukey HSD tests).", "Overviews generated by MORGAN are perceived as more coherent in relation to those generated by comparison systems, even though the model does not explicitly take coherence into account.", "MORGAN overviews are not significantly different in terms of coherence from Jinni, typAttn, and NN, but are significantly better than Random and Attn.", "Table 7 shows the percentage of sentences (per attribute and overall) which participants think describe the movie's content felicitously.", "MORGAN identifies most aspects of the movie successfully, in some cases close to ( Mood, Place ) or even better ( Flag ) than the original Jinni overview.", "MORGAN is significantly better compared to all other models but not significantly worse than Jinni (us-ing a 2 test; see last column in Table 7).", "In a second experiment, participants were presented with six overviews for a movie (from Jinni, Attn, 
"Again, we obtained five responses for each movie.", "As can be seen in Table 8, while Jinni is ranked first most of the time, MORGAN is ranked second, followed by the NN system.", "[Table 8: Relevance rankings (shown as proportions) given to overviews by human subjects. Model: 1st / 2nd / 3rd / 4th / 5th / 6th / AvgRank. Random: 1.0 / 5.8 / 16.3 / 22.1 / 19.2 / 35.6 / 4.59. NN: 5.8 / 19.2 / 24.0 / 23.1 / 15.4 / 12.5 / 3.60. Attn: 3.8 / 13.5 / 20.2 / 28.8 / 16.3 / 17.3 / 3.92. typAttn: 1.9 / 7.7 / 15.4 / 10.6 / 33.6 / 30.8 / 4.58. MORGAN: 8.7 / 42.3 / 22.1 / 12.5 / 12.5 / 1.9 / 2.71. Jinni: 78.8 / 11.5 / 1.9 / 2.9 / 2.9 / 1.9 / 1.45.]", "We further converted the ranks to ratings on a scale of 1 to 6 (assigning ratings 6 ... 1 to rank placements 1 ... 6) and performed an ANOVA, which showed that all systems are significantly (p < 0.05) worse than Jinni but MORGAN is significantly better than the comparison systems.", "In this work we have presented a novel approach to automatic movie content analysis.", "We have assembled a new dataset which combines ScriptBase (Gorinski and Lapata, 2015), a corpus of movie scripts, with information gathered from Jinni, a large movie database.", "We proposed an end-to-end model for movie overview generation via multi-attribute encoders and a semantically conditioned LSTM decoder.", "Experimental results show that our encoders are capable of distilling meaningful structures from the screenplay.", "When applied to the overview generation task, our end-to-end model outperforms a standard attention-based LSTM.", "Human evaluation also indicates the overviews generated by our model are felicitous, informative, and rated favorably by humans.", "In the future, we would like to investigate how attribute-specific features can improve performance compared to our more general feature set, which is invariant for each sentence type.", "It would also be possible to equip the model with a hierarchical decoder which generates a document instead of individual sentences.", "Although currently our model relies solely on textual information, it would be interesting to incorporate additional modalities such as video (Zhou et al., 2010) or audio (e.g., we expect comedies to be visually very different from thrillers, or romantic movies to have a different score from superhero movies).", "Finally, we would like to examine whether the content analysis presented here can extend to different types of fiction such as novels or short stories.", "Acknowledgments We thank the NAACL reviewers for their constructive feedback.", "We gratefully acknowledge the financial support of the European Research Council (award number 681760)." ]
[ "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "result", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "other", "other", "method", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "result", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain" ]
[ "Petroni et al. (2019) demonstrated that it is possible to retrieve world facts from a pre-trained language model by expressing them as cloze-style prompts and interpret the model's prediction accuracy as a lower bound on the amount of factual information it encodes.", "Subsequent work has attempted to tighten the estimate by searching for better prompts, using a disjoint set of facts as training data.", "In this work, we make two complementary contributions to better understand these factual probing techniques.", "First, we propose OPTIPROMPT , a novel and efficient method which directly optimizes in continuous embedding space.", "We find this simple method is able to predict an additional 6.4% of facts in the LAMA benchmark.", "Second, we raise a more important question: Can we really interpret these probing results as a lower bound?", "Is it possible that these prompt-search methods learn from the training data too?", "We find, somewhat surprisingly, that the training data used by these methods contains certain regularities of the underlying fact distribution, and all the existing prompt methods, including ours, are able to exploit them for better fact prediction.", "We conduct a set of control experiments to disentangle learning from learning to recall, providing a more detailed picture of what different prompts can reveal about pre-trained language models.", "1 1 Introduction Pre-trained language models like BERT are optimized to predict the distribution of words in an Internet corpus (Devlin et al., 2019).", "Naturally, this distribution encodes information about world facts.", "Recently, researchers have taken an interest in measuring how much factual information language models acquire from pre-training.", "Petroni et al. (2019) formally define this project in the LAMA * The first two authors contributed equally.", "1 The code is publicly available at https://github.", "benchmark, which consists of ( subject , relation , object ) triples along with human-written templates that express each relation.", "They show that BERT can predict objects given cloze-style promptsfor example, Dante was born in [MASK] and they present their result as a lower bound on the amount of factual information BERT encodes.", "Subsequent work has attempted to tighten this bound by finding better prompts.", "Jiang et al. (2020) use text mining and paraphrasing to find a set of candidates and select the prompts that lead to the highest accuracy on a training set.", "Shin et al. 
"Both of these methods collect additional triples from Wikidata to use for tuning their prompts.", "In this paper, we first take a natural next step in the search for better prompts: rather than confining our search space to discrete input tokens, we directly optimize in the input embedding space, finding the real-valued input vectors that are most effective at eliciting facts.", "We also find that initializing with manual prompts can provide a better starting point for the search process.", "Our approach, OPTIPROMPT, is simple and compute-efficient, and improves accuracy on the LAMA benchmark from 42.2% to 48.6%, compared to previous discrete alternatives.", "On the more difficult LAMA-UHN split (Poerner et al., 2019), which filters out easy-to-guess entity names, OPTIPROMPT improves accuracy from 31.3% to 38.4%.", "At the same time, we observe that prompts that are optimized on training data may exploit some regularities in the underlying distribution of facts.", "How can we make sure our prompts are recovering information solely from the language model?", "An analogous question has been explored recently in linguistic probing, which aims to explore the linguistic properties encoded in contextualized word representations (Belinkov et al., 2017; Tenney et al., 2019; Lin et al., 2019): for example, by seeing if a classifier can predict that chef is the nominal subject of made given the representations returned from a language model (Figure 1).", "Recent work has attempted to disentangle the information encoded in the representations from the information learned by the probe (Hewitt and Liang, 2019; Pimentel et al., 2020; Voita and Titov, 2020; Zhu and Rudzicz, 2020).", "However, this question has not yet been explored in factual probing, in part because it is assumed that there is no way to predict a knowledge fact simply from observing a non-overlapping set of facts about other entities.", "For example, learning that Dante was born in Florence should tell you nothing about the birthplace of John Donne.", "We analyze our training data and find that this assumption is not warranted.", "Even though the training data was collected independently of the LAMA benchmark, there are sufficient regularities in the underlying distribution of Wikidata relations that a naive classifier fit to the training data can achieve surprisingly good performance.", "Furthermore, our experiments reveal that all the data-driven prompt-search methods, including previous methods and our proposed OPTIPROMPT, are able to exploit this information to achieve better prediction accuracy.", "In knowledge base completion or link prediction, researchers study how to predict a fact (Barack Obama, nationality, ?) from other triples such as (Barack Obama, place_of_birth, Honolulu) and (Honolulu, city_of, USA).",
"Given some training data, a good search algorithm can find prompts that recover a non-trivial number of facts from a neural network with randomly initialized parameters, exploiting both simple class statistics and higher-order lexical regularities.", "This finding makes it challenging to interpret relative accuracy scores on the knowledge probing task.", "We show how our control experiments allow us to form a more detailed understanding of the behavior of different probes.", "For example, by partitioning the test set into easy examples, which can be predicted by random controls, and hard examples, we can form some conclusions about which facts are less likely to have been learned from training data.", "OPTIPROMPT outperforms prior methods in both subsets, suggesting it is both better at learning from training data and better at eliciting facts from a language model.", "We conclude with suggestions for future work that might be less susceptible to the confounding effect of training data.", "The factual probing setting was introduced by the LAMA benchmark (Petroni et al., 2019), which is designed to measure the amount of factual information encoded in a pre-trained language model (LM).", "In LAMA, a fact is defined as a triple ⟨s, r, o⟩, where s is a subject (e.g., Dante), r is a relation from a fixed set of relations R (e.g., place of birth), and o is an object (Florence).", "LAMA facts are drawn from a number of sources, including Wikidata, ConceptNet (Speer and Havasi, 2012), and SQuAD (Rajpurkar et al., 2016).", "We follow recent factual probing work (Jiang et al., 2020; Shin et al., 2020) in focusing on the T-REx split (Elsahar et al., 2018), which contains up to 1000 ⟨s, r, o⟩ triples for each of 41 Wikidata relation types.", "The relation types are divided into three categories: 1-1 includes relations like capital of; N-1 includes relations like place of birth; and N-M includes relations like shares border with.", "In the LAMA evaluation, each relation is associated with a human-written prompt that contains a single [MASK] token, for example, [X] was born in [MASK].", "To accommodate masked language models such as BERT, LAMA is restricted to facts for which the object label is a single token in a predefined vocabulary V (subject names are usually longer, with an average length of about 3 tokens).", "Given a subject s, a relation prompt t_r, and a masked language model, we can identify the word ô ∈ V to which the LM assigns the highest probability P([MASK] = ô | t_r(s)), where t_r(s) represents the prompt template with the subject placeholder [X] replaced by s.", "If ô is the same as the gold object o, we conclude that the LM encodes information about the fact.", "LAMA is an evaluation benchmark, so there is no training data.", "It is constructed so that a pre-trained language model can be evaluated off-the-shelf with no additional fine-tuning.",
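This evaluation procedure is straightforward to implement. Below is a minimal sketch of LAMA-style cloze probing, assuming the HuggingFace transformers library; the template and fact are illustrative, and the sketch does not restrict predictions to the LAMA unified vocabulary as the benchmark does.

```python
# A minimal sketch of cloze-style factual probing with a masked LM.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

def probe(template: str, subject: str, gold_object: str) -> bool:
    """Fill [X] with the subject, ask the LM for the [MASK] token,
    and check whether the top-1 prediction matches the gold object."""
    prompt = template.replace("[X]", subject)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Position of the [MASK] token in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_id = logits[0, mask_pos].argmax().item()
    return tokenizer.decode([predicted_id]).strip() == gold_object

print(probe("[X] was born in [MASK].", "Dante", "Florence"))
```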
"Petroni et al. (2019) remark that their benchmark provides only a lower-bound estimate of the amount of factual information stored in an LM, because their manually written prompts might not be optimal for eliciting facts.", "Accordingly, subsequent work has focused on tightening this bound by using additional training data to find better prompts.", "Jiang et al. (2020) use a range of text-mining and paraphrasing techniques to generate a set of candidate prompts for each relation.", "They collect a training dataset from Wikidata, ensuring that there is no overlap with subject-object pairs in the LAMA benchmark, and select prompts by measuring accuracy on this training data.", "They consider a number of rules for selecting prompts, including top-K baselines and an optimized ensemble, which consists of multiple prompts per relation with weights tuned on the training data.", "Their prompt dataset, LPAQA, is available online.", "Shin et al. (2020) take prompt optimization one step further by training a statistical model, AUTOPROMPT, to search over the space of input tokens for prompts that elicit correct predictions.", "They collect 1000 ⟨s, r, o⟩ triples for each relation type, either from the original T-REx dataset (Elsahar et al., 2018) or from Wikidata, with no triples that appear in the LAMA benchmark.", "They define a prompt for a given relation r as the subject followed by a fixed number of trigger tokens: t_r = [X] [T]_1 [T]_2 ... [T]_m [MASK], where [X] is replaced by the subject, each [T]_i represents a trigger token which can be any token in the vocabulary, and the number of [T] tokens is set as a pre-defined number m.", "The trigger tokens are initialized as [MASK] tokens and then iteratively updated, at each step using a gradient-based search algorithm (Wallace et al., 2019) to replace one of the trigger tokens with the token that is estimated to maximize the likelihood of the gold label on the training set.",
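As a rough illustration of this search procedure, the sketch below implements one HotFlip-style candidate-scoring step in the spirit of AUTOPROMPT (Shin et al., 2020; Wallace et al., 2019), assuming the HuggingFace transformers library; it scores every vocabulary token by a first-order estimate of how much swapping it into a trigger position would reduce the loss. The function and variable names are our own, not from the released implementation.

```python
# One AUTOPROMPT-style update step: rank candidate replacements for a
# trigger token by the dot product between each vocabulary embedding and
# the gradient of the loss at that position (a first-order Taylor estimate).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
embedding_matrix = model.get_input_embeddings().weight  # |V| x d

def hotflip_candidates(input_ids, trigger_pos, mask_pos, gold_id, k=10):
    """Return the k tokens estimated to most increase the gold-label
    likelihood if swapped into position trigger_pos."""
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds).logits
    loss = -torch.log_softmax(logits[0, mask_pos], dim=-1)[gold_id]
    loss.backward()
    grad = embeds.grad[0, trigger_pos]          # d
    # Estimated loss change for every candidate token e': e' @ grad.
    scores = embedding_matrix @ grad            # |V|
    return torch.topk(-scores, k).indices       # lowest estimated loss
```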
"Our approach is motivated by the view that restricting the search to the space of vocabulary tokens is a suboptimal and artificial constraint.", "In the case of AUTOPROMPT, optimizing over a discrete subspace is also inefficient: at each step we have to enumerate a set of candidate tokens, replace the selected trigger token, and re-run the model (Shin et al., 2020).", "The examples in Table 1 also illustrate that optimized textual prompts can be opaque, despite consisting of tokens from the English vocabulary.", "This undermines one argument in favor of natural language prompts, which is that they are human-readable.", "Table 2 (micro-averaged top-1 results on the LAMA benchmark using the BERT-base-cased model, averaged over relations; columns 1-1 / N-1 / N-M / All / UHN): Majority 1.8 / 23.9 / 22.0 / 22.0 / 23.8; LAMA (manual) 68.0 / 32.4 / 24.7 / 31.1 / 21.8; LPAQA (manual + paraphrased) 65.0 / 35.9 / 27.9 / 34.1 / 28.7; AUTOPROMPT (5 [T]s) 58.0 / 46.5 / 34.0 / 42.2 / 31.3; OPTIPROMPT (5 [V]s) 49.6 / 53.1 / 39.4 / 47.6 / 37.5; OPTIPROMPT (10 [V]s) 60.7 / 53.2 / 39.2 / 48.1 / 37.9; OPTIPROMPT (manual) 59.6 / 54.1 / 40.1 / 48.6 / 38.4.", "In this view, we propose OPTIPROMPT, a method for continuous prompt optimization.", "Rather than limiting the search to the space of discrete tokens, OPTIPROMPT searches for optimal prompts directly, composing prompts using any vector in the embedding space.", "We first follow AUTOPROMPT and define a prompt in the following form: t_r = [X] [V]_1 [V]_2 ... [V]_m [MASK], where each [V]_i ∈ R^d is a dense vector with the same dimension as the LM's input embedding (e.g., 768 for BERT-base) and the number of [V] vectors is set to a pre-defined number m.", "Treating prompts as dense vectors allows us to search for optimal prompts much more efficiently.", "Given some initial values for the [V]_i, we keep all other model parameters fixed and use gradient descent to minimize the negative log-likelihood of a training set: $\mathcal{L}_r = -\frac{1}{|\mathcal{D}_r|} \sum_{(s,o) \in \mathcal{D}_r} \log P(\texttt{[MASK]} = o \mid t_r(s))$, where $\mathcal{D}_r$ is the set of (subject, object) pairs with relation r and t_r represents the prompt template for relation r with the subject tokens s substituted for the placeholder [X].", "In this basic form, we pick a fixed value for m (treated as a hyperparameter) and randomly initialize all the [V] tokens.", "We also consider a more sophisticated variant that uses manual prompts (we use the prompts provided in the LAMA benchmark) to decide the number and the position of the [V] tokens for each relation, and initializes each [V]_i with the pre-trained input embedding of the corresponding token in the manual prompt.", "As shown in Table 1, we can convert a manual prompt [X] is [MASK] citizen into t_r = [X] [V]_1 [MASK] [V]_2, and use the embeddings of is and citizen to initialize [V]_1 and [V]_2 respectively.", "Our motivation is that a good initialization is likely to be important in this challenging non-convex optimization problem.", "Setup: We train OPTIPROMPT using the data collected by Shin et al. (2020), which contains 800 training examples with 200 held out for development.", "For our main experiments, we probe the BERT-base-cased model, and we compare other pre-trained language models in Appendix C.", "We report top-1 micro-averaged accuracy: $\frac{1}{|\mathcal{R}|} \sum_{r \in \mathcal{R}} \frac{1}{|\mathcal{D}_r|} \sum_{(s,o) \in \mathcal{D}_r} \mathbb{1}[\hat{o} = o]$, where $\mathcal{R}$ is the set of relations, $\mathcal{D}_r$ is the set of (subject, object) pairs with relation r, and $\hat{o} = \arg\max_{o'} P(\texttt{[MASK]} = o' \mid t_r(s))$.", "More implementation details can be found in Appendix B.1.",
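A minimal sketch of this training loop is given below, assuming the HuggingFace transformers library and PyTorch; the random initialization scale, learning rate, and the single illustrative fact are placeholders, not the paper's exact settings.

```python
# OPTIPROMPT-style continuous prompt optimization: m trainable prompt
# vectors are spliced between the subject and [MASK] embeddings while all
# BERT parameters stay frozen.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
for p in model.parameters():                   # freeze the LM
    p.requires_grad = False

d, m = model.config.hidden_size, 5
prompt_vecs = torch.nn.Parameter(torch.randn(m, d) * 0.02)  # the [V]_i
optimizer = torch.optim.Adam([prompt_vecs], lr=3e-3)
embed = model.get_input_embeddings()

def forward_log_prob(subject: str, obj_id: int) -> torch.Tensor:
    """log P([MASK] = o | [CLS] s [V]_1 ... [V]_m [MASK] [SEP])."""
    subj_ids = tokenizer(subject, add_special_tokens=False,
                         return_tensors="pt").input_ids
    cls_, sep_, mask_ = (embed(torch.tensor([[i]])) for i in
                         (tokenizer.cls_token_id, tokenizer.sep_token_id,
                          tokenizer.mask_token_id))
    seq = torch.cat([cls_, embed(subj_ids), prompt_vecs.unsqueeze(0),
                     mask_, sep_], dim=1)
    logits = model(inputs_embeds=seq).logits
    mask_pos = seq.size(1) - 2                 # [MASK] sits before [SEP]
    return torch.log_softmax(logits[0, mask_pos], dim=-1)[obj_id]

# One gradient step on a single (illustrative) fact of one relation:
loss = -forward_log_prob("Dante", tokenizer.convert_tokens_to_ids("Florence"))
loss.backward()
optimizer.step()
optimizer.zero_grad()
```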
"LAMA results: Our results are shown in Table 2.", "Overall, OPTIPROMPT outperforms the previously reported results in terms of accuracy on the LAMA benchmark.", "Compared to AUTOPROMPT, our models perform 5.4% to 6.4% higher on LAMA and 6.2% to 7.1% higher on the more difficult LAMA-UHN benchmark.", "(For AUTOPROMPT, we obtain a slightly different accuracy of 42.2% by evaluating their released prompts, instead of the 42.9% reported in their paper; we suspect that this is due to a discrepancy in the vocabulary used in different papers, and we use the vocabulary provided in the LAMA benchmark for all evaluations: https://github.com/facebookresearch/LAMA#unified-vocabulary.)", "The improvement is consistent across all categories, with the exception of the 1-1 category, which contains two relations, capital and its inverse, capital of.", "Interestingly, the prompt that yields the best results in this category is the manual prompt, with LPAQA and AUTOPROMPT prompts performing steadily worse.", "We speculate that there are very few prompts that elicit this relation with high accuracy and that they are difficult to find via stochastic, non-convex optimization.", "We also find that initializing the prompt vectors using the manually written prompts improves performance consistently.", "This confirms our intuition that the manual initialization provides a good prior for finding a good solution to the non-convex optimization problem.", "The results are broken down by relation in Table 8 in the Appendix.", "Our factual probing results confirm that OPTIPROMPT is an effective approach, outperforming the best previous method by 6.4% on the LAMA benchmark.", "However, can we conclude that BERT encodes 6.4% more facts than was previously known?", "Our prompts, like LPAQA and AUTOPROMPT, are optimized on in-distribution Wikidata relations, which raises the possibility that they exploit some regularities in the underlying fact distribution.", "In this section we aim to answer two questions.", "First, are there patterns in the Wikidata fact distribution that a statistical model could theoretically exploit to predict unseen facts?", "Second, are optimized prompts capable of exploiting these patterns in practice?", "We first examine whether it is possible to predict any facts by just looking at the training data.", "The simplest pattern is the class prior P(o | r): if one or two object labels dominate the relation r, it is easier to guess them regardless of the subject entity.", "A more sophisticated pattern is to find a correlation between subject tokens and object labels, that is, to estimate P(o | r, w_1, ..., w_|s|), where w_1, ..., w_|s| ∈ V are the tokens of the subject name.", "To see whether such patterns exist, we fit two simple probabilistic models to the Wikidata training set collected by Shin et al. (2020).", "The first model always predicts the majority class, with class priors learned from the training data, and the second is a Naive Bayes classifier (bag-of-words) with add-one smoothing (see details in Appendix B.2).",
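The two baselines can be sketched as follows, assuming a hypothetical list of (subject, object) training pairs for a single relation; this is an illustration of the idea, not the exact implementation in Appendix B.2.

```python
# A majority-class model and a bag-of-words Naive Bayes model with
# add-one smoothing, fit to (subject, object) pairs of one relation.
import math
from collections import Counter, defaultdict

def fit(pairs):
    class_counts = Counter(obj for _, obj in pairs)     # for P(o | r)
    token_counts = defaultdict(Counter)                 # object -> token counts
    vocab = set()
    for subj, obj in pairs:
        for tok in subj.lower().split():
            token_counts[obj][tok] += 1
            vocab.add(tok)
    return class_counts, token_counts, vocab

def predict_majority(class_counts):
    return class_counts.most_common(1)[0][0]

def predict_naive_bayes(subject, class_counts, token_counts, vocab):
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for obj, n in class_counts.items():
        score = math.log(n / total)                     # log P(o | r)
        denom = sum(token_counts[obj].values()) + len(vocab)
        for tok in subject.lower().split():             # log P(w_i | o, r)
            score += math.log((token_counts[obj][tok] + 1) / denom)
        if score > best_score:
            best, best_score = obj, score
    return best
```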
"Table 3 shows the accuracy of these models on the LAMA benchmark, averaged over relations.", "Table 3 (results for simple classifiers fit to the Wikidata training data and evaluated on the LAMA test set; Class Prior / Naive Bayes): All 17.3 / 24.6; 1-1 0.2 / 0.3; N-1 23.2 / 28.6; N-M 11.0 / 21.8; member of 2.2 / 59.6; manufacturer 8.9 / 62.0.", "The majority class model performs well because, on some relations, well over half of the examples are from the majority class (these include native language, 60% French, and continent, 72% Antarctica).", "The Naive Bayes baseline performs even better in all categories by learning correlations between subject tokens and object labels.", "This analysis complements an observation of Poerner et al. (2019), who point out that BERT can exploit superficial information in a cloze prompt to guess the correct answer, for example, predicting that people with stereotypically Italian names were likely born in Rome.", "Our results show that it is possible to learn these correlations even without prior information about entity names, and there might be other, subtler patterns in the Wikidata distribution.", "We have shown that the training data clearly encodes certain regularities, and simple statistical models can learn to fit the training data.", "In the following, we study whether a prompt optimization method built with pre-trained language models is expressive enough to exploit these regularities in practice.", "We attempt to answer this question by means of two random controls, inspired by similar proposals from linguistic probing.", "In our Random Model (RM) baseline, we optimize prompts to elicit facts from a neural network with the same architecture as the pre-trained LM but with randomly initialized parameters.", "This is analogous to a control function (Pimentel et al., 2020), a function that removes information from a linguistic representation.",
"Any successful predictions in this setting must be the result of optimizing on training data.", "We also consider a Random Embeddings (RE) baseline, where we reinitialize only the input embeddings.", "(In the RE setting, the classifier head of the model is also reinitialized, as the output embeddings are tied to the input embeddings.)", "This is analogous to a control task (Hewitt and Liang, 2019), a variant of the probing task in which word types are associated with random labels.", "(Hewitt and Liang (2019) consider tasks like part-of-speech tagging, where each word type can be associated with a randomly selected tag; we randomize the inputs rather than the labels, which preserves most of the statistical correlations between subject token types and object labels but removes lexical information from the embeddings.)", "Our motivation is that the Random Model setting is more difficult to optimize, so it might underestimate the ways a prompt model could exploit information from the training data.", "Finally, we directly fine-tune a reinitialized BERT model on the training data, with the goal of getting a better estimate of the number of LAMA facts that could be predicted from the training data.",
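The three controls might be instantiated as follows, assuming the HuggingFace transformers library; the reinitialization details are a sketch of the idea rather than the paper's exact recipe.

```python
# Sketch of the control models: Random Model (RM), Random Embeddings (RE),
# and a reinitialized model to be fine-tuned directly on the training data.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig.from_pretrained("bert-base-cased")

# Random Model (RM): same architecture, all parameters randomly initialized.
random_model = BertForMaskedLM(config)

# Random Embeddings (RE): pre-trained weights, but reinitialize the word
# embeddings; re-tying the weights propagates the random embeddings to the
# output head, since input and output embeddings are tied.
re_model = BertForMaskedLM.from_pretrained("bert-base-cased")
re_model.get_input_embeddings().weight.data.normal_(
    mean=0.0, std=config.initializer_range)
re_model.tie_weights()

# Fine-tuned control: train a reinitialized model on the prompt training
# data (training loop omitted) to estimate what the data alone predicts.
ft_model = BertForMaskedLM(config)
```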
"The results are shown in Figure 2 (see implementation details and more results in Appendix B.1 and Table 8).", "In the Random Embeddings setting, both AUTOPROMPT and OPTIPROMPT are capable of finding prompts that elicit some correct predictions.", "In the Random Model setting, AUTOPROMPT gets 0% of predictions correct, presumably because it is more difficult to optimize, but OPTIPROMPT is still capable of finding successful prompts.", "Most successful predictions are obtained by finding a prompt that elicits the majority class label, although OPTIPROMPT also makes a number of correct predictions that cannot be attributed to this strategy.", "Our qualitative analysis suggests that these prompts exploit both class statistics and correlations between objects and subject tokens (Appendix A.2).", "Fine-tuning BERT results in even higher accuracy, indicating that there are patterns that prompts fail to exploit.", "The random controls represent a challenging setting for prompt optimization, and it is possible that the prompts are better able to exploit the training data when they have access to the full pre-trained BERT model.", "We find evidence that this is the case by calculating how often each prompt elicits the training majority class label on LAMA, plotting the results in Figure 3.", "Both AUTOPROMPT and OPTIPROMPT are prone to over-predicting the majority class label.", "For example, although AUTOPROMPT gets 0% accuracy in the RM setting, it finds a prompt that elicits the majority label more than 95% of the time for six relations when optimized on the pre-trained BERT model.", "(Shin et al. (2020) attempt to prevent the model from using this strategy by filtering out prompts that contain proper nouns or gold object labels, but this evidently is not enough; for example, their prompt for the position held relation is [X] explorers voting municipal consecrated [MASK]., which elicits bishop for 100% of LAMA examples.)", "LPAQA prompts predict the majority class less often, possibly because they are less effective at fitting the training distribution.", "However, it is still clear that LPAQA prompts also encode the distribution of the training data.", "For instance, the highest-ranked occupation prompts discovered by LPAQA include prompts such as [MASK] and actors [X] and [MASK] and player [X]., which reflect several of the most common occupations in Wikidata.", "We also discuss examples in Appendix A.2 of cases where LPAQA finds subtle changes to the prompt template that lead the model to predict the majority label more often than the manual prompt and the true test distribution.", "All the above evidence shows that optimized prompts can learn new facts to some extent.", "How should we interpret our factual probing results in this light?", "To get another perspective on the relative improvement, we partition LAMA into an easy subset and a hard subset (examples from each subset can be found in Table 5).", "The easy subset consists of the facts that can be correctly predicted by any of three models fit to the training data: the Naive Bayes model described in Section 4.2 and a fine-tuned BERT model with either the token embeddings reinitialized or all parameters reinitialized.", "The easy subset serves as an estimate of the set of facts that can be predicted from training data.", "The hard subset consists of the remaining facts.",
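The partition itself is a simple union over control predictions; a sketch, assuming the three control predictors are given as functions from a (subject, relation) pair to a predicted object:

```python
# Split LAMA facts into an easy subset (predicted by any control model)
# and a hard subset (the remainder).
def partition_lama(facts, control_predictors):
    """facts: iterable of (subject, relation, object) triples.
    control_predictors: list of functions (subject, relation) -> object."""
    easy, hard = [], []
    for subj, rel, obj in facts:
        if any(predict(subj, rel) == obj for predict in control_predictors):
            easy.append((subj, rel, obj))
        else:
            hard.append((subj, rel, obj))
    return easy, hard
```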
"Table 4 shows the results of each prompt on these two subsets of LAMA (the per-relation results are given in Table 9).", "First, we observe that all the probing methods achieve a much higher accuracy on the easy subset.", "Using more sophisticated prompt optimization techniques tends to result in big improvements on the easy subset of LAMA and smaller improvements on the hard subset.", "OPTIPROMPT outperforms AUTOPROMPT by 7.4% on the easy examples; on the hard examples, where we have filtered out facts that we know can be predicted from the training data, OPTIPROMPT also yields a big improvement (+6.3%).", "This suggests that OPTIPROMPT is both better at learning from training data and better at eliciting facts from an LM.", "For a more qualitative analysis, we randomly sample ten facts from each subset, keeping only facts that are predicted correctly by at least one model and excluding examples that have the majority class label.", "The examples, shown in Table 5, give a better idea of the types of predictions elicited by different prompts.", "For example, both AUTOPROMPT and OPTIPROMPT appear to be exploiting the training data in some cases.", "In the easy subset, they elicit more accurate predictions in cases where the answer is a token in the subject name.", "In the hard subset, they show signs of having overfit to the training distribution, incorrectly predicting the most common object labels for continent (Antarctica) and manufacturer (IBM).", "OPTIPROMPT performs better than the other prompts on some facts in both categories.", "On an easy profession example, while AUTOPROMPT incorrectly predicts the majority label (politician), OPTIPROMPT, along with our Naive Bayes model, apparently encodes a lexical correlation between some aspect of the subject's name and the correct label, actor.", "On the other hand, OPTIPROMPT outperforms the other prompts on two more difficult examples: Francis Hagerup used to work in Oslo and William Lyon Mackenzie King used to work in Ottawa.", "In both cases, LPAQA predicts the training majority label (London), AUTOPROMPT gets geographically closer (Copenhagen and Montreal), and OPTIPROMPT predicts the correct city.", "We note that we cannot conclude that there is no way to predict these hard facts from training data.", "A more general limitation of this analysis is that it does not allow us to say which strategy a model uses to make a particular prediction.", "Many facts can be predicted either by learning the class prior; by learning a lexical correlation between subject tokens and objects; by exploiting lexical information from the LM; or because the LM genuinely encodes information about a particular entity.", "Still, the qualitative examples reveal interesting patterns in the behavior of the different prompt models that could not be observed from the summary accuracy results on the LAMA benchmark, and looking at specific predictions across a number of prompts gives us more evidence for deciding what kind of information the LM encodes about a particular fact.", "Our experiments show that OPTIPROMPT is an effective optimization algorithm, outperforming prior work at the task of eliciting facts from a pre-trained language model.", "However, our results are complicated by the fact that any data-driven optimization can find prompts that encode new information from the training data.", "This leaves open the question of which method we should select if we are interested in factual probing.", "Continuous vs. discrete prompts: We find that both continuous and discrete optimization are capable of finding prompts that exploit the training data.", "Even when the prompt is discrete, it is rarely clear why a prompt elicits a particular prediction (for an illustration, see Appendix A.2 for a list of the AUTOPROMPT templates that elicit the majority class label more than 95% of the time).", "Hence, we believe that continuous prompting is preferable, because it is easier and more efficient to optimize, and makes better predictions (in both easy and hard subsets).", "On the other hand, one drawback of OPTIPROMPT (which is shared by AUTOPROMPT) is that we need white-box access to the LM to compute the gradients.", "Discrete prompts will still be necessary in cases where the model parameters are not available, for example in the case of very large language models that are provided over an API.", "Learning vs. learning to recall: Regardless of how we choose to optimize prompts, it remains difficult to say why a model made a particular prediction, whether it was learned from training data or encoded in the LM.", "Some avenues for future work might be to consider techniques for attributing predictions to specific training instances, with the goal of developing a causal understanding of how facts are acquired during pre-training or prompt optimization.", "More generally, our real goal is to understand how pre-trained language models learn and represent information.", "Prompt-based probing might provide some insight into this question, but we hope that future research will eventually be able to provide more mechanistic explanations for neural network behavior.", "For example, it would be interesting to understand how information about entities is laid out in neural network parameters and later retrieved in response to an input prompt.",
"Our work follows from the line of factual probing experiments initiated by Petroni et al. (2019), who introduced the LAMA benchmark for cloze-style factual probing.", "Subsequent work on LAMA has introduced data-driven methods for optimizing prompts (Jiang et al., 2020; Shin et al., 2020).", "Poerner et al. (2019) point out that many facts in LAMA can be predicted using lexical clues, and they introduce a new benchmark, LAMA-UHN, that is less susceptible to these heuristics.", "Our work follows these projects by introducing (a) more effective techniques for optimizing prompts, and (b) a more comprehensive approach for accounting for the role of train/test overlap.", "Concurrently with this work, other authors explore continuous prompt optimization: Haviv et al. (2021) use an encoder to map a manually written prompt to a sequence of continuous vectors, which are then replaced with the discrete tokens that are nearby in embedding space; Li and Liang (2021) propose Prefix-Tuning, which fine-tunes the left-most hidden representations in auto-regressive language models; and Liu et al. (2021) use an LSTM to generate a sequence of prompt vectors.", "Prompting has been explored more generally as a method for achieving few-shot learning with language models (Brown et al., 2020; Schick and Schütze, 2020; Gao et al., 2020).", "Linguistic probing is an extensive area of research that we do not attempt to summarize here (see Rogers et al., 2020 for an overview).", "Our work is most related to recent proposals about how to measure whether a probe is extracting information from a representation or learning to predict the annotation from probe training data.", "These include random baselines (Hewitt and Liang, 2019) and information-theoretic measurements (Voita and Titov, 2020).", "We adopt the notion of control functions from Pimentel et al. (2020).", "Our study also relates to a larger category of work diagnosing short-cut learning (Geirhos et al., 2020) in neural NLP models.", "McCoy et al. (2019) discover that models like BERT are often right for the wrong reason, exploiting shallow heuristics rather than underlying linguistic structure, and similar effects have been discovered in many other tasks (Sugawara et al., 2018; Wallace et al., 2019).",
"We introduce OPTIPROMPT, an effective continuous method for optimizing prompts.", "Applied to factual probing, OPTIPROMPT outperforms the best previous prompt method by 6.4% on the LAMA benchmark.", "We find that the typical training data used for prompt optimization reveals useful information about the underlying task distribution, to the point that search algorithms can find prompts that recover facts even from a randomly initialized model.", "By comparing the predictions of different prompt methods across our different controls, we can form a more detailed understanding of how different prompts behave and what they can reveal about pre-trained language models.", "Our experiments illustrate that the facts recovered from a pre-trained language model should not be considered real facts.", "Optimizing any kind of statistical model for factual prediction is likely to devolve into stereotype-learning as the model learns lexical correlations between entity names and object labels.", "This problem is more pronounced if our training distribution comes from a source like Wikidata, which we find to be imbalanced.", "More generally, language models that are trained on the Internet will model the toxic and harmful language that is found there, a well-documented finding for pre-trained language models like BERT (e.g., Gehman et al., 2020; Nadeem et al., 2020).", "Using such models for factual prediction is liable to amplify those biases.", "OPTIPROMPT is intended to be a diagnostic tool and general-purpose optimization method, not a way to use BERT as a knowledge base.", "We thank Zhengbao Jiang for answering questions about LPAQA.", "We thank the members of the Princeton NLP group and the anonymous reviewers for their valuable comments and feedback.", "This work is supported in part by a Graduate Fellowship at Princeton University." ]
[ "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "other", "method", "other", "other", "other", "other", "abstain", "method", "other", "method", "abstain", "other", "method", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "The analysis of data in which multiple languages are represented has gained popularity among computational linguists in recent years.", "So far, much of this research focuses mainly on the improvement of computational methods and largely ignores linguistic and social aspects of C-S discussed across a wide range of languages within the long-established literature in linguistics.", "To fill this gap, we offer a survey of code-switching (C-S) covering the literature in linguistics with a reflection on the key issues in language technologies.", "From the linguistic perspective, we provide an overview of structural and functional patterns of C-S focusing on the literature from European and Indian contexts as highly multilingual areas.", "From the language technologies perspective, we discuss how massive language models fail to represent diverse C-S types due to lack of appropriate training data, lack of robust evaluation benchmarks for C-S (across multilingual situations and types of C-S) and lack of end-to-end systems that cover sociolinguistic aspects of C-S as well.", "Our survey will be a step towards an outcome of mutual benefit for computational scientists and linguists with a shared interest in multilingualism and C-S.", "It is common for individuals in multilingual communities to switch between languages in various ways, in speech and in writing.", "In example 1, a bilingual child alternates between German and Turkish (in bold) to describe her teacher at school.", "Note that the Turkish possessive case marker ( -si ) is attached to a German noun (Karakoc and Herkenrath, 2019).", "The goal of this paper is to inform researchers in computational linguistics (CL) and language technologies about the linguistic and social aspects of code-switching (C-S) found in multilingual contexts (e.g. Europe and India) and how linguists describe and model them.", "Our intent is to increase clarity and depth in computational investigations of C-S and to bridge the fields so that they might be mutually reinforcing.", "It is our hope that interested readers can profit from the insights provided by the studies reported in this survey, for instance, in understanding the factors that guide C-S outcomes or in making use of existing annotation schema across multilingual contexts.", "For linguists, the specific ways in which languages are switched matters.", "The use of a single Spanish word in an English tweet (ex. 2) is not as syntactically complicated as the integration in ex.", "1. 
"In fact, it may not signal multilingualism at all, but simply borrowing.", "Many words, particularly anglicisms, circulate globally: marketing, feedback, gay.", "To produce example (2), the speaker needs to know only one Spanish word.", "But to produce example (1), the speaker has to know what word order and case marker to use, and which languages they should be drawn from.", "NLP scholars are not always concerned with the difference between examples (1) and (2), so that, with some exceptions (Bhat et al., 2016), grammatical work in NLP tends to rely heavily on the notion of a matrix language model advanced by Joshi (1982) and later adapted by Myers-Scotton (1997) as the Matrix Language Frame (MLF) model.", "The MLF holds that one language provides the grammatical frame into which words or phrases from another are embedded, and its scope of application is a clause.", "Thus, it would not apply to the alternational English-Afrikaans C-S in example (3), as each clause is in a separate language (Dulm, 2007).", "Although it dominates computational approaches to C-S, the MLF is contested on empirical and theoretical grounds.", "The consistent identification of a matrix language is not always possible, the criteria for defining it are ambiguous, and its scope is limited (Meakins, 2012; Bhat et al., 2016; Adamou, 2016; MacSwan, 2000; Auer and Muhamedova, 2005).", "Bullock et al. (2018) computationally show that different ways of determining the matrix language only reliably converge over sentences with simple insertions, as in example (2).", "For many linguists, the MLF is not the only way, or even an adequate way, to theorize C-S.", "The Equivalence Constraint (Poplack, 1980) captures the fact that C-S tends to occur at points where the linear structures of the contributing languages coincide, as when the languages involved share word order.", "Other syntactic theories are built on the differences between lexical and functional elements, including the Government Constraint (DiSciullo et al., 1986) and the Functional Head Constraint (Belazi et al., 1994).", "Incorporating the latter in NLP experiments has been shown to improve the accuracy of computational and speech models (Li and Fung, 2014; Bhat et al., 2016).", "Functional elements include negative particles and auxiliaries, which are respectively classified as Adverbs and Verbs (lexical classes) in some NLP tag sets (Al-Ghamdi et al., 2016).", "This means that NLP experiments often use annotations that are too coarse to be linguistically informative with regard to C-S.", "Constraint-free theories (Mahootian and Santorini, 1996; MacSwan, 2000) hold that nothing restricts switching apart from the grammatical requirements of the contributing languages.", "Testing such theories in NLP experiments would require syntactically parsed corpora, which are rare for mixed language data (Partanen et al., 2018).", "In sum, working together, theoretical and computational linguists could create better tools for processing C-S than those currently available.", "In addition to focusing on the linguistic aspects and constraints on C-S, linguists are also interested in the social and cognitive motivations for switching across languages.", "What a (multilingual) speaker is trying to achieve by switching languages can affect its structural outcome.", "Linguists recognize that pragmatic, interactional, and socio-indexical functions may condition C-S patterns.", "For instance, Myslín and Levy (2015) demonstrate that Czech-English speakers switch to English for high-information content words in prominent prosodic positions when speaking Czech.",
"Other uses of C-S with structural traces include signalling an in-group identity through backflagging (Muysken, 1995) or emblematic tag-switching (Poplack, 1980).", "These are words or phrases that are used at the edge of clauses (e.g., Spanish ojalá or English so).", "Other functions, among them quoting a speaker, getting the attention of an interlocutor, or reiterating an utterance to soften or intensify a message, will also be indicated via C-S in predictable linguistic constructions, such as with verbs of 'saying', vocative expressions, and sequential translation equivalents (Gumperz, 1982; Zentella, 1997).", "According to Clyne (1991), there are eight factors (e.g. topic, type of interaction, interlocutors, role relationship, communication channel) that can influence C-S choices.", "Lavric (2007) explains C-S choices in line with politeness theory, focusing on prestige and face-saving moves in multilingual conversations.", "Heller (1992) takes a macro-social view, arguing that French-English C-S in Quebec may signal a political choice among both dominant and subordinate groups.", "Gardner-Chloros and Edwards (2004) suggest that social factors influence language choice, with different generations of speakers from the same community exhibiting very different C-S patterns.", "Similarly, Sebba (1998) argues that as speakers cognitively construct equivalence between morphemes, words, and phrases across their languages, communities of the same languages may do this differently.", "Evidence from computational studies suggests that C-S is speaker-dependent (Vu et al., 2013).", "Gender and identity also play a role in C-S practices in the English and Greek Cypriot community in London (Finnis, 2014).", "From a computational perspective, Papalexakis et al. (2014) investigated the factors that influence C-S choices (Turkish-Dutch) in computer mediated interaction and how to predict them automatically.", "While C-S implies active alternation between grammatical systems, borrowing does not.", "It is difficult to know if a lone word insertion (e.g. example (2)) constitutes a borrowing or a C-S without considering how the items are integrated into the grammar of the receiving language (Poplack et al., 1988).", "When such analyses are done, most lone-item insertions are analyzable as one-time borrowings, called nonce borrowings (Sankoff et al., 1990).", "Similarly, what looks like complex C-S may not be perceived as switching at all.", "Auer (1999) distinguishes a continuum of mixing types: prototypical C-S is pragmatic and intentional, Language Mixing serves no pragmatic purpose, and Mixed Languages are the single code of a community.", "These can look structurally identical, but the latter can be modeled as a single language (e.g. languages like Michif Cree (Bakker, 1997) or Gurindji Kriol (Meakins, 2012)) rather than the intertwining of two.", "Bilaniuk (2004) describes the Surzhyk spoken by urban Russian-Ukrainian bilinguals (in Ukraine) as 'between C-S and Mixed Language', since speakers are highly bilingual and the direction of switching is indeterminate.", "Loan translation and transfer involve the words from only one language but the semantics and grammatical constructions from the other.", "In example 4, the Turkish verb yapmak, 'to do', takes on the Dutch meaning of doen in the Turkish spoken in the Netherlands (Doğruöz and Backus, 2009).",
"4. İlkokul-u İstanbul-da yap-tı-m.", "primary.school-ACC Istanbul-LOC do-PAST-1SG", "'I finished primary school in Istanbul.'", "In transfer, grammatical constructions can be borrowed from one language to another without the words being borrowed.", "Treffers-Daller (2012) demonstrates the transfer of verb particles from Germanic languages into French.", "In Brussels French (Belgium), the construction chercher après, 'look after' (for 'look for'), is a translation of the Dutch equivalent and, in Ontario French (Canada), chercher pour is the translation equivalent of English 'look for'.", "In reference French (France), there is normally no particle following the verb.", "The degree to which linguistic features like loan translation and transfer can be found alongside C-S is unknown.", "The contexts in which people acquire and use multiple languages in Europe are diverse.", "Some acquire their languages simultaneously from birth, while others acquire them sequentially, either naturally or via explicit instruction.", "Multilingualism is the norm in many zones where local residents may speak different languages to accommodate their interlocutors.", "Speakers who use local dialects or minoritized varieties may also be engaged in C-S when switching between their variety and a dominant one (Mills and Washington, 2015; Blom and Gumperz, 1972).", "C-S in the bilingual language acquisition of children has been studied across language contact contexts in Europe.", "In Germany, Herkenrath (2012) and Pfaff (1999) focused on Turkish-German C-S, and Meisel (1994) on German-French C-S of bilingual children.", "From a comparative perspective, Poeste et al. (2019) analyzed C-S among bilingual, trilingual, and multilingual children growing up in Spain and Germany.", "In the Netherlands, Bosma and Blom (2019) focused on C-S among bilingual Frisian-Dutch children.", "In addition to analyzing C-S in children's speech, Juan-Garau and Pérez-Vidal (2001) and Lanza (1998) investigated C-S in the interaction patterns between bilingual children and their parents (Spanish-Catalan and English-Norwegian respectively).", "Within an educational setting, Kleeman (2012) observed C-S among bilingual (North Sami-Norwegian) kindergarten children in the North of Norway.", "Similarly, Jørgensen (1998) and Cromdal (2004) report the use of C-S for resolving disputes among bilingual (Turkish-Danish) children in Denmark and multilingual (Swedish-English and/or a non-Scandinavian language) children in Sweden respectively.", "C-S does not only take place between standard languages but between minority languages and dialects as well.", "For example, Themistocleous (2013) studied C-S between Greek and Cypriot Greek, and Deuchar (2006) focused on the C-S between Welsh and English in the UK.", "Berruto (2005) reports cases of language mixing between standard Italian and Italoromance dialects in Italy.", "In the Balkans, Kyuchukov (2006) analyzed C-S between Turkish-Bulgarian and Romani in Bulgaria.", "C-S between dialects and/or standard vs. minority languages in computer mediated interaction was analyzed by Siebenhaar (2006) among Swiss-German dialects and by Robert-Tissot and Morel (2017) through SMS corpora collected across Germanic (English and German) and Romance (French, Spanish, Italian) languages in Switzerland.",
"C-S is commonly observable across immigrant contexts in Europe.", "In the UK, Georgakopoulou and Finnis (2009) described the C-S patterns between English and Cypriot Greek, while Issa (2006) focused on the C-S between English and Cypriot Turkish communities in London.", "Wei and Milroy (1995) analyzed the C-S between English and Chinese from a conversational analysis point of view, based on the interactions of bilingual (Chinese-English) families in Northeastern England.", "In addition, Ożańska-Ponikwia (2016) investigated Polish-English C-S in the UK as well.", "C-S among immigrant community members has also been widely studied in Germany (e.g. Turkish-German C-S by Keim (2008) and Çetinoğlu (2017), Russian-German C-S by Khakimov (2016)).", "In the Netherlands, C-S studies include Turkish-Dutch C-S by Backus (2010) and Dutch-Moroccan C-S by Nortier (1990).", "In Belgium, Meeuws and Blommaert (1998) studied the French-Lingala-Swahili C-S among immigrants from Zaire, and Treffers-Daller (1994) studied French-Dutch C-S in Brussels.", "In Spain, Jieanu (2013) describes the Romanian-Spanish C-S among Romanian immigrants.", "In addition to the C-S analyses within spoken interactions of immigrant communities across Europe, there are also studies of C-S within computer mediated communication.", "These studies include Greek-German C-S by Androutsopoulos (2015) in Germany, Turkish-Dutch C-S by Papalexakis et al. (2014) and Papalexakis and Doğruöz (2015), and a comparison of Turkish-Dutch and Moroccan-Dutch C-S by Dorleijn and Nortier (2009) in the Netherlands.", "Similarly, Marley (2011) compared French-Arabic C-S within computer mediated interaction across Moroccan communities in France and the UK.", "In addition to daily communication, some linguists are also interested in the C-S observed in historical documents.", "While Swain (2002) explored Latin-Greek C-S by Cicero (the Roman statesman), Dunkel (2000) analyzed C-S in his communication with Atticus (a Roman philosopher who studied in Athens) in the Roman Empire.", "Argenter (2001) reports cases of language mixing within the Catalan Jewish community (in Spain) in the 14th and 15th centuries, and Rothman (2011) highlights the C-S between Italian, Slavic and Turkish in the historical documents about Ottoman-Venetian relations.", "In Switzerland, Volk and Clematide (2014) worked on detecting and annotating C-S patterns in the diachronic and multilingual (English, French, German, Italian, Romansh and Swiss German) Alpine Heritage corpus.", "Within the media context, Martin (1998) investigated English C-S in written French advertising, and Onysko (2007) investigated English C-S in German written media through corpus analyses.", "Zhiganova (2016) indicates that German speakers perceive C-S into English for advertising purposes as having both positive and negative consequences.", "Similar to individuals, institutions and/or organizations can also communicate multilingually with their members and/or audience.", "For example, Wodak et al. (2012) analyzed the C-S and language choice at the institutional level in European Union institutions.",
"According to the 2011 Census (Chandramouli, 2011), 26% of the population of India is bilingual, while 7% is trilingual.", "There are 121 major languages and 1599 other languages in India, out of which 22 (Assamese, Bangla, Bodo, Dogri, Gujarati, Hindi, Kashmiri, Kannada, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Tamil, Telugu, Sanskrit, Santali, Sindhi, Urdu) are scheduled languages with official recognition (almost 97% of the population speaks one of the scheduled languages).", "Most of the population (about 93%) speak languages from the Indo-Aryan (Hindi, Bengali, Marathi, Urdu, Gujarati, Punjabi, Kashmiri, Rajasthani, Sindhi, Assamese, Maithili, Odia) and Dravidian (Kannada, Malayalam, Telugu, Tamil) language families.", "The census excludes languages with fewer than 10,000 speakers.", "Given this linguistic diversity and widespread multilingualism, it is probably difficult to find monolingual speakers in India.", "Kachru (1978) provides one of the early studies on the types and functions of C-S in India, with a historical understanding of the multilingual context.", "In addition to the mutual influences and convergence of Indo-Aryan and Dravidian languages internally, he mentions Persian and English as outside influences on Indian languages.", "Similarly, Sridhar (1978) provides an excellent comparative overview of the functions of C-S in Kannada in relation to the Perso-Arabic vs. English influences.", "Kumar (1986) gives examples of the formal (e.g. within NPs, PPs, VPs) and functional (i.e. social and stylistic) aspects of Hindi-English C-S from a theoretical point of view.", "More recently, Doley (2013) explains how fishmongers in a local fish market in Assam switch strategically between Assamese, English and local languages to sell their products to a multilingual clientele.", "Another observation about C-S in daily life comes from Boro (2020), who provides examples of English, Assamese and Bodo (another language spoken in the Assam region) C-S and borrowings.", "In addition to English, Portuguese was also in contact with the local languages as a result of colonization in South India.", "For example, Kapp (1997) explains the Portuguese influence through borrowings in Dravidian languages (Kannada and Telugu) spoken in India.",
"Rather than being collected automatically, the C-S examples in the abovementioned studies were (probably) encountered and collected by the authors themselves in daily-life interactions, over a period of time and with limited means.", "Nowadays, such small sets of data would be regarded as insignificant in computational areas of research.", "However, ignoring these studies and data could have serious consequences, since crucial information about the social and cultural dynamics in a multilingual setting would also be lost.", "For example, Nadkarni (1975) illustrates this point by explaining how social factors influence the C-S between the Saraswat Brahmin dialect of Konkani (an Indo-Aryan language) and Kannada (a Dravidian language) in the South of India.", "Both languages have been in contact with each other for over four hundred years.", "Saraswat Brahmins are fluent in both Konkani and Kannada, but they do not speak Konkani with Kannada speakers, and they also do not C-S between Konkani and Kannada.", "Nadkarni (1975) attributes this preference to the high prestige associated with Konkani within the given social context.", "Since Kannada (perceived as less prestigious) is widely spoken in that region, Konkani speakers learn and speak Kannada for functional purposes in daily life, which does not involve C-S.", "C-S in India has been investigated through written media, advertising and the film industry as well.", "Si (2011) analyzed Hindi-English C-S in the scripts of seven Bollywood movies filmed between 1982 and 2004.", "Her results indicate a change in the direction of C-S over the years.", "More specifically, Hindi was the dominant language with occasional switches to English in the early productions, but English became the dominant language, especially for younger generations, in the later productions.", "A similar trend has been observed for Bengali movie scripts as well.", "Through analyzing movie scripts (between the 1970s and 2010s), Chatterjee (2016) finds a drastic increase in the use of bilingual verbs (e.g. renovate koreche, combining English renovate with a form of the Bengali verb 'do') over time and attributes this rise to the increasing popularity of English in Indian society.", "Within the immigrant context, Gardner-Chloros and Charles (2007) focused on the types and functions of C-S between Hindi and English across the TV programs (e.g. highly scripted vs. loosely scripted programs) of a British/Asian cable channel in the UK.", "Although they came across C-S in a variety of TV shows, the least amount of C-S was encountered in the news broadcasts (i.e. highly scripted).", "In general, they encountered less C-S in TV broadcasts than in natural speech, and attribute this to TV personalities' consciousness about 'pure' language use (instead of C-S).", "Similarly, Zipp (2017) analyzed Gujarati-English C-S within a radio show targeting British South Asians living in the US and concluded that C-S was part of identity construction among youngsters (group identity).", "Pratapa and Choudhury (2017) perform a quantitative study of 18 recent Bollywood (Hindi) movies and find that C-S is used for establishing identity, social dynamics between characters and the socio-cultural context of the movie.", "From an advertising point of view, Kathpalia and Wee Ong (2015) analyzed C-S in Hinglish (Hindi, English, Urdu and Sanskrit, according to their definition) billboards about the Amul brand in India.",
"After compiling 1,191 billboard advertisements, they classified the structures and functions of C-S.", "Their results indicate more intra-sentential than inter-sentential C-S on the billboards.", "In terms of function, the advertisers used C-S in figures of speech (e.g. puns, associations, contradictory associations, word creation and repetitions) to attract the attention of the target group.", "Mohanty (2006) provides an extended overview of the multilingual education system in India, exploring the types and quality of schools across a wide spectrum.", "In general, high-cost English Medium (EM) education is valued by upper-class and affluent families.", "Although low-cost EM education is also available for lower-income families, he questions its impact in comparison to education in the local languages.", "Sridhar (2002) explains that C-S is commonly practiced among students in schools across India.", "In addition, she finds it unrealistic to ask students to keep the two languages strictly separate.", "In immigrant contexts, Martin et al. (2006) investigate how Gujarati-English C-S is used among South Asian students in educational settings in the UK.", "Another analysis reveals a shift from Bengali toward English among the younger generations of the immigrant Bengali community in the UK (Al-Azami, 2006).", "In terms of C-S patterns, first-generation immigrants integrate English words while speaking Bengali, whereas English dominates the conversations of younger generations, with occasional switches to Bengali.", "There are also studies of Bengali-English C-S in UK school settings (Pagett, 2006) and in Bangladesh (Obaidullah, 2016).", "However, a systematic comparison of Bengali-English C-S in India, Bangladesh and immigrant settings is lacking.", "In their study of aphasic patients, Shyamala Chengappa and Bhat (2004) report an increased frequency of C-S between Malayalam and English for aphasic patients in comparison to the control group.", "However, there were fewer differences between the groups in terms of the functions of C-S.", "Deepa and Shyamala (2019) find that the amount and types of C-S could be used to differentiate between healthy speakers and mild dementia patients who are bilingual in Kannada and English.", "Although both studies were carried out with few subjects, they offer insights into the use of C-S in health settings as well.", "There has been significant interest in building language technologies for code-switched languages over the last few years, spanning a diverse range of tasks such as Language Identification, Part of Speech Tagging, Sentiment Analysis and Automatic Speech Recognition.", "In the European language context, work has mainly focused on Turkish-Dutch, Frisian-Dutch, Turkish-German and Ukrainian-Russian, with some initial attempts being made in parsing Russian-Komi text.", "In the Indian language context, Hindi-English is the most widely studied language pair for computational processing, with some recent work on Telugu-English, Tamil-English, Bengali-English and Gujarati-English.", "Sitaram et al. (2019) provide a comprehensive survey of research in computational processing of C-S text and speech, and Jose et al. (2020) present a list of datasets available for C-S research.",
"However, despite significant efforts, language technologies are not yet capable of processing C-S as seamlessly as monolingual data.", "We identify three main limitations of the current state of computational processing of C-S: data, evaluation and user-facing applications.", "The use of Deep Neural Networks, which require large amounts of labeled and unlabeled training data, has become the de facto standard for building speech and NLP systems.", "Since C-S languages tend to be low-resourced, building Deep Learning-based models is challenging due to the lack of large C-S datasets.", "Massive multilingual Language Models (LMs) such as multilingual BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) have shown promise in enabling the coverage of low-resource languages without any labeled data by using the zero-shot framework.", "These LMs are typically trained in two phases: a pre-training phase, in which unlabeled data from one or multiple languages may be used, and a fine-tuning phase, in which task-specific labeled data is used to build a system capable of solving the task.", "Since multilingual LMs are trained on multiple languages at the same time, it has been suggested that these models may be capable of processing C-S text (Johnson et al., 2017), with promising results initially reported on POS tagging (Pires et al., 2019).", "Khanuja et al. (2020) found that multilingual BERT outperforms older task-specific models on C-S tasks; however, the performance on C-S is much worse than the performance on the same tasks in a monolingual setting.", "Further, these LMs are trained primarily on monolingual datasets, such as Wikipedia in the case of mBERT, or Common Crawl in the case of XLM-R.", "So, they are either not exposed to C-S data at all during training, or they miss out on several language pairs, types and functions of C-S that are encountered in daily life but not available on the web.", "Since massive multilingual LMs are now replacing traditional models across many NLP applications, it is crucial to consider how they can be trained on C-S data, or made to work for C-S by incorporating other sources of knowledge.",
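As a concrete illustration of this two-phase recipe applied to C-S, the sketch below fine-tunes multilingual BERT on a hypothetical labeled code-switched sentiment dataset, assuming the HuggingFace transformers library; the example text, label set, and hyperparameters are placeholders, not a real benchmark.

```python
# Fine-tuning a pre-trained multilingual LM on a (hypothetical) labeled
# code-switched sentiment dataset.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # e.g. neg / neu / pos

texts = ["yeh movie was so boring yaar"]            # illustrative Hindi-English C-S
labels = [0]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class CSDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cs-sentiment", num_train_epochs=3),
    train_dataset=CSDataset(),
)
trainer.train()
```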
"Much of speech and NLP research is now driven by standard benchmarks that evaluate models across multiple tasks and languages.", "Due to the shortage of standardized datasets for C-S, until recently, the evaluation of C-S models was performed over individual tasks and language pairs.", "Khanuja et al. (2020) and Aguilar et al. (2020) proposed the first evaluation benchmarks for C-S that span multiple tasks in multiple language pairs.", "The GLUECoS benchmark (Khanuja et al., 2020) consists of the following C-S tasks in Spanish-English and Hindi-English: Language Identification (LID), Part of Speech (POS) tagging, Named Entity Recognition (NER), Sentiment Analysis, Question Answering and Natural Language Inference (NLI).", "The LINCE benchmark (Aguilar et al., 2020) covers Language Identification, Named Entity Recognition, Part-of-Speech Tagging, and Sentiment Analysis in four language pairs: Spanish-English, Nepali-English, Hindi-English, and Modern Standard Arabic-Egyptian Arabic.", "Although these benchmarks are important starting points for C-S, it is clear that they do not represent the entire spectrum of C-S, both from the point of view of potential applications and language pairs.", "Further, it is important to note that while state-of-the-art models perform well on tasks such as LID, POS tagging and NER, they are only slightly better than chance when it comes to harder tasks like NLI, showing that current models are not capable of processing C-S language.", "Moreover, many of the C-S tasks in the benchmarks above consist of annotated tweets, which only represent a certain type of C-S.", "Due to these limitations, we currently do not have an accurate picture of how well models are able to handle C-S.", "Although speech and NLP models for C-S have been built for various applications, a major limitation of the work done so far in computational processing of C-S is the lack of end-to-end user-facing applications that interact directly with users in multilingual communities.", "For example, there is no widely-used spoken dialogue system that can understand as well as produce code-switched speech, although some voice assistants may recognize and produce C-S in limited scenarios in some locales.", "Although computational implementations of grammatical models of C-S exist (Bhat et al., 2016), they do not necessarily generate natural C-S utterances that a bilingual speaker would produce (Pratapa et al., 2018).", "Most crucially, current computational approaches to C-S language technologies do not usually take into account the linguistic and social factors that influence why and when speakers/users choose to code-switch.", "Bawa et al. (2020) conducted a Wizard-of-Oz study using a Hindi-English chatbot and found that not only did bilingual users prefer chatbots that could code-switch, they also showed a preference towards bots that mimicked their own C-S patterns.", "Rudra et al. (2016) report a study on 430k tweets from Hindi-English bilingual users and find that Hindi is preferred for the expression of negative sentiment.", "In a follow-up study, Agarwal et al.
(2017) find that Hindi is the preferred language for swearing in Hindi-English C-S tweets, and swearing may be a motivating factor for users to switch to Hindi.", "The study also finds a gender difference, with women preferring to swear in English more often than in Hindi.", "Such studies indicate that multilingual chatbots and intelligent agents need to be able to adapt to users' linguistic styles, while also being capable of determining when and how to code-switch.", "Given the paucity of user-facing systems, and standard benchmarks covering only a handful of simpler NLP tasks, it is likely that we overestimate how well computational models are able to handle C-S.", "In sum, language technologies for C-S seem to be constrained by the lack of diverse C-S training data and evaluation benchmarks, and by the absence of user-facing applications.", "They need to go beyond pattern recognition and grammatical constraints of C-S in order to process and produce C-S the way humans do.", "Hence, it is important for the CL community to be aware of the vast literature around C-S in linguistics, particularly as we proceed to solve more challenging tasks.", "The goal of this paper was to inform computational linguists and language technologists about the linguistic and social aspects of C-S studies, focusing on the European and Indian multilingual contexts.", "There are some similarities (e.g. themes for linguistic research in C-S) but also differences between the two contexts in terms of their social, cultural and historical characteristics.", "For example, C-S in immigrant communities has been a common theme for both multilingual contexts.", "In Europe, C-S has been widely studied within the immigrant communities who came through labor immigration in the 1960s.", "However, there is a need for more research about C-S in immigrant languages with a more recent history, as well as in minority languages and regional dialects.", "Analyzing C-S in the immigration context is even more complex for Indian languages.", "There are hardly any systematic linguistic comparisons of C-S within the same language pairs in Indian and immigrant contexts (e.g. C-S between Hindi-English in India vs. Hindi-English in the US/UK).", "There is also a need for more research about C-S between less-known language pairs in India.", "However, some of these languages are not even officially listed (e.g.
in census results) since they have fewer than 10,000 speakers.", "In these cases, collecting and analyzing the multilingual and C-S data becomes more difficult.", "A common flaw shared by both the linguistic and computational areas of research is to focus only on positive evidence and assume that C-S will occur in all multilingual contexts.", "However, there is also a need for negative evidence to falsify this assumption.", "As illustrated through an example from Konkani-Kannada language contact in India (see section 6), bilingual speakers may prefer not to code-switch due to historical, social and cultural factors operating in that setting.", "Therefore, developing an automatic C-S system for an arbitrary pair of languages, without an in-depth and systematic analysis of the linguistic and social aspects of C-S in that particular context, would not be very useful for the targeted users and/or language technologists.", "To date, the literature focusing on the social and linguistic aspects of C-S is less visible to CL researchers.", "This lack of visibility leads to misunderstandings and/or incomplete citations of earlier research; resolving it would save time and resources for CL research.", "One of the reasons is perhaps the difference in publishing traditions between the humanities and computational areas of research.", "Conference and workshop proceedings are commonly accepted means of publication in computational linguistics, whereas journal publications, books and/or book chapters are the standard publication forms in the humanities.", "However, guidelines about how to cite publications in the humanities are often missing in computational venues.", "There are also differences in terms of length, review cycles and open-access policies between the two fields, which may influence the visibility of each field's research output to the other.", "It is perhaps useful to remember that science advances by standing on the shoulders of giants (i.e. building upon earlier research).", "With our contribution to the conference, we hope to connect the two fields and start a scientific dialogue that enhances progress in both fields." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective" ]
[ "The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research.", "Standard language generation metrics have been shown to be ineffective for evaluating dialog models.", "To this end, this paper presents USR , an U n S upervised and R eference-free evaluation metric for dialog.", "USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog.", "USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42 , system-level: 1.0 ) and PersonaChat (turn-level: 0.48 and system-level: 1.0 ).", "USR additionally produces interpretable measures for several desirable properties of dialog.", "The lack of meaningful automatic evaluation metrics is a significant impediment for open-domain dialog generation research.", "Standard language generation metrics have been shown to be ineffective for dialog evaluation (Deriu et al., 2019; Liu et al., 2016).", "Without well-accepted, meaningful automatic metrics, open-domain dialog researchers have come to rely on human evaluation.", "Due to its timeand cost-intensive nature, human evaluation is typically only used for the final dialog model.", "As such, during development dialog systems are generally optimized for poorly-correlated automatic metrics (e.g., F-1, BLEU, PPL) which can result in sub-par human evaluation scores (Di-nan et al., 2019).", "To facilitate development of open-domain dialog models with meaningful automatic metrics, this paper presents the U n S upervised and R eference free ( USR ) evaluation metric for dialog.", "Standard automatic metrics for evaluating dialog generation (e.g., BLEU, F-1, METEOR, ROUGE) have several shortcomings that make them unsuitable for dialog evaluation: (1) The one-to-many nature of dialog (Zhao et al., 2017) makes word-overlap metrics ineffective for scoring valid system output that deviates from the ground-truth response (Liu et al., 2016; Gupta et al., 2019).", "(2) Human evaluation of dialog typically measures multiple properties (e.g., appropriate, interesting, consis-tent).", "Automatic metrics on the other hand, condense the multi-faceted nature of dialog quality to a single uninterpretable metric.", "(3) There are many definitions of what a good dialog is and, as such, it is not feasible to construct a one size fits all metric.", "Depending on the task and the data, the desired qualities of a dialog system may differ (Walker et al., 1997; Deriu et al., 2019).", "USR is a reference-free metric that consists of several interpretable sub-metrics which are combined in a configurable manner.", "Rather than relying on a ground-truth reference response, unsupervised models are trained to measure desired qualities of dialog (e.g., interesting, natural).", "As such, USR (1) alleviates the one-to-many issue of standard metrics, (2) produces interpretable measures for desirable properties of dialog, and (3) provides a configurable mechanism for combining several sub-metrics into an overall quality score.", "To evaluate the performance of USR, human quality annotations were collected for models trained on the Topical-Chat (Gopalakrishnan et al., 2019) and the PersonaChat corpora (Zhang et al., 2018).", "USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level Spearman: 0.42 , system-level Spearman: 1.0 ) and PersonaChat (turn-level Spearman: 0.48 and system-level Spearman: 1.0 ).", "The strong correlation with human judgment across two datasets and a variety of 
model types shows that USR is a valuable tool for the dialog community.", "Further, since USR does not require any explicit supervision, it has the potential to generalize to several dialog tasks and datasets.", "The contributions of this paper are as follows: (1) a strongly-correlated, unsupervised and reference-free metric is proposed for evaluating open-domain dialog systems, and (2) a thorough human quality annotation is carried out and is released to facilitate future benchmarking of dialog evaluation metrics.", "Standard automatic metrics for language generation correlate poorly with human judgment of dialog (Liu et al., 2016; Lowe et al., 2017; Gupta et al., 2019).", "For example, the F-1 score can be gamed by outputting the most frequent words, regardless of the context (Dinan et al., 2019).", "The poor performance of present metrics is largely due to the one-to-many nature of dialog (Zhao et al., 2017).", "To avoid comparing to a single reference response, several authors have proposed using multiple reference responses.", "Multiple reference responses can be obtained with retrieval models (Galley et al., 2015; Sordoni et al., 2015) or through data collection (Gupta et al., 2019).", "These multi-reference metrics show improvement in performance, but it is infeasible to thoroughly cover the space of potential responses.", "As such, this paper addresses the one-to-many issue of dialog by presenting a reference-free metric.", "Lowe et al. (2017) train ADEM to produce a quality score conditioned on the dialog context, the reference response and the generated response.", "Venkatesh et al. (2018) present a framework for evaluation of Alexa Prize conversations, which attains moderate correlation with user ratings.", "Both of these methods are trained on explicit quality annotations.", "In contrast, USR requires no explicit supervision and will more easily generalize to new datasets and tasks.", "Li et al.
(2017) propose a reference-free dialog evaluator which is trained to discriminate between human and generated responses.", "This work is similar to USR in that it evaluates the quality of a response without a reference or quality annotation training data.", "Using the evaluation model as a reward during reinforcement learning exhibited strong performance.", "However, correlation with human judgment was not evaluated.", "Intuitively, it appears insufficient to rely on a discriminator as a meaningful evaluation of dialog, since this assumes that all human responses are perfect and all generated responses are imperfect.", "To evaluate the correlation of automatic metrics with human judgment, human quality annotation was carried out across two open-domain dialog corpora.", "Generated responses were obtained from several models described in Section 3.3.", "For each dialog context, an additional human response was also written.", "Human annotation was then carried out on sixty dialog contexts, with six responses per context for Topical-Chat (four system outputs, one newly-annotated human output, one original ground-truth response) and five for PersonaChat (one fewer system output).", "Each response was given six different scores: Understandable (0-1), Natural (1-3), Maintains Context (1-3), Interesting (1-3), Uses Knowledge (0-1), Overall Quality (1-5).", "Three annotators labeled each response.", "The task instructions were very detailed in order to minimize subjectivity in the quality annotations.", "For example, individuals may differ in their definition of Interesting (e.g., some individuals find football interesting, others do not).", "Thus, the instructions contained a clear, albeit somewhat rigid, definition of Interesting.", "The instructions for Overall Quality annotation, however, were less rigid and therefore those annotations contain some amount of annotator-specific subjectivity.", "The data collection and experiments with PersonaChat were carried out to assess the generality of the USR metric.", "As such, the annotation questions used were specifically tailored to Topical-Chat, but are still suitable for PersonaChat.", "The Topical-Chat dataset (Gopalakrishnan et al., 2019) is a large collection of human-human knowledge-grounded open-domain conversations that consists of 11,319 dialogs and 248,014 utterances.", "Following the same experimental setup as Gopalakrishnan et al. (2019), heuristics are employed to identify the most relevant fact for each response.", "As such, the task is to produce a response conditioned on both a dialog context and a fact.", "The PersonaChat dataset (Zhang et al., 2018) is a corpus of human-human persona-conditioned conversations that consists of 10,907 dialogs and 162,064 utterances.", "Each worker is asked to condition their responses on a persona, which we consider to be analogous to the facts in the Topical-Chat dataset.", "Figure 1: On the Topical-Chat corpus, six responses are obtained for each dialog context. Four use the trained Transformer model with different decoding strategies. One is a new human-generated response. One is the original ground-truth. A similar setup was employed for PersonaChat, albeit with different models.", "A Transformer (Vaswani et al., 2017) is trained to produce the response r, conditioned on a dialog context c and a fact f.", "The input to the transformer is the concatenation of c and f, similar to Gopalakrishnan et al. (2019).",
"The transformer consists of 6 layers, a hidden size of 512, randomly-initialized word embeddings of size 300 and a dropout rate of 0.1, and it is trained for 50 epochs.", "A single Transformer model is trained, which matches the automatic metrics reported by Gopalakrishnan et al. (2019).", "Different decoding strategies are used to obtain four different outputs from this model.", "In addition to standard argmax sampling, nucleus sampling (Holtzman et al., 2019) is used at three different rates: p = {0.3, 0.5, 0.7}.", "The outputs from these four decoding strategies are listed with the original ground-truth utterance and a new human-generated response, for a total of six responses for each context, as shown in Figure 1.", "Three models were used to generate system outputs: a sequence-to-sequence model (Seq2Seq), an LSTM language model (LM) and a Key-Value Profile Memory Network (KV-MemNN).", "We use the pre-trained models provided in ParlAI (https://github.com/facebookresearch/ParlAI/tree/master/projects/convai2) for the ConvAI2 competition (Dinan et al., 2019).", "A fourth open-source model was also used to produce output for quality annotation; however, it was ultimately excluded from the released dataset and experiments due to possible data leakage.",
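The top-p filtering step that distinguishes the nucleus sampling runs above from argmax decoding is easy to state in code. The following is a small self-contained sketch; the toy next-token distribution is a made-up stand-in for a real model's softmax output.

```python
import numpy as np

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability covers p,
    then renormalize (nucleus sampling, Holtzman et al., 2019)."""
    order = np.argsort(probs)[::-1]              # sort tokens by probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest nucleus covering p
    nucleus = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[nucleus] = probs[nucleus]
    return filtered / filtered.sum()

rng = np.random.default_rng(0)
vocab_probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])  # toy next-token dist.
for p in (0.3, 0.5, 0.7):                                # the three rates above
    token = rng.choice(len(vocab_probs), p=top_p_filter(vocab_probs, p))
```

Argmax decoding corresponds to the limit in which only the single most probable token is kept at every step.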
"Quality annotation was performed by six dialog researchers.", "Using a crowdsourcing platform, such as Amazon Mechanical Turk (AMT), would have allowed for more efficient and scalable annotation.", "However, crowdsourcing was not used because (1) the annotation instructions are lengthy, (2) a preliminary annotation pass was carried out, followed by a group discussion, and (3) having many annotations from a few annotators allows examination of annotator-specific subjectivity.", "Annotators were provided with a set of instructions (Appendix A).", "A small preliminary annotation pass was carried out, with each individual annotating 5 dialog contexts (for a total of 30 responses).", "The inter-annotator agreement was computed for each of the questions.", "The instructions were refined after the preliminary pass and a discussion meeting (e.g., Maintains Context was changed to be a 3-point rating instead of a 2-point rating).", "After the instructions were modified, the full annotation pass was carried out.", "Each response was rated according to the qualities mentioned at the beginning of this section.", "Instructions for each of the qualities are summarized below:", "Understandable (0-1): Is the response understandable given the previous context?", "Natural (1-3): Does the response seem to be something that a person would naturally say?", "Maintains Context (1-3): Does the response serve as a valid continuation of the preceding conversation?", "Interesting (1-3): Is the response dull or interesting?", "Uses Knowledge (0-1): Given the fact that the response is conditioned on, how well does the response use that fact?", "Overall Quality (1-5): Given your answers above, what is your overall impression of the quality of this utterance?", "The instructions contained detailed descriptions and examples of what constitutes a response in each category (e.g., what makes a response score 2 on Maintains Context).",

Table 1: Inter-annotator agreement for all the metrics. For all the correlations presented in this table, p < 0.01.

  Metric               Spearman   Pearson
  Topical-Chat
    Understandable       0.5102    0.5102
    Natural              0.4871    0.4864
    Maintains Context    0.5599    0.5575
    Interesting          0.5811    0.5754
    Uses Knowledge       0.7090    0.7090
    Overall Quality      0.7183    0.7096
  PersonaChat
    Understandable       0.2984    0.2984
    Natural              0.4842    0.4716
    Maintains Context    0.6125    0.6130
    Interesting          0.4318    0.4288
    Uses Knowledge       0.8115    0.8115
    Overall Quality      0.6577    0.6603

"These instructions were written to minimize subjectivity in the annotations, which results in clear, agreed-upon definitions.", "For Topical-Chat, the full annotation consisted of 60 dialog contexts randomly sampled from the frequent test set, for a total of 360 responses scored on six different qualities.", "For PersonaChat, 60 dialog contexts were sampled from the ConvAI2 validation set, with a total of 300 responses scored on six different qualities.", "Each response was labeled by three different annotators.", "Annotators were randomly assigned to each dialog context.", "Inter-annotator agreements for the different ratings across both datasets are presented in Table 1.", "The correlation between each pair of annotations is computed and the average correlation over all the pairs is reported.", "Correlation is used instead of Cohen's Kappa in order to better account for the ordinal nature of the ratings (i.e., a 4 should correlate better with a 5 than with a 1), and to maintain consistency with the evaluation of the automatic metrics.", "Most inter-annotator correlations are above 0.4, which indicates moderate to strong agreement.", "The low agreement for Understandable on PersonaChat is likely a consequence of the simple language in the dataset.", "Most responses are understandable, except for those requiring background knowledge (e.g., that 'cod' is an acronym for 'Call of Duty').", "Since the annotators have differing background knowledge, the few occasions where they fail to understand an utterance will differ, hence the lower agreement.", "The agreement for Overall Quality is relatively high (0.71 for Topical-Chat and 0.66 for PersonaChat), which suggests that any ambiguity in the specific dialog qualities is mitigated when the annotator is asked for an overall impression.", "Table 2 presents the scores for the different systems on each of the six qualities.",

Table 2: Average scores for the six different responses on the six qualities: Understandable, Natural, Maintains Context, Interesting, Uses Knowledge and Overall Quality.

  System                    Und (0-1)  Nat (1-3)  MCtx (1-3)  Int (1-3)  UK (0-1)  OQ (1-5)
  Topical-Chat
    Original Ground-Truth      0.95      2.72       2.72        2.64      0.72      4.25
    Argmax Decoding            0.60      2.08       2.13        1.94      0.47      2.76
    Nucleus Sampling (0.3)     0.51      2.02       1.90        1.82      0.42      2.40
    Nucleus Sampling (0.5)     0.48      1.92       1.93        1.72      0.34      2.29
    Nucleus Sampling (0.7)     0.52      2.01       1.87        1.80      0.37      2.39
    New Human Generated        0.99      2.92       2.93        2.90      0.96      4.80
  PersonaChat
    Original Ground-Truth      0.99      2.89       2.82        2.67      0.56      4.36
    Language Model             0.97      2.63       2.02        2.24      0.08      2.98
    LSTM Seq2Seq               0.92      2.64       2.49        2.29      0.47      3.47
    KV-MemNN                   0.93      2.70       2.18        2.56      0.17      3.25
    New Human Generated        1.00      2.97       2.88        2.87      0.96      4.80

"Across both datasets and all qualities, the new human generated response strongly outperforms all other response types, even the original ground truth.", "This may be because the new human generated response was written with this quality annotation in mind, and as such is optimized for turn-level evaluation.", "On the other hand, the workers who produced the original ground-truth response were more concerned with the quality of the overall dialog than with the quality of each individual response.", "On the Topical-Chat corpus, argmax decoding has a moderately higher performance than the nucleus sampling (Holtzman et al., 2019) methods.", "This should not be taken as an indication that argmax decoding is the superior method, since the hyperparameters (e.g., temperature) were not tuned for nucleus sampling.", "It should be noted that the objective was not to train and evaluate the best performing models, but instead to produce responses of varying qualities and obtain accurate human judgments of these responses.", "A regression was trained to map from the five ratings to the overall score in order to analyze the relationship between them.", "For better interpretability of the regression weights, the scores were normalized (using z-score) before training the regression.", "A softmax was then computed over the weights, again for interpretability.", "Since individuals may differ in their definition of a good response, a specific regression is trained for each of the five annotators who labeled responses for the Topical-Chat corpus.",
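A minimal sketch of such a per-annotator regression, assuming a simple linear model, is shown below. The five-column rating matrix and overall scores are tiny invented stand-ins for one annotator's labels; the z-score normalization and the softmax over the learned coefficients mirror the interpretability steps described above.

```python
# Sketch: map five z-scored quality ratings to the Overall Quality score,
# then take a softmax over the weights for interpretability. Data is made up.
import numpy as np
from scipy.stats import zscore
from sklearn.linear_model import LinearRegression

X5 = np.array([[1, 3, 3, 2, 1],    # one row per response: Und, Nat, MCtx, Int, UK
               [0, 1, 1, 1, 0],
               [1, 2, 3, 3, 1]], dtype=float)
y = np.array([4.0, 1.0, 4.5])      # hypothetical Overall Quality ratings

reg = LinearRegression().fit(zscore(X5, axis=0), zscore(y))
w = np.exp(reg.coef_) / np.exp(reg.coef_).sum()   # softmax over the weights
print(dict(zip(["Und", "Nat", "MCtx", "Int", "UK"], w.round(3))))
```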
"Figure 2 displays the weights attributed to each of the five qualities by each of the annotators.", "Annotators attributed different weights to the specific features.", "For example, A3 emphasized naturalness while A2 paid more attention to whether a response was grounded on knowledge.", "Despite the differences across annotators, a good response was generally expected to be natural, maintain context, and be interesting.", "These annotator-specific weights demonstrate that individuals define good dialog differently.", "Future work could explore personalized dialog evaluation wherein the evaluation metric is tailored to a specific individual.", "A potential criticism of this quality annotation could be that certain dialog qualities are missing.", "To address concerns about the completeness of the set of five qualities, a regression can be trained to produce the overall score conditioned on the quality ratings.", "The Spearman correlation between the predicted score and the original overall score is 0.9654, which signifies that the set of qualities is thorough and contains enough information to reflect the overall quality of the response.", "This section describes the automatic metrics explored for evaluating generated responses.", "Section 4.1 describes several existing metrics that were studied.", "Section 4.2 presents USR, a novel unsupervised and reference-free metric.", "Several existing and easily-applicable metrics for dialog evaluation are compared.", "The list of available metrics is not exhaustive; only the most commonly used and the most accessible are addressed.", "F-1 score computes the word-overlap between the generated response and the ground-truth, by taking the harmonic mean of the precision and recall.", "It is one of the four metrics used by the creators of the Topical-Chat dataset (Gopalakrishnan et al., 2019), along with perplexity and unique unigram/bigram counts.", "Dinan et al. (2019) described a simple adversarial example that attains a high F-1 score on PersonaChat.", "We produce a similar example for the Topical-Chat dataset and find that always outputting a concatenation of the ten most common tokens in the dataset (. i the , that a to it is of) attains an F-1 score of 25.6, which is a +3.6 improvement over the Transformer presented by Gopalakrishnan et al. (2019).",
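This gaming effect is easy to reproduce in a toy setting. In the sketch below, the reference sentence is invented and the token-level F-1 is a simplified stand-in for the benchmark's exact implementation.

```python
# Toy illustration of gaming word-overlap F-1 with a frequent-token response.
def f1(hyp, ref):
    hyp, ref = hyp.split(), ref.split()
    overlap = sum(min(hyp.count(w), ref.count(w)) for w in set(hyp))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

reference = "it is a story that i think the kids would love to hear"  # made up
generic = ". i the , that a to it is of"   # ten most common tokens
print(f1(generic, reference))              # scores well despite saying nothing
```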
"BLEU (Papineni et al., 2002) is a well-known word-overlap metric that computes n-gram precision between the generated sequence and the reference.", "Because precision favors shorter sentences, BLEU also adds a brevity penalty that punishes shorter sentences.", "BLEU has been found to correlate poorly with human judgment (Liu et al., 2016; Lowe et al., 2017; Gupta et al., 2019).", "METEOR (Denkowski and Lavie, 2014) was designed as an improvement on BLEU, using a harmonic mean of precision and recall, as well as stemming and synonyms.", "ROUGE-L (Lin, 2004) identifies the longest common subsequence between the generated and reference sequences to better account for sentence-level structure when computing word overlap.", "Greedy Matching (Rus and Lintean, 2012) is an embedding-based metric that greedily matches each word in the generated sequence to a reference word based on the cosine similarity of their embeddings.", "The final score is then an average over all the words in the generated sequence.", "Embedding Average (Wieting et al., 2015) computes a sentence embedding for both the generated sequence and the ground-truth response by taking an average of word embeddings.", "The score is then the cosine similarity of the average embeddings of the generated and reference sequences.", "Vector Extrema (Forgues et al., 2014) follows a similar setup to Embedding Average, where the score is the cosine similarity between sentence embeddings.", "Rather than taking an average over word embeddings, this method identifies the maximum value for each dimension of the word embedding.", "Taking the maximum is motivated by the idea that common words will be de-emphasized as they will be closer to the origin.", "Vector Extrema has been shown to perform better on dialog tasks than other metrics (Gupta et al., 2019; Liu et al., 2016).", "Skip-Thought (Kiros et al., 2015) uses a recurrent neural network to produce a sentence-level embedding for the generated and reference sequences.", "A cosine similarity is then computed between the two embeddings.", "The implementation provided by Sharma et al. (2017) is used.",
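The two sentence-embedding scores above, Embedding Average and Vector Extrema, can be sketched in a few lines of numpy; the random 4-dimensional word vectors below are stand-ins for real pre-trained embeddings.

```python
# Sketch of Embedding Average and Vector Extrema with stand-in embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=4) for w in "the cat sat on a mat feline rested".split()}

def sentence_avg(words):                     # Embedding Average
    return np.mean([vocab[w] for w in words], axis=0)

def sentence_extrema(words):                 # Vector Extrema: per-dimension
    m = np.stack([vocab[w] for w in words])  # value with the largest magnitude
    return np.where(np.abs(m.max(0)) >= np.abs(m.min(0)), m.max(0), m.min(0))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

gen, ref = "a feline rested".split(), "the cat sat on a mat".split()
print(cosine(sentence_avg(gen), sentence_avg(ref)))
print(cosine(sentence_extrema(gen), sentence_extrema(ref)))
```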
"BERTScore (Zhang et al., 2019) uses a pre-trained BERT (Devlin et al., 2018) model to greedily match each word in a reference response with one word in the generated sequence.", "By doing so, it computes the recall of the generated sequence.", "BERTScore was shown to have strong system-level and segment-level correlation with human judgment on several machine translation and captioning tasks.", "However, although it is a more sophisticated metric, it still compares word similarity between a reference and a generated sequence.", "While this method may work well for tasks where there is a limited space of outputs for each input (e.g., captioning, translation), it is ineffective at dealing with the one-to-many nature of dialog.",
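The greedy token matching that Greedy Matching and BERTScore's recall rely on reduces to a max over pairwise cosine similarities. The following numpy sketch illustrates the idea with random stand-in embeddings rather than actual BERT representations.

```python
# Sketch of greedy-matching recall: align each reference token with its most
# similar generated token by cosine similarity, then average the best matches.
import numpy as np

def greedy_recall(ref_vecs, gen_vecs):
    ref = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    gen = gen_vecs / np.linalg.norm(gen_vecs, axis=1, keepdims=True)
    sims = ref @ gen.T                       # pairwise cosine similarities
    return float(sims.max(axis=1).mean())    # best match per reference token

rng = np.random.default_rng(0)
print(greedy_recall(rng.normal(size=(6, 8)), rng.normal(size=(5, 8))))
```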
"This section describes USR, an unsupervised, reference-free evaluation metric for dialog.", "USR leverages pre-trained language models, specifically RoBERTa (Liu et al., 2019), to measure properties of dialog.", "USR is designed to be reference-free because there is no one right answer, due to the inherent one-to-many nature of dialog (Zhao et al., 2017).", "Several sub-metrics were developed for the different qualities of dialog (e.g., Natural, Interesting, Uses Knowledge).", "While USR measures the overall quality of a response, its sub-metrics assess specific dialog qualities and therefore facilitate better understanding of a model's performance.", "The masked language modelling (MLM) metric uses a fine-tuned RoBERTa (Liu et al., 2019) model to estimate the likelihood of a response.", "RoBERTa is pre-trained on a massive amount of English data and fine-tuned on the corpus being evaluated (either Topical-Chat or PersonaChat), making it capable of identifying unnatural and incorrect responses.", "The likelihood estimated by the fine-tuned RoBERTa model is used as an automatic metric for evaluating the understandability and naturalness of responses.", "The RoBERTa-base model (Liu et al., 2019) is fine-tuned on the training set of the Topical-Chat corpus (Gopalakrishnan et al., 2019) using the implementation open-sourced by Wolf et al. (2019a).", "The language model is fine-tuned on only the dialog, without any of the facts, for a single epoch.", "RoBERTa uses both past and future context to predict a probability distribution for a masked word.", "The input sequence to MLM is a concatenation of a dialog context c and a response r.", "One word at a time, each word in r is masked and its log-likelihood is computed.", "Denoting the masked log-likelihood of the i-th word of r as $l_i$, the value of the metric is then computed as $\sum_{i=1}^{|r|} l_i$.", "Figure 3 visualizes this process.", "Figure 3: Visualization of the masked language modelling (MLM) metric.", "Recent research has highlighted the complementary nature of dialog retrieval and generation with respect to multi-tasking (Wolf et al., 2019b) and pre-training (Mehri et al., 2019).", "Because of this complementary nature, using dialog retrieval (DR) for evaluating generative models is an intuitive choice, especially for metrics like Maintains Context and Uses Knowledge.", "The fine-tuned RoBERTa model described in Section 4.2.1 is further fine-tuned for the retrieval task.", "This task is set up in the same manner as the Ubuntu dialog corpus (Lowe et al., 2015).", "The model is trained given a context x, a response r, and a binary label y indicating whether r is the true response or randomly sampled.", "The context x may consist of the dialog history and the fact, denoted c, or just the fact, denoted f.", "Two different versions of the dialog retrieval (DR) metric are trained, with different values of x.", "The DR metric score is defined to be the probability P(y = 1 | x, r) that a given DR metric model produces.", "Though the DR metric is trained for the task of retrieval, this is done in an unsupervised manner.", "The retrieval task is an unsupervised task since it requires no additional labels during training (e.g., explicit quality annotations).", "The DR metric is appropriate for Maintains Context, Interesting and Uses Knowledge.", "If a retrieval model predicts that a generated response is contextually relevant to a dialog context, it indicates that the response Maintains Context.", "Likewise, if a retrieval model predicts that the response r is contextually relevant to fact f, it signifies that r most likely Uses Knowledge.", "Interesting is the measure of whether the response is dull/generic or whether it provides some interesting/engaging information.", "The DR metric is trained to distinguish between a ground-truth response (y = 1) and a randomly sampled response (y = 0).", "Generic responses are applicable to many contexts, and will often appear as both ground-truth responses and randomly sampled responses.", "As such, the model will likely learn to assign low probabilities to these generic responses and will often output P(y = 1 | r, x) = 0.5.", "As such, generic responses will generally be scored lower than other contextually relevant, interesting responses.", "The DR metrics will learn to favor responses that are unique to a given context x, rather than being applicable to many different contexts.",
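A sketch of the MLM score defined above is given below. It uses the stock roberta-base checkpoint as a stand-in for the fine-tuned model, and the response-offset computation is a rough approximation of tokenizer alignment.

```python
# Sketch: mask each response token in turn and sum the masked log-likelihoods
# under a RoBERTa masked LM (stock checkpoint standing in for the tuned one).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")
lm = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def mlm_score(context, response):
    ids = tok(context + " " + response, return_tensors="pt")["input_ids"][0]
    start = len(tok(context)["input_ids"]) - 1   # rough response offset
    total = 0.0
    for i in range(start, len(ids) - 1):         # skip the final </s> token
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = lm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, -1)[ids[i]].item()
    return total

print(mlm_score("do you like football ?", "yes , i watch it every weekend ."))
```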
"Given meaningful automatic metrics for each of the five dialog qualities, USR combines the scores into an overall measure that correlates well with Overall Quality ratings.", "In Section 3.5, a regression model was trained to reproduce the overall score from each of the specific quality scores.", "The predictions of this regression model attained a 0.9654 Spearman correlation with the original scores.", "This same regression is used by USR on top of the automatic metrics presented in Sections 4.2.1 and 4.2.2.", "USR combines its sub-metrics into one measure of overall quality.", "This combination is configurable and adaptable to different datasets or tasks.", "For example, if a specific application prefers natural responses over interesting ones, the weights of the regression model can be adjusted.", "Analysis demonstrated that individuals used different weights when producing the overall score (Figure 2).", "USR could thus be personalized for specific individuals by adjusting the weights of the regression model.", "This section evaluates all of the automatic metrics described in Section 4 by comparing them to human judgment.", "The best sub-metrics for each dialog quality are used as input for the regression model of the USR metric.", "While the best performing sub-metrics are not consistent across the two datasets, the USR metric nonetheless exhibits strong results.", "The annotations for the original ground-truth are not used for evaluation, in order to accurately compare referenced and reference-free metrics.", "Table 3 shows turn-level correlations of the best automatic metrics for each dialog quality on Topical-Chat.", "USR is shown to strongly outperform both word-overlap and embedding-based metrics across all of the dialog qualities.", "Interestingly, the best non-USR metric is consistently either METEOR or BERTScore, possibly because both methods are adept at comparing synonyms during evaluation.", "For some dialog qualities, the overall USR metric outperforms the best sub-metric.", "For example, USR does better for Maintains Context than USR-DR.", "This is likely because the information from the other sub-metrics (e.g., Uses Knowledge) is valuable and effectively leveraged by USR.", "Table 4 reports the turn-level correlations of the best automatic metrics for each dialog quality on the PersonaChat corpus.",

Table 4: Turn-level correlations on PersonaChat.

  Quality / Metric      Spearman   Pearson
  Understandable
    BERTScore (base)      0.0685    0.0672
    USR MLM               0.1186    0.1313
    USR                   0.1324    0.1241
  Natural
    VectorExtrema         0.1375    0.1458
    USR DR (x = c)        0.2291    0.1733
    USR                   0.2430    0.1862
  Maintains Context
    METEOR                0.2564    0.2500
    USR DR (x = c)        0.5625    0.6021
    USR                   0.5280    0.6065
  Interesting
    BERTScore (base)      0.0491    0.0325
    USR DR (x = c)        0.2634    0.0606
    USR                   0.0171    0.0315
  Uses Knowledge
    METEOR                0.1719    0.1678
    USR DR (x = c)        0.6309    0.4508
    USR                   0.3177    0.4027

"Across all dialog qualities, USR strongly outperforms the word-overlap and embedding-based metrics.", "Conversations in PersonaChat generally consist of individuals communicating facts from their own persona in a relevant and coherent manner.", "As such, when models trained on PersonaChat produce subpar outputs, it is generally because the outputs either (1) do not effectively use the persona or (2) are not relevant/coherent to the dialog context.", "This explains why the correlations are significantly higher for Maintains Context and Uses Knowledge.", "As a consequence of PersonaChat's strong dependency on both the dialog context and the persona, USR-DR (x = c), which uses both the dialog context and the persona to perform dialog retrieval, generally outperforms all other metrics.", "Table 5 shows turn-level correlation with the Overall Quality ratings on Topical-Chat, for all of the automatic metrics.", "USR shows a strong improvement over all other methods.", "This strong performance can be attributed to two factors: (1) the ability of MLM and DR to accurately quantify qualities of a generated response without a reference response, and (2) the ability of USR to effectively combine MLM and DR into a better correlated overall metric.",
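The turn- and system-level protocol used throughout this section can be sketched as follows; the human ratings and metric scores below are synthetic placeholders, with system-level scores obtained by averaging over each system's responses.

```python
# Sketch of turn-level and system-level correlation with human ratings.
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)
human = rng.uniform(1, 5, size=(6, 10))           # 6 systems x 10 responses
metric = human + rng.normal(0, 0.8, human.shape)  # a noisy automatic metric

print("turn-level Spearman :", spearmanr(metric.ravel(), human.ravel()).correlation)
print("system-level Spearman:", spearmanr(metric.mean(1), human.mean(1)).correlation)
print("system-level Pearson :", pearsonr(metric.mean(1), human.mean(1))[0])
```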
"USR shows a similar improvement over all other metrics on PersonaChat, as shown in Table 6.", "However, DR (x = c) outperforms USR, despite the fact that four out of the five sub-metrics input into the USR regression are DR (x = c).", "This result is probably due to PersonaChat's strong dependency on both dialog context and persona, both of which DR (x = c) explicitly leverages.", "We compute the system-level correlation between all automatic metrics and the Overall Quality ratings.", "USR significantly (p < 0.01) outperforms all other metrics, with a Spearman correlation of 1.0 on both datasets and Pearson correlations of 0.92 (Topical-Chat) and 0.82 (PersonaChat).", "The full set of system-level correlations can be found in the appendix.", "These results demonstrate USR's effectiveness.", "It strongly outperforms other metrics on both turn-level and system-level correlations.", "Gopalakrishnan et al. (2019) use the F-1 score as their primary automatic evaluation metric when presenting Topical-Chat.", "The results demonstrate a significant difference between USR and the F-1 score, suggesting that USR is a better metric for the Topical-Chat corpus.", "USR achieves statistically significant correlations with human judgment.", "The results hold across two datasets, Topical-Chat (Gopalakrishnan et al., 2019) and PersonaChat (Zhang et al., 2018).", "USR is configurable.", "Notably, it is composed of several specific dialog quality sub-metrics.", "These sub-metrics are combined in a configurable manner, using a regression.", "For other tasks, datasets or even users, this regression can be adjusted, allowing qualities to be removed or re-weighted.", "Additional sub-metrics could be added.", "USR should be used alongside human evaluation.", "USR was created to facilitate development and tuning of dialog models.", "As such, USR can be used for model selection and hyperparameter tuning.", "USR should not be used to claim superior performance over another method.", "USR may not work with non-generative models, which were not addressed here.", "Responses produced by a model that is too similar to the evaluation models (e.g., to DR) are a particular concern.", "This paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog.", "To address the shortcomings of standard metrics for language generation, USR (1) is reference-free, (2) is composed of multiple sub-metrics that evaluate specific qualities of dialog, and (3) has a configurable definition of good dialog.", "Thus the metric may be adapted to different tasks and datasets.", "USR is shown to strongly correlate with human judgment on Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48, system-level: 1.0).", "We thank the following individuals for their help with annotation: Evgeniia Razumovskaia, Felix Labelle, Mckenna Brown and Yulan Feng." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other" ]
[ "Concept graphs are created as universal taxonomies for text understanding in the open domain knowledge.", "The nodes in concept graphs include both entities and concepts.", "The edges are from entities to concepts, showing that an entity is an instance of a concept.", "In this paper, we propose the task of learning interpretable relationships from open domain facts to enrich and refine concept graphs.", "The Bayesian network structures are learned from open domain facts as the interpretable relationships between relations of facts and concepts of entities.", "We conduct extensive experiments on public English and Chinese datasets.", "Compared to the state-of-the-art methods, the learned network structures help improving the identification of concepts for entities based on the relations of entities on both English and Chinese datasets.", "Concept graphs are created as universal taxonomies for text understanding and reasoning in the open domain knowledge (Dagan et al., 2010; Bowman et al., 2015; Zamir et al., 2018; Huang et al., 2019; Hao et al., 2019; Jiang et al., 2019).", "The nodes in concept graphs include both entities and concepts.", "The edges are from entities to concepts, showing that an entity is an instance of a concept.", "The task of extracting and building concept graphs from user-generated texts has attracted a lot of research attentions for a couple of decades (Fell-baum, 1998; Wu et al., 2012; Shwartz et al., 2016; Chang et al., 2018; Le et al., 2019; Lewis, 2019).", "Most of these methods rely on high quality syntactic patterns to determine whether an entity belongs to a concept.", "For example, given the pattern X is a Y or Y , including X appearing in sentences, we can infer that the entity X is an instance of the concept Y .", "These pattern-based methods require that an entity and concept pair co-occurs in sentences.", "However, due to the different expressions of a certain concept, an entity and a concept may rarely appear in sentences together.", "We conduct a data analysis of millions of sentences extracted from Wikipedia and discover that only 10.61% of entity-concept pairs co-occur in sentences out of more than six million of pairs from the public Microsoft concept graph ( https: //concept.research.microsoft.com ).", "We also analyze Baidu Baike ( http://baike.baidu.com ) and its corresponding concept graph.", "A similar phenomenon is observed that only 8.56% entity-concept pairs co-occur in sentences.", "Table 1 shows the statistics for Wikipedia and Baidu Baike.", "With such limitations, the existing approaches have diffi-culties in helping build a complete concept graph from open domain texts.", "Nowadays, the task of open domain information extraction (OIE) has become more and more important (Christensen et al., 2011; Wu and Weld, 2010; Etzioni et al., 2011; Mausam et al., 2012; Sun et al., 2018b,a; Di et al., 2019; Rashed et al., 2019; Liu et al., 2020a,b).", "OIE aims to generate entity and relation level intermediate structures to express facts from open domain sentences.", "These open domain facts usually express natural languages as triples in the form of (subject, predicate, object).", "For example, given the sentence Anderson, who hosted Whose Line, is a winner of a British Comedy Award in 1991., two facts will be extracted.", "They are (Anderson, host, Whose Line) and (Anderson, winner of a British Comedy Award, 1991).", "The subject and object in a fact are both Concept Graph Facts f 1 : ( s 1 ,r 1 ,o 1 ) f n : ( s n ,r n ,o n ) r 1 r p c 1 c q e 1 
e m c 1 c q e 1 e m Texts Subject-Relation View Concept Discovery Bayesian Network Structure Learning r 1 r 2 c 1 c 2 c 3 e f 1 : ( e, r 1 , o 1 ) f 2 : ( e, r 2 , o 2 ) c 1 r 1 r p e 1 e m Object-Relation View Entity-Concept View r 1 c 1 c q e 1 e m r p Figure 1: The workflow of learning interpretable relationships from open domain facts for concept discovery.", "entities.", "The open domain facts contain rich information about entities by representing the subject or object entities via different types of relations (i.e., groups of predicates).", "It would be helpful for concept graph completion if we can take advantage of the relations in open domain facts.", "We again take the above two facts of Anderson as an instance.", "If we have explored the connections between relations of facts and concepts, and learned that host and winner of a British Comedy Award are associated with an English presenter subject with a higher probability than a Japanese presenter subject, we can infer that Anderson belongs to the English presenter concept regardless of whether these two co-appear in a sentence or not.", "In real-world open domain corpus, however, the connections between relations and concepts are not available to us.", "In this paper, we propose the task of learning interpretable relationships between entities, relations and concepts from open domain facts to help enriching and refining concept graphs.", "Learning Bayesian networks (BNs) from data has been studied extensively (Heckerman et al., 1995; Koivisto and Sood, 2004; Scanagatta et al., 2015; Niinimaki et al., 2016) in the last few decades.", "The BNs formally encode probabilistic connections in a certain domain, yielding a human-oriented qualitative structure that facilitates communication between a user and a system incorporating the probabilistic model.", "Specifically, we apply the Bayesian network structure learning (BNSL) (Chow and Liu, 1968; Yuan et al., 2011; Yuan and Malone, 2013) to discover meaningful relationships between entities, relations and concepts from open domain facts.", "The learned network encodes the dependencies from the relations of entities in facts to the concepts of entities, leading to the identification of more entity-concept pairs from open domain facts for the completion of concept graphs.", "Figure 1 illustrates the proposed workflow of learning interpretable relationships from open domain facts.", "We summarize our contributions as follows: We propose the task of learning interpretable relationships between entities, relations and concepts from open domain facts, which is important for enriching and refining concept graphs.", "We build the BNSL model to discover meaningful network structures that express the connections from relations of entities in open domain facts to concepts of entities in concept graphs.", "Experimental results on both English and Chinese datasets reveal that the learned interpretable relationships help identify concepts for entities based on the relations of entities, resulting in a more complete concept graph.", "Concept Graph Construction .", "Concept graph construction has been extensively studied in the literature (Fellbaum, 1998; Ponzetto and Strube, 2007; Banko et al., 2007; Suchanek et al., 2007; Wu et al., 2012; Shwartz et al., 2016; Chang et al., 2018; Le et al., 2019; Lewis, 2019).", "Notable works toward creating open domain concept graphs from scratch include YAGO (Suchanek et al., 2007) and Probase (Wu et al., 2012).", "In addition, a wide variety of methods 
(Nakashole et al., 2012; Weeds et al., 2014; Roller et al., 2014; Shwartz et al., 2016; Roller et al., 2018; Chang et al., 2018; Le et al., 2019; Lewis, 2019) are developed to detect the hypernymy between entities and concepts for a more complete concept graph.", "Distributional representations of entities and concepts are learned for good hypernymy detection results (Weeds et al., 2014; Roller et al., 2014; Chang et al., 2018; Lewis, 2019).", "In contrast to distributional methods, path-based algorithms (Nakashole et al., 2012; Shwartz et al., 2016; Roller et al., 2018; Le et al., 2019) are proposed to take advantage of the lexico-syntactic paths connecting the joint occurrences of an entity and a concept in a corpus.", "Most of these methods require the co-occurrence of entity and concept pairs in sentences for the graph completion task.", "However, due to the different expressions of a certain concept, an entity and a concept may rarely appear in one sentence together.", "With such limitations, the existing methods in the literature cannot deal with those non-co-occurring entity-concept pairs, leading to an incomplete concept graph.", "Open Domain Information Extraction.", "Open domain information extraction (OIE) has attracted a lot of attention in recent years (Wu and Weld, 2010; Christensen et al., 2011; Etzioni et al., 2011; Mausam et al., 2012; Pal and Mausam, 2016; Yahya et al., 2014; Sun et al., 2018b,a; Roy et al., 2019; Liu et al., 2020a,b).", "It extracts facts from open domain documents and expresses facts as triples of (subject, predicate, object).", "Recently, a neural-based OIE system, Logician (Sun et al., 2018b,a; Liu et al., 2020a,b), was proposed.", "It introduces a unified knowledge expression format, SAOKE (symbol aided open knowledge expression), and expresses the vast majority of the information in natural language sentences as four types of facts (i.e., relation, attribute, description and concept).", "Logician is trained on a human-labeled SAOKE dataset using a neural sequence-to-sequence model.", "It achieves a much better performance than traditional OIE systems in the Chinese language and provides a set of open domain facts with much higher quality to support upper-level algorithms.", "Since the subject and object in a fact are both entities, the open domain facts contain rich information about entities by representing the subjects or objects via different types of relations (i.e., groups of predicates).", "It can help the task of concept graph completion by making full use of the relations in open domain facts.", "In this paper, we leverage the high-quality facts of Logician as one dataset in the experiment.", "Bayesian Network Structure Learning.", "Learning Bayesian networks from real-world data is a well-motivated but computationally hard task (Heckerman et al., 1995; Koivisto and Sood, 2004; de Campos et al., 2009; Malone et al., 2011; Scanagatta et al., 2015; Niinimaki et al., 2016).", "A Bayesian network specifies a joint probability distribution of a set of random variables in a structured fashion.", "A key component in this model is the network structure, a directed acyclic graph on the variables, encoding a set of conditional independence assertions.", "Several exact and approximate algorithms have been developed to learn optimal Bayesian networks (Chow and Liu, 1968; Koivisto and Sood, 2004; Singh and Moore, 2005; Silander and Myllymaki, 2006; Yuan et al., 2011; Yuan and Malone, 2013).", "Some exact algorithms (Koivisto and Sood, 2004; Singh and Moore, 2005; Silander and Myllymaki, 2006) are based on dynamic programming to find the best
Bayesian network.", "In 2011, an A* search algorithm was introduced (Yuan et al., 2011) to formulate the learning process as a shortest-path-finding problem.", "However, these exact algorithms are inefficient due to the full evaluation of an exponential solution space.", "In this paper, we consider the Chow-Liu tree building algorithm (Chow and Liu, 1968) to approximate the underlying relationships between entities, relations and concepts as a dependency tree.", "This method is very efficient when there are large numbers of variables.", "We formulate the relationships between entities, relations, and concepts as follows:", "Entities are associated with a set of relations that represent the behaviors and attributes of entities; a concept is defined by a set of relations.", "The instances of a concept are those entities that are associated with the corresponding set of relations.", "In concept graphs, a concept is associated with a set of entities which share some common behaviors or attributes.", "However, the essence of a concept is a set of relations, and entities which associate with these relations automatically become instances of the concept.", "So our formulation of the relationships between entities, relations and concepts can be illustrated by Figure 2.", "Figure 2: Relationships of entities, relations and concepts.", "In the closed domain, a knowledge base has a predefined ontology and the relationships in Figure 2 are already known.", "For example, DBPedia (Auer et al., 2007) builds a knowledge graph
"The difficulty of learning p(c | e) is the unknown structure of the Bayesian network.", "Due to the sparsity of real-world knowledge bases, the target network would be sparse.", "But the sparse structure must be known beforehand for probability learning.", "In this paper, we employ the Bayesian Network Structure Learning (BNSL) technique to explore the connections between relations and concepts.", "Due to the large number of variables (i.e., entities, relations and concepts) in open domain facts and concept graphs, we develop an approximate algorithm to learn the network structure.", "Due to the sparsity of the relationships between relations and concepts, we decompose the problem into several sub-problems, with each sub-problem containing only one concept variable.", "Then for each concept variable, we identify possibly related relations and apply a BNSL algorithm to discover the network structure between them.", "Finally, we use the learned network for concept discovery.", "The procedure is shown in Algorithm 1.",
Algorithm 1: BNSL for concept discovery
  Input: Texts D and a concept graph G.
  Output: Valid entity-concept pairs.
  /* OIE step */
  1: Extract open domain facts F from D.
  /* Concept discovery step */
  2: for each concept c in C do
  3:   Get entities E_c of this concept.
  4:   Select facts F_c including E_c.
     /* Subject view step */
  5:   Split F_c into subject-view facts F_{c,s}.
  6:   Select top K relations R_{c,s} from F_{c,s}.
  7:   Get entity-relation data X_{c,s}.
     /* Object view step */
  8:   Repeat step 5 to get object-view F_{c,o}.
  9:   Repeat step 6 to get R_{c,o} from F_{c,o}.
  10:  Repeat step 7 to get X_{c,o}.
     /* BNSL training step */
  11:  Feed X_{c,s} and X_{c,o} into BNSL.
  12:  Get a network structure S_c for c.
  13: end for
  /* BNSL prediction step */
  14: Predict on new entities.
  15: Return valid entity-concept pairs.
"We will state the key steps in detail in the next sub-sections.", "Given a concept c in C, we first collect all its entities E_c ⊆ E from the concept graph.", "Then we can obtain a set of facts F_c that contain these entities.", "Since an entity can appear in a fact as a subject or an object, we split the facts F_c into subject-view facts F_{c,s} and object-view facts F_{c,o}.", "If we made use of all the relations under the subject or object view, it would be inefficient or even impossible to learn the sparse network structure with a large number of relation variables.", "Hence, based on the facts, we select possibly related relations for the concept c to reduce the complexity of the problem.", "There are various strategies which can be applied for the relation selection.", "We can assume that a relation is highly related to the concept if it appears many times in the fact set F_c.", "In this way, we can count the frequencies of relations for each view and select the top K as the most relevant ones for a concept.", "We call it TF selection since we measure the relevance of a relation according to its frequency.", "We can also select relations according to the TFIDF measurement (Wu et al., 2008).", "For each view, we select the most relevant K relations for the concept c.", "We denote them as R_{c,s} ⊆ R for the subject-view facts and R_{c,o} ⊆ R for the object-view facts.", "In summary, for each concept, we construct two sub-problems for the BNSL task.", "One is from the subject view and the other is from the object view.", "Under each view, the sub-problem contains one concept and at most K relations.",
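A minimal sketch of this per-concept sub-problem construction with TF selection (the data structures and function names are illustrative assumptions, not the paper's code):

```python
from collections import Counter

def build_subject_view(concept, concept_graph, facts, k=5):
    """Collect E_c, the subject-view facts F_{c,s}, and the top-K
    relations R_{c,s} ranked by term frequency (TF selection).

    concept_graph: dict mapping a concept to its set of entities E_c
    facts: list of (subject, relation, object) open domain triplets
    """
    entities = concept_graph[concept]                    # E_c
    subj_facts = [f for f in facts if f[0] in entities]  # F_{c,s}
    rel_counts = Counter(r for s, r, o in subj_facts)
    top_relations = [r for r, _ in rel_counts.most_common(k)]  # R_{c,s}
    return entities, subj_facts, top_relations
```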
"The goal is to learn a network structure from the concept and the corresponding relations.", "Given a sub-problem for a concept c, we first obtain the corresponding data observations and then feed them as the input of BNSL for interpretable relationship discovery.", "For each concept, we can learn a Bayesian network structure from its top subject-view or object-view relations.", "The data observations X_{c,s} with TF relation selection for the subject view of the concept c are generated as follows: for each entity e ∈ E_c, we use 1 as the concept observation, meaning that the entity e is an instance of concept c.", "We use the number of times the subject e and a top relation r ∈ R_{c,s} appear together in the facts F_{c,s} as the relation observation for e and r.", "The K relation observations and the concept observation together become the positive data observations for c.", "In order to learn meaningful network structures, we generate an equal number of negative data observations for c.", "We first randomly sample the same number of entities from E_{c'} = {e_i : e_i ∈ E \ E_c} as negative entities of c.", "We use 0 as the concept observation for negative entities.", "Then for each negative entity e', we count the number of times the subject e' and a relation r ∈ R_{c,s} appear in all the collected facts as the relation observation for e' and r.", "The K relation observations and the concept observation together become the negative data observations for c.", "X_{c,s} consists of both the positive and negative data observations.", "Similarly, we can obtain the data observations X_{c,o} for the object view.", "In this paper, we employ the widely-used Chow-Liu tree building algorithm (Chow and Liu, 1968) as the BNSL method.", "This algorithm approximates the underlying distributions of variables as a dependency tree, which is a graph where each node has only one parent and cycles are not allowed.", "It first calculates the mutual information between each pair of nodes (i.e., variables), and then takes the maximum spanning tree of that matrix as the approximation.", "While this only provides a rough approximation of the underlying data, it gives good results for many applications (Suzuki, 2010; Tavassolipour et al., 2014; Hassan-Moghaddam and Jovanovic, 2018; Ding et al., 2019), especially when one needs to know the most important influencer of each variable.", "In addition, this algorithm becomes extremely efficient when it deals with a large number of variables.", "Since both the subject and object views reflect some properties of entities, we can concatenate the subject-view relations and object-view relations together for a more complete representation of entities.", "The concatenated data can be forwarded into BNSL for a more comprehensive result of interpretable relationship discovery.", "Given q concept variables and K relevant relations for each concept, the number of parameters in BNSL is at most q × K.", "After we learn a network structure for each concept, we can infer the concept of a new entity e easily.", "We first identify the open domain facts with e as the subject or object, and then feed the observations of relations for a concept c into the network to calculate the probability p(c | e).", "We still use the open domain entity Anderson and its two facts introduced in Section 1 as an example to show how BNSL works.", "Assume we have two open domain concepts, English presenter and Japanese presenter.",
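The Chow-Liu step just described (pairwise mutual information followed by a maximum spanning tree) can be sketched as follows; this is an illustrative implementation assuming binary observation matrices, not the paper's code:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.metrics import mutual_info_score

def chow_liu_tree(X):
    """X: (n_samples, n_vars) observation matrix whose columns are the
    concept variable and its top-K relation variables.
    Returns the edges of the dependency tree."""
    n_vars = X.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            mi[i, j] = mutual_info_score(X[:, i], X[:, j])
    # A maximum spanning tree over MI is a minimum spanning tree over -MI.
    # (Pairs with exactly zero MI are treated as absent edges here.)
    mst = minimum_spanning_tree(-mi)
    return list(zip(*mst.nonzero()))
```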
"Given the entity Anderson and its open domain relations host and winner of a British Comedy Award as input of BNSL, the output is the probabilities that Anderson belongs to each concept.", "BNSL will predict a higher probability for Anderson having the concept English presenter than having Japanese presenter.", "With the learned relationship between relations and concepts from BNSL, we indirectly associate entities with their concepts and give interpretations to the question why the entity is associated with those concepts in open domain.", "The hypernymy detection task aims to identify concepts for entities in open domain.", "It is helpful for us to evaluate the quality of the learned relationships from BNSL.", "In this section, we conduct extensive experiments to evaluate the performance of BNSL.", "We test the performance of our proposed method on two public datasets, one is in English and the other is in Chinese.", "For the English dataset, we use 15 million high-precision OIE facts 1 , the Microsoft concept graph 2 and 7 .", "87 million Wikipedia sentences 3 for our experiments.", "Since there are more than 5 million concepts in the English dataset and most of them have few entities, we focus on those concepts with more than 50 entities in the experiments.", "For the Chinese dataset, we use sentences and the corresponding facts 4 in (Sun et al., 2018b).", "The concept graph is also built by Baidu Baike.", "Table 2 shows the statistics of the concept 1 http://reverb.cs.washington.edu 2 https://concept.research.microsoft.", "com/Home/Download 3 https://www.kaggle.com/mikeortman/ wikipedia-sentences 4 https://ai.baidu.com/broad/download?", "dataset=saoke ConceptGraphs Dataset # entities # concepts # overlaps % overlaps English 12,501,527 5,376,526 613,454 27.10% Chinese 9,230,727 3,245 475,507 48.14% Facts Dataset # facts # subjects # objects # predicates English 14,728,268 1,396,793 1,698,028 664,746 Chinese 37,309,458 624,632 550,404 10,145 Table 2: Statistics of concept graphs and facts.", "In open domain facts, each mention of a subject or object is considered as an open domain entity.", "So we naturally map an entity in open domain facts and concept graphs by the same mention.", "In Table 2, the column # of overlap is about the number of fact entities appearing in the concept graph and the last column is the percentage of fact entities in the concept graph.", "With the predicates as relations for the open domain facts, we build the Bayesian network structure learning method to bridge the gap between relations in open domain facts and concepts in the concept graph.", "In the experiment, we compare with the state-of-the-art model HypeNet (Shwartz et al., 2016) for hypernymy detection.", "HypeNet improves the detection of entity-concept pairs with an integrated path-based and distributional method.", "An entity and a concept must appear together in a sentence so that HypeNet can extract lexico-syntactic dependency paths for training and prediction.", "However, only less than 11% of entity-concept pairs co-occur in Wikipedia sentences in reality (Table 1).", "Therefore, we compare BNSL with HypeNet only on the entity-concept pairs that co-appear in sentences.", "In addition, we compare BNSL with recurrent neural networks (RNNs).", "We apply attention-based Bi-LSTM (Zhou et al., 2016) and derive three versions of RNNs as baseline methods,", "RNN(f), RNN(sen) and", "RNN(e).", "RNN(f) determines the concepts of an entity according to the facts containing the entity, while RNN(sen) by the 
"Specifically, each entity in RNN(f) is represented by its associated facts.", "Each fact is a sequence of subject, predicate and object.", "Each subject, predicate and object vector is fed in sequence into RNN(f), resulting in a fact embedding vector.", "The averaged fact vector becomes the entity's feature for concept classification.", "Similar to HypeNet, RNN(sen) requires the entity-concept pairs to co-appear in sentences.", "Different from RNN(sen), RNN(e) focuses on sentences containing the entity only.", "Based on the sentences, RNN(e) aims to learn which concept an entity belongs to.", "We follow HypeNet and RNN to use pre-trained GloVe embeddings (Pennington et al., 2014) for initialization.", "Besides, we compare BNSL with traditional support vector machines (SVM) with a linear kernel.", "The input features for SVM and BNSL are the same, i.e., the top K relations for each concept.", "Here we set K = 5.", "During testing, all methods are evaluated on the same testing entities.", "We calculate the accuracy, precision, recall and F1-score over the prediction results for evaluation.", "We split the data into 80% for training and 20% for testing.", "For English, the total numbers of training and testing data are 504,731 and 123,880, respectively; whereas for Chinese, the numbers are 5,169,220 and 1,289,382, respectively.", "In this section, we show the evaluation performance on the task of concept discovery with the learned interpretable relationships from open domain facts.", "Table 3 and Table 4 list the results for co-occurred and non-co-occurred entity-concept pairs in sentences, respectively.", "In the tables, (s) and (o) denote the performance under only the subject and the object view, respectively.", "RNN(f), BNSL and SVM present the prediction performance with the concatenation of both the subject and object views.", "As mentioned in the previous section, we can use TF or TFIDF for the most relevant relation selection.", "We test both strategies for BNSL and SVM.", "For the English dataset, TFIDF performs much better than TF, while the result is the opposite for the Chinese dataset.", "In this section, we analyze the results of BNSL and SVM with TFIDF for the English dataset.", "For the Chinese dataset, we report the performance of BNSL and SVM with TF.", "We will show more results for the relation selection in the next section.", "For the co-occurred entity-concept pairs in sentences, BNSL(s) performs the best for both datasets.", "Surprisingly, SVM performs much better than HypeNet, with an improvement of around 10% on accuracy for both datasets, as shown in Table 3.", "In addition, SVM achieves better results compared to RNN(sen).", "The reason that HypeNet and RNN(sen) cannot perform well may be that the information expressed in the sentences is too diverse.", "HypeNet and RNN(sen) cannot capture meaningful patterns from sentences for the task of concept discovery.", "Since RNN(e) further ignores the concept information during the sentence collection step, it cannot perform well compared with RNN(sen).", "In contrast, information extracted from open domain facts is much more concentrated on concepts.", "Furthermore, the most relevant relations associated with entities help filter out noise.", "Therefore, SVM can achieve a much better result than the sentence-based baselines.", "Though SVM does well on the co-occurred data, BNSL outperforms SVM on all four evaluation metrics.", "By learning interpretable relationships between relations and concepts, BNSL captures the most important knowledge about concepts and further exploits their dependencies to help improve the concept discovery task.",
"However, the concatenation of subject and object views for BNSL does not improve the performance on either dataset.", "Similar phenomena can be observed for RNN(f) and SVM.", "Specifically, the results under the subject view are usually better than those under the object view, implying that when people narrate facts, they may pay more attention to selecting suitable predicates for subjects rather than for objects.", "Table 4 lists the performance of RNN(e), RNN(f), SVM and BNSL on the non-co-occurred data.", "We can observe a similar trend compared to the results on the co-occurred data.", "Since HypeNet and BNSL make use of different information sources (natural language sentences for HypeNet and open domain facts for BNSL), we try to ensemble them to improve the performance further.", "We first train HypeNet and BNSL independently.", "Then we can obtain prediction probabilities of entity-concept pairs from HypeNet and BNSL separately.", "We select the probabilities with higher values as the final predictions.", "The last row in Table 3 shows the performance of ensembling HypeNet and BNSL.", "We denote it as B + H. It can be seen that B + H achieves the best accuracy, recall and F1-scores on the co-occurred data.", "This reveals that interpretable relationships extracted from open domain facts are complementary to natural language sentences in helping concept discovery.", "Studying meaningful knowledge from open domain facts provides an alternative perspective for building concept graphs, and this paper makes a first attempt at it.",
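One natural reading of the B + H ensembling described above is the following sketch (hypothetical function names; the inputs are assumed to be per-pair positive-class probabilities from the two independently trained models):

```python
def ensemble_b_plus_h(p_bnsl, p_hypenet, threshold=0.5):
    """For each entity-concept pair, keep the more confident of the two
    model probabilities, then threshold it into a validity decision."""
    preds = []
    for pb, ph in zip(p_bnsl, p_hypenet):
        p = max(pb, ph)          # select the higher-valued probability
        preds.append(1 if p >= threshold else 0)
    return preds
```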
"Different relation selection strategies influence the performance of the BNSL and SVM methods.", "Table 5 reports the performance of TF and TFIDF relation selection on the entire data for both English and Chinese.", "We observe that TFIDF selection performs better on English while TF is better on Chinese.", "However, BNSL always outperforms SVM regardless of the views or the relation selections.", "In addition, since SVM performs much better than the neural-network-based HypeNet and RNN, we try to ensemble it with BNSL to improve the performance further.", "We consider the prediction probabilities of SVM as a new variable and incorporate it into BNSL for network structure learning.", "We denote this model as BNSL + SVM.", "For comparison, we also ensemble SVM with BNSL by taking the results of BNSL as a new feature dimension for SVM.", "We name it SVM + BNSL.", "It can be seen from Table 5 that the ensemble of BNSL and SVM outperforms the single models on both datasets.", "In particular, BNSL + SVM does better than SVM + BNSL, revealing that BNSL has a better capability of exploiting meaningful knowledge from other sources.", "Furthermore, we evaluate how BNSL performs with different numbers of relations.", "Figure 3 shows the results of BNSL(s) with the number of relations set from 1 to 20.", "TFIDF relation selection is used for the English dataset and TF for Chinese.", "We can observe that BNSL performs best when we select the top 5 relations, and the results become stable with more than 5 relations.", "In reality, the open domain facts or co-occurring sentences associated with entity-concept pairs are usually missing, making the input information for concept discovery extremely sparse.", "In this section, we study how BNSL performs with such sparse input.", "Given a set of entities, we first extract the corresponding facts (or sentences) under each concept.", "For both datasets, we get around 30 million entity-concept pairs for testing, and more than 97% do not have the corresponding fact information with the top K relations, making the prediction of BNSL very challenging.", "Furthermore, both datasets have a large number of fine-grained concepts, making the task more difficult.", "For the missing data, we feed an empty fact or sentence into BNSL and the other models for training and testing.", "Also, we observe that RNN does not perform as well as the other methods, and in particular RNN(sen) performs the worst when the input is extremely sparse.", "In Figure 4, we report the improvement of F1-score over RNN(sen).", "We can observe that HypeNet, SVM and BNSL achieve much better performance, showing their robustness to missing values.", "In addition, B + H can still achieve the best result.", "It further confirms that open domain facts and natural language sentences are complementary.", "[Figure 4: Improvement on F1 over RNN(sen) (English panel; y-axis 0% to 12.5%), comparing HypeNet, RNN and the other methods.]", "In this paper, we investigate the task of learning interpretable relationships between entities, relations and concepts from open domain facts to help enrich and refine concept graphs.", "Bayesian network structures are learned from open domain facts to discover meaningful dependencies between the relations of facts and the concepts of entities.", "Experimental results on an English dataset and a Chinese dataset reveal that the learned network structures can better identify concepts for entities based on the relations of entities from open domain facts, which will further help build a more complete concept graph." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain" ]
[ "We present a simple approach for text infilling , the task of predicting missing spans of text at any position in a document.", "While infilling could enable rich functionality especially for writing assistance tools, more attention has been devoted to language modelinga special case of infilling where text is predicted at the end of a document.", "In this paper, we aim to extend the capabilities of language models (LMs) to the more general task of infilling.", "To this end, we train (or fine-tune) off-the-shelf LMs on sequences containing the concatenation of artificially-masked text and the text which was masked.", "We show that this approach, which we call infilling by language modeling , can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.", "Furthermore, we show that humans have difficulty identifying sentences infilled by our approach as machine-generated in the domain of short stories.", "Text infilling is the task of predicting missing spans of text which are consistent with the preceding and subsequent text.", "1 Systems capable of infilling have the potential to enable rich applications such as assisting humans in editing or revising text (Shih et al., 2019), connecting fragmented ideas (AI21, 2019), and restoring ancient documents (Assael et al., 2019).", "Rather than targeting a particular application, our goal here is to provide a general, flexible, and simple infilling framework which can convincingly infill in a variety of domains.", "ing remarkably coherent text (Zellers et al., 2019; See et al., 2019), (2) efficient at generating text, and (3) conceptually simple, but cannot infill effectively as they can only leverage context in a single direction (usually the past).", "On the other hand, strategies such as BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2019) are able to infill using both preceding and subsequent text.", "However, their use of bidirectional attention limits their infilling capabilities to fixed-length spans.", "This is problematic asfor many applicationswe may not know the length of a missing span a priori .", "Zhu et al. 
"In this work, we present infilling by language modeling (ILM), a simple framework which enables LMs to infill variable-length spans while preserving their aforementioned benefits: generation quality, efficient sampling, and conceptual simplicity.", "Our framework involves a straightforward formulation of the infilling task which, as we demonstrate, can be learned effectively by existing LM architectures.", "As shown in Figure 1, our approach concatenates artificially-masked text with the text which was masked, and adopts a standard LM training (or fine-tuning) procedure on such examples.", "Once trained, infilling can be performed for a document with blanks by using the LM to generate text and then replacing the blanks with this text.", "In addition to its conceptual simplicity, our experiments show that ILM enables off-the-shelf LMs to infill effectively.", "Furthermore, we find that infilling performance improves when starting from a large-scale pre-trained LM (as opposed to training from scratch), suggesting an additional benefit of using our model-agnostic framework compared to approaches which require specialized architectures.", "We provide an interactive web demo of models trained under our framework.", "This demo can infill multiple variable-length spans with different granularities (e.g. words, n-grams, and sentences) on the domains of short stories, scientific abstracts, and song lyrics: https://chrisdonahue.com/ilm .", "All code, data, and trained models are available at https://github.com/chrisdonahue/ilm and also on the CodaLab platform at https://worksheets.codalab.org/worksheets/0x9987b5d9cce74cf4b2a5f84b54ee447b .", "The task of infilling is to take incomplete text x̃ containing one or more missing spans, and return completed text x.", "Let [blank] be a placeholder for a contiguous sequence (span) of one or more missing tokens.", "Then, incomplete text x̃ is a sequence of tokens, some of which are [blank].", "In order to map x̃ to x, an infilling strategy must specify both how many and which tokens to generate for each [blank].", "Note that there may be many reasonable x for a given x̃.", "Hence, we are interested in learning a distribution p(x | x̃).", "In this section, we describe our ILM framework.", "We first outline a simple reparametrization of the infilling task.", "Then, we define a procedure for automatically generating suitable training examples which can be fed to an off-the-shelf LM.",
"Fedus et al. (2018) explore an infilling framework where LMs are trained on concatenations of x̃ and x, i.e., they use LMs to directly predict x given x̃.", "While their approach is effective at infilling individual words, it is somewhat redundant, as the model must also predict the unmasked text already present in x̃.", "Additionally, a model is not guaranteed to exactly reproduce the unmasked text.", "Instead, we make the trivial observation that it suffices to predict only the missing spans y which will replace the [blank] tokens in x̃.", "We can then construct x by simply replacing [blank] tokens in x̃ with predicted spans y in a deterministic fashion.", "In order to handle multiple variable-length spans, we pose y as the concatenation of all missing spans separated by special [answer] tokens (one [answer] per [blank]) (Figure 1).", "We can thus cast infilling as learning p(y | x̃) without loss of generality.", "Given a corpus consisting of complete text examples, our framework first manufactures infilling examples and then trains an LM on these examples.", "To produce an infilling example for a given x, we first sample an x̃ from a stochastic function Mask(x) which randomly replaces some number of spans in x with [blank] tokens.", "Then, we concatenate together the spans which were replaced, separated by [answer] tokens, to form a training target y.", "Finally, we construct the complete infilling example by concatenating x̃, [sep], and y (see Figure 2 for a complete example).", "We train (or fine-tune) LMs on these infilling examples using standard LM training methodology, yielding models of the form p(y | x̃).", "Specifically, we train GPT-2 (Radford et al., 2019) off the shelf, but any LM can potentially be used.", "This framework has several advantages.", "First, it incurs almost no computational overhead compared to language modeling.", "Specifically, if there are k missing spans in x̃, the concatenation of x̃ and y contains only 2k + 1 more tokens than x (one [blank] and one [answer] per missing span plus one [sep]).", "As k is usually small (averaging around 2 per example in our experiments), sequence lengths remain similar to those encountered for the same x during language modeling.", "In contrast, using LMs to directly predict x from x̃ as in Fedus et al. (2018) effectively doubles the sequence length of x.", "This is particularly problematic when considering models like GPT-2, whose memory usage grows quadratically with sequence length.", "Second, our framework requires minimal change (three additional tokens) to an existing LM's vocabulary.", "Finally, because the entirety of x̃ is in the past when predicting y, the ILM framework combines the ability to incorporate context on both sides of a blank with the simplicity of decoding from LMs.", "We design our experiments to determine if training an off-the-shelf LM architecture with our ILM framework can produce effective infilling models for a variety of datasets.", "Specifically, we train on three datasets of different sizes and semantics (details in Appendix A): short STORIES (Mostafazadeh et al., 2016), CS paper ABSTRACTS, and song LYRICS.", "A benefit of the ILM framework is that it can be trained to infill spans corrupted by arbitrary mask functions.", "Here, we explore a mask function which simultaneously trains models to infill different granularities of text; specifically, words, n-grams, sentences, paragraphs, and documents.", "By using a unique special token per granularity (e.g. [blank word]), this mask function offers users coarse but intuitive control over the length of the spans to be infilled.",
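A minimal word-level sketch of this example-manufacturing step and of the deterministic splice-back at inference time (span lengths and token strings here are illustrative; the paper's mask function also operates over n-grams, sentences, paragraphs and documents):

```python
import random

def make_ilm_example(tokens, mask_prob=0.03, max_span=3):
    """Build one training sequence of the form: x-tilde [sep] y,
    where y concatenates the masked spans, one [answer] per [blank]."""
    masked, answers = [], []
    i = 0
    while i < len(tokens):
        if random.random() < mask_prob:
            span_len = random.randint(1, max_span)
            answers.append(" ".join(tokens[i:i + span_len]))
            masked.append("[blank]")
            i += span_len
        else:
            masked.append(tokens[i])
            i += 1
    target = " ".join(a + " [answer]" for a in answers)
    return " ".join(masked) + " [sep] " + target

def fill_blanks(masked_text, generated):
    """Deterministically construct x by splicing predicted spans into x-tilde."""
    spans = [s.strip() for s in generated.split("[answer]") if s.strip()]
    out = masked_text
    for span in spans:
        out = out.replace("[blank]", span, 1)
    return out
```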
"We configure our mask function to mask each token in a given document with around 15% probability, echoing the configuration of Devlin et al. (2019).", "However, instead of masking individual tokens uniformly at random, we perform a preorder traversal of the granularity hierarchy tree, randomly masking entire subtrees with 3% probability.", "For the datasets we consider, this results in a marginal token mask rate of about 15% (details in Appendix B).", "While we train to infill several different granularities, we primarily evaluate and discuss the ability of our models to infill sentences, for brevity.", "Quantitative results of our models on other granularities can be found in Appendix D, and granularity functionality can also be explored in our web demo.", "We train models with a common setup (Appendix C) while varying the infilling strategy and dataset.", "In addition to our proposed ILM strategy for infilling, we consider three baseline strategies: (1) language modeling (LM; infilling based only on past context), (2) reverse language modeling (LM-Rev; infilling based only on future context), and (3) language modeling based on all available context (LM-All).", "LM-All simply concatenates x̃ and x together as in Fedus et al. (2018).", "LM-All represents arguably the simplest way one could conceive of infilling with LMs, but results in long sequence lengths.", "Training examples for all strategies are depicted in Figure 2.", "For each strategy, we also vary whether training is initialized from the pre-trained GPT-2 model or from scratch.", "Despite discrepancies between the pre-training and our fine-tuning for most infilling strategies, all of the infilling experiments initialized from the pre-trained checkpoint performed better than their from-scratch counterparts.", "This indicates that ILM can effectively leverage large-scale language modeling pre-training to improve infilling performance.", "Henceforth, we will only discuss the models initialized from the pre-trained checkpoint, though we report quantitative performance for all models in Appendix D.", "For the models trained on STORIES and ABSTRACTS, we trained models to convergence using early stopping based on the validation set perplexity (PPL) of each model, computed only on the masked tokens.", "These models took about a day to reach their early stopping criteria on a single GPU.",
Table 1: Quantitative evaluation results (test PPL; lower is better).
  Model | STO | ABS | LYR | Length
  LM | 18.3 | 27.9 | 27.7 | 1.00
  LM-Rev | 27.1 | 46.5 | 34.3 | 1.00
  LM-All | 15.6 | 22.3 | 21.4 | 1.81
  ILM | 15.6 | 22.4 | 22.6 | 1.01
"For the larger LYRICS dataset, we trained models for 2 epochs (about two days on a single GPU).", "We evaluate the quantitative performance of our models on the sentence infilling task by measuring PPL on test data.", "In this setting, a sentence is selected at random and masked out, and we measure the likelihood assigned by a model to the masked sentence in the context of the rest of the document.", "Regardless of differences in the ordering and number of tokens that each strategy uses to represent a test example, PPL is always computed only for the span of tokens comprising the original sentence (e.g. green tokens in Figure 2).",
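For instance, the span-restricted PPL just described can be computed from per-token log-probabilities as in this small sketch (argument names are illustrative):

```python
import math

def span_perplexity(token_log_probs, span_start, span_end):
    """Perplexity over only the tokens of the masked/original sentence."""
    span = token_log_probs[span_start:span_end]
    return math.exp(-sum(span) / len(span))
```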
"Table 1 shows that across all datasets, ILM outperforms models which see only past or future context (LM and LM-Rev, respectively), implying that our proposed framework is able to take advantage of bidirectional context despite using unidirectional models.", "Additionally, while one might expect LM-All to outperform ILM because its training examples more closely resemble those of standard LMs, ILM achieves similar performance to LM-All.", "This indicates that GPT-2 is able to effectively learn the syntax of ILM examples and achieve reasonable infilling performance with shorter sequences (and hence with much less memory usage).", "(Overlap-based metrics such as BLEU score (Papineni et al., 2002) are not appropriate for evaluating infilling, as there are many realistic infills that have no word-level overlap with the original, e.g., 'a sandwich' instead of 'leftover pasta'.)", "ILM models also perform comparably on language modeling compared to the models which were trained only on language modeling (Appendix D.1).", "This suggests that ILM does not just repurpose LMs to infill, but rather extends their capabilities while maintaining their original functionality.", "In addition to our quantitative evaluation, we seek to evaluate the qualitative performance of ILM.", "To this end, we sample a story from the STORIES test set and randomly replace one of its five human-written sentences with a model output.", "Then, we task human annotators on Amazon Mechanical Turk with identifying which of the sentences in a story was machine-generated (details in Appendix E).", "We compare our ILM model to three baseline infilling strategies: an LM (context beyond the replaced sentence was discarded), the best model (self-attention; SA) from Zhu et al. (2019), and the pre-trained BERT (base) model (Devlin et al., 2019).", "All approaches except for BERT were first fine-tuned on the STORIES dataset.", "To infill using BERT, we replace the tokens representing the original sentence with mask tokens, and then generate text by replacing mask tokens one at a time (conditioning on previously-generated tokens).", "While vocabulary differences make it less useful to compare PPL for the SA and BERT baselines to our GPT-2-based strategies, we can still meaningfully compare them in this human evaluation setting.", "For each approach we compute a score, which we define as the percentage of examples where the annotator did not correctly identify the machine-generated sentence.", "Therefore, a higher score implies a better (more natural, human-like) model.", "We collect 100 responses for each model and report the scores in Table 2, with qualitative examples in Figure 3 and Appendix E.
Of the four strategies, ILM achieves the highest score, implying that sentences infilled by ILM are harder for humans to recognize as fake than those produced by the other strategies.", "Somewhat surprisingly, we observed that, despite only observing past context, the LM model performed better than BERT and SA.", "BERT may have performed poorly due to the intrinsic difficulty of finding convincing infills with a precise length in tokens.", "SA may have performed poorly because, unlike LM and ILM, it was not initialized from a large-scale pre-trained LM.", "Methodology.", "A number of systems have the capability to infill but have practical drawbacks.", "Many systems are unable to automatically determine span length, and thus can only infill fixed-length spans (Fedus et al., 2018; Devlin et al., 2019; Yang et al., 2019; Joshi et al., 2019; Gu et al., 2019; Liu et al., 2019).", "Methods such as BERT present additional challenges during inference (Wang and Cho, 2019).", "Rudinger et al. (2015) frame narrative cloze as a generation task and employ language models, but they only consider one infill of a fixed length.", "Zhu et al. (2019); Shen et al. (2020) infill multiple variable-length sequences, but these approaches require the masked context to be iteratively updated and reprocessed to fill in blanks one at a time.", "In contrast, our approach appends infilled text to the context and does not require reprocessing the entire input sequence for each blank.", "AI21 (2019) train an LM which can fill in the middle of a paragraph given the first and last sentences; our work generalizes such capabilities.", "Task.", "The cloze task (Taylor, 1953) evaluates language proficiency by asking systems to fill in randomly-deleted words by examining context.", "Cloze has been extended in the forms of discourse cloze (Deyes, 1984) and narrative cloze (Chambers and Jurafsky, 2008), which remove phrases and narrative events, respectively.", "Recently, cloze has been used not only for evaluation, but also to improve text generation quality (Fedus et al., 2018) and transfer learning (Devlin et al., 2019) (under the name masked language modeling).", "Text infilling can be thought of as generalizing the cloze task from single words to spans of unknown length.", "Raffel et al. (2019) explore infilling as a pre-training objective to improve downstream performance on inference tasks; our work focuses on generation.", "Story generation.", "Recent work seeks to generate stories given a title and storyline (Yao et al., 2019), entities (Clark et al., 2018), premise (Fan et al., 2018), or surrounding context and rare words (Ippolito et al., 2019).", "Our work differs in that we aim to build systems capable of making predictions based only on text context, rather than aspects specific to stories (e.g. storyline).",
"We presented a simple strategy for the task of infilling which leverages language models.", "Our approach is capable of infilling sentences which humans have difficulty recognizing as machine-generated.", "Furthermore, we demonstrated that our infilling framework is effective when starting from large-scale pre-trained LMs, which may be useful in limited-data settings.", "In future work, we plan to incorporate these features into co-creation systems which assist humans in the writing process.", "We hope that our work encourages more investigation of infilling, which may be a key missing element of current writing assistance tools.", "This work was funded by DARPA CwC under ARO prime contract no. W911NF-15-1-0462.", "We thank all reviewers for their helpful comments." ]
[ "method", "abstain", "objective", "method", "result", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "result", "result", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "method", "method", "objective", "abstain", "method", "other", "other", "other" ]
[ "Low-shot relation extraction (RE) aims to recognize novel relations with very few or even no samples, which is critical in real scenario application.", "Few-shot and zero-shot RE are two representative low-shot RE tasks, which seem to be with similar target but require totally different underlying abilities.", "In this paper, we propose Multi-Choice Matching Networks to unify low-shot relation extraction.", "To fill in the gap between zero-shot and few-shot RE, we propose the triplet-paraphrase meta-training, which leverages triplet paraphrase to pre-train zero-shot label matching ability and uses meta-learning paradigm to learn few-shot instance summarizing ability.", "Experimental results on three different low-shot RE tasks show that the proposed method outperforms strong baselines by a large margin, and achieve the best performance on few-shot RE leaderboard 1 .", "Relation extraction (RE) aims to extract the relation between two given entities in the context.", "The most popular approaches to build RE models are based on supervised learning (Zeng et al., 2014; Baldini Soares et al., 2019).", "Despite the su-perior performance, supervised relation extraction approaches severely suffer from the data bottleneck, which restricts their application to more relation types in real scenarios.", "Consequently, low-shot relation extraction has become a recent research hotspot in RE area.", "There are two mainstream learning paradigms widely explored in low-shot relation extraction, namely zero-shot RE (Levy et al., 2017) and few-shot RE (Han et al., 2018).", "Few-shot relation extraction aims to identify instances of novel relation type with only a few illustrative instances, while zero-shot RE is more progressive, which only uses external Corresponding authors.", "knowledge and the name or definition of the novel relations to recognize them.", "Because low-shot RE only requires very limited manually annotated data, it can effectively alleviate data bottlenecks in conventional RE and therefore attached great attention.", "However, even with similar goals, zero-shot RE and few-shot RE actually require different fundamental abilities.", "Specifically, zero-shot RE is built on label semantic matching ability, which requires models to sufficiently exploit the label semantic of given novel relations, and matches relations and queries based on their underlying semantics.", "While few-shot RE is built on instance semantic summarizing ability, which requires a model to quickly generalize to novel relations by summarizing critical information from few-shot instances.", "Due to this fundamental difference, current state-of-the-art architectures are separately learned to deal with these two low-shot RE tasks.", "For zero-shot RE, the most popular solution is to transform it into a textual entailment (Obamuyide and Vlachos, 2018; Sainz et al., 2021), word prediction (Brown et al., 2020) or MRC problem (Levy et al., 2017; Bragg et al., 2021) and use external resources from these tasks to pre-training the label semantic matching ability.", "However, the divergence between relation extraction and these tasks will inevitably undermine the performance.", "Besides, MRC and tex-5785 tual entailment architecture can only deal with one novel relation each time, which significantly increases the computational and storage cost of deploying such models in real-world scenarios.", "For few-shot RE, current methods mainly focus on summarizing better prototypes from a few illustrative instances (Snell et al., 2017), or 
"These approaches require few-shot examples to fine-tune on or summarize prototypes from, and therefore cannot be directly applied to zero-shot RE.", "As a result, current relation extraction models cannot be effectively and efficiently applied to all low-shot RE settings.", "In this paper, we propose to unify low-shot relation extraction by returning to the essence of relation extraction.", "Fundamentally, relation extraction can be viewed as a multiple choice task.", "Given two entities in context, an RE system needs to match the most appropriate relation, or 'others' for none-of-the-above, from a set of pre-defined relation types.", "The information required to accomplish the multi-choice matching can be summarized either from the surface form of the relation name or from few-shot instances.", "Motivated by this, we propose the Multi-Choice Matching Network (MCMN) for unified low-shot RE, which is shown in Figure 2.", "Specifically, MCMN converts all candidate relation descriptions into a multi-choice prompt.", "Then the input instance is concatenated with the multi-choice prompt and passed through a pre-trained encoder to obtain the semantic representations of the input instance and candidate relations.", "Finally, MCMN conducts relation extraction by directly matching the relation representations and the instance representation.", "To equip MCMN with both label semantic matching ability and instance semantic summarizing ability, we propose to pre-train MCMN via triplet-paraphrase meta pre-training, which contains the following two critical components: 1) a text-triple-text paraphrase module, which can generate large-scale pseudo relation extraction data to pre-train the label semantic matching ability of MCMN; 2) a meta-learning style training algorithm, which enriches MCMN with the instance semantic summarizing ability to quickly generalize across different relation extraction tasks.", "Specifically, given large-scale raw texts, triplet-paraphrase first extracts (subject, predicate, object) triplets via the OpenIE (Cui et al., 2018) toolkit.", "Then, based on the extracted triplets, paraphrases of the original texts are generated using an RDF-to-Text generation model.", "In this way, we can obtain large-scale pseudo annotations by collecting the generated sentences and the predicates in the triplets.", "Such a corpus can be used to effectively pre-train the label semantic matching ability of MCMN by matching the paraphrases to the corresponding predicates.", "Furthermore, to enrich MCMN with the instance semantic summarizing ability, such pre-training is conducted in a meta-learning paradigm.", "That is, MCMN is asked to learn different relation extraction tasks at each iteration, so that MCMN cannot overfit the pre-training corpus by directly memorizing specific target relations.", "To evaluate our method, we conduct experiments on three fairly different RE tasks: zero-shot RE, few-shot RE, and few-shot RE with the none-of-the-above relation.", "Experiments show that the proposed method outperforms previous methods on all three tasks.", "Our source code is available at https://github.com/fc-liu/MCMN .", "The main contributions of this work are: We propose MCMN, a unified architecture for low-shot relation extraction, by fundamentally formulating relation extraction using a multi-choice matching paradigm.", "We propose to pre-train MCMN with triplet-paraphrase meta training, which enriches MCMN with label semantic matching ability and instance semantic summarizing ability for both zero-shot RE and few-shot RE.",
"We comprehensively study the performance of MCMN on three different relation extraction tasks, including zero-shot, few-shot, and few-shot with none-of-the-above relation extraction, where MCMN outperforms strong baseline models.", "In this section, we formulate the relation extraction task and the low-shot RE settings, including zero-shot RE and few-shot RE.", "Relation Extraction.", "Suppose the input text T = [t_1, t_2, ..., t_n] contains n tokens, and e_1 = [i, j] and e_2 = [k, l] indicate the entity pair spans, where 1 ≤ i ≤ j, j < k ≤ l, and l ≤ n.", "A relation instance is defined as x = (T, e_1, e_2).", "For example, the tuple (Tim Cook is the CEO of Apple Inc., Tim Cook, Apple Inc.) is a relation instance.", "[Figure: '[choice] employee of [choice] ceo of [choice] others [sep] Tim Cook is the CEO of Apple .']", "The aim of relation extraction is to learn a mapping function f : x → y, where y is the relation class.", "For example, we want to map (Tim Cook is the CEO of Apple Inc., Tim Cook, Apple Inc.) to its relation class CEO_of.", "Traditional RE tasks typically pre-define the class space Y and annotate a large set of instances to train the model.", "However, in real scenarios, the target relation types vary across tasks, and most of the novel relations lack annotations, rendering the supervised paradigms inapplicable.", "In that regard, how to transfer models to novel tasks becomes critical.", "Low-shot relation extraction requires models to recognize novel relations with very few samples.", "There are two mainstream low-shot RE tasks: Zero-shot RE.", "This task aims to conduct relation extraction without any annotated instance other than some external knowledge z (or side information), such as relation descriptions.", "Models are supposed to transfer the knowledge and extract the target relation y_t for an input instance x through only the external knowledge.", "Few-shot RE.", "This task aims to conduct relation extraction with only a few annotated instances per novel relation.", "Each few-shot RE task contains a support set S = {S_1, ..., S_N} for N novel relations.", "For relation i, S_i = {S_i^1, ..., S_i^K} contains K annotated instances.", "Models are supposed to learn to transfer the knowledge and extract the target relation y_t for an instance x through the N-way K-shot support set.", "In this section, we introduce our Multi-Choice Matching Networks (MCMN).", "Different from previous unifying models, MCMN adopts a much more efficient and lightweight decoding module.", "The detailed descriptions follow.", "Fundamentally, relation extraction can be viewed as a multiple choice task.", "Inspired by recent advances in prompt learning (Brown et al., 2020; Schick and Schütze, 2021), we construct a multi-choice prompt for each relation extraction task by directly concatenating all relation names or descriptions.", "Formally, the multi-choice prompts are in the following form: [C] rel_1 [C] rel_2 ... [C] rel_N, where [C] is the placeholder separator for the following relation.",
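The prompt-plus-instance input just described can be assembled with a few string operations, as in this illustrative sketch (the marker strings and the naive first-occurrence entity marking are assumptions):

```python
def build_mcmn_input(relations, text, head, tail):
    """Concatenate the multi-choice prompt with the marked instance:
    [CLS] [C] rel_1 ... [C] rel_N [SEP] ... [e1] head [/e1] ... [e2] tail [/e2] ... [SEP]"""
    prompt = " ".join("[C] " + r for r in relations)
    marked = text.replace(head, "[e1] " + head + " [/e1]", 1)
    marked = marked.replace(tail, "[e2] " + tail + " [/e2]", 1)
    return "[CLS] " + prompt + " [SEP] " + marked + " [SEP]"

# e.g. build_mcmn_input(["employee of", "ceo of", "others"],
#                       "Tim Cook is the CEO of Apple .", "Tim Cook", "Apple")
```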
"For example, in Figure 2 the target RE task contains three novel relations, employee_of, ceo_of, and others, whose relation descriptions are concatenated altogether to form the multi-choice prompt '[C] employee of [C] ceo of [C] others'.", "After obtaining the multi-choice prompt, we feed it, accompanied by the input sentence, into the instance encoder, and the representation at each separator [C] is regarded as the representation of its following relation.", "Before instance encoding, we concatenate the multi-choice prompt with each input instance into a single sentence, and separate them with a [SEP] token.", "Besides, we follow Baldini Soares et al. (2019) and wrap the given entity pair with [e1], [/e1], [e2] and [/e2], respectively.", "For the example in Figure 2, the entire input to the encoder is: [CLS] [C] employee of [C] ceo of [C] others [SEP] [e1] Tim Cook [/e1] is the CEO of [e2] Apple [/e2] . [SEP].", "Then we encode the entire sentence x through a Transformer (Vaswani et al., 2017) encoder: h_[CLS], h_[C], ..., h_[SEP] = H(x), (1) where h ∈ R^d is the output embedding of each token in x and d is the dimension of the hidden states.", "These token embeddings are then used for multi-choice matching and model prediction.", "The multi-choice matching module matches the input instance to the corresponding relation.", "For each relation type, we use the hidden state of the [C] marker to represent its following relation: h_rel_i = h_[C]_i, (2) where h_rel_i is the representation for relation i and h_[C]_i is the hidden state of the i-th [C] token.", "For the input text, we simply average the hidden states of [e1] and [e2] to obtain the instance representation X: X = avg(h_[e1], h_[e2]). (3)", "Then we perform the matching operation between the instance and each relation: D(x, y_i) = ||X - h_rel_i||_2 . (4)", "In this equation, we adopt the Euclidean distance to measure the similarity, and the corresponding probability for each relation is: P(y_i | x; θ) = exp(-D(x, y_i)) / Σ_{j=1}^{N} exp(-D(x, y_j)). (5)", "Finally, we choose the relation ŷ with the maximal probability as the prediction: ŷ = argmax_i P(y_i | x; θ). (6)", "Training Loss.", "We adopt an end-to-end training manner by minimizing the following loss function: L_(x,y)(θ) = -Σ_{i=1}^{N} I(y_i) log P(y_i | x_i; θ), (7) where I(·) equals 1 if y_i is the golden class and I(·) = 0 otherwise.",
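As a rough PyTorch sketch of Eqs. (2)-(7) (the tensor shapes and single-instance loss are illustrative assumptions, not the released implementation):

```python
import torch
import torch.nn.functional as F

def mcmn_match(hidden, c_positions, e1_pos, e2_pos, gold):
    """hidden: (seq_len, d) encoder states for one encoded input.
    c_positions: indices of the [C] separators; e1_pos/e2_pos: [e1]/[e2]."""
    rels = hidden[c_positions]                     # (N, d),  Eq. (2)
    inst = (hidden[e1_pos] + hidden[e2_pos]) / 2   # (d,),    Eq. (3)
    dists = torch.cdist(inst.unsqueeze(0), rels).squeeze(0)  # (N,), Eq. (4)
    log_probs = F.log_softmax(-dists, dim=0)       # Eq. (5)
    loss = -log_probs[gold]                        # Eq. (7), one instance
    pred = torch.argmax(log_probs)                 # Eq. (6)
    return loss, pred
```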
"The three-period training process will be described in detail in the following section.", "As mentioned above, the required abilities for zero-shot and few-shot RE are different.", "In this paper, we propose triplet-paraphrase meta pre-training, which jointly learns the label semantic matching ability required by zero-shot RE and the instance summarizing ability required by few-shot RE.", "Following is the detailed description of the pre-training framework.", "To endow MCMN with the label semantic matching ability, large-scale data covering both relational sentences and relation types is required to pre-train the model.", "Unfortunately, the highly limited relation types in existing RE datasets may lead to overfitting on specific relations and impair the generalization of MCMN.", "In this paper, we propose triplet-paraphrase to generate large-scale pre-training data for MCMN from raw texts.", "The overall procedure of the triplet-paraphrase module is demonstrated in Figure 3(a); it extracts predicates from large-scale raw texts as relation descriptions.", "Then we utilize the extracted relational triplets to generate paraphrase sentences for further multi-choice matching pre-training.", "The elaboration is presented below.", "Each open domain fact is a triplet, which includes the subject, predicate, and object.", "The predicate in a sentence corresponds to a property of, or relation between, the subject and object, which can be regarded as a concrete expression of one relationship.", "Therefore, to extract large-scale triplets from open domain texts, we use an OpenIE model (https://github.com/dair-iitd/OpenIE-standalone) to extract facts from article collections of Wikipedia.", "Consider the example sentence: 'The service traces its history to an online service known as PlayNET.'", "The OpenIE model extracts all the possible triplets: (an online service, known as, PlayNET) and (The service, traces, its history).", "We collect all extracted predicates from the raw texts to represent the corresponding relations, preventing the model from overfitting to specific relation types.", "These triplets are further used for paraphrase generation and pre-training.", "Paraphrase Generation.", "One drawback of matching the predicate as the relation is that the predicate extracted by OpenIE is commonly a span from the current sentence, which may lead models to take a shortcut by directly matching through word co-occurrence.", "To eliminate this shortcut, we follow several recent works (Agarwal et al., 2021; Liu et al., 2021) and generate paraphrase texts to match against the predicate.", "Specifically, for the extracted triplets, we first wrap them with the special markers [H], [R], [T], corresponding to subject, predicate and object.", "Then we input the wrapped triplet texts to generate the paraphrase texts.", "In our implementation, we adopt T5 (Raffel et al., 2020) as the generator (following https://github.com/UKPLab/plms-graph2text) and pre-train it on the WebNLG dataset (Gardent et al., 2017).", "For example, we wrap (an online service, known as, PlayNET) into '[H] an online service [R] known as [T] PlayNET' and then generate the paraphrase text 'playnet is an online service'.", "After generating the paraphrase, we then match it to the corresponding predicate for pre-training.", "Each instance in the pre-training batch contains the paraphrase text and the corresponding predicate span.", "In addition, as shown in Figure 3(a), we concatenate all predicates in the current mini-batch as the multi-choice prompt and follow the training loss in Equation 7 to pre-train MCMN, where I(y_i) equals 1 when y_i is the corresponding predicate and I(y_i) = 0 otherwise.",
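A sketch of this triplet-to-paraphrase step using the transformers library (the checkpoint name is a stand-in; the actual generator is a T5 model fine-tuned on WebNLG-style RDF-to-text data):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def paraphrase(subj, pred, obj):
    """Linearize one (subject, predicate, object) triplet with the
    [H]/[R]/[T] markers and decode a paraphrase sentence."""
    linearized = f"[H] {subj} [R] {pred} [T] {obj}"
    inputs = tokenizer(linearized, return_tensors="pt")
    out = model.generate(**inputs, max_length=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# e.g. paraphrase("an online service", "known as", "PlayNET")
```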
"2 https://github.com/dair-iitd/OpenIE-standalone 3 https://github.com/UKPLab/plms-graph2text Algorithm 1 MCMN for Few-shot Prediction Require: n : fine-tuning epochs in online period Require: : meta-learned model parameters Require: S : support set, x q : query instance Require: : learning rate 1: = # save original model 2: for epoch in range (n) do 3: # compute loss of the support set: 4: LS = E ( x,y ) S L ( x,y ) ( ) 5: # update model parameters: 6: LS 7: end for 8: y = f ( x q ) # predict the query instance 9: = # restore the original model 10: return y 4.3 Online Task Adaptation In online learning or testing period, we adopt different adaptation strategies for different low-shot tasks.", "For zero-shot RE, we directly use the trained MCMN to conduct the task.", "For few-shot RE, we perform an online task meta-training on the support set, as shown in Algorithm 1.", "For each few-shot task with support set S and query instance x q , we first update the model with all support instances: E ( x,y ) S L ( x,y ) ( ) , (8) where is the learning rate, L ( ( x,y ) ( )) is the loss defined in Equation 7.", "To avoid overfitting, we use an early-stop criterion controlled by an adaptation epoch threshold that once the adaptation epoch is over the threshold, we exit the online fine-tuning and give the prediction for current query instance x q : y = f ( x q ) .", "Finally, we restore the model parameter = and repeat the procedure to the next task.", "We conduct experiments on three low-shot relation extraction tasks: zero-shot RE (Bragg et al., 2021), few-shot RE (Bragg et al., 2021) and the more challenging few-shot RE with none-of-the-above (NOTA) (Gao et al., 2019b).", "These tasks are all conducted based on FewRel dataset (Han et al., 2018), which is constructed through distantly aligning WikiData triplets to Wikipedia articles.", "In total, FewRel dataset consists of 100 relation types and 5789 Model Zero-shot Few-shot Avg.", "700 instances per type.", "Standard FewRel settings adopt a split of 64/16/20 fraction corresponding to train/validation/test set, where the train and validation sets are publicly accessible while the test set is not.", "Following are the detailed settings for each evaluation task.", "Zeroand Few-shot Relation Extraction Settings.", "We follow the standard Flex benchmark settings, which separate the train and validation sets from FewRel into a train set of 65 relations, a validation set of 5 relations and a test set of 10 relations.", "The test tasks are sampled and processed through the FLEX official toolkit 4 .", "Few-shot RE with NOTA Relation Settings.", "A drawback of conventional few-shot RE tasks is that they neglect the existence of other relations, that is all query instances are assumed to express one of the given relations in the support set.", "Gao et al. 
"In our experiment, we follow the default settings of the FewRel benchmark and evaluate our methods on 5-way 1/5-shot tasks with a 15% or 50% NOTA rate.", "Baseline Methods.", "For zero-shot and few-shot RE tasks, we compare our model with UniFew (Bragg et al., 2021), a unified few-shot learning model based on T5 (Raffel et al., 2020).", "This model converts each few-shot classification task into a machine reading comprehension format and predicts the results through generation (Footnote 4: https://github.com/allenai/flex).", "With a pre-training period on large-scale MRC data, this model reaches strong performance on both zero- and few-shot tasks.", "For the few-shot RE with NOTA relation task, we compare our model with Bert-Pair (Gao et al., 2019b), an instance-pair matching framework for few-shot RE tasks.", "This model computes a similarity and a dissimilarity score simultaneously between the query instance and each support instance, then aggregates the similarity scores for each relation and the dissimilarity scores for the NOTA relation.", "The results of CNN- and BERT-based prototypical networks from Gao et al. (2019b) are also reported.", "Evaluation Metrics.", "For zero-shot and few-shot RE tasks, we follow the FLEX benchmark and report the accuracy, confidence interval, and standard deviation.", "All reported results are from the official FLEX toolkit.", "For the few-shot RE with NOTA relation task, we follow the FewRel 2.0 benchmark and report the corresponding accuracy for four different settings.", "In the triplet-paraphrase construction period, we extract relation triplets from articles in Wikipedia and generate the counterpart paraphrase texts.", "Overall, we generate about one million triplet and paraphrase text pairs.", "In the triplet-paraphrase meta-training periods, we use a learning rate of 5e-6, a weight decay of 1e-6, a dropout rate of 0.5, and a linear learning schedule with a 0.95 decay.", "In the online task meta-training period, we use a learning rate of 5e-6, an adaptation epoch of 1 or 2 for FewRel NOTA tasks, and epochs of 45 for FLEX tasks, while keeping other hyperparameters the same.", "We use RoBERTa-large (Liu et al., 2019) to initialize our model.", "Furthermore, to better endow our model with low-shot capability, we adopt annotated FewRel data (Han et al., 2018) in an additional supervised meta-training procedure.", "Table 1 shows the overall results on three different", "RE tasks.", "From this table, we can see that: MCMN with triplet-paraphrase pre-training outperforms previous methods in all three RE tasks and achieves state-of-the-art performance.", "Compared with the strong baseline methods, MCMN achieves remarkable performance improvements.", "In zero-shot and few-shot RE tasks, MCMN with triplet-paraphrase pre-training outperforms the baseline methods by at least 1.8% on average.", "In the few-shot RE with NOTA task, our method outperforms the previous best method by at least 4.99% on average and achieves the best performance on the leaderboard.", "Our triplet-paraphrase pre-training achieves promising results on low-shot RE tasks.", "Compared with other pre-training strategies, such as the UniFew model pre-trained with large annotated MRC datasets, triplet-paraphrase pre-training achieves much better performance on zero-shot RE tasks.", "Besides, triplet-paraphrase pre-training can further enhance MCMN to achieve new state-of-the-art results on all three low-shot RE tasks with the supervised meta-training procedure, which is analyzed in detail in the next section.", 
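The official FLEX toolkit computes the reported numbers; purely as an illustration of that reporting format, a simplified aggregation over per-episode accuracies (using a normal-approximation 95% confidence interval) could look like this:

```python
import math
import statistics

def summarize_episodes(accuracies):
    """Aggregate per-episode accuracies into mean, standard deviation,
    and a simplified 95% confidence interval (illustrative only)."""
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies)
    half = 1.96 * std / math.sqrt(len(accuracies))
    return {"mean": mean, "std": std, "ci95": (mean - half, mean + half)}
```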
"MCMN is more robust than previous methods.", "In zero-shot and few-shot tasks, our method exhibits a lower standard deviation and a narrower confidence interval than the baseline methods, which means its predictions are more stable across different tasks.", "In this section, we conduct several experiments for an in-depth analysis of our methods.", "Ablation Studies on Zero- and Few-shot RE Tasks.", "To evaluate the effect of each part of our method on zero- and few-shot RE tasks, we conduct separate experiments on triplet-paraphrase", "pre-training, MCMN, and MCMN without triplet-paraphrase pre-training on the FLEX test set.", "As shown in Table 2, the pure triplet-paraphrase pre-training model outperforms the RoBERTa-large model by a remarkable margin and boosts the MCMN model by at least 1.9% compared with MCMN without triplet-paraphrase pre-training in both zero-shot and few-shot settings.", "These results demonstrate that the triplet-paraphrase pre-training method can significantly improve the generalization and performance of our model, and that the multi-choice matching network framework is well suited to low-shot RE tasks.", "Besides, we notice that the performance of the pure triplet-paraphrase pre-training model is lower than that of MCMN without triplet-paraphrase pre-training.", "To study this issue, we analyze the triplet-paraphrase data and find that many of the generated texts still contain words from the predicates, even though the expression is quite different from the original sentences.", "This may still lead to the shortcut learning problem.", "On top of that, the expression of the predicates differs considerably from the relation names, and the negative predicates are much easier to distinguish than the real test cases.", "These issues altogether result in poor performance.", "Fortunately, the triplet-paraphrase pre-training period can properly initialize MCMN and boost the final performance.", "We also conduct detailed analyses of our methods on few-shot NOTA RE tasks.", "As shown in Table 3, the pure triplet-paraphrase pre-trained model can also boost the performance of the RoBERTa-large initialized model and improve the supervised meta-trained MCMN by at least 0.9% on average.", "Although we do not consider the NOTA relation in the triplet-paraphrase pre-training period, this period can also contribute to the further supervised meta-training period, which indicates that the matching pattern learned in triplet-paraphrase pre-training", "period is generalized and robust to downstream tasks.", "Besides, we notice that in tasks with a NOTA rate of 0.5, the pure triplet-paraphrase pre-trained model suffers from serious performance drops.", "This may be caused by the large proportion of negative instances in the test tasks.", "Fortunately, this issue can be alleviated by the online adaptation period.", "Zero-shot NOTA RE tasks.", "This experiment studies the zero-shot performance of our methods on FewRel NOTA tasks.", "From Table 3, we surprisingly find that our methods also outperform the previous state-of-the-art few-shot NOTA models even in zero-shot conditions.", "This also indicates that our methods are effective in low-shot RE tasks and are robust across different settings.", "Computing Efficiency of Multi-Choice Matching Networks.", "This 
experiment compares the computing efficiency of our method with the MRC-based method.", "Each model is tested on the FLEX test set, including both zero-shot and few-shot RE tasks.", "Models in the zero-shot setting only need inference, while both models in the few-shot setting require fine-tuning on the support set, which involves time-consuming back-propagation operations.", "For a fair comparison, we use a single TITAN RTX GPU for each model and keep other computing environments the same.", "As a result, UniFew takes 647 minutes (more than 10 hours) to finish the test prediction, while our method takes about 80 minutes to obtain the results in Table 1, which improves the speed by roughly an order of magnitude.", "The main reason for such an efficiency discrepancy is that UniFew, as a generative model, involves an autoregressive decoder to generate the results, whereas our method directly matches the relation and instance representations to give the results.", "Supervised relation extraction (Zhou et al., 2016) heavily depends on a large amount of annotated data.", "However, the bottleneck of data annotation severely limits the adaptation of these supervised methods to real scenarios.", "Recent works respond to this dilemma from the perspective of low-shot learning, which mainly focuses on zero- and few-shot RE tasks.", "In this work, we shed light on three representative sub-tasks, including zero-shot RE, few-shot RE, and few-shot RE with the NOTA relation, to evaluate our methods. Zero-shot Relation Extraction.", "Levy et al. (2017) first introduce the zero-shot relation extraction task and adjust the machine reading comprehension (MRC)-based paradigm for it.", "Following this line, other MRC-based methods have been proposed (Cetoli, 2020; Bragg et al., 2021).", "Another paradigm for zero-shot RE is matching-based (Socher et al., 2013), which falls into text-entailment-based methods (Obamuyide and Vlachos, 2018; Sainz et al., 2021) and representation matching-based methods (Chen and Li, 2021; Dong et al., 2021).", "Text-entailment-based methods concatenate the relation description with the input sentence to assess whether they entail the same semantic relationship; representation matching-based methods separately encode the relation and instance into the same semantic space but are not capable of handling the NOTA relation.", "Few-shot Relation Extraction.", "Han et al. (2018) first propose the few-shot relation extraction task and adopt several meta-learning methods (Munkhdalai and Yu, 2017; Snell et al., 2017; Satorras and Estrach, 2018; Mishra et al., 2018) for it.", "Recent works on few-shot RE mostly center around metric-based methods (Vinyals et al., 2016), such as prototype-based methods (Baldini Soares et al., 2019; Ye and Ling, 2019; Gao et al., 2019a) and meta-learning-based methods (Finn et al., 2017).", "Besides, Gao et al. 
(2019b) extend the FewRel challenge with few-shot domain adaptation (DA) and none-of-the-above (NOTA) tasks, which are more challenging and closer to real-world applications.", "Few-shot RE with NOTA.", "Although the NOTA relation is common in conventional supervised RE tasks (Zhang et al., 2017), it is quite different in few-shot scenarios due to the label inconsistency problem.", "As an example, consider an instance that expresses relation r.", "In task A, relation r is not included in the support set, and thus the model learns the semantic mapping between this instance and the NOTA relation.", "But in another task B where relation r is included in the support set, the model learned from task A may continue to match this instance to the NOTA relation.", "Because of this difficulty, attempts to resolve this problem are scarce.", "To the best of our knowledge, Bert-Pair (Gao et al., 2019b) is the only public method for this task, and our work is the first method to unify the zero-shot, few-shot, and few-shot with NOTA tasks.", "In this paper, we propose Multi-Choice Matching Networks to unify low-shot relation extraction.", "MCMN introduces a multi-choice prompt to formulate relation extraction in a multi-choice paradigm.", "To equip MCMN with the different zero-shot and few-shot abilities, we propose triplet-paraphrase meta pre-training, which leverages triplet paraphrase to pre-train the zero-shot label matching ability and uses the meta-learning paradigm to learn the few-shot instance summarizing ability.", "Experimental results on three different RE tasks show that MCMN outperforms strong baseline models by large margins.", "We thank all reviewers for their insightful suggestions.", "Moreover, this research work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No.", "XDA27020200, and the National Natural Science Foundation of China under Grants no. 62106251 and 62076233.", "This paper has no particular ethical considerations." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "objective", "objective", "result", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "other", "other", "other", "abstain" ]
[ "Document-level MT models are still far from satisfactory.", "Existing work extend translation unit from single sentence to multiple sentences.", "However, study shows that when we further enlarge the translation unit to a whole document, supervised training of Transformer can fail.", "In this paper, we find such failure is not caused by overfitting, but by sticking around local minima during training.", "Our analysis shows that the increased complexity of target-to-source attention is a reason for the failure.", "As a solution, we propose G-Transformer, introducing locality assumption as an inductive bias into Transformer, reducing the hypothesis space of the attention from target to source.", "Experiments show that G-Transformer converges faster and more stably than Transformer, achieving new state-of-the-art BLEU scores for both non-pretraining and pre-training settings on three benchmark datasets.", "Document-level machine translation (MT) has received increasing research attention (Gong et al., 2011; Hardmeier et al., 2013; Garcia et al., 2015; Miculicich et al., 2018a; Maruf et al., 2019; Liu et al., 2020).", "It is a more practically useful task compared to sentence-level MT because typical inputs in MT applications are text documents rather than individual sentences.", "A salient difference between document-level MT and sentence-level MT is that for the former, much larger inter-sentential context should be considered when translating each sentence, which include discourse structures such as anaphora, lexical cohesion, etc.", "Studies show that human translators consider such contexts when conducting document translation (Hardmeier, 2014; Laubli et al., 2018).", "Despite that neural models achieve competitive performances on sentence* Corresponding author.", "Existing methods can be mainly classified into two categories.", "The first category translates a document sentence by sentence using a sequence-to-sequence neural model (Zhang et al., 2018; Miculicich et al., 2018b; Maruf et al., 2019; Zheng et al., 2020).", "Document-level context is integrated into sentence-translation by introducing additional context encoder.", "The structure of such a model is shown in Figure", "1(a).", "These methods suffer from two limitations.", "First, the context needs to be encoded separately for translating each sentence, which adds to the runtime complexity.", "Second, more importantly, information exchange cannot be made between the current sentence and its document context in the same encoding module.", "The second category extends the translation unit from a single sentence to multiple sentences (Tiedemann and Scherrer, 2017; Agrawal et al., 2018; Zhang et al., 2020) and the whole document (Junczys-Dowmunt, 2019; Liu et al., 2020).", "Recently, it has been shown that when the translation unit increases from one sentence to four sentences, the performance improves (Zhang et al., 2020; Scherrer et al., 2019).", "However, when the whole document is encoded as a single unit for sequence to sequence translation, direct supervised training has been shown to fail (Liu et al., 2020).", "As a solution, either large-scale pre-training (Liu et al., 2020) or data augmentation (Junczys-Dowmunt, 2019) has been used as a solution, leading to improved performance.", "These methods are shown in Figure", "1(b).", "One limitation of such methods is that they require much more training time due to the necessity of data augmentation.", "Intuitively, encoding the whole input document as a single unit allows the 
best integration of context information when translating the current sentence.", "However, little work has been done investigating the underlying reason why it is difficult to train such a document-level NMT model.", "One remote clue is that as the input sequence grows larger, the input becomes more sparse (Pouget-Abadie et al., 2014; Koehn and Knowles, 2017).", "To gain more understanding, we conduct dedicated experiments on the influence of input length, data scale, and model size for Transformer (Section 3), finding that a Transformer model can fail to converge when trained with long sequences, small datasets, or a big model size.", "We further find that for the failed cases, the model gets stuck in local minima during training.", "In such situations, the attention weights from the decoder to the encoder are flat, with large entropy values.", "This may be because larger input sequences increase the challenge of focusing on a local span to translate when generating each target word.", "In other words, the hypothesis space for target-to-source attention is increased.", "Given the above observations, we investigate a novel extension of Transformer, restricting self-attention and target-to-source attention to a local context using a guidance mechanism.", "As shown in Figure", "1(c), while we still encode the input document as a single unit, group tags ①, ②, ③ are assigned to sentences to differentiate their positions.", "Target-to-source attention is guided by matching the tag of the target sentence to the tags of the source sentences when translating each sentence, so that the hypothesis space of attention is reduced.", "Intuitively, the group tags serve as a constraint on attention, which is useful for differentiating the current sentence and its context sentences.", "Our model, named G-Transformer, can thus be viewed as a combination of the methods in Figure", "1(a) and Figure", "1(b), which fully separate and fully integrate a sentence being translated with its document-level context, respectively.", "We evaluate our model on three commonly used document-level MT datasets for English-German translation, covering the domains of TED talks, News, and Europarl, from small to large.", "Experiments show that G-Transformer converges faster and more stably than Transformer in different settings, obtaining state-of-the-art results under both non-pre-training and pre-training settings.", "To our knowledge, we are the first to realize a truly document-by-document translation model.", "We release our code and model at https://github.com/baoguangsheng/g-transformer.", "We evaluate Transformer and G-Transformer on the widely adopted benchmark datasets (Maruf et al., 2019), including three domains for English-German (En-De) translation.", "TED.", "The corpus is transcriptions of TED talks from IWSLT 2017.", "Each talk is used as a document, aligned at the sentence level.", "tst2016-2017 is used for testing, and the rest for development.", "News.", "This corpus uses News Commentary v11 for training, which is document-delimited and sentence-aligned.", "newstest2015 is used for development, and newstest2016 for testing.", "Europarl.", "The corpus is extracted from Europarl v7, where sentences are segmented and aligned using additional information.", "The train, dev, and test sets are randomly split from the corpus.", "The detailed statistics of these corpora are shown in Table 1.", "We pre-process the documents by splitting them into instances with up to 512 tokens, taking a sentence as one instance if its length exceeds 512 tokens.", 
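One way to implement the preprocessing described above is sketched below; the tokenizer is left abstract, and the released code's exact splitting policy may differ.

```python
def split_document(sentences, tokenize, max_tokens=512):
    """Split a sentence-aligned document into instances of up to max_tokens,
    keeping a sentence as its own instance if it alone exceeds the budget."""
    instances, current, length = [], [], 0
    for sent in sentences:
        n = len(tokenize(sent))
        if current and length + n > max_tokens:
            instances.append(current)          # flush the full instance
            current, length = [], 0
        current.append(sent)
        length += n
        if n > max_tokens:                     # over-long sentence stands alone
            instances.append(current)
            current, length = [], 0
    if current:
        instances.append(current)
    return instances
```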
"We tokenize and truecase the sentences with MOSES (Koehn et al., 2007) tools, applying BPE (Sennrich et al., 2016) with 30000 merging operations.", "Base Model.", "Following the standard Transformer base model (Vaswani et al., 2017), we use 6 layers, 8 heads, 512 dimension outputs, and 2048 dimension hidden vectors. [Table 1 — En-De datasets for evaluation (train/dev/test): TED: 0.21M/9K/2.3K sentences, 1.7K/92/22 documents, 11K/483/123 instances, 18.3/18.5/18.3 avg sentences/instance, 436/428/429 avg tokens/instance; News: 0.24M/2K/3K sentences, 6K/80/154 documents, 18.5K/172/263 instances, 12.8/12.6/11.3 avg sentences/instance, 380/355/321 avg tokens/instance; Europarl: 1.67M/3.6K/5.1K sentences, 118K/239/359 documents, 162K/346/498 instances, 10.3/10.4/10.3 avg sentences/instance, 320/326/323 avg tokens/instance.]", "Big Model.", "We follow the standard Transformer big model (Vaswani et al., 2017), using 6 layers, 16 heads, 1024 dimension outputs, and 4096 dimension hidden vectors.", "Large Model.", "We use the same settings as the BART large model (Lewis et al., 2020), which involves 12 layers, 16 heads, 1024 dimension outputs, and 4096 dimension hidden vectors.", "We use s-BLEU and d-BLEU (Liu et al., 2020) as the metrics.", "The detailed descriptions are in Appendix A. 3 Transformer and Long Inputs. We empirically study Transformer (see Appendix B) on the datasets.", "We run each experiment five times using different random seeds, reporting the average score for comparison.", "Input Length.", "We use the Base model and a fixed dataset for this comparison.", "We split both the training and testing documents from the Europarl dataset into instances with input lengths of 64, 128, 256, 512, and 1024 tokens, respectively.", "For a fair comparison, we remove the training documents with a length of less than 768 tokens, which may favour small input lengths.", "The results are shown in Figure 2a.", "When the input length increases from 256 tokens to 512 tokens, the BLEU score drops dramatically from 30.5 to 2.3, indicating failed training with 512 and 1024 tokens.", "It demonstrates the difficulty when dealing with long inputs of Transformer.", "Data Scale.", "We use the Base model and a fixed input length of 512 tokens.", "For each setting, we randomly sample a training dataset of the expected size from the full dataset of Europarl.", "The results are shown in Figure 2b.", "The performance increases sharply when the data scale increases from 20K to 40K.", "When the data scale is equal to or less than 20K, the BLEU scores are under 3, which is unreasonably low, indicating that with a fixed model size and input length, a smaller dataset can also cause the failure of the training process.", "For data scales of more than 40K, the BLEU scores show a wide dynamic range, suggesting that the training process is unstable.", "Model Size.", "We test Transformer with different model sizes, using the full dataset of Europarl and a fixed input length of 512 tokens.", "Transformer-Base can be trained successfully, giving a reasonable BLEU score.", "However, the training of the Big and Large models failed, resulting in very low BLEU scores under 3. It demonstrates that an increased model size can also cause the failure with a fixed input length and data scale.", 
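Following the definitions of Liu et al. (2020), s-BLEU scores aligned sentences directly, while d-BLEU scores each document as one long sequence so that n-grams may cross sentence boundaries; a sketch using sacrebleu (the precise metric descriptions are in the paper's Appendix A):

```python
import sacrebleu

def s_bleu(sys_sentences, ref_sentences):
    # Sentence-aligned BLEU over individual sentences.
    return sacrebleu.corpus_bleu(sys_sentences, [ref_sentences]).score

def d_bleu(sys_documents, ref_documents):
    # Document BLEU: join each document's sentences into one long sequence.
    sys_joined = [" ".join(doc) for doc in sys_documents]
    ref_joined = [" ".join(doc) for doc in ref_documents]
    return sacrebleu.corpus_bleu(sys_joined, [ref_joined]).score
```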
"The results confirm the intuition that the performance will drop with longer inputs, smaller datasets, or bigger models.", "However, the BLEU scores show a strong discontinuity with the change of input length, data scale, or model size, falling into two discrete clusters.", "One is successfully trained cases with d-BLEU scores above 10, and the other is failed cases with d-BLEU scores under 3.", "Training Convergence.", "Looking into the failed models, we find that they have a similar pattern in their loss curves.", "Taking the model trained on 20K instances shown in Figure 3a as an example, although the training loss continually decreases during the training process, the validation loss sticks at the level of 7, reaching a minimum value at around 9K training steps.", "In comparison, the successfully trained models share another pattern.", "Taking the model trained on 40K instances as an example, the loss curves demonstrate two stages, as shown in Figure 3b.", "In the first stage, the validation loss, similar to the failed cases, converges toward the level of 7.", "In the second stage, after 13K training steps, the validation loss falls suddenly, indicating that the model may escape successfully from the local minima.", "From the two stages of the learning curve, we conclude that the real problem, contradicting our first intuition, is not about overfitting, but about local minima.", "Attention Distribution.", "We further look into the attention distribution of the failed models, observing that the attentions from target to source are widely spread over all tokens.", "As Figure 4a shows, the distribution entropy is high, at about 8.14 bits on validation.", "In contrast, as shown in Figure 4b, the successfully trained model has a much lower attention entropy of about 6.0 bits on validation.", 
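The attention entropy discussed here can be measured as the average Shannon entropy of each target position's attention distribution over source tokens; a minimal sketch:

```python
import torch

def attention_entropy_bits(attn, eps=1e-12):
    """attn: tensor of shape (..., query_len, key_len) whose last dimension
    is a probability distribution; returns the mean entropy in bits."""
    entropy = -(attn * torch.log2(attn + eps)).sum(dim=-1)
    return entropy.mean().item()
```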
"Furthermore, we can see that before 13K training", "steps, the entropy sticks at a plateau, confirming the observation of the local minima in Figure 3b.", "It indicates that the early stage of the training process for Transformer is difficult.", "Figure 5 shows the self-attention distributions of the successfully trained models.", "The attention entropy of both the encoder and the decoder drops fast at the beginning, leading to a shrinkage of the attention range.", "But then the attention entropy gradually increases, indicating an expansion of the attention range.", "Such back-and-forth oscillation of the attention range may also result in unstable training and slow down the training process.", "The above experiments show that training failure of Transformer can be caused by local minima.", "Additionally, the oscillation of the attention range may make it worse.", "During the training process, the attention module needs to identify relevant tokens to attend to from the whole sequence.", "Assuming that the sequence length is $N$, the complexity of the attention distribution increases when $N$ grows from sentence-level to document-level.", "We propose to use locality properties (Rizzi, 2013; Hardmeier, 2014; Jawahar et al., 2019) of both the language itself and the translation task as a constraint in Transformer, regulating the hypothesis space of the self-attention and target-to-source attention, using a simple group-tag method.", "An example of G-Transformer is shown in Figure 6, where the input document contains more than 3 sentences.", "As can be seen from the figure, G-Transformer extends Transformer by augmenting the input and output with group tags (Bao and Zhang, 2021).", "In particular, each token is assigned a group tag, indicating its sentential index.", "While source group tags can be assigned deterministically, target tags are assigned dynamically according to whether a generated sentence is complete.", "Starting from 1, each target word copies the group tag from its predecessor unless the previous token is </s>, in which case the tag increases by 1.", "The tags serve as a locality constraint, encouraging target-to-source attention to concentrate on the current source sentence being translated.", "can be written as $Y = \arg\max_{Y} P(Y|X)$, (1) and G-Transformer extends it by having $Y = \arg\max_{Y} \max_{G_Y} P(Y, G_Y | X, G_X)$, (2) where $G_X$ and $G_Y$ denote the two sequences of group tags $G_X = \{g_i = k \text{ if } w_i \in sent^X_k \text{ else } 0\}_{i=1}^{|X|}$ and $G_Y = \{g_j = k \text{ if } w_j \in sent^Y_k \text{ else } 0\}_{j=1}^{|Y|}$, (3)", "where $sent_k$ represents the $k$-th sentence of $X$ or $Y$.", "For the example shown in Figure 6, $G_X = \{1, ..., 1, 2, ..., 2, 3, ..., 3, 4, ...\}$ and $G_Y = \{1, ..., 1, 2, ..., 2, 3, ..., 3, 4, ...\}$.", 
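Constructing the group-tag sequences of Eq 3 is straightforward; the sketch below assumes every token belongs to exactly one sentence, so the else-0 case of Eq 3 does not arise.

```python
def build_group_tags(sentences, tokenize):
    """Assign tag k (1-indexed) to every token of the k-th sentence (Eq 3)."""
    tags = []
    for k, sent in enumerate(sentences, start=1):
        tags.extend([k] * len(tokenize(sent)))
    return tags

# A 3-sentence document yields G = [1, ..., 1, 2, ..., 2, 3, ..., 3].
```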
} .", "Group tags influence the auto-regressive translation process by interfering with the attention mechanism, which we show in the next section.", "In G-Transformer, we use the group-tag sequence GX and GY for representing the alignment between X and Y , and for generating the localized contextual representation of X and Y .", "An attention module can be seen as a function mapping a query and a set of key-value pairs to an output (Vaswani et al., 2017).", "The query, key, value, and output are all vectors.", "The output is computed by summing the values with corresponding attention weights, which are calculated by matching the query and the keys.", "Formally, given a set of queries, keys, and values, we pack them into matrix Q , K , and V , respectively.", "We compute the matrix outputs Attention ( Q, K, V ) = softmax (cid:18) QKT d k (cid:19) V, (4) where d k is the dimensions of the key vector.", "allows a model to gather information from different representation subspaces", "We update Eq 4 using group-tags, naming it group attention (GroupAttn).", "In addition to inputs Q , K , and V , two sequences of group-tag inputs are involved, where GQ corresponds to Q and GK corresponds to K .", "We have args = ( Q, K, V, GQ , GK ) , GroupAttn ( args ) = softmax (cid:18) QKT d k + M ( GQ , GK ) (cid:19) V, (6) where function M ( ) works as an attention mask, excluding all tokens outside the sentence.", "Specifi-cally, M ( ) gives a big negative number to make softmax close to 0 for the tokens with a different group tag compared to current token M ( GQ , GK ) = min (1 , abs ( GQITK IQGTK )) , (7) where IK and IQ are constant vectors with value 1 on all dimensions, that IK has dimensions equal to the length of GK and IQ has dimensions equal to the length of GQ .", "The constant value can typically be 1 e 8 .", "Encoder.", "For each layer a group multi-head attention module is used for self-attention, assigning the same group-tag sequence for the key and the value that GQ = GK = GX .", "Decoder.", "We use one group multi-head attention module for self-attention and another group multihead attention module for cross-attention.", "Similar to the encoder, we assign the same group-tag sequence to the key and value of the self-attention, that GQ = GK = GY , but use different group-tag sequences for cross-attention that GQ = GY and GK = GX .", "Complexity.", "Consider a document with M sentences and N tokens, where each sentence contains N/M tokens on average.", "The complexities of both the self-attention and cross-attention in Transformer are O ( N 2 ) .", "In contrast, the complexity of group attention in G-Transformer is O ( N 2 /M ) given the fact that the attention is restricted to a local sentence.", "Theoretically, since the average length N/M of sentences tends to be constant, the time and memory complexities of group attention are approximately O ( N ) , making training and inference on very long inputs feasible.", "We use only group attention on lower layers for local sentence representation, and combined attention on top layers for integrating local and global context information.", "We use the standard multihead attention in Eq 5 for global context, naming it global multi-head attention (GlobalMHA).", "Group multi-head attention in Eq 8 and global multi-head attention are combined using a gate-sum module (Zhang et al., 2016; Tu et al., 2017) HL = GroupMHA ( Q, K, V, GQ , GK ) , HG = GlobalMHA ( Q, K, V ) , g = sigmoid ([ HL , HG ] W + b ) , H = HL (cid:12) g + HG (cid:12) (1 g ) , (10) 
"A previous study (Jawahar et al., 2019) shows that the lower layers of Transformer capture more local syntactic relations, while the higher layers represent longer-distance relations.", "Based on these findings, we use combined attention only on the top layers for integrating local and global context.", "By this design, on the lower layers, the sentences are isolated from each other, while on the top layers, cross-sentence interactions are enabled.", "Our experiments show that the top 2 layers with global attention are sufficient for document-level NMT, and more layers neither help nor harm the performance.", "During decoding, we generate the group-tag sequence $G_Y$ according to the predicted tokens, starting with 1 at the first <s> and increasing by 1 after each </s>.", "We use beam search and apply the maximum length constraint on each sentence.", "We generate the whole document from start to end in one beam search process, using a default beam size of 5. 5 G-Transformer Results. We compare G-Transformer with Transformer baselines and previous document-level NMT models in both non-pre-training and pre-training settings.", "The detailed descriptions of these training settings are in Appendix C.1.", "We conduct statistical significance tests following Collins et al. (2005).", "As shown in Table 2, the sentence-level Transformer outperforms previous document-level models on News and Europarl.", "Compared to this strong baseline, our randomly initialized G-Transformer model improves the s-BLEU by 0.81 points on the large dataset Europarl.", "The results on the small datasets TED and News are worse, indicating overfitting with long inputs.", "When G-Transformer is trained by fine-tuning the sentence-level Transformer, the performance improves on the three datasets by 0.3, 0.33, and 1.02 s-BLEU points, respectively.", "Different from the document-level Transformer baseline, G-Transformer can be successfully trained on the small TED and News datasets.", "On Europarl, G-Transformer outperforms Transformer by 0.77 d-BLEU points, and G-Transformer fine-tuned on the sentence-level Transformer enlarges the gap to 0.98 d-BLEU points.", "G-Transformer outperforms previous document-level MT models on News and Europarl by a significant margin.", "Compared to the best recent model, Hybrid-Context, G-Transformer improves the s-BLEU on Europarl by 1.99.", "These results suggest that, in contrast to previous short-context models, a sequence-to-sequence model taking the whole document as input is a promising direction.", "There is relatively little existing work on document-level MT using pre-training.", "Although Flat-Transformer+BERT gives state-of-the-art scores on TED and Europarl, the score on News is worse than that of the previous non-pre-training model HAN (Miculicich et al., 2018b).", "G-Transformer+BERT improves the scores by margins of 0.20, 1.62, and 0.47 s-BLEU points on TED, News, and Europarl, respectively.", "It shows that with a better contextual representation, we can further improve document-level MT under pre-training settings.", "We further build much stronger Transformer baselines by fine-tuning on mBART25 (Liu et al., 2020).", "Taking advantage of sequence-to-sequence pre-training, the sentence-level Transformer gives much better s-BLEUs of 27.78, 29.90, and 31.87, respectively.", "G-Transformer fine-tuned on mBART25 improves the performance by 0.28, 0.44, and 0.87 s-BLEU, respectively.", "Compared to the document-level Transformer baseline, G-Transformer gives 1.74, 1.22, and 0.31 higher d-BLEU points, respectively.", 
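The decoding-time tag update described above reduces to a simple rule; this sketch simplifies token handling relative to the released beam-search implementation.

```python
def next_target_tag(prev_tag, prev_token, eos="</s>"):
    """Start at 1 for the first target token; copy the predecessor's tag,
    incrementing by 1 whenever the previous token closes a sentence."""
    if prev_token is None:
        return 1
    return prev_tag + 1 if prev_token == eos else prev_tag
```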
"It demonstrates that even with a well-trained sequence-to-sequence model, the locality bias can still enhance the performance.", "We evaluate G-Transformer and Transformer on various input lengths, data scales, and model sizes to better understand to what extent it has solved the convergence problem of Transformer.", "Input Length.", "The results are shown in Figure 7a.", "Unlike Transformer, which fails to train on long inputs, G-Transformer shows stable scores for inputs containing 512 and 1024 tokens, suggesting that with the help of the locality bias, a long input does not noticeably impact the performance.", "Data Scale.", "As shown in Figure 7b, overall G-Transformer has a smooth performance curve on data scales from 1.25K to 160K.", "The variances of the scores are much lower than those of Transformer, indicating stable training of G-Transformer.", "Additionally, G-Transformer outperforms Transformer by a large margin in all the settings.", "Model Size.", "Unlike Transformer, which fails to train in the Big and Large model settings, G-Transformer shows stable scores across different model sizes.", "As shown in Appendix C.2, although the performance on the small datasets TED and News drops substantially for the Big and Large models, the performance on the large dataset Europarl only decreases by 0.10 d-BLEU points for the Big model and 0.66 for the Large model.", "Loss.", "Looking into the training process of the above experiments, we see that both the training and validation losses of G-Transformer converge much faster than those of Transformer, taking almost half the time to reach the same level of loss.", "Furthermore, the validation loss of G-Transformer converges to much lower values.", "These observations demonstrate that G-Transformer converges faster and better.", "Attention Distribution.", "Benefiting from the separate group attention and global attention, G-Transformer avoids the oscillation of the attention [Table 3 — impact of source-side and target-side context (s-BLEU, TED/News/Europarl): G-Transformer (fnt.) 25.12/25.52/32.39; − target-side context 25.05/25.41/32.16 (avg drop −0.14); − source-side context 24.56/24.58/31.39 (avg drop −0.70).]", "range, which happens to Transformer.", "As shown in Figure 8a, Transformer sticks at the plateau area for about 13K training steps, but G-Transformer shows a quick and monotonic convergence, reaching the stable level in about 1/4 of the time that Transformer takes.", "From Figure 8b, we can see that G-Transformer also has a smooth and stable curve for the convergence of the self-attention distribution.", "These observations imply that the potential conflict between the local sentence and the document context can be mitigated by G-Transformer.", "Document Context.", "We study the contribution of the source-side and target-side context by gradually removing the cross-sentential attention in Eq 10 from the encoder and the decoder.", "The results are shown in Table 3. We take the G-Transformer fine-tuned on the sentence-level Transformer as our starting point.", "When we disable the target-side context, the performance decreases by 0.14 s-BLEU points on average, which indicates that the target-side context does not impact translation performance significantly.", "When we further remove the source-side context, the performance decreases by 0.49, 0.83, and 0.77 s-BLEU points on TED, News, and Europarl, respectively, which indicates that the source-side context is relatively more important for document-level MT. To further understand the impact of the source-side context, we conduct an experiment on automatic evaluation of discourse phenomena which rely on source context.", 
"We use the human-labeled evaluation set (Voita et al., 2019b) on English- [Table 5 — contribution of locality bias and word-dropout (d-BLEU, TED/News/Europarl): G-Transformer (rnd.) 25.84/25.23/33.87; − word-dropout 25.49/24.65/33.70 (avg drop −0.37); − language locality 22.47/22.41/33.63 (−1.78); − translation locality 0.76/0.60/33.10 (−14.68).]", "Russian (En-Ru) for deixis and ellipsis.", "We follow the Transformer concat baseline (Voita et al., 2019b) and use both 6M sentence pairs and 1.5M document pairs from OpenSubtitles2018 (Lison et al., 2018) to train our model.", "The results are shown in Table 4. G-Transformer outperforms the Transformer baseline concat (Voita et al., 2019b) by a large margin on three discourse features, indicating better leverage of the source-side context.", "Compared to the previous model LSTM-T, G-Transformer achieves better ellipsis scores on both infl.", "and VP.", "However, the score on deixis is still lower, which indicates a potential direction that we can investigate in further study.", "Word-dropout.", "As shown in Table 5, word-dropout (Appendix C.1) contributes about 0.37 d-BLEU on average.", "Its contribution to TED and News is obvious, at 0.35 and 0.58 d-BLEU, respectively.", "However, for the large dataset Europarl, the contribution drops to 0.17, suggesting that with sufficient data, word-dropout may not be necessary.", "Locality Bias.", "In G-Transformer, we introduce a locality bias to the language modeling of the source and target, and a locality bias to the translation between source and target.", "We try to understand these biases by removing them from G-Transformer.", "When all the biases are removed, the model downgrades to a document-level Transformer.", "The results are shown in Table 5. Relatively speaking, the contribution of the language locality bias is about 1.78 d-BLEU on average.", 
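Word-dropout here refers to randomly corrupting input tokens during training; a common formulation is sketched below, with the replacement rate as an illustrative placeholder (the exact setting is described in Appendix C.1).

```python
import random

def word_dropout(tokens, p=0.1, unk="<unk>"):
    # Replace each token with <unk> with probability p (training only).
    return [unk if random.random() < p else tok for tok in tokens]
```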
"Meanwhile, the translation locality bias contributes about 14.68 d-BLEU on average, showing a critical impact on model convergence on small datasets.", "These results suggest that the locality bias may be the key to training whole-document MT models, especially when the data is insufficient.", "Combined Attention.", "In G-Transformer, we enable only the top K layers with combined attention.", "On Europarl, G-Transformer gives 33.75, 33.87, and 33.84 d-BLEU with the top 1, 2, and 3 layers using combined attention, respectively, showing that K = 2 is sufficient.", "Furthermore, we study the effect of group and global attention separately.", "As shown in Table 6, when we replace the combined attention on the top 2 layers with group attention, the performance drops by 0.22, 0.09, and 0.75 d-BLEU on TED, News, and Europarl, respectively.", "When we replace the combined attention with global attention, the performance decrease is enlarged to 0.84, 0.69, and 1.00 d-BLEU, respectively.", "These results demonstrate the necessity of combined attention for integrating local and global context information.", "The unit of translation has evolved from word (Brown et al., 1993; Vogel et al., 1996) to phrase (Koehn et al., 2003; Chiang, 2005, 2007) and further to sentence (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) in the MT literature.", "The trend shows that larger units of translation, when represented properly, can lead to improved translation quality.", "A line of document-level MT work extends the translation unit to multiple sentences (Tiedemann and Scherrer, 2017; Agrawal et al., 2018; Zhang et al., 2020; Ma et al., 2020).", "However, these approaches are limited to a short context of at most four sentences.", "Recent studies extend the translation unit to the whole document (Junczys-Dowmunt, 2019; Liu et al., 2020), using large augmented datasets or pre-trained models.", "Liu et al. (2020) show that a Transformer trained directly on a document-level dataset can fail, resulting in unreasonably low BLEU scores.", "Following these studies, we also model translation on the whole document.", "We solve the training challenge using a novel locality bias with group tags.", "Another line of work performs document-level machine translation sentence by sentence, using additional components to represent the context (Maruf and Haffari, 2018; Zheng et al., 2020; Zhang et al., 2018; Miculicich et al., 2018b; Maruf et al., 2019; Yang et al., 2019).", "Different from these approaches, G-Transformer uses a generic design for both the source and the context, translating the whole document in one beam search instead of sentence by sentence.", "Some methods use a two-pass strategy, generating sentence translations first and integrating context information through a post-editing model (Voita et al., 2019a; Yu et al., 2020).", "In contrast, G-Transformer uses a single model, which reduces the complexity for both training and inference.", "The locality bias we introduce to G-Transformer is different from the ones in Longformer (Beltagy et al., 2020) and Reformer (Kitaev et al., 2020) in the sense that we discuss locality in the context of representing the alignment between source sentences and target sentences in document-level MT. 
Specifically, Longformer introduces locality only to self-attention, while G-Transformer also introduces locality to cross-attention, which is shown to be the key to the success of G-Transformer.", "Reformer, basically the same as Transformer, searches for attention targets in the whole sequence, while G-Transformer mainly restricts the attention to a local sentence.", "In addition, the motivations are different.", "While Longformer and Reformer focus on the time and memory complexities, we focus on attention patterns in cases where a translation model fails to converge during training.", "We investigated the main reasons for Transformer training failure in document-level MT, finding that target-to-source attention is a key factor.", "Based on this observation, we designed a simple extension of the standard Transformer architecture, using group tags for attention guidance.", "Experiments show that the resulting G-Transformer converges fast and stably on both small and large data, giving state-of-the-art results compared to existing models under both pre-training and random-initialization settings.", "We would like to thank the anonymous reviewers for their valuable feedback.", "We thank the Westlake University High-Performance Computing Center for support with GPU resources.", "This work is supported by grants from Alibaba Group Inc. and Sichuan Lan-bridge Information Technology Co., Ltd." ]
[ "abstain", "abstain", "result", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "method", "abstain", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "method", "abstain", "other", "other", "other" ]
[ "Story generation is an open-ended and subjective task, which poses a challenge for evaluating story generation models.", "We present CHOOSE YOUR OWNADVENTURE , a collaborative writing setup for pairwise model evaluation.", "Two models generate suggestions to people as they write a short story; we ask writers to choose one of the two suggestions, and we observe which model's suggestions they prefer.", "The setup also allows further analysis based on the revisions people make to the suggestions.", "We show that these measures, combined with automatic metrics, provide an informative picture of the models' performance, both in cases where the differences in generation methods are small (nucleus vs. topk sampling) and large (GPT2 vs. Fusion models).", "Systems that automatically generate text suggestions to human authors have emerged as a new application of natural language generation models.", "Evaluating such models, however, is challenging.", "Typically, writers rate a single system's quality after some period of use, for example while authoring an entire story or poem (e.g., Clark et al., 2018; Ghazvininejad et al., 2017).", "A model's quality is measured using Likert scale scores, sometimes combined with additional analysis, like the type or quantity of writer edits (e.g., Roemmele and Gordon, 2015; Akoury et al., 2020).", "In contrast, a pairwise system evaluation where evaluators are given two suggestions at the same time and asked to choose between them would allow researchers to compare generation models directly.", "Comparative evaluations have been shown to produce more reliable and consistent results than Likert-scale ratings (Callison-Burch et al., 2007; Kiritchenko and Mohammad, 2017), and they have been used to evaluate natural language generation systems for translation and dialogue (Otani et al., 2016; Sedoc et al., 2019).", "We propose CHOOSE YOUR OWNADVENTURE (CYOA), a protocol for pairwise evaluations of collaborative writing models, focusing on story generation.", "Instead of scoring a single model, we compare two models.", "At fixed points during the writing process, each generates a suggestion, and writers choose one to continue their story (see Fig. 
1).", "The result is utterance-level feedback on which model's generated text writers prefer at that point in the story.", "Along with the writer's revisions to the generated suggestions and comparisons between the generated and human-authored portions of the story, this evidence can help a researcher answer the following questions about their model:", "to human-authored text?", "In this paper, we show how CYOA can answer these questions and provide insights into story model behavior, both in cases when the expected differences in text quality are large (e.g., the text is generated with two different models; 3) and when they are small (e.g., the text is generated with the same model but using two different sampling methods; 4).", "CYOA allows human and automatic evaluations to be collected simultaneously; we run standard automatic evaluations of text quality on the collaboratively-generated text and get results consistent with previous analyses of statically-generated text.", "CYOA is useful to both NLG researchers and story writers; writers report being happy with the stories they write with the system and that the paired suggestions help them come up with new ideas.", "We release a template website for CYOA and the evaluation script 1 to support future story and collaborative writing evaluation work.", "OWNADVENTURECYOA evaluates a pair of story generation models by having people select and interact with text generated by each of the models as they write a story.", "Both models generate suggestions for the writer at the same point in the story, and the writer must choose between the two suggestions, forcing a pairwise comparison of the two models.", "By having multiple people write stories with the two models, we can aggregate their preferences and interactions with the suggestions and analyze them to provide feedback on the two models.", "To allow the writers control over the story while still encouraging them to use the suggestions, CYOA uses a turn-taking writing process, with writers alternating between writing by themselves and then receiving suggestions to continue the story (Swanson and Gordon, 2012; Clark et al., 2018).", "The writer begins the story by writing the first sentence alone; an image (Fig. 2 in App. A) is provided as an optional prompt to help them get started.", "Once the writer submits the writing from their turn, two models each generate a suggestion to continue the story, which are presented to the writer in random order.", "As shown in Fig. 
1, the 1 github.com/eaclark07/cyoa writer then chooses which of the suggestions they prefer and edits it as they wish before adding it to the story.", "It is then the writer's turn to write alone again.", "This process repeats 5 times, at which point the story is finished and submitted.", "Each turn in the story has to be between 20 and 260 characters for it to be submitted to the story.", "Other than length, there is no restriction on how writers can edit the suggestions; they can delete the suggestion entirely or submit it as-is.", "When editing a computer-generated suggestion, the writer can change their mind and select the other model's suggestion instead, but once a writer submits a turn, they cannot go back to edit it later.", "After the finished story is submitted, participants are asked Likert-scale and open-ended questions about the system and the suggestions they received.", "We asked participants to indicate on a 5-point Likert scale (ranging from Strongly Disagree to Strongly Agree) how much they agreed with the following statements: I'm happy with my final story.", "I felt the system and I were working collaboratively to write the story.", "I thought having the suggestions was useful while writing the story.", "The suggestions connected to what had happened in the story so far.", "The suggestions helped me come up with new ideas.", "What were you looking for in the suggestions?", "We chose these questions for this project to capture people's reactions to the overall writing setup and a general sense of areas for improving story generation models.", "However, these questions could be eliminated or adjusted to fit the evaluation goals of the researcher.", "A demo of CYOA is at homes.", "cs.washington.edu/~eaclark7/ multi-model-demo .", "From the writing setup, we collect the generated suggestions from each model, the writers' preferences between the two models, and the revisions they make to the generated text.", "We analyze these sources of information to answer three questions NLP practitioners have when evaluating their models.", "There are many analyses researchers could run with the data gathered from CYOA beyond those listed here; we include some examples.", "(Q1)", "Is my model better at generating story suggestions than a baseline model?", "CYOA reports how many of the model's suggestions people chose to work with vs. 
"(Q1)", "Is my model better at generating story suggestions than a baseline model?", "CYOA reports how many of the model's suggestions people chose to work with vs. the baseline's suggestions.", "We further break this down by the suggestion round (1-5) to see if the writers' preferences change over the course of the story.", "Another option would be to break down the writers' preferences by writer attributes, e.g., to analyze the effect of the author on the stories or desired suggestions (August et al., 2020).", "(Q2)", "How useful are the models' suggestions?", "We analyze the revisions writers make to the suggestions to see how much of the generated text they find useful for continuing their story.", "We use three metrics to see how much of the original text is preserved after a writer's revisions.", "Levenshtein edit distance measures the number of character insertions, deletions, and substitutions the writers made, and Jaccard similarity measures the proportion of tokens that are shared between the original and the edited text.", "User Story Edit Ratings (USER; Akoury et al., 2020; github.com/dojoteef/storium-frontend) measures similarity by recursively counting the longest contiguous substrings between the edited and the original text.", "These edit-based metrics capture exact matches between the texts, measuring how much of the generated content makes it to the final story in the strictest sense.", "However, other metrics could be used if the researcher is interested in capturing broader notions of similarity, e.g., embedding-based measures like cosine similarity or BERTScore (Zhang et al., 2020).", "(Q3)", "How do the models' generated texts compare to human-authored text?", "Pairwise comparison gives us the models' relative quality; comparing them to human-authored text gives an idea of their absolute quality.", "To do this, we take the parts of the story the writer wrote alone (i.e., the turns without generated suggestions) and compare them to the generated text.", "We look at average sentence length (a common proxy for text complexity in stories; See et al., 2019; Roemmele et al., 2017) and distinct-n, a measure of repetition (Li et al., 2016).", "As in See et al. (2019), we also look at the concreteness of the text's nouns and verbs, using the concreteness ratings from Brysbaert et al. (2014).", "If the system is being used to evaluate a model that focuses on a specific aspect of stories, e.g., events or characters, this analysis could be extended to compare how these specific elements are introduced and referenced in the machine-generated vs. human-authored text.",
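The edit-based and repetition metrics above are simple to compute. The sketch below re-implements character-level Levenshtein distance, token-level Jaccard similarity, and distinct-n; USER is omitted (see github.com/dojoteef/storium-frontend for the reference implementation). This is an illustrative re-implementation, not the authors' evaluation script.

```python
def levenshtein(a: str, b: str) -> int:
    # Character-level edit distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    # Proportion of token types shared between the original and edited text.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def distinct_n(text: str, n: int) -> float:
    # Ratio of unique n-grams to all n-grams; lower values mean more repetition.
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

suggestion = "the dragon flew over the castle"
edited = "the dragon soared over the old castle"
print(levenshtein(suggestion, edited), jaccard(suggestion, edited), distinct_n(edited, 2))
```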
"We first test CYOA with two popular story generation models: (1) FUSION, the fusion model from Fan et al. (2018), which uses a fusion mechanism to combine two convolutional sequence-to-sequence models; and (2) GPT2, the small GPT2 model (Radford et al., 2019) finetuned on story data and using top-k sampling (Fan et al., 2018).", "We compare FUSION and GPT2 to see how CYOA can evaluate two models with different underlying architectures; they are also both common story generation baselines (See et al., 2019; Xu et al., 2020; Rashkin et al., 2020).", "To train the models, we use the WritingPrompts dataset (Fan et al., 2018), a collection of writing prompts from Reddit paired with stories.", "During the CYOA evaluation, both models generate their suggestions conditioned on the whole story written so far.", "(Data and model details in App. B and C.)", "We run CYOA on Amazon Mechanical Turk with 105 Turkers to compare the two models.", "Each Turker can only complete the task once.", "Turkers are required to have over 1,000 tasks approved, have a 95% approval rate, and be from the United States, and they are paid $2.50 for participating in the study.", "The study was approved by our institution's Institutional Review Board.", "(Q1)", "Table 1 shows that, of the 525 suggestion pairs, Turkers significantly preferred the GPT2 suggestions over FUSION, choosing them 65.7% of the time.", "Breaking it down by suggestion rounds 1-5, the writers' preference for GPT2 was largest at the beginning of the story and decreased over the course of the story.", "To understand why, we look at how writers edited the suggestions and how the generated text compared to human-authored text.", "(Q2)", "In Table 2, all three edit metrics show that writers used significantly more (Mann-Whitney U test) of the accepted GPT2 suggestion text in their story than the accepted FUSION suggestion text.", "When we break down the scores by round, we see that this is true regardless of where the writer is in the story (see Table 7 in App. D.1).", "Taken with the pairwise results, this points to GPT2 as the better collaborative story generation model.", "FUSION, perhaps due to its hierarchical structure, did not generate as many useful suggestions as GPT2 in the interactive setting.", "(Q3)", "Finally, we look at how the generated text compares to the story text the writers wrote alone.", "From Table 3, we see that GPT2 generates shorter, more concrete, and more repetitive suggestions than FUSION.", "Both models generate shorter sentences than people, and GPT2 generates more concrete nouns and verbs than FUSION, corroborating the analysis of See et al. (2019).", "GPT2 generated the most repetitive text, which may explain why it is chosen less frequently as the story goes on.", "FUSION's below-human level of repetition indicates it often fails to refer back to the story context, as illustrated by the low Likert-scale scores for The suggestions connected to what had happened in the story so far.", "(Fig. 3 in App. D.2).", "Our second experiment compares text generated from GPT2 but now using different sampling strategies: TOP-K (as in §3) and NUCLEUS sampling (Holtzman et al., 2020).", "(Model details in App. C.)", "Here we expect to see narrower differences in the generated text than we did in §3.", "Comparing TOP-K vs. NUCLEUS focuses on CYOA's ability to compare models with fine-grained differences.", "We run this experiment with 103 Turkers, subject to the same requirements as in §3.", "(Q1)", "Table 4 shows Turkers preferred the TOP-K suggestions over the NUCLEUS suggestions for 53.4% of the 515 suggestion pairs writers received; as expected, a smaller difference than in §3 and not significant (binomial test: p = 0.07).", "Again, the writers' preference for TOP-K decreased over the course of the story, with NUCLEUS slightly more popular by the end.", "(Q2)", "In Table 5, all three metrics show that writers used more of the NUCLEUS-sampled text than the TOP-K-sampled text, though the difference is not significant (Mann-Whitney U test: p = 0.19 (ED), p = 0.23 (JS), and p = 0.27 (USER)).", "Despite writers' slight preference for TOP-K-sampled suggestions, when they choose NUCLEUS-sampled suggestions, they preserve more of the generated text.",
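The significance tests used above (a binomial test on pairwise preference counts and Mann-Whitney U tests on per-suggestion edit scores) can be reproduced with scipy; the preference count below matches the reported 345/525 = 65.7%, while the edit-score lists are toy values.

```python
from scipy.stats import binomtest, mannwhitneyu  # binomtest requires scipy >= 1.7

# Pairwise preferences: GPT2 chosen 345 of 525 suggestion pairs (65.7%).
pref = binomtest(345, n=525, p=0.5)
print("binomial p-value:", pref.pvalue)

# Edit metrics: compare per-suggestion scores between the two models.
gpt2_scores = [0.82, 0.91, 0.77, 0.95]     # toy Jaccard values
fusion_scores = [0.55, 0.64, 0.71, 0.60]   # toy Jaccard values
stat, p = mannwhitneyu(gpt2_scores, fusion_scores, alternative="two-sided")
print("Mann-Whitney U p-value:", p)
```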
"Table 8 (App. D.1) shows that the difference is largest at the beginning and end of the story.", "This suggests TOP-K's safer suggestions may be less useful, especially when starting or finishing the task.", "(Q3)", "Table 6 shows that TOP-K-generated text is shorter, more concrete, and more repetitive than NUCLEUS-generated text.", "NUCLEUS's text comes closer to human levels of repetition, consistent with the findings of Holtzman et al. (2020) and Akoury et al. (2020).", "CYOA benefits writers as well as researchers.", "The results of the writer feedback across both experiments indicate that writers enjoy the paired-suggestion writing experience, regardless of which models they wrote with.", "The Likert-scale responses were particularly positive for I'm happy with my final story.", "(FUSION vs. GPT2: mean = 3.83, NUCLEUS vs. TOP-K: mean = 3.84) and The suggestions helped me come up with new ideas.", "(FUSION vs. GPT2: mean = 3.80, NUCLEUS vs. TOP-K: mean = 3.79).", "This compares favorably to single-suggestion collaborative story writing systems that use a similar writing process; Clark et al. (2018) report writers gave a mean score of 3.28 for happiness with the story they wrote with their collaborative writing system.", "Full Likert-scale results are in App. D.2.", "The positive reactions from participants indicate this format could work well on alternative crowdsourcing platforms, like LabintheWild, or launched as an independent writing game, similar to Akoury et al. (2020).", "Collaborative writing systems have been developed in domains like poetry (Ghazvininejad et al., 2017), slogans (Clark et al., 2018), and stories (Roemmele and Gordon, 2015; Goldfarb-Tarrant et al., 2019; Akoury et al., 2020).", "Like Storium (Akoury et al., 2020), we focus on the potential to use these systems as evaluation platforms.", "However, we suggest using paired suggestions in collaborative writing systems to directly compare generation models.", "ChatEval (Sedoc et al., 2019) collects human evaluations for paired chatbot utterances and Otani et al. (2016) for paired translations, but the generated text is static.", "By having writers interact with dynamically generated suggestions, collaborative writing systems reward helpful and robust generation models, underemphasized attributes in current evaluations (Zellers et al., 2021; Ethayarajh and Jurafsky, 2020).", "CYOA allows researchers to collect human and automatic evaluations for story generation models in a single collaborative writing task.", "The paired suggestions allow direct comparisons between two models, and automatic-metric comparisons among generated text, its revisions, and the human-authored portions provide additional insight.", "We expect CYOA evaluations to accelerate progress on applications for collaborative writing between humans and machines.", "This research was supported in part by an NSF graduate research fellowship and the DARPA CwC program through ARO (W911NF-15-1-0543).", "The authors would also like to thank the ARK group, Ari Holtzman, Nader Akoury, and Yejin Choi for their help and feedback, the reviewers for their helpful comments, and the participants who took part in our study." ]
[ "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "method", "other", "other", "abstain", "abstain", "abstain", "other", "other" ]
[ "Named Entity Recognition (NER) for low-resource languages is a both practical and challenging research problem.", "This paper addresses zero-shot transfer for cross-lingual NER, especially when the amount of source-language training data is also limited.", "The paper first proposes a simple but effective labeled sequence translation method to translate source-language training data to target languages and avoids problems such as word order change and entity span determination.", "With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.", "These augmented data enable the language model based NER models to generalize better with both the language-specific features from the target-language synthetic data and the language-independent features from multilingual synthetic data.", "An extensive set of experiments were conducted to demonstrate encouraging cross-lingual transfer performance of the new research on a wide variety of target languages.", "1 1 Introduction Named entity recognition (NER) aims to identify and classify entities in a text into predefined types, which is an essential tool for information extraction.", "It has also been proven to be useful in various downstream natural language processing (NLP) tasks, including information retrieval (Banerjee et al., 2019), question answering (Fabbri et al., 2020) and text summarization (Nallapati et al., 2016).", "However, except for some resource-rich languages Equal contribution, order decided by coin flip.", "Linlin Liu and Bosheng Ding are under the Joint PhD Program between Alibaba and Nanyang Technological University.", "(e.g., English, German), training sets for most of the other languages are still very limited.", "Moreover, it is usually expensive and time-consuming to annotate such data, particularly for low-resource languages (Kruengkrai et al., 2020).", "Therefore, zero-shot cross-lingual NER has attracted growing interest recently, especially with the influx of deep learning methods (Mayhew et al., 2017; Joty et al., 2017; Jain et al., 2019; Bari et al., 2021).", "Existing approaches to cross-lingual NER can be roughly grouped into two main categories: instance-based transfer via machine translation (MT) and label projection (Mayhew et al., 2017; Jain et al., 2019), and model-based transfer with aligned cross-lingual word representations or pretrained multilingual language models (Joty et al., 2017; Baumann, 2019; Wang et al., 2020; Conneau et al., 2020; Bari et al., 2021).", "Recently, Wu et al. 
"Recently, Wu et al. (2020) unify instance-based and model-based transfer via knowledge distillation.", "These recent methods have demonstrated promising zero-shot cross-lingual NER performance.", "However, most of them assume the availability of a considerable amount of training data in the source language.", "When we reduce the size of the training data, we observe a significant performance decrease.", "For instance-based transfer, decreasing the training set size also amplifies the negative impact of the noise introduced by MT and label projection.", "For model-based transfer, although the large-scale pretrained multilingual language models (LMs) (Conneau et al., 2020; Liu et al., 2020) have achieved state-of-the-art performance on many cross-lingual transfer tasks, simply fine-tuning them on a small training set is prone to over-fitting (Wu et al., 2018; Si et al., 2020; Kou et al., 2020).", "To address the above problems under the setting of low-resource cross-lingual NER, we propose a multilingual data augmentation (MulDA) framework to make better use of the cross-lingual generalization ability of the pretrained multilingual LMs.", "Specifically, we consider a low-resource setting for cross-lingual NER, where there is very limited source-language training data and no target-language train/dev data.", "Such a setting is practical and useful in many real scenarios.", "Our proposed framework seeks initial help from the instance-based transfer (i.e., translate train) paradigm (Li et al., 2020; Fang et al., 2020).", "We first introduce a novel labeled sequence translation method to translate the training data to the target language as well as to other languages.", "This allows us to finetune the LM-based NER model on multilingual data rather than on the source-language data only, which helps prevent over-fitting on the language-specific features.", "One commonly used tool for translation is the off-the-shelf Google translate system, which supports more than 100 languages.", "Alternatively, there are also many pretrained MT models conveniently accessible, e.g., more than 1,000 MarianMT (Junczys-Dowmunt et al., 2018; Kim et al., 2019) models have been released on the Hugging Face model hub.", "Note that the instance-based transfer methods add limited semantic variety to the training set, since they only translate entities and the corresponding contexts to a different language.", "In contrast, data augmentation has been proven to be a successful method for tackling the data scarcity problem.", "Inspired by a recent monolingual data augmentation method (Ding et al., 2020), we propose a generation-based multilingual data augmentation method to increase the diversity, where LMs are trained on multilingual labeled data and then used to generate more synthetic training data.", "We conduct extensive experiments and analysis to verify the effectiveness of our methods.", "Our main contributions can be summarized as follows: We propose a simple but effective labeled sequence translation method to translate the source training data to a desired language.", "Compared with existing methods, our labeled sentence translation approach leverages placeholders for label projection, which effectively avoids many issues faced during word alignment, such as word order change, entity span determination, noise-sensitive similarity metrics and so on.", "We propose a generation-based multilingual data augmentation method for NER, which leverages the multilingual language models to add more diversity to the training data.",
"Through empirical experiments, we observe that when fine-tuning pretrained multilingual LMs for low-resource cross-lingual NER, translations to more languages can also be used as an effective data augmentation method, which helps improve performance of both the source and the target languages.", "We propose a multilingual data augmentation framework that leverages the advantages of both instance-based and model-based transfer for cross-lingual NER.", "In our framework, a novel labeled sequence translation method is first introduced to translate the annotated training data from the source language $S$ to a set of target languages $T = \{T_1, \ldots, T_n\}$.", "Then language models are trained on $\{\mathcal{D}^S, \mathcal{D}^{T_1}, \ldots, \mathcal{D}^{T_n}\}$ to generate multilingual synthetic data, where $\mathcal{D}^S$ is the source-language training data, and $\mathcal{D}^{T_i}$ is the translated data in language $T_i$.", "Finally, we post-process and filter the augmented data to train multilingual NER models for inference on target-language test sets.", "We leverage labeled sequence translation for the training data of the source language to generate multilingual NER training data, which can also be viewed as a method for data augmentation.", "Prior methods (Jain et al., 2019; Li et al., 2020) usually perform translation and label projection in two separate steps: 1) translate source-language training sentences to the target language; 2) propagate labels from the source training data to the translated sentences via word-to-word/phrase-to-phrase mapping with alignment models or algorithms.", "However, these methods suffer from a few label projection problems, such as word order change, word-span determination (Li et al., 2020), and so on.", "An alternative to avoid the label projection problems is word-by-word translation (Xie et al., 2018), but often at the sacrifice of the translation quality.", "We address the problems identified above by first replacing named entities with contextual placeholders before sentence translation, and then, after translation, replacing the placeholders in the translated sentences with the corresponding entity translations.", "Figure 1 (labeled sequence translation example): Labeled sentence in the source language: [PER Jamie Valentine] was born in [LOC London]. 1. Translate sentence with placeholders: src: PER0 was born in LOC1. tgt: PER0 nacio en LOC1. 2. Translate entities with context: PER0 src: [Jamie Valentine] was born in London. tgt: [Jamie Valentine] nacio en Londres. LOC1 src: Jamie Valentine was born in [London]. tgt: Jamie Valentine nacio en [Londres].", "An illustration of the method is shown in Figure 1.", "Assume a sentence $X^S = \{x_1, \ldots, x_M\} \in \mathcal{D}^S$ and the corresponding NER tags $\{y_1, \ldots, y_M\}$ are given, where the $x_i$'s are the sentence tokens and $M$ is the sentence length.", "Let $\{E_1, \ldots, E_n\}$ denote the predefined named entity types.", "Our method first replaces all entities in $\{x_1, \ldots, x_M\}$ with placeholders (src of step 1 in Figure 1).", "A placeholder $Ek$ is a reconstructed token with the corresponding entity type $E$ as prefix and the index $k$ of the entity as suffix.", "Assume $\{x_i, \ldots, x_j\}$ is the $k$-th entity in the source sentence and the corresponding type is $E_z$; then we can replace the entity with the placeholder $E_z k$ to get $\{\ldots, x_{i-1}, E_z k, x_{j+1}, \ldots\}$.", "We use $\hat{X}^S$ to denote the generated sentence after replacing all entities with placeholders.", "$\hat{X}^S$ is fed into an MT model to get the translation $\hat{X}^T$ in the target language $T$.", "With such a design, the placeholder prefix $E$ can provide the MT model (when the MT model uses subword vocabularies) with relevant contextual information about the entities, so that the model can translate the sentence with reasonably good quality.", "Besides, we observe most of the placeholders are unchanged after translation, which can be used to help locate the position of entities.", "In the second step, we translate each entity with the corresponding context.", "More specifically, we use brackets to mark the span of each entity and translate it to the target language successively, one at a time (src of step 2 in Figure 1).", "For example, to translate the entity $\{x_i, \ldots, x_j\}$, we feed $\{\ldots, x_{i-1}, [x_i, \ldots, x_j], x_{j+1}, \ldots\}$ into the MT model.", "Then we can get entity translations by extracting the square-bracket-marked tokens from the translated sentences.", "We translate the entities directly if the square brackets are not found.", "Finally, we can replace the placeholders in $\hat{X}^T$ (obtained from the first step) with the corresponding entity translations (obtained from the second step) and copy the placeholder prefix as the entity label to generate the synthetic training data in the target language (step 3 in Figure 1).", "We tested the proposed method with Google translate and the MarianMT (Junczys-Dowmunt et al., 2018; Kim et al., 2019) models, and we found that both produce high quality synthetic data as we had expected.",
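A rough sketch of the three steps is given below, assuming BIO-style tags, entities separated by O tags, and a translate() stand-in for whatever MT system is used (Google translate or MarianMT in the experiments). Matching placeholders in real MT output, where punctuation may attach to them, would need more care than shown here.

```python
import re

def translate(text: str) -> str:
    """Stand-in for any MT system (e.g., Google translate or a MarianMT model)."""
    raise NotImplementedError

def labeled_sequence_translate(tokens, tags):
    # Step 1: replace each entity span with a placeholder such as PER0 or LOC1.
    out, entities, k, i = [], [], 0, 0
    while i < len(tokens):
        if tags[i] == "O":
            out.append(tokens[i])
            i += 1
        else:
            etype = tags[i].split("-")[-1]   # e.g., B-PER -> PER
            j = i
            while j < len(tokens) and tags[j] != "O":
                j += 1                       # assumes adjacent entities are separated by O tags
            entities.append((f"{etype}{k}", tokens[i:j]))
            out.append(f"{etype}{k}")
            k += 1
            i = j
    translated = translate(" ".join(out))

    # Step 2: translate each entity with its context, marking the span with brackets.
    ent_tgt = {}
    for ph, span in entities:
        ctx = " ".join("[" + " ".join(span) + "]" if t == ph else t for t in out)
        m = re.search(r"\[(.+?)\]", translate(ctx))
        ent_tgt[ph] = m.group(1).strip() if m else translate(" ".join(span))

    # Step 3: swap placeholders back in, copying the type prefix as the label.
    new_tokens, new_tags = [], []
    for tok in translated.split():
        if tok in ent_tgt:
            etype = re.match(r"[A-Z]+", tok).group(0)
            words = ent_tgt[tok].split()
            new_tokens += words
            new_tags += [f"B-{etype}"] + [f"I-{etype}"] * (len(words) - 1)
        else:
            new_tokens.append(tok)
            new_tags.append("O")
    return new_tokens, new_tags
```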
} .", "We use XS to denote the generated sentence after replacing all entities with placeholders.", "XS is fed into an MT model to get the translation XT in the target language T .", "With such design, the placeholder prefix E can provide the MT model 4 with relevant contextual information about the entities, so that the model can translate the sentence with reasonably good quality.", "Besides, we observe most of placeholders are unchanged after translation, 5 which can be used to help locate the position of entities.", "In the second step, we translate each entity 4 When the MT model use subword vocabularies.", "B-PER Jamie E-PER Valentine was born in S-LOC London", "with the corresponding context.", "More specifi-cally, we use brackets to mark the span of each entity and translate it to the target language successively, one at a time ( src of step 2 in Figure 1).", "For example, to translate entity { x i , . . . , x j } , we feed { . . . , x i 1 , [ x i , . . . , x j ] , x j +1 , . . . } into the MT model.", "Then we can get entity translations by extracting the square bracket marked tokens from the translated sentences.", "We translate the entities directly if the square brackets are not found.", "Finally, we can replace placeholders in XT (ob-tained from the first step) with the corresponding entity translations (obtained from the second step) and copy placeholder prefix as entity labels to generate the synthetic training data in the target language (step 3 in Figure 1).", "We tested the proposed method with Google translate and the MarianMT (Junczys-Dowmunt et al., 2018; Kim et al., 2019) models, and we found that both produce high quality synthetic data as we had expected.", "Although labeled sequence translation generates high quality multilingual NER training data, it adds limited variety since translation does not introduce new entities or contexts.", "Inspired by DAGA (Ding et al., 2020), we propose a generation-based multilingual data augmentation method to add more diversity to the training data.", "DAGA is a monolingual data augmentation method designed for sequence labeling tasks, which has been shown to be able to add significant diversity to the training data.", "As the example shown in Figure 2, it first linearizes labeled sequences by adding the entity type before sentence tokens.", "Then an LSTM-based LM (LSTM-LM) is trained on the linearized sequences in an autoregressive way, after which the begin-of-sentence token [BOS] is fed into the LSTM-LM to generate synthetic training data autoregressively.", "The monolingual LSTM-LM of DAGA is trained in a similar way as the example shown in Figure 3, except that there is no language tag [en] .", "To extend this method for multilingual data augmentation, we add special tokens at the beginning of each sentence to indicate the language that it belongs to.", "The source-language data and the multilingual data obtained via translation are concatenated to train/finetune multilingual LMs with a shared vocabulary (as shown in Figure 5).", "Given a labeled sequence { x 1 , . . . , x M } from the multilingual training data, the LMs are trained to maximize the probability p ( x 1 , . . . , x M ) in Eq.", "1: p ( x 1 , . . . 
"Besides, to leverage the cross-lingual generalization ability of large scale pretrained multilingual LMs, we also finetune a recent state-of-the-art seq2seq model, mBART (Liu et al., 2020), which is pretrained with multilingual denoising tasks.", "Sentence permutation and word-span masking are the two noise injection methods used to add noise to the original sentence $X = \{x_1, \ldots, x_M\}$ to output $g(X)$, where $g(\cdot)$ is used to denote the noise injection function.", "After encoding $g(X)$ with the Transformer encoder, the Transformer decoder is trained to generate the original sequence $X$ autoregressively by maximizing Eq. (1).", "Denoising word-span masked sequences is the most relevant to our data augmentation method, since only small modifications are required to make our finetuning task as consistent with the pretraining task as possible.", "More specifically, we design our finetuning task with the following changes: 1) use the linearized labeled sequences (as shown in Figure 5) as input $X$; 2) modify $g(\cdot)$ to mask random trailing sub-sequences such that $g(X) = \{x_1, \ldots, x_z, [mask]\}$, where $1 \le z \le |X|$ is a random integer.", "After finetuning with such a task, we can conveniently feed a randomly masked sequence $\{x_1, \ldots, x_z, [mask]\}$ into mBART to generate synthetic data.", "Figure 4 shows a more concrete example to illustrate how mBART is finetuned with the linearized sequences in our work.", "Unlabeled multilingual sentences are usually easy to get, for example, data from the Wikimedia dumps (https://dumps.wikimedia.org/).", "To make better use of these unlabeled multilingual data, we propose a semi-supervised method to prepare more pseudo labeled data for finetuning multilingual LMs.", "Inspired by self-training (Zoph et al., 2020; Xie et al., 2020), we use the NER model trained on the multilingual translated data to annotate the unlabeled sentences.", "After that, we use two additional NER models trained with different random seeds to filter the annotated data by removing those with different tag predictions.", "We also design several straightforward methods to post-process and filter the augmented data generated by the LMs:", "Convert the generated labeled sequences to the same format as gold data by separating sentence tokens and NER tags.", "Use the NER model trained on the multilingual translated data to label the generated sequences (after tag removal).", "Then compare the tags generated by the LM with the NER model predictions, and remove the sentences with inconsistencies.",
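Both filtering steps reduce to simple agreement checks. In the sketch below, the tagger arguments are hypothetical callables that return a tag sequence for a tokenized sentence, standing in for the NER models described above, and delinearize is the helper from the previous sketch.

```python
def agreement_filter(unlabeled_sents, annotator, checker_a, checker_b):
    # Semi-supervised step: annotate with one NER model, then keep a sentence
    # only if two models trained with different random seeds predict the same tags.
    kept = []
    for sent in unlabeled_sents:
        tags = annotator(sent)
        if checker_a(sent) == tags and checker_b(sent) == tags:
            kept.append((sent, tags))
    return kept

def consistency_filter(generated_seqs, delinearize, tagger):
    # Post-processing step: strip the inline tags from an LM-generated sequence,
    # re-tag the bare sentence with the NER model, and drop any disagreements.
    kept = []
    for seq in generated_seqs:
        tokens, lm_tags = delinearize(seq)
        if tagger(tokens) == lm_tags:
            kept.append((tokens, lm_tags))
    return kept
```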
"We conduct experiments to evaluate the effectiveness of the proposed multilingual data augmentation framework.", "Firstly, we compare our labeled sequence translation method with the previous instance-based transfer (i.e., translate train) methods.", "Following that, we show the benefit of adding multilingual translations.", "Then we continue to evaluate the generation-based multilingual data augmentation method by comparing cross-lingual NER performance of the models trained on monolingual, bilingual, and multilingual augmented data, respectively.", "Finally, we further evaluate our methods on a wider range of distant languages.", "We use the most typical Transformer-based NER model in our experiments (similar to the token classification model in https://github.com/huggingface/transformers), which is implemented by adding a randomly initialized feed forward layer to the Transformer final layer for label classification.", "Specifically, to demonstrate that our framework can help achieve additional performance gain even on top of the state-of-the-art multilingual LMs, the checkpoint of the pretrained XLM-R large (Conneau et al., 2020) model is used to initialize our NER models.", "We finetune the NER model on the translated target-language data to compare our labeled sequence translation method (§2.1) with the existing instance-based transfer methods.", "Experimental settings: The CoNLL02/03 NER dataset (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) is used for evaluation, which contains data in four different languages: English, German, Dutch and Spanish.", "All of the data are annotated with the same set of NER tags.", "We follow the steps described in §2.1 to translate the English train data to the other three languages.", "Following Jain et al. (2019) and Li et al. (2020), the Google translation system is used in the experiments.", "Since our NER model is more powerful than those used by Jain et al. (2019) and Li et al. (2020), we reproduce their results with XLM-R large for a fair comparison.", "All of the NER models are finetuned on the translated target-language sentences only for 10 epochs, with the best model selected using the English dev data, and then evaluated on the target-language original test data.",
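One plausible instantiation of such an NER model with the Hugging Face transformers library is shown below; the label set is the CoNLL02/03 one, and training/evaluation code is omitted.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# XLM-R large with a randomly initialized token-classification
# (feed-forward) head on top of the final Transformer layer.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
)
```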
"Results: We present the results in Table 1.", "As we can see, our method outperforms the best baseline method by 2.90 and 2.97 on German and Dutch respectively, and by 2.23 on average.", "Since our models are only finetuned with the data generated by the labeled sequence translation method, the results directly demonstrate the effectiveness of our method.", "Moreover, compared with the two recent baseline methods (Jain et al., 2019; Li et al., 2020), our method does not rely on complex label projection algorithms and is much easier to implement.", "After showing that our labeled sequence translation method can generate high quality labeled data in the target language, in this section we run experiments to verify the hypothesis that multilingual translation may help improve the cross-lingual transfer performance of multilingual LMs in low resource scenarios.", "Experimental settings: We use the same NER dataset as above.", "In order to simulate low resource scenarios, we randomly sample 500, 1k and 2k sentences from the gold English train set.", "Our labeled sequence translation method is used to translate the sampled data to pseudo labeled data in the three target languages, German, Spanish and Dutch.", "To better demonstrate how the training data affects cross-lingual NER performance, we train the NER model under four different conditions: 1) En: train the models on English data only; 2) Tgt-Tran: train the models on the pseudo labeled data in a certain target language only; 3) En + Tgt-Tran: train the models on the combination of English data and pseudo labeled target-language data; 4) En + Multi-Tran: train one single model on the combination of English data and pseudo labeled data in all three target languages.", "We find filtering the translated sentences can further improve cross-lingual transfer performance, so we use an NER model trained on the sampled English data to label the translated sentences, count the number of entities in each sentence that differ from the NER model predictions, and then remove the top 20% of sentences with the most inconsistent entities.", "This is similar to the third step described in §2.4, except that we remove all the inconsistent sentences from the augmented data, since the LMs can be used to generate a large number of candidate sentences.",
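A sketch of this filtering step is given below; comparing entities as (position, type) pairs is one reasonable reading of "entities that differ from the NER model predictions", and the tagger is again a hypothetical callable.

```python
def filter_translations(sentences, projected_tags, tagger, drop_ratio=0.2):
    # Rank translated sentences by how many projected entities disagree with
    # the predictions of an NER model trained on the sampled English data,
    # then drop the most inconsistent 20%.
    def entity_set(tags):
        return {(i, t.split("-")[-1]) for i, t in enumerate(tags) if t != "O"}

    scored = []
    for sent, tags in zip(sentences, projected_tags):
        mismatches = len(entity_set(tags) ^ entity_set(tagger(sent)))
        scored.append((mismatches, sent, tags))
    scored.sort(key=lambda x: x[0])
    keep = int(len(scored) * (1 - drop_ratio))
    return [(sent, tags) for _, sent, tags in scored[:keep]]
```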
"We set the max number of epochs to 10 and use 500 sentences randomly sampled from the English dev data to select the best models for each setting.", "Then the best models are evaluated on the original target language test sets.", "Results: Table 2 compares the cross-lingual NER performance of the models trained on the different training sets.", "Table 2 (cross-lingual NER performance of the models trained on different combinations of training sets; columns are de/es/nl/avg): 500 sentences: En 60.18/55.68/66.09/60.65, Tgt-Tran 59.97/53.53/60.39/57.96, En + Tgt-Tran 69.16/64.57/71.40/68.38, En + Multi-Tran 70.40/65.70/72.20/69.43; 1k: En 68.95/67.30/73.43/69.89, Tgt-Tran 70.30/67.22/73.98/70.50, En + Tgt-Tran 73.63/69.81/75.83/73.09, En + Multi-Tran 73.42/72.71/76.74/74.29; 2k: En 69.47/75.20/77.64/74.10, Tgt-Tran 71.93/72.94/77.95/74.27, En + Tgt-Tran 74.45/75.88/78.40/76.24, En + Multi-Tran 75.91/76.04/77.85/76.60.", "Although the performances of En and Tgt-Tran are relatively bad in most of the cases, combining them can always boost the performance significantly, especially when the dataset size is small.", "Adding multilingual translated data further improves cross-lingual performance by more than 1% on average when the English data size is 1k or less.", "Therefore, multilingual translation can be used as an effective data augmentation approach in the low resource scenarios of cross-lingual NER.", "Moreover, the trained single model with En + Multi-Tran can be applied to all target languages.", "Besides, we also observe that multilingual translated data can even help improve NER performance of the source language.", "Table 3 summarizes the English test data results for the above settings.", "Tgt-Tran (avg) is the average of the English results of the models trained on the three different Tgt-Tran sets of German, Spanish and Dutch respectively.", "En + Tgt-Tran (avg) is the average for combining En with each of the three different Tgt-Tran sets.", "As we can see, adding additional translated data consistently improves English NER performance.", "Particularly, En + Multi-Tran achieves the best performance.", "Therefore, we can also use multilingual translated data to improve low-resource monolingual NER performance.", "In this section, we run experiments to verify whether applying generation-based data augmentation methods to the multilingual translated data can further improve cross-lingual performance in the low resource scenarios.", "Experimental settings: We follow the steps described in §2.2 to implement the proposed data augmentation framework on top of LSTM-LM (Kruengkrai, 2019) and mBART (Liu et al., 2020) separately, and then use them to augment the data processed in §3.2.", "Table 4 (cross-lingual NER results of models trained on multilingual augmented data; columns are de/es/nl/avg for the 500/1k/2k settings): En + Multi-Tran 70.40/65.70/72.20/69.43 (500), 73.42/72.71/76.74/74.29 (1k), 75.91/76.04/77.85/76.60 (2k); MulDA-LSTM 70.04/67.38/72.81/70.08 (500), 74.80/74.27/77.21/75.42 (1k), 76.05/76.05/78.46/76.85 (2k); MulDA-mBART 72.37/68.19/74.59/71.72 (500), 75.04/74.56/77.78/75.79 (1k), 77.54/76.32/78.21/77.36 (2k); En + Tgt-Tran 69.16/64.57/71.40/68.38 (500), 73.63/69.81/75.83/73.09 (1k), 74.45/75.88/78.40/76.24 (2k); BiDA-LSTM 72.51/68.77/72.65/71.31 (500), 74.97/73.69/77.51/75.39 (1k), 76.59/76.47/78.97/77.34 (2k).", "We concatenate the English gold data and the filtered multilingual translated data to train/finetune the modified LMs, where the LSTM-LM is trained from scratch and mBART is initialized with the mBART CC25 checkpoint (https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md) for finetuning.", "mBART CC25 is a model with 12 encoder and decoder layers trained on 25 languages.", "We follow the steps described in §2.4 to post-process the augmented data, and concatenate them with the corresponding English gold and translated multilingual data to train the NER models.", "The size of the augmented data used in each setting is the same as the size of the corresponding English gold data.", "MulDA-LSTM and MulDA-mBART are used to denote the methods that use LSTM-LM and mBART augmented data respectively.", "In addition, we also report a bilingual version of our method, denoted with BiDA-LSTM, which performs data augmentation on English and the translated target-language data only.", "We follow the same settings as above to evaluate cross-lingual performance of the NER models trained on different data.", "Results: Average results of 5 runs are reported in Table 4.", "Note that MulDA-LSTM and MulDA-mBART train a single model for all the target languages in each setting, while BiDA-LSTM trains one model for each target language in each setting.", "Therefore, we compare BiDA-LSTM with En + Tgt-Tran only.", "As we can see, the proposed multilingual data augmentation methods further improve cross-lingual NER performance consistently.",
"For the 1k and 2k settings, MulDA-LSTM achieves comparable average performance to BiDA-LSTM.", "Experimental settings: The Wikiann NER data (Pan et al., 2017) processed by Hu et al. (2020) is used in these experiments.", "1k English sentences ($\mathcal{D}^S_{1k}$) are sampled from the gold train data to simulate the low resource scenarios.", "We also assume MT models are not available for all of the target languages, so we only translate the sampled English sentences to 6 target languages: ar, fr, it, ja, tr and zh.", "$\mathcal{D}^T_{trans}$ is used to denote the translated target-language sentences obtained by following the steps described in §2.1.", "The low quality translated sentences are filtered out in the same way as in §3.2.", "To evaluate our method in the semi-supervised setting, we also sample 5,000 sentences from the training data of the 6 target languages and then remove the NER tags to create unlabeled data $\mathcal{D}^T_{unlabeled}$.", "We follow the steps described in §2.3 to annotate $\mathcal{D}^T_{unlabeled}$ with one NER model trained on $\{\mathcal{D}^S_{1k}, \mathcal{D}^T_{trans}\}$, and then filter the pseudo labeled data with two other NER models trained on the same data but with different random seeds.", "We use $\mathcal{D}^T_{semi}$ to denote the data generated with this semi-supervised approach.", "Finally, we concatenate $\{\mathcal{D}^S_{1k}, \mathcal{D}^T_{trans}, \mathcal{D}^T_{semi}\}$ to generate augmented data $\mathcal{D}^T_{aug}$ following the steps in §2.2 and §2.4.", "With the augmented data above, we train NER models on the concatenated data of $\{\mathcal{D}^S_{1k}, \mathcal{D}^T_{trans}, \mathcal{D}^T_{aug}\}$ for cross-lingual NER evaluation.", "We also train an NER model on $\{\mathcal{D}^S_{1k}, \mathcal{D}^T_{trans}, \mathcal{D}^T_{semi}\}$ for comparison, denoted as Weak Tagger.", "The other settings are the same as in the above experiments.", "Results: We summarize the results in Table 6.", "Tran-Train is the average performance of the 6 languages that have corresponding training data translated from English.", "Zero Shot is the average performance of the other target languages.", "MulDA-LSTM demonstrates promising performance improvements on both the Tran-Train and Zero Shot languages.", "The performance of MulDA-mBART is slightly lower; one possible reason is the noise introduced by the sentences labeled at character level.", "We follow the gold data format to label translated zh and ja sequences at character level, which is inconsistent with how mBART is pretrained.", "Please refer to Table 5 for the detailed cross-lingual NER results of each language.", "The label projection step of the previous methods needs to locate the entities and determine their boundaries, which is vulnerable to many problems, such as word order change, long entities, etc.", "Our method effectively avoids these problems with placeholders.", "In the two examples shown in Figure 6, Jain et al. (2019) either labeled only part of the whole entity or incorrectly split the entity into two, Li et al. (2020) incorrectly split the entities into two in both examples, while our method can correctly map the labels.",
"Figure 6 (label projection examples): src: ...(ORG Association for Relations Across the Taiwan Straits)...; Jain et al. (2019): ...(ORG Vereinigung für Beziehungen) über die Taiwanstraße...; Li et al. (2020): ...(ORG Vereinigung für Beziehungen) über (ORG die Taiwanstraße)...; Ours: ...(ORG Vereinigung für Beziehungen über die Taiwanstraße)...; and, in the second example, Ours: ...(LOC Mittlerer Westen der USA)...", "Since the multilingual LMs are trained on multiple languages jointly, the NER tags can be viewed as a shared vocabulary between different languages.", "As a result, we find that some generated sentences contain tokens from multiple languages, which are useful to help improve cross-lingual transfer (Tan and Joty, 2021).", "Two examples are shown in Figure 7.", "Cross-lingual NER: There has been growing interest in cross-lingual NER.", "Prior approaches can be grouped into two main categories, instance-based transfer and model-based transfer.", "Instance-based transfer translates source-language training data to the target language, and then applies label projection to annotate the translated data (Tiedemann et al., 2014; Jain et al., 2019).", "Instead of MT, some earlier approaches also use parallel corpora to construct pseudo training data in the target language (Yarowsky et al., 2001; Fu et al., 2014).", "To minimize resource requirements, Mayhew et al. (2017) and Xie et al. (2018) design frameworks that only rely on word-to-word/phrase-to-phrase translation with bilingual dictionaries.", "Besides, there are also many studies on improving label projection quality with additional features or better mapping methods (Tsai et al., 2016; Li et al., 2020).", "Different from these methods, our labeled sentence translation approach leverages placeholders to determine the position of entities after translation, which effectively avoids many issues during label projection, such as word order change, entity span determination, noise-sensitive similarity metrics and so on.", "Model-based transfer directly applies the model trained on the source language to the target-language test data (Täckström et al., 2012; Ni et al., 2017; Joty et al., 2017; Chaudhary et al., 2018), which heavily relies on the quality of cross-lingual representations.", "Recent methods have achieved significant performance improvement by fine-tuning large scale pretrained multilingual LMs (Devlin et al., 2019; Keung et al., 2019; Conneau et al., 2020).", "Besides, there are also some approaches that combine instance-based and model-based transfer (Xu et al., 2020; Wu et al., 2020).", "Compared with these methods, our approach leverages MT models and LMs to add more diversity to the training data, and prevents over-fitting on language-specific features by fine-tuning NER models on multilingual data.", "Data augmentation: Data augmentation (Simard et al., 1998) adds more diversity to training data to help improve model generalization, and has been widely used in many fields, such as computer vision (Zhang et al., 2018), speech (Cui et al., 2015; Park et al., 2019), NLP (Wang and Eisner, 2016; Sun et al., 2020) and so on.", "For NLP, back translation (Sennrich et al., 2016) is one of the most successful data augmentation approaches, which translates target-language monolingual data to the source language to generate more parallel data for MT model training.", "Other popular approaches include synonym replacement (Kobayashi, 2018), random deletion/swap/insertion (Sun et al., 2020; Kumar et al., 2020), generation (Ding et al., 2020), etc.", "Data augmentation has also been proven to be useful in cross-lingual settings (Zhang et al., 2019; Singh et al., 2020; Riabi et al., 2020; Qin et al., 2020; Bari et al., 2021; Mohiuddin et al., 2021), but most of the existing methods overlook the better utilization of multilingual training data when such resources are available.",
"In this paper, we proposed a multilingual data augmentation framework for low-resource cross-lingual NER.", "Our labeled sequence translation method effectively avoids many label projection related problems by leveraging placeholders during MT. Our generation-based multilingual data augmentation method generates high quality synthetic training data to add more diversity.", "The proposed framework has demonstrated encouraging performance improvement in various low-resource settings and across a wide range of target languages.", "This research is partly supported by the Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University.", "Linlin Liu would like to thank the Interdisciplinary Graduate School, Nanyang Technological University, for its support.", "We would also like to thank our Alibaba colleagues Ruidan He and Qingyu Tan for their help in this work." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "other", "objective", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "result", "objective", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Off-topic spoken response detection, the task aiming at predicting whether a response is off-topic for the corresponding prompt, is important for an automated speaking assessment system.", "In many real-world educational applications, off-topic spoken response detectors are required to achieve high recall for off-topic responses not only on seen prompts but also on prompts that are unseen during training.", "In this paper, we propose a novel approach for off-topic spoken response detection with high off-topic recall on both seen and unseen prompts.", "We introduce a new model, Gated Convolutional Bidirectional Attention-based Model (GCBiA), which applies bi-attention mechanism and convolutions to extract topic words of prompts and key-phrases of responses, and introduces gated unit and residual connections between major layers to better represent the relevance of responses and prompts.", "Moreover, a new negative sampling method is proposed to augment training data.", "Experiment results demonstrate that our novel approach can achieve significant improvements in detecting off-topic responses with extremely high on-topic recall, for both seen and unseen prompts.", "Off-topic spoken response detection is a crucial task in an automated assessment system.", "The task is to predict whether the response is off-topic for the corresponding question prompt.", "Table 1 shows an example of on-topic and off-topic responses for a prompt.", "Off-topic examples in human-rated data is often too sparse to train an automated scoring system to reject off-topic responses.", "Consequently, automated scoring systems tend to be more vulnerable than human raters to scoring inaccurately due to off-topic responses ( Lochbaum et al., 2013; Higgins and Heilman, 2014).", "To ensure the validity of speaking assessment scores, it is necessary to have a mechanism to flag off-topic responses before scores are reported (Wang et al., 2019).", "In our educational application, we use the automated speaking assessment system to help L2 learners prepare for the IELTS speaking test.", "We do see a higher rate of off-topic responses in freemium features as some users just play with the system.", "In such a scenario, accurate off-topic detection is extremely important for building trust and converting trial users to paid customers.", "Prompt: What kind of flowers do you like?", "On-topic: I like iris and it has different meaning of it a wide is the white and um and the size of a as a ride is means the ride means love but I can not speak.", "Off-topic: Sometimes I would like to invite my friends to my home and we can play the Chinese chess dishes this is my favorite games at what I was child.", "Initially, many researchers used vector space model (VSM) ( Louis and Higgins, 2010; Yoon and Xie, 2014; Evanini and Wang, 2014) to assess the semantic similarity between responses and prompts.", "In recent years, with the blooming of deep neural networks (DNN) in natural language processing (NLP), many DNN-based approaches were applied to detect off-topic responses.", "Malinin et al. (2016) used the topic adapted Recurrent Neural Network language model (RNN-LM) to rank the topic-conditional probabilities of a response sentence.", "A limitation of this approach is that the model can not detect off-topic responses for new question prompt which was not seen in training data ( unseen prompt ).", "Later, off-topic response detection was considered as a binary classifica-tion task using end-to-end DNN models.", "Malinin et al. 
"Malinin et al. (2017) proposed the first end-to-end DNN method on the off-topic response detection task, the attention-based RNN (Att-RNN) model.", "They used a Bi-LSTM embedding of the prompt combined with an attention mechanism to attend over the response to model the relevance.", "CNNs may perform better than RNNs in some NLP tasks which require key-phrase recognition, as in some sentiment detection and question-answer matching issues (Yin et al., 2017).", "Lee et al. (2017) proposed a siamese CNN to learn semantic differences between on-topic response-questions and off-topic response-questions.", "Wang et al. (2019) proposed an approach based on similarity grids and deep CNNs.", "However, the cold-start problem of off-topic response detection has not been handled well by the aforementioned approaches.", "Good performance cannot be achieved until enough training data for unseen prompts has been accumulated.", "Besides, these methods pay little attention to the on-topic false-alarm problem, which is vital for a production system.", "That is, extremely high recall of on-topic responses is also required to make real-user-facing systems applicable.", "In this paper, to address the issues mentioned above, a novel approach named the Gated Convolutional Bidirectional Attention-based Model (GCBiA), together with a negative sampling method to augment training data, is proposed.", "The key motivation behind our model GCBiA is as follows: the convolution structure captures the key information, like salient n-gram features (Young et al., 2018) of the prompt and the response, while the bi-attention mechanism provides complementary interaction information between prompts and responses.", "Following R-Net (Wang et al., 2017) in machine comprehension, we add a gated unit as a relevance layer to pick out the important part of a response with regard to the prompt.", "These modules contribute to obtaining a better semantic matching representation between prompts and responses, which is beneficial for both seen and unseen prompts.", "Additionally, we add residual connections (He et al., 2016) in our model to keep the original information of each major layer.", "To alleviate the cold-start problem on unseen prompts, a new negative sampling data augmentation method is considered.", "We compare our approach with the Att-RNN model and G-Att-RNN (our strong baseline model based on Att-RNN).", "Experiment results show that GCBiA outperforms these methods on both the seen and unseen prompt benchmarks, conditioned on extremely high on-topic response recall (0.999).", "Moreover, the model trained with negative sampling augmented data achieves 88.2 average off-topic recall on seen prompts and 69.1 average off-topic recall on unseen prompts, respectively.", "Our main contributions can be summarized as follows: We propose an effective model framework of five major layers for the off-topic response detection task.", "The bi-attention mechanism and convolutions are applied to focus on both topic words in prompts and key-phrases in responses.", "The gated unit as a relevance layer can enhance the relevance of prompts and responses.", "Besides, residual connections are used between major layers to learn additional feature mappings.", "A good semantic matching representation is obtained by these modules on both seen and unseen prompts.", "The GCBiA model achieves significant improvements of +24.0 and +7.0 average off-topic recall on unseen and seen prompts respectively, compared to the baseline method.",
"To explore the essence of our proposed model, we conduct visualization analysis from two perspectives, bi-attention visualization and semantic matching representation visualization, to reveal important information on how our model works.", "To improve our results on unseen prompts further, we propose a novel negative sampling data augmentation method that enriches the training data by shuffling words from the negative samples in the off-topic response detection task.", "It allows the GCBiA model to achieve higher average off-topic recall on unseen prompts.", "The off-topic response detection task is defined as follows in this paper.", "Given a question prompt with n words $X^P = \{x^P_t\}_{t=1}^{n}$ and a response sentence with m words $X^R = \{x^R_t\}_{t=1}^{m}$, output one class: $o = 1$ for on-topic or $o = 0$ for off-topic.", "Our model GCBiA (shown in Figure 1) consists of the following five major layers:", "Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model.", "Contextual Encoder Layer utilizes contextual information from surrounding words to reinforce the embedding of the words.", "These first two layers are applied to both prompts and responses.", "Attention Layer uses the attention mechanism in both directions, prompt-to-response and response-to-prompt, which provides complementary information to each other.", "Relevance Layer captures the important parts of the response regarding a prompt via the gated unit.", "In detail, each layer is illustrated as follows: 1. Word Embedding Layer.", "We first convert words to respective trainable word embeddings, initialized by pre-trained Glove (Pennington et al., 2014).", "The embeddings of prompts $W^P = \{w^P_t\}_{t=1}^{n}$ and responses $W^R = \{w^R_t\}_{t=1}^{m}$ are passed directly to the next contextual encoder layer.", "2. Contextual Encoder Layer.", "A stack of convolutional layers is employed to extract salient n-gram features from prompts and responses, aiming at creating an informative latent semantic representation of prompts and responses for the next layer.", "The l-th convolutional layer with one filter is represented as $c^l_i$ in Equation (1): $c^l_i = f(W^l [c^{l-1}_{i-k/2}, \ldots, c^{l-1}_{i+k/2}] + b^l)$ (1), where $W \in \mathbb{R}^{k \times d}$ and $b \in \mathbb{R}^{d}$.", "We ensure that the output of each stack matches the input length by padding the input of each stack.", "The number of convolutional layers l is 7, the kernel size k is 7, and the number of filters in each convolutional layer is 128.", "After the convolutional representations of prompts $U^P$ and responses $U^R$ in Equations (2-3) are obtained, a max pooling layer is applied to extract a fixed-length vector, as in Equations (4-5): $U^P = \mathrm{CONV}(W^P)$ (2), $U^R = \mathrm{CONV}(W^R)$ (3), $v^P = \mathrm{maxpooling}(U^P)$ (4), $v^R = \mathrm{maxpooling}(U^R)$ (5).", "Max-pooling can keep the most salient n-gram features across the whole prompt/response.",
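Since the paper's model is implemented in Keras, the embedding and encoder layers might look like the sketch below; the ReLU activation is an assumption, as the text writes the nonlinearity only as f.

```python
from tensorflow import keras
from tensorflow.keras import layers

def embed_and_encode(seq_len, vocab_size, emb_dim=300,
                     n_layers=7, kernel_size=7, filters=128):
    # Word embedding layer, then the stack of same-padded convolutions of
    # Eqs. (1)-(3), then max pooling over time as in Eqs. (4)-(5).
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, emb_dim)(inp)
    for _ in range(n_layers):
        x = layers.Conv1D(filters, kernel_size, padding="same",
                          activation="relu")(x)  # f assumed to be ReLU
    v = layers.GlobalMaxPooling1D()(x)           # fixed-length vector v^P or v^R
    return keras.Model(inp, [x, v])
```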
"3. Attention Layer.", "In this layer, the attention mechanism is used in both directions, prompt-to-response and response-to-prompt, which provide complementary information to each other.", "However, unlike bi-attention applied to question answering and machine comprehension, including QANet (Yu et al., 2018), BiDAF (Seo et al., 2016), and BiDAF++ (Choi et al., 2018), we use max-pooling over the CNN representation of the prompt/response to summarize it into a fixed-size vector.", "Prompt-to-Response Attention.", "Prompt-to-Response attention implicitly models which response words are more related to the whole prompt, which is crucial for assessing the relevance of responses and prompts.", "Given the max pooling vector $v^P$ of the prompt and the CNN representation $U^R = \{u^R_t\}_{t=1}^{m}$ of the response, together with $W^P = \{w^P_t\}_{t=1}^{n}$ and $W^R = \{w^R_t\}_{t=1}^{m}$, Prompt-to-Response attention $c^R$ is calculated in Equations (6-10), where the similarity function used is the trilinear function (Yu et al., 2018) and residual connections are used.", "Response-to-Prompt Attention.", "Similarly, Response-to-Prompt attention implicitly models which prompt words are more related to the whole response.", "The calculation of Response-to-Prompt attention, given in Equations (11-15), is analogous to that of Prompt-to-Response attention.", "(Figure 1: An overview of GCBiA.)", "4. Relevance Layer.", "To capture the important parts of responses and attend to the ones relevant to the prompts, we use one gated unit in this layer, seen in Equations (16-17) and sketched below.", "This gated unit focuses on the relation between the prompt and the response.", "Only the relevant parts of each side remain after the sigmoid operation.", "The input of this layer is $(\hat{c}^R = [c^R, v^R], \hat{c}^P = [c^P, v^P])$, which uses residual connections from the previous two layers.", "5. Output Layer.", "The fixed-length semantic matching vector produced by the previous layer, together with the vector from the layer before it, is fed into the final output layer.", "It consists of one normalization layer, one dropout layer, two fully connected layers, and one softmax layer.", "The output distribution indicates the relevance of the prompt and the response.", "We classify the output into two categories, on-topic or off-topic, via a threshold.", "A different threshold is chosen for each prompt to make sure its on-topic recall meets the minimum requirement, such as 0.999 for the online production system in our study.",
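The gated unit of the relevance layer can be sketched as follows. This is a minimal illustration assuming the common R-Net-style formulation g = sigmoid(Wx), output = g * x; since Equations (16-17) are not reproduced in the text, the exact parameterization here is an assumption.

```python
import tensorflow as tf

def gated_relevance(c_p, c_r):
    """Gated unit: g = sigmoid(W x), output = g * x, where x is the
    concatenation of the prompt and response representations. Only the
    parts of x that the gate judges relevant survive the sigmoid."""
    x = tf.concat([c_p, c_r], axis=-1)
    gate = tf.keras.layers.Dense(x.shape[-1], activation="sigmoid",
                                 use_bias=False)
    return gate(x) * x
```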
"Data from our IELTS speaking test mobile app was used for training and testing in this paper.", "There are three parts in the IELTS test: Part1 focuses on general questions about test-takers and a range of familiar topics, such as home, family, work, studies, and interests.", "In Part2, test-takers are asked to talk about a particular topic.", "Part3 involves discussion of more abstract ideas and issues related to the Part2 topic.", "An example from our IELTS speaking test mobile app is shown in Table 2.", "All responses from test-takers were generated by our automatic speech recognition (ASR) system, which is briefly introduced in Section 3.2.", "Responses for a target prompt collected in our paid service were used as its on-topic training examples, and responses from the other prompts were used as the off-topic training examples for the target prompt.", "This is a reasonable setup because most of the responses in our paid service are on-topic (we labeled about 5K responses collected under our paid service and found only 1.3% of them to be off-topic), and a certain level of noise in the training data is acceptable.", "The test data was produced in the same way as the training data, except that human validation was further introduced to ensure its validity.", "To further ensure the authenticity of our training and test data, we filter out short responses for each part.", "The number of words in each response in Part1, Part2, and Part3 must be over 15, 50, and 15, respectively.", "The average number of responses to each prompt is 822.", "The numbers of on-topic and off-topic responses are 564.3K and 551.3K in the training data.", "We divide the test data into two parts: a seen benchmark and an unseen benchmark.", "Prompts of the seen benchmark can appear in the training data, while prompts of the unseen benchmark cannot.", "The seen benchmark consists of 33.6K responses from 156 prompts, including 17.7K on-topic responses and 15.9K off-topic responses; the average number of responses per prompt is 216.", "In the unseen benchmark, there are 10.1K responses from 50 prompts, including 5.0K on-topic responses and 5.1K off-topic responses; the average number of responses per prompt is 202.", "A hybrid DNN-HMM system is used for ASR.", "The acoustic model contains 17 sub-sampled time-delay neural network layers with low-rank matrix factorization (TDNNF) (Povey et al., 2018), and is trained on over 8000 hours of speech, using the lattice-free MMI (Povey et al., 2016) recipe in the Kaldi toolkit (http://kaldi-asr.org).", "A tri-gram LM with Kneser-Ney smoothing is trained using the SRILM toolkit (http://www.speech.sri.com/projects/srilm/) and applied at first-pass decoding to generate word lattices.", "An RNN-LM (Mikolov et al., 2010) is applied to re-score the lattices to obtain the final recognition results.", "The ASR system achieves a word error rate of around 13% on our 50-hour ASR test set.", "We use two assessment metrics in this paper: Average Off-topic Recall (AOR) and Prompt Ratio over Recall 0.3 (PRR3).", "AOR denotes the average off-topic response recall over all prompts (156 prompts on the seen benchmark and 50 prompts on the unseen benchmark).", "PRR3 denotes the ratio of prompts whose off-topic recall is over 0.3.", "Here is a worked example of AOR and PRR3 on the seen benchmark: three prompts have 102, 102, and 102 off-topic responses, respectively.", "Suppose that we have recalled 100, 90, and 30 off-topic responses for the three prompts; the off-topic recall of each prompt is 100/102=98.0%, 90/102=88.2%, and 30/102=29.4%.", "In this case AOR=(100/102 + 90/102 + 30/102)/3=71.9%, and PRR3=2/3=66.7%.", "To ensure that off-topic detection is applicable in real scenes, high on-topic recall (0.999 in this paper) is required.", "We impose the restriction that the on-topic recall of each prompt should be over 0.999 when calculating AOR and PRR3.",
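Both metrics can be computed directly from per-prompt counts. The helper below is illustrative and reproduces the worked example; it assumes the per-prompt thresholds have already been set to satisfy the 0.999 on-topic recall constraint.

```python
def aor_and_prr3(recalled, total):
    """Average Off-topic Recall (AOR) and Prompt Ratio over Recall 0.3
    (PRR3) from per-prompt counts of recalled vs. total off-topic
    responses."""
    recalls = [r / t for r, t in zip(recalled, total)]
    aor = sum(recalls) / len(recalls)
    prr3 = sum(1 for r in recalls if r > 0.3) / len(recalls)
    return aor, prr3

# The worked example above: three prompts, 102 off-topic responses each.
aor, prr3 = aor_and_prr3([100, 90, 30], [102, 102, 102])
print(f"AOR={aor:.1%}, PRR3={prr3:.1%}")  # AOR=71.9%, PRR3=66.7%
```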
"The model is implemented in Keras (https://keras.io/).", "We use pre-trained GloVe as word embeddings, with dimension 300.", "The train and dev batch sizes are 1024 and 512.", "The kernel size, filter number, and block size of the CNN are 7, 128, and 7, tuned on the dev set.", "The fixed lengths of prompts and responses are 40 and 280, according to the length distributions of prompts and responses in the training data.", "Nadam (Dozat, 2016) is used as our optimizer, with a learning rate of 0.002.", "The loss function is binary cross-entropy.", "The epoch size is 20, and we apply early stopping when the dev loss has not improved for three epochs.", "We carried out experiments on both the seen benchmark and the unseen benchmark described in Section 3.1.", "As shown in Table 4, Att-RNN is our baseline model.", "To make the evaluation more convincing, we built a stronger baseline model, G-Att-RNN, based on Att-RNN by adding residual connections to each layer.", "Additionally, we add a gated unit as the relevance layer for our baseline model G-Att-RNN.", "Compared with Att-RNN, our baseline model G-Att-RNN achieves significant improvements on both the seen benchmark (by +3.2 PRR3 points and +4.6 AOR points) and the unseen benchmark (by +22.0 PRR3 points and +17.1 AOR points).", "From Table 4, comparing with the Att-RNN baseline, our approach GCBiA achieves impressive improvements of +36.0 PRR3 points and +24.0 AOR points on the unseen benchmark, as well as +9.0 PRR3 points and +7.0 AOR points on the seen benchmark.", "Meanwhile, our approach significantly outperforms G-Att-RNN by +14.0 PRR3 points and +6.9 AOR points on the unseen benchmark, as well as +5.8 PRR3 points and +2.4 AOR points on the seen benchmark.", "As the gated unit and residual connections were shown useful in Section 4.1, we conducted an ablation analysis on the seen and unseen benchmarks, seen in Table 4, to further study how other components contribute to the performance based on G-Att-RNN.", "Because it focuses on the topic words of the prompt, replacing uni-attention with the bi-attention mechanism (adding response-to-prompt attention) is beneficial, with +2.0 PRR3 points and +1.6 AOR points improvement on the unseen benchmark, as well as +2.6 PRR3 points and +1.5 AOR points on the seen benchmark.", "Besides, substituting the RNN with a CNN using average-pooling is also useful on the unseen benchmark, with +10.0 PRR3 and +4.0 AOR points improvement.", "Though CNN with average-pooling causes a small drop (-1.7% on seen AOR) in performance, CNN with max-pooling achieves improvements on the seen benchmark of +2.6 PRR3 and +2.5 AOR points in return.", "In general, CNN is more suitable than RNN for the contextual encoder layer in our model framework, for both seen and unseen prompts.", "Finally, we also benefit from the residual connections for the gated unit, with +2.8 AOR points improvement on the unseen benchmark.", "In this section, we analyze the essence of our model from two perspectives.", "One is the bi-attention mechanism visualization, and the other is the dimension-reduction analysis of the semantic matching representation.", "More details are illustrated as follows: Bi-Attention Visualization.", "Figure 2 gives the visualization of the bi-attention mechanism.", "The bi-attention mechanism can capture the interrogative what and the topic words spare time of the prompt what do you do in your spare time, seen in subfigure 2a; capture the key-phrases usually watch movies and shopping of the response, seen in subfigure 2b; and capture the key-phrases change name and name, seen in subfigure 2c.", "(Table 4: The comparison of different models based on over 0.999 on-topic recall on seen and unseen benchmarks. Att-RNN (Malinin et al., 2017): seen PRR3 84.6, AOR 72.2; unseen PRR3 32.0, AOR 21.0. Our baseline model G-Att-RNN: seen 87.8, 76.8; unseen 54.0, 38.1. This work, + Bi-Attention: seen 90.4, 78.3; unseen 56.0, 39.7. + RNN→CNN: seen 89.7, 76.6; unseen 66.0, 43.7. + max-pooling: seen 92.3, 79.1; unseen 68.0, 42.2. + Res-conn in gated unit (GCBiA): seen 93.6, 79.2; unseen 68.0, 45.0.)", "Due to the increased focus on the prompt, bi-attention is more beneficial for assessing the relevance of responses and prompts by matching the key phrases or words between them.", "The response in subfigure 2b is classified as on-topic, while the response in subfigure 2c is classified as off-topic.", "Semantic Matching Representation Visualization.", "As the output vector of the relevance layer using the gated unit can better represent the relevance of prompts and responses, the semantic matching representation is obtained from the relevance layer.", "With the help of t-SNE (Maaten and Hinton, 2008), the visualization result is shown in Figure 3.", "Subfigure 3a shows the true response distribution of one prompt, describe a special meal that you have had, what the meal was, who you had this meal with and explain why this meal was special, which has a clear-semantic topic, meal.", "Meanwhile, subfigure 3b shows the response distribution using our semantic matching representation on the same prompt as subfigure 3a.", "We can see that the semantic matching representation of our model maintains good performance on this kind of prompt, which has one clear-semantic topic that limits the discussion to one scope.", "Additionally, some prompts are open for discussion and divergent.", "Given the prompt what do you do in your spare time, we can observe its true response distribution in subfigure 3c.", "Compared with subfigure 3c, our model tends to predict responses as on-topic, seen in subfigure 3d, because high on-topic recall (0.999) is required.", "To investigate the impact of training data size, we conduct experiments with varying sizes of training data.", "In Figure 4, we find that the larger the training data size, the better the performance.", "(Results in Figure 4 are conditioned on 0.999 on-topic recall.)", "To augment training data and strengthen the generalization of the off-topic response detection model for unseen prompts, we propose a new and effective negative sampling method for the off-topic response detection task.", "Compared with the previous method of generating only one negative sample for each positive one, we generate two.", "The first one is chosen randomly, as before, and the second one consists of the words of the first one, shuffled.", "This method contributes to the diversity of negative samples in the training data.", "The size of our training data reaches 1.67M, compared with 1.12M under the previous negative sampling method.", "To keep the training data balanced, we weight the positive and negative samples 1 and 0.5, respectively.", "As shown in Table 5, a significant performance improvement (+9.0 seen AOR and +24.1 unseen AOR) is achieved by this negative sampling method.", "Our model GCBiA, equipped with negative sampling augmentation, achieves 88.2% and 69.1% average off-topic response recall on seen and unseen prompts, conditioned on 0.999 on-topic recall.",
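A minimal sketch of the proposed negative sampling augmentation, following the description above (one random off-topic negative plus a word-shuffled copy of it, with sample weights 1 and 0.5); all names are illustrative.

```python
import random

def augment_negatives(on_topic_response, off_topic_pool,
                      rng=random.Random(0)):
    """Two negatives per positive: one off-topic response drawn at
    random (as before), plus a word-shuffled copy of it. Negatives get
    weight 0.5 so positives and negatives stay balanced overall."""
    neg1 = rng.choice(off_topic_pool)
    words = neg1.split()
    rng.shuffle(words)
    neg2 = " ".join(words)
    return [(on_topic_response, 1, 1.0),   # (text, label, sample weight)
            (neg1, 0, 0.5),
            (neg2, 0, 0.5)]
```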
"In this paper, we conducted a series of studies around the task of off-topic response detection.", "First, a model framework of five major layers was proposed, in which a bi-attention mechanism and convolutions are used to capture the topic words of prompts and the key-phrases of responses, a gated unit is applied as the relevance layer to better obtain the semantic matching representation, and residual connections are added to each major layer.", "Moreover, a visualization analysis of the off-topic model was given to study the essence of the model.", "Finally, a novel negative sampling augmentation method was introduced to augment the off-topic training data.", "We verified the effectiveness of our approach and achieved significant improvements on both seen and unseen test data.", "We are grateful to our colleague Bin Wang for helping with the ASR system.", "We thank our colleague Puyu Chen for proofreading.", "Last but not least, we thank the anonymous reviewers for their invaluable comments." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "method", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "result", "other", "other", "other" ]
[ "In this paper, we consider advancing web-scale knowledge extraction and alignment by integrating OpenIE extractions in the form of (subject, predicate, object) triples with Knowledge Bases (KB).", "Traditional techniques from universal schema and from schema mapping fall at two extremes: either they perform instance-level inference relying on embeddings for (subject, object) pairs, and thus cannot handle pairs absent in any existing triples; or they perform predicate-level mapping and completely ignore background evidence from individual entities, and thus cannot achieve satisfactory quality.", "We propose OpenKI to handle the sparsity of OpenIE extractions by performing instance-level inference: for each entity, we encode the rich information in its neighborhood in both KB and OpenIE extractions, and leverage this information in relation inference by exploring different methods of aggregation and attention.", "In order to handle unseen entities, our model is designed without creating entity-specific parameters.", "Extensive experiments show that this method not only significantly improves the state of the art for conventional OpenIE extractions like ReVerb, but also boosts the performance on OpenIE from semi-structured data, where new entity pairs are abundant and the data is fairly sparse.", "Web-scale knowledge extraction and alignment has been a vision held by different communities for decades.", "The Natural Language Processing (NLP) community has been focusing on knowledge extraction from texts.", "They apply either closed information extraction according to an ontology (Mintz et al., 2009; Zhou et al., 2005), restricting to a subset of relations pre-defined in the ontology, or open information extraction (OpenIE)", "to extract free-text relations (Banko et al., 2007; Fader et al., 2011), leaving the relations unaligned and thus potentially duplicated.", "The Database (DB) community has been focusing on aligning relational data or WebTables (Cafarella et al., 2008) by schema mapping (Rahm and Bernstein, 2001), but the quality is far below adequate for assuring correct data integration.", "We propose advancing progress in this direction by applying knowledge integration from OpenIE extractions.", "OpenIE extracts SPO (subject, predicate, object) triples, where each element is a text phrase, such as E1: (Robin Hood, Full Cast and Crew, Leonardo Decaprio) and E2: (Ang Lee, was named best director for, Brokeback).", "OpenIE has been studied for text extraction extensively (Yates et al., 2007; Fader et al., 2011; Mausam et al., 2012), and also for semi-structured sources (Bronzi et al., 2013), and thus serves as an effective tool for web-scale knowledge extraction.", "The remaining problem is to align text-phrase predicates from OpenIE to knowledge bases (KB).", "Knowledge integration answers the following question: given an OpenIE extraction $(s, p, o)$, how can one populate an existing KB using relations in the pre-defined ontology?", "The problem of knowledge integration is not completely new.", "The DB community has been solving the problem using schema mapping techniques, identifying mappings from a source schema (OpenIE extractions in our context) to a target schema (KB ontology in our context) (Rahm and Bernstein, 2001).", "Existing solutions consider predicate-level (i.e.
, attribute) similarity on names, types, descriptions, instances, and so on, and generate mappings like email mapped to email-address, or first name and last name together mapped to full name. (We also need to align text-phrase entities, which falls in the area of entity linking (Dredze et al., 2010; Ji et al., 2014); it is out of the scope of this paper and we refer readers to relevant references.)", "However, for our example Full Cast and Crew, which is a union of multiple KB relations such as directed by, written by, and actor, it is very hard to determine a mapping at the predicate level.", "On the other hand, the NLP community has proposed Universal Schema (Riedel et al., 2013) to apply instance-level inference from both OpenIE extractions and knowledge in existing knowledge bases: given a set of extractions regarding an entity pair $(s, o)$ and also information about each entity, infer new relations for this pair.", "One drawback of this method is that it cannot handle unseen entities and entity pairs.", "Also, the technique tends to overfit when the data is sparse, due to the large number of parameters for entities and entity pairs.", "Unfortunately, in the majority of the real extractions we examined in our experiments, we can find only 1.4 textual triples on average between the subject and object.", "The latest proposal, Rowless Universal Schema (Verga et al., 2017), removes the entity-specific parameters and makes the inference directly between predicates and relations, thereby allowing us to reason about unseen entity pairs.", "However, it completely ignores the entities themselves, so in a sense it falls back to predicate-level decisions, especially when only one text predicate is observed.", "In this paper we propose a solution that leverages information about the individual entities whenever possible, and falls back to predicate-level decisions only when both involved entities are new.", "Continuing with our example E1, if we know from existing knowledge that Leonardo is a famous actor and has rarely directed or written a movie, we can decide with high confidence that this predicate maps to @film.actor in this triple, even if our knowledge graph knows nothing about the new movie Robin Hood.", "In particular, we make three contributions in this paper.", "1. We design an embedding for each entity by exploring rich signals from its neighboring relations and predicates in KB and OpenIE.", "This embedding provides a soft constraint on which relations the entities are likely to be involved in, while keeping our model free from creating new entity-specific parameters, thus allowing us to handle unseen entities during inference.", "2. Inspired by predicate-level mapping from schema mapping and instance-level inference from universal schema, we design a joint model that leverages the neighborhood embedding of entities and relations with different methods of aggregation and attention.", "3.
Through extensive experiments on various OpenIE extractions and KBs, we show that our method improves over the state of the art by 33.5% on average across different datasets.", "In the rest of the paper, we define the problem formally in Section 2, present our method in Section 3, describe experimental results in Section 4, and discuss related work in Section 5.", "Problem Statement.", "Given", "(i) an existing knowledge base KB of triples $(s, p, o)$ where $s, o \in E_{KB}$ (the set of KB entities) and $p \in R_{KB}$ (the set of KB relations), and", "(ii) a set of instances $(s', p', o')$ from OpenIE extraction ($s'$ and $o'$ may not belong to $E_{KB}$, and $p'$ are text predicates): predict $score(s', p, o')$ where $p \in R_{KB}$.", "For example, given E1 and E2 as OpenIE extractions and a background knowledge base (KB) like IMDB, we want to predict the @film.actor relation given E1 and the @film.directed by relation given E2 as the target KB relations between the participating entities.", "In particular, we want to perform this relation inference at the instance level, which can be different for different entities sharing the same predicate.", "Table 1 introduces important notation used in this paper.", "Universal Schema (F-Model) (Riedel et al., 2013) is modeled as a matrix factorization task where entity pairs, e.g., (RobinHood, Leonardo Decaprio), form the rows, and relations from OpenIE and KB form the columns (e.g., @film.actor, Full Cast and Crew).", "During training, we observe some positive entries in the matrix, and the objective is to predict the missing cells at test time.", "(In this paper, a 'relation' always refers to a KB relation, whereas a 'predicate' refers to an OpenIE textual relation.)", "The triple score is obtained by the dot product $S_F(s, p, o) = v_{s,o} \cdot v_p$, where $v_{s,o} \in \mathbb{R}^d$ is the embedding vector of the entity pair (subject, object) and $v_p$ is the embedding vector of a KB relation or OpenIE predicate.", "The parameters $v_p$ and $v_{s,o}$ are randomly initialized and learned via gradient descent.", "One of the drawbacks of universal schema is the explicit modeling of entity pairs using the free parameters $v_{s,o}$.", "Therefore, it cannot model unseen entities.", "This also makes the model overfit on our data, as the number of OpenIE text predicates observed with each entity pair is rather small (1.4 on average in our datasets).", "Universal Schema (E-Model) (Riedel et al., 2013) considers entity-level information, decomposing the scoring function of the F-model as follows: $S_E(s, p, o) = S_{subj}(s, p) + S_{obj}(p, o) = v_s \cdot v^{subj}_p + v_o \cdot v^{obj}_p$ (1), where each relation is represented by two vectors corresponding to its argument type for a subject or an object.", "The final score is an additive summation over the subject and object scores $S_{subj}$ and $S_{obj}$, which implicitly contain the argument type information of the predicate $p$.", "Thus, a joint F- and E-model, $S_{F+E} = S_F + S_E$, can perform relation inference at the instance level considering the entity information.", "Although the E-model captures rich information about entities, it still cannot deal with unseen entities, due to the entity-specific free parameters $v_s$ and $v_o$.",
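For concreteness, the F-model and E-model scoring functions can be sketched in a few lines of numpy; the toy embedding lookups below are illustrative, not the paper's implementation.

```python
import numpy as np

d = 12  # embedding size (the paper uses 12 for the E and ENE models)
rng = np.random.default_rng(0)

# Toy lookups; in training these are free parameters learned by SGD.
v_pair = {("RobinHood", "Leonardo Decaprio"): rng.normal(size=d)}
v_rel = {"@film.actor": rng.normal(size=d)}
v_ent = {e: rng.normal(size=d) for e in ("RobinHood", "Leonardo Decaprio")}
v_rel_subj = {"@film.actor": rng.normal(size=d)}  # argument-typed copies
v_rel_obj = {"@film.actor": rng.normal(size=d)}

def score_f(s, p, o):
    """F-model: dot product of entity-pair and relation embeddings."""
    return v_pair[(s, o)] @ v_rel[p]

def score_e(s, p, o):
    """E-model (Eq. 1): subject score plus object score."""
    return v_ent[s] @ v_rel_subj[p] + v_ent[o] @ v_rel_obj[p]

# Joint F+E score for a candidate triple:
print(score_f("RobinHood", "@film.actor", "Leonardo Decaprio")
      + score_e("RobinHood", "@film.actor", "Leonardo Decaprio"))
```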
"Rowless Universal Schema (Rowless) (Verga et al., 2017) handles new entities as follows.", "It considers all relations in KB and OpenIE that the subject $s$ and object $o$ co-participate in (denoted by $R(s,o)$), and represents the entity pair with an aggregation over the embeddings of these relations: $v^{Rowless}_{s,o} = \mathrm{Agg}_{p' \in R(s,o)}(v_{p'})$, $S_{Rowless}(s, p, o) = v^{Rowless}_{s,o} \cdot v_p$ (2).", "$\mathrm{Agg}(\cdot)$ is an aggregation function like average pooling, max pooling, hard attention (Rowless MaxR), or soft attention given query relations (Rowless Attention) (Verga et al., 2017).", "The Rowless model ignores the individual information of entities, and therefore falls back to making predicate-level decisions in a sense, especially when there are only a few OpenIE predicates for an entity pair.", "We propose OpenKI for instance-level relation inference such that it", "(i) captures rich information about each entity from its neighboring KB relations and text predicates to serve as background knowledge, and generalizes to unseen entities by not learning any entity-specific parameters (only KB relations and OpenIE predicates are parameterized), and", "(ii) considers both shared predicates and entity neighborhood information to encode entity pair information.", "Figure 1 shows the architecture of our model.", "The core of our model is the Entity Neighborhood Encoder (ENE).", "Recall that Rowless Universal Schema represents each entity pair with the common relations shared by this pair.", "However, it misses critical information when entities do not only occur in the current entity pair but also interact with other entities.", "This entity neighborhood can be regarded as soft and fine-grained entity type information that could help infer relations when the observed text predicates are ambiguous (polysemous), noisy (low quality of the data source), or low-frequency (sparsity of language representation).", "(Note that the notion of entity neighborhood is different from the Neighborhood model in the Universal Schema work (Riedel et al., 2013): our entity neighborhood captures information about each entity, whereas their Neighborhood model leverages predictions from similar predicates.)", "Our aim is to incorporate this entity neighborhood information into our model for instance-level relation inference while keeping it free of entity-specific parameters.", "To do this, for each entity, we leverage all its neighboring KB relations and OpenIE predicates for relation inference.", "We aggregate their embeddings to obtain two scores, for the subject and the object separately, in our ENE model.", "The subject score $S^{ENE}_{subj}$ for an entity considers the aggregated embedding of its participating KB relations and OpenIE predicates where it serves as a subject (similarly for the object score $S^{ENE}_{obj}$): $v^{agg}_{subj} = \mathrm{Agg}_{p' \in R(s,\cdot)}(v^{subj}_{p'})$, $S^{ENE}_{subj}(s, p) = v^{agg}_{subj} \cdot v^{subj}_p$; $v^{agg}_{obj} = \mathrm{Agg}_{p' \in R(\cdot,o)}(v^{obj}_{p'})$, $S^{ENE}_{obj}(p, o) = v^{agg}_{obj} \cdot v^{obj}_p$ (3).",
"$R(s,\cdot)$ denotes all neighboring relations and predicates of the subject $s$ (similarly for the object).", "$v^{subj}_p$ and $v^{obj}_p$ are the only free parameters in ENE.", "These are randomly initialized and then learned via gradient descent.", "We choose average pooling as our aggregation function to capture the proportion of different relation and predicate types within the target entity's neighborhood.", "Given multiple predicates between a subject and an object, only some of them are important for predicting the target KB relation between them.", "For example, in Figure 1, the predicate Executive Director is more important than Full Cast & Crew for predicting the KB relation @film.directed by between Life of Pi and Ang Lee.", "We first present a query-based attention mechanism from earlier work, then present our own solution with a neighborhood attention, and finally combine both in a dual attention mechanism.", "The first attention mechanism uses a query relation $q$ (i.e., the target relation we may want to predict) to find the importance (weight) $w_{p|q}$ of different predicates $p$ with respect to $q$, with $v_p$ and $v_q$ as the corresponding relation embeddings: $w_{p|q} = \frac{\exp(v_q \cdot v_p)}{\sum_{p'} \exp(v_q \cdot v_{p'})}$. Thus, given each query relation $q$, the model tries to find evidence from the predicates that are most relevant to the query.", "Similar techniques have been used in (Verga et al., 2017).", "We can also use hard attention (referred to as MaxR) instead of soft attention, where the maximum weight is replaced with one and the others with zero.", "One potential shortcoming of this attention mechanism is its sensitivity to noise, whereby it may magnify sparsely observed predicates between entities.", "In the neighborhood attention mechanism, we use the subject's and object's neighborhood information as a filter to remove unrelated predicates.", "Intuitively, the entity representation generated by the ENE from its neighboring relations can be regarded as soft and fine-grained entity type information.", "Consider the embedding vectors $v^{agg}_{subj}$ and $v^{agg}_{obj}$ in Equation 3, aggregated from the entity's neighboring predicates and relations using an aggregation function.", "We compute the similarity $w_{p|Nb}$ between an entity's neighborhood information, given by the above embeddings, and a text predicate $p$, to enforce a soft and fine-grained argument type constraint on the text predicate: $w_{p|Nb} = \frac{\exp(v^{agg}_{subj} \cdot v^{subj}_p + v^{agg}_{obj} \cdot v^{obj}_p)}{\sum_{p'} \exp(v^{agg}_{subj} \cdot v^{subj}_{p'} + v^{agg}_{obj} \cdot v^{obj}_{p'})}$.", "Finally, we combine both the query-dependent and the neighborhood-based attention into a Dual Attention mechanism: $w_{p|q+Nb} = w_{p|q} \cdot w_{p|Nb}$, $w_p = \frac{w_{p|q+Nb}}{\sum_{p'} w_{p'|q+Nb}}$.", "And the score function is given by: $S_{Att}(s, q, o) = \mathrm{Agg}_{p \in R(s,o)}(v_p) \cdot v_q = \left(\sum_p w_p v_p\right) \cdot v_q$ (4).", "Joint Model (OpenKI): All of the above models capture different types of features.", "Given a target triple $(s, p, o)$, we combine the scores from Eq. 3 and Eq. 4 in our final OpenKI model.", "It aggregates the neighborhood information of $s$ and $o$ and also uses an attention mechanism to focus on the important predicates between $s$ and $o$.", "Refer to Figure 1 for an illustration.", "The final score of $(s, p, o)$ is given by: $score(s, p, o) = f_1(S_{Att}(s, p, o)) \cdot \mathrm{ReLU}(\lambda_1) + f_2(S^{ENE}_{subj}(s, p)) \cdot \mathrm{ReLU}(\lambda_2) + f_3(S^{ENE}_{obj}(p, o)) \cdot \mathrm{ReLU}(\lambda_3)$, where $f_i(X) = \sigma(a_i X + b_i)$ normalizes the different scores to a comparable distribution.", "$\mathrm{ReLU}(\lambda_i)$ enforces non-negative weights, which allow the scores to only contribute to the final model without canceling each other.", "$a_i$, $b_i$, and $\lambda_i$ are free parameters that are learned during the back-propagation gradient descent process.",
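Putting Equations (3) and (4) together, a minimal numpy sketch of the ENE scores, the dual attention, and the joint OpenKI score might look as follows; the sigmoid inside f_i is an assumption suggested by the garbled original, and all names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ene_scores(nbr_subj, nbr_obj, p, v_subj, v_obj):
    """Eq. (3): average-pool the argument-typed embeddings of each
    entity's neighboring relations/predicates, then dot with target p."""
    v_agg_s = np.mean([v_subj[r] for r in nbr_subj], axis=0)
    v_agg_o = np.mean([v_obj[r] for r in nbr_obj], axis=0)
    return v_agg_s @ v_subj[p], v_agg_o @ v_obj[p], v_agg_s, v_agg_o

def s_att(shared_preds, q, v, v_subj, v_obj, v_agg_s, v_agg_o):
    """Eq. (4): dual attention over the predicates shared by (s, o)."""
    w_q = softmax(np.array([v[q] @ v[p] for p in shared_preds]))
    w_nb = softmax(np.array([v_agg_s @ v_subj[p] + v_agg_o @ v_obj[p]
                             for p in shared_preds]))
    w = w_q * w_nb
    w /= w.sum()
    agg = sum(wi * v[p] for wi, p in zip(w, shared_preds))
    return agg @ v[q]

def openki_score(s_att_val, s_subj, s_obj, a, b, lam):
    """Final score: calibrated components with non-negative weights
    (f_i assumed to be a sigmoid, matching the normalization role)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    parts = (s_att_val, s_subj, s_obj)
    return sum(sigmoid(a[i] * parts[i] + b[i]) * max(lam[i], 0.0)
               for i in range(3))
```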
"Our task is posed as a ranking problem.", "Given an entity pair, we want the observed KB relations between them to have higher scores than the unobserved ones.", "Thus, a pair-wise ranking based loss function is used to train our model: $L(s, p_{pos}, p_{neg}, o) = \max(0, \gamma - score(s, p_{pos}, o) + score(s, p_{neg}, o))$, where $p_{pos}$ refers to a positive relation, $p_{neg}$ refers to a uniformly sampled negative relation, and $\gamma$ is the margin hyper-parameter.", "We optimize the loss function using Adam (Kingma and Ba, 2014).", "The training process uses early stopping according to the validation set.", "Subject and object argument types of relations help in filtering out a large number of candidate relations that do not meet the argument type, and therefore serve as useful constraints for relation inference.", "Similar to (Yu et al., 2017), we identify the subject and object argument type of each relation by calculating its probability of co-occurrence with subject / object entity types.", "During inference, we select candidate relations by performing a post-processing filtering step using the subject's and object's type information when available.", "We experiment with OpenIE extractions from two systems.", "(i) Ceres (Lockard et al., 2019) works on semi-structured web pages (e.g., IMDB) and exploits the DOM tree and XPath (Olteanu et al., 2002) structure of the page to extract triples like (Incredibles 2, Cast and Crew, Brad Bird) and (Incredibles 2, Writers, Brad Bird).", "We apply Ceres on the SWDE (Hao et al., 2011) movie corpus to generate triples.", "We align these triples to two different knowledge bases:", "(i) IMDB and", "(ii) a subset of Freebase with relations under the /film domain.", "The average length of text predicates is 1.8 tokens for Ceres extractions.", "(ii) ReVerb (Fader et al., 2011) works at the sentence level and employs various syntactic constraints like part-of-speech-based regular expressions and lexical constraints to prune incoherent and uninformative extractions.", "We use 3 million ReVerb extractions from ClueWeb where the subject is already linked to Freebase (Lin et al., 2012).", "We align these extractions to", "(i) the entire Freebase and", "(ii) a subset of Freebase with relations under the /film domain.", "The average length of text predicates is 3.4 tokens for ReVerb extractions.", "In order to show the generalizability of our approach to traditional (non-OpenIE) corpora, we also perform experiments on the New York Times (NYT) and Freebase dataset (Riedel et al., 2010), which is a well-known benchmark for distant-supervision relation extraction.", "We consider the sentences there (average length of 18.8 tokens) to be a proxy for text predicates.",
"These results are presented in Section 4.5.", "Data preparation: We collect all entity mentions $M$ from the OpenIE text extractions, and all candidate entities $E_{KB}$ from the KB whose name exists in $M$.", "We retain the sub-graph $G_{KB}$ of KB triples where the subject and object belong to $E_{KB}$.", "Similar to (Riedel et al., 2013), we use string match to collect candidate entities for each entity mention.", "For each pair of entity mentions, we link them if two candidate entities in $E_{KB}$ share a relation in the KB.", "Otherwise, we link each mention to the most common candidate.", "Entity mentions that cannot be linked to the KB are treated as new entities, and mentions that share the same text are linked together.", "For validation and test, we randomly hold out a part of the entity pairs from $G_{KB}$ where text predicates are observed.", "Our training data consists of the rest of $G_{KB}$ and all the OpenIE text extractions.", "In addition, we exclude direct KB triples from training where the corresponding entity pairs appear in the test data (following the data setting of (Toutanova et al., 2015)).", "Table 2 shows the data statistics.", "(Our datasets, with train, test, and validation splits, are downloadable at https://github.com/zhangdongxu/relation-inference-naacl19 for benchmarking.)", "We adopt a training strategy similar to Universal Schema for the Ceres dataset, which not only learns a direct mapping from text predicates to KB relations, but also clusters OpenIE predicates and KB relations by their co-occurrence.", "However, for the ReVerb data, which contains a large number of text predicates compared to Ceres, we only learn the direct mapping from text predicates to KB relations, which empirically works well for this dataset.", "To verify the usefulness of the entity's neighborhood information, we devise simple Bayesian methods as baselines.", "The simplest method counts the co-occurrence of text predicates and KB relations (by applying Bayes' rule) to find the conditional probability $P(p|p')$ of a target KB relation $p$ given a set of observed text predicates $p'$.", "This performs relation inference at the predicate level.", "Then, we can include the entity's relational neighbors in the Bayesian network by adding the neighboring predicates and relations of the subject (given by $p_{N_s}$) and object (given by $p_{N_o}$) to find $P(p|s, p', o)$, which performs relation inference at the instance level.", "The graph structures of these three Bayesian methods are shown in Figure 2; for the detailed formula derivation, please refer to Appendix A.1.",
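The predicate-level Bayesian baseline reduces to co-occurrence counting. Below is a minimal sketch of estimating $P(p|p')$ from a single observed predicate; Appendix A.1 of the paper holds the authoritative derivation, and the simplified form here is an assumption for illustration.

```python
from collections import Counter

def cond_prob_table(cooccurrences):
    """Estimate P(p | p') by counting how often KB relation p and text
    predicate p' are observed together for the same entity pair."""
    joint, marginal = Counter(), Counter()
    for kb_rel, text_pred in cooccurrences:
        joint[(kb_rel, text_pred)] += 1
        marginal[text_pred] += 1
    return {(p, tp): c / marginal[tp] for (p, tp), c in joint.items()}

# usage: table[("@film.directed_by", "Executive Director")] -> probability
```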
"Angeli et al. (2015) employ point-wise mutual information (PMI) between target relations and observed predicates to map OpenIE predicates to KB relations.", "This is similar to our Bayes conditional probability $P(p|p')$.", "This baseline operates at the predicate level.", "To indicate the usefulness of entity neighborhood information, we also compare with $P(p|s, p', o)$, as mentioned in Section 4.2.", "For the advanced embedding-based baselines, we compare with the E-model and the Rowless model (with MaxR and query attention) introduced in Section 2.1.", "Hyper-parameters: In our experiments, we use 25-dimensional embedding vectors for the Rowless model, and 12-dimensional embedding vectors for the E- and ENE models.", "We use a batch size of 128, and 16 negative samples for each positive sample in a batch.", "Due to memory constraints, we sample at most 8 predicates between entities and 16 neighbors for each entity during training.", "We use $\gamma = 1.0$ and set the learning rate to 5e-3 for the ReVerb and 1e-3 for the Ceres datasets.", "Evaluation metric: Relation inference here is a multi-label task, where each entity pair can share multiple KB relations.", "Therefore, we consider each KB relation as a query and compute the Mean Average Precision (MAP), where entity pairs sharing the query relation should be ranked higher than those without the relation.", "In Section 4.4, we report MAP statistics for the 50 most common KB relations for the ReVerb and Freebase dataset, and for the 10 most common relations in the other domain-specific datasets.", "The left-out relations involve too few triples to report significant statistics.", "We also report the area under the precision-recall curve (AUC-PR) for evaluation in Section 4.5.", "Table 3 shows the overall results.", "OpenKI achieves significant performance improvements over all the baselines.", "Overall, we observe a 33.5% MAP improvement on average across the different datasets.", "From the first two rows of Table 3, we observe the performance to improve as we incorporate neighborhood information into the Bayesian method.", "This shows the strong influence of the entity's neighboring relations and predicates on relation inference.", "The results show that our Entity Neighborhood Encoder (ENE) outperforms the E-model significantly.", "This is because the majority of the entity pairs in our test data have at least one unseen entity (refer to Table 4), which is very common in the OpenIE setting.", "The E-model cannot handle unseen entities because of its modeling of entity-specific parameters.", "This demonstrates the benefit of encoding entities with their neighborhood information (KB relations and text predicates) rather than learning entity-specific parameters.", "Besides, ENE outperforms the Rowless Universal Schema model, which does not consider any information surrounding the entities.", "This becomes a disadvantage in a sparse data setting where only a few predicates are observed between an entity pair.", "Finally, the results also show a consistent improvement of the OpenKI model over the only-Rowless and only-ENE models.", "This indicates that the models are complementary to each other.", "We further observe significant improvements by applying the different attention mechanisms over the OpenKI MaxR model, thus establishing the effectiveness of our attention mechanism.", "Unseen entities: Table 4 shows the statistics of unseen entity pairs in our test data.", "The most common scenario is that only one of the entities in a pair is observed during training, where our model benefits from the extra neighborhood information of the observed entity, in contrast to the Rowless model.",
"Table 5 shows the performance comparison on the test data where at least one of the entities is known at test time.", "We choose ReVerb+Freebase(/film) for this analysis because it contains the largest proportion of test triples where both entities are unknown during training.", "From the results, we observe that OpenKI outperforms the Rowless model by 48.6% when at least one of the entities in the triple is observed during training.", "Overall, we obtain a 31.3% MAP improvement considering all of the test data.", "This validates the efficacy of encoding entity neighborhood information when at least one of the entities is known at test time.", "In the scenario where both entities are unknown at test time, the model falls back to the Rowless setting.", "Explicit Argument Type Constraint: As discussed in Section 3.5, incorporating explicit type constraints can improve the model performance.", "However, entity type information and argument type constraints are not always available, especially for new entities.", "Table 6 shows the performance improvement of the different models with entity type constraints.", "We observe the performance improvement of the ENE model to be much smaller than that of the Rowless model with explicit type constraints.", "This shows that the ENE model already captures soft entity type information while modeling the neighborhood information of an entity, in contrast to the other methods, which require explicit type constraints.", "Prior works (Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016; Qin et al., 2018) on distantly supervised relation extraction performed evaluations on the New York Times (NYT) + Freebase benchmark data developed by Riedel et al. (2010). (This data can be downloaded from http://iesl.cs.umass.edu/riedel/ecml/.)", "The dataset contains sentences whose entity mentions are annotated with Freebase entities as well as relations.", "The training data consists of sentences from articles in 2005-2006, whereas the test data consists of sentences from articles in 2007.", "There are 1950 relational facts in our test data. (Facts of 'NA' (no relation) in the test data are not included in the evaluation process.)", "In contrast to our prior experiments in the semi-structured setting with text predicates, in this experiment we consider the sentences to be a proxy for the text predicates.", "Table 7 compares the performance of our model with two state-of-the-art works (Zeng et al., 2015; Lin et al., 2016) on this dataset, using AUC-PR as the evaluation metric.", "Overall, OpenKI obtains a 35% MAP improvement over the best-performing PCNN baseline.", "In contrast to the baseline models, our approach leverages the neighborhood information of each entity from the text predicates in the 2007 corpus and the predicates / relations from the 2005-2006 corpus.", "This background knowledge contributes to the significant performance improvement.", "Note that our model uses only the graph information from the entity neighborhood and does not use any text encoder such as Piecewise Convolutional Neural Networks (PCNN) (Zeng et al., 2015), where convolutional neural networks are applied with piecewise max pooling to encode textual sentences.", "This further demonstrates the importance of entity neighborhood information for relation inference.", "It is possible to further improve the performance of our model by incorporating text encoders as an additional signal.", "Some prior works (Verga et al., 2016; Toutanova et al., 2015) also leverage text encoders for relation inference.",
"Relation Extraction: Mintz et al. (2009) utilize the entity-pair overlap between knowledge bases and a text corpus to generate signals for automatic supervision.", "To avoid false positives during training, many works follow the at-least-one assumption, where at least one of the text patterns between the entity pair indicates an aligned predicate in the KB (Hoffmann et al., 2011; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016).", "These works do not leverage graph information.", "In addition, Universal Schema (Riedel et al., 2013; Verga et al., 2017) tackled this task by low-rank matrix factorization.", "Toutanova et al. (2015) exploit graph information for knowledge base completion.", "However, their work cannot deal with unseen entities, since entity parameters are explicitly learned during training.", "Schema Mapping: Traditional schema mapping methods (Rahm and Bernstein, 2001) involve three kinds of features, namely language (name or description), type constraints, and instance-level co-occurrence information.", "These methods usually involve hand-crafted features.", "In contrast, our model learns all the features automatically from OpenIE and the KB, with no feature engineering.", "This makes it easy to scale to different domains with little model tuning.", "Also, the entity types used in traditional schema mapping are always pre-defined and coarse-grained, so they cannot provide precise relation constraints for each entity.", "Instead, our ENE model automatically learns soft and fine-grained constraints on which relations entities are likely to participate in.", "It is also compatible with pre-defined type systems.", "Relation Grounding from OpenIE to KB: Instead of modeling an existing schema, open information extraction (OpenIE) (Banko et al., 2007; Yates et al., 2007; Fader et al., 2011; Mausam et al., 2012) regards surface text mentions between entity pairs as separate relations, and does not require entity resolution or linking to a KB.", "Since these methods do not model the KB, it is difficult to infer KB relations based only on textual observations.", "Soderland et al. (2013) designed manual rules to map relational triples to slot types.", "Angeli et al. (2015) used PMI between OpenIE predicates and KB relations, using distant supervision from shared entity pairs, for relation grounding.", "Yu et al. (2017) used word embeddings to assign KB relation labels to OpenIE text predicates without entity alignment.", "These works do not exploit any graph information.", "Entity Modeling for Relation Grounding: Prior work has leveraged various kinds of entity information to help relation extraction.", "Zhou et al. (2005) employed type information and observed an 8% improvement in F-1 score.", "Ji et al. (2017) encoded entity descriptions to calculate attention weights among the different text predicates within an entity pair.",
"However, entity type and description information is not commonly available.", "Instead, neighborhood information is easier to obtain and can also be regarded as an entity's background knowledge.", "Universal Schema (Riedel et al., 2013) proposed an E-model to capture entity type information.", "However, it can easily overfit in the OpenIE setting, with its large number of entities and a sparse knowledge graph.", "In this work, we jointly leverage relation mentions from OpenIE extractions and knowledge bases (KB) for relation inference and for aligning OpenIE extractions to the KB.", "Our model leverages the rich information (KB relations and OpenIE predicates) from the neighborhood of entities to improve the performance of relation inference.", "This also allows us to deal with new entities without using any entity-specific parameters.", "We further explore several attention mechanisms to better capture entity pair information.", "Our experiments over several datasets show a 33.5% MAP improvement on average over state-of-the-art baselines.", "Some future extensions include exploring more advanced graph embedding techniques without modeling entity-specific parameters, and using text encoders as additional signals." ]
[ "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "result", "objective", "objective", "result", "abstain" ]
[ "Professional summaries are written with document-level information, such as the theme of the document, in mind.", "This is in contrast with most seq2seq decoders, which simultaneously learn to focus on salient content while deciding what to generate at each decoding step.", "With the motivation to narrow this gap, we introduce Focus Attention Mechanism, a simple yet effective method to encourage decoders to proactively generate tokens that are similar or topical to the input document.", "Further, we propose a Focus Sampling method to enable generation of diverse summaries, an area currently understudied in summarization.", "When evaluated on the BBC extreme summarization task, two state-of-the-art models augmented with Focus Attention generate summaries that are closer to the target and more faithful to their input documents, outperforming their vanilla counterparts on ROUGE and multiple faithfulness measures.", "We also empirically demonstrate that Focus Sampling is more effective in generating diverse and faithful summaries than top-k or nucleus sampling-based decoding methods.", "Document summarization, producing a shorter version of a document while preserving salient information (Mani, 2001; Nenkova and McKeown, 2011), is challenging even for humans.", "Today, systems can generate summaries with a high level of fluency and coherence.", "This is due to recent advances such as sequence-to-sequence (seq2seq) architectures with attention and copy mechanisms (Hochreiter and Schmidhuber, 1997; Bahdanau et al., 2015; Gu et al., 2016), fully attention-based Transformer architectures (Vaswani et al., 2017), and large pretrained language models (Devlin et al., 2019; Radford et al., 2018; Yang et al., 2019; Liu et al., 2019; Dong et al., 2019a; Song et al., 2019; Lewis et al., 2019; Rothe et al., 2020; Raffel et al., 2019; Zhang et al., 2019).", "(Figure 1 example summaries:)", "A. GOLD: Australia has expelled an Israeli diplomat saying Israel was behind the forging of Australian passports linked to the murder of a Hamas operative in Dubai.", "PEGASUS: Australia has expelled an Israeli diplomat after concluding that forged Australian passports used in the killing of a Hamas militant in Dubai were issued by Israel.", "Our PEGFAME model: The Australian government has expelled an Israeli diplomat over the use of forged Australian passports in the killing of a Hamas militant in Dubai.", "B. PEGASUS with Top-k Sampling: Israel has summoned the Australian ambassador to complain after the Australian government said forged passports used in the killing of a Hamas operative in Dubai belonged to Netanyahu's foreign ministry.", "The Australian government has ordered Israel to withdraw an officer over the use of forged Australian passports used by the 2013 murder of a Lebanese opposition figure in Dubai.", "PEGASUS with Nucleus Sampling: Israel hasracuse withdrawn an envoy after the Australian government said it concluded that Israeli agents used forged passports used to kill a Dubai Bendigo businessman.", "The Australian government has recalled an Israeli diplomat over accusation that fake Australian passports used 436 kilometres (300 miles) from Canberra in the death of a Hamas militant were stolen by Israeli agents.", "C. Our PEGFAME model with novel Focus Sampling: Australia has expelled an Israeli diplomatic staff after accusing the country's security agency, the Israeli military's intelligence agency, of being responsible for the use of Australian visas used in the killing of a Palestinian. The Australian government has expelled an Israeli diplomatic staff after it said the country was responsible for the use of Australian visas used in the killing of a Palestinian in the Middle East.",
"However, in terms of summary quality, many challenges remain.", "For example, generating summaries that are faithful to the input is an unsolved problem (Kryscinski et al., 2020; Maynez et al., 2020; Gabriel et al., 2020).", "Furthermore, there can be multiple equally good summaries per source document.", "Neural generation models fail to account for this and tend to generate outputs with low diversity, due to standard likelihood training, approximate decoding objectives, and the lack of high-quality multi-reference datasets (Fan et al., 2018; Kulikov et al., 2019; Freitag et al., 2020; Choi et al., 2020).", "Not much attention has been given to the generation of diverse, yet faithful summaries, two goals that are often challenging to achieve simultaneously (Hashimoto et al., 2019); a model can produce diverse outputs through sampling (Fan et al., 2018; Holtzman et al., 2020), but at the cost of quality.", "In this paper we introduce a Focus Attention MEchanism (or FAME) for transformer-based seq2seq architectures.", "FAME is inspired by how humans write summaries.", "Specifically, FAME aims to perform source-side planning to focus the summary on supported and topical content.", "FAME achieves this through a novel technique which augments standard contextual representations with a dynamic source-conditioned vocabulary biasing layer.", "We present the following experimental findings: FAME promotes summaries faithful to the source. When evaluated on the BBC extreme summarization task (XSUM; Narayan et al., 2018), experiments with two state-of-the-art summarizers, ROBERTA S2S (Rothe et al., 2020) and PEGASUS (Zhang et al., 2019), show that both models generate summaries that are more faithful to their input documents when augmented with FAME, in comparison with their vanilla counterparts.", "Faithfulness is measured through a variety of previously proposed metrics.", "In addition, we leverage the manually annotated document-summary pairs for faithfulness from Maynez et al. (2020) and train a scorer which serves as an efficient proxy for expensive human evaluations.", "We call this metric BERTFaithful.", "FAME enables diverse summaries. FAME, by design, supports Focus Sampling, a technique that is more effective in sampling topically relevant tokens to generate diverse, yet topically consistent and faithful outputs, than other sampling methods (Fan et al., 2018; Holtzman et al., 2020).", "Figure 1 illustrates how focus sampling generates better summaries than other sampling methods.", "We demonstrate the effectiveness of our new Focus Sampling technique using a variety of existing diversity and faithfulness measures. (In this paper we focus on assessing FAME on XSUM; other summarization and text editing results can be found in Appendix B and C.)",
"Empirically, we find that optimizing for high diversity often comes at the cost of faithfulness.", "Thus FAME provides a mechanism for trading off high faithfulness against better diversity in summarization.", "Task-Specific Architectural Priors: Several works enhance seq2seq architectures with task-specific priors.", "Pointer-generator style models (See et al., 2017; Xu et al., 2020) can accurately generate mostly extractive summaries by copying words from the source text via pointing.", "Text editing models (Malmi et al., 2019; Dong et al., 2019b; Mallinson et al., 2020) cast text generation as a sequence tagging problem with carefully selected edit operations required for the task.", "Others focus on improving content selection to better constrain the model to likely input phrases (Gehrmann et al., 2018) or on improving the representation of relevant input tokens (Zhou et al., 2017).", "Instead of directly modeling such priors, FAME learns the theme of the document through dynamic vocabulary biasing.", "Thus, FAME can be seen as a generalization of pointer-generator or text-editing models via soft vocabulary learning.", "In fact, our FAME models achieve state-of-the-art results on text-editing tasks (Appendix C).", "Topic-Aware Generation Models: The idea of capturing document-level semantic information has been widely explored in the summarization community.", "Barzilay and Elhadad (1997) use WordNet (Fellbaum, 1998) to model a text's content relative to a topic based on lexical chains.", "Lin and Hovy (2000) propose to learn topic signatures for summarizing documents.", "Recently, document-level topic information has been used for improving neural language models (Mikolov and Zweig, 2012; Ghosh et al., 2016; Dieng et al., 2017; Karmaker Santu et al., 2019), neural response generators (Xing et al., 2017; Dziri et al., 2019), and, not surprisingly, neural summarizers (Narayan et al., 2018; Ailem et al., 2019; Wang et al., 2020c).", "Both Narayan et al. (2018) and Ailem et al. (2019) use a pretrained Latent Dirichlet Allocation (LDA; Blei et al., 2003) model, whereas Wang et al. (2020c) use Poisson factor analysis (Zhou et al., 2012), to synthesize topic vectors for the input.", "Instead, we dynamically learn a target-induced topic distribution for the input under the assumption that the human-written summary is a good proxy for the input document.", "Faithful Generation Models: Cao et al. (2017) force faithful generation by conditioning on both the source text and fact descriptions extracted from the source text.", "Song et al. (2020) propose to jointly generate a sentence and its syntactic dependency parse to induce grammaticality and faithfulness.", "Tian et al. (2019) learn a confidence score to ensure that the model attends to the source whenever necessary.", "Wang et al. (2020d) introduce new input-output matching and embedding similarity losses to alleviate hallucination issues.", "Yet, the task of generating text that is consistent with the input remains an open problem (Gabriel et al., 2020).", "Diverse Generation Models: There has been a surge of interest in making language models generate more diverse and human-like outputs.", "Vijayakumar et al. (2018) and Kulikov et al. (2019) diversify beam search, either using a task-specific scoring function or by constraining beam hypotheses to be sufficiently different.",
"Others avoid text degeneration by truncating the unreliable tail of the probability distribution at each decoding step, either by sampling from the top-k tokens (Top-k Sampling; Fan et al., 2018) or by sampling from a dynamic nucleus of tokens holding the bulk of the probability mass (Nucleus Sampling; Holtzman et al., 2020).", "Others modify the training objective to make the distribution sparse (Martins et al., 2020) or to assign lower probability to unlikely generations (Welleck et al., 2019a).", "For conditional text generation, most work focuses on generating diverse questions (Narayan et al., 2016; Dong et al., 2017; Sultan et al., 2020; Wang et al., 2020b) or paraphrases (Li et al., 2016b; Dai et al., 2017; Xu et al., 2018; Cao and Wan, 2020).", "Following Gehrmann et al. (2018), Cho et al. (2019) use a mixture of experts to sample different binary masks on the source sequence for diverse content selection for summarization.", "Our focus sampling is similar to top-k and nucleus sampling methods, in that it truncates the tail of the probability distribution.", "However, instead of truncating it at each decoding step, it biases the decoder proactively to generate output from a set of tokens which are topically relevant to the input.", "Given an input document X_{1:n}, we aim to generate its summary Y_{1:m}, where n and m are the input and output sequence lengths.", "We address this problem using seq2seq architectures with Transformer encoder and decoder, augmented with FAME, as depicted in Figure 2.", "[Figure 2: A Transformer-based encoder-decoder architecture with FAME.]", "FAME learns a distribution t_{x_i} for each input token x_i over the vocabulary, measuring the similarity of x_i (in context) to the tokens in the vocabulary.", "The vocabulary distributions t_{x_i} for all x_i are combined to form a dynamic vocabulary bias that is added to the decoder logits.", "This mechanism enhances the conditioning on the input source and encourages the decoder to generate tokens that are topically similar to the input.", "Transformer-based seq2seq Model: The encoder uses BERT Transformer layers with multi-headed self-attention to encode X into a vector sequence X = x_1, ..., x_n, with x_i ∈ R^h, where h is the size of the hidden representation.", "The decoder uses an identical architecture, except that at decoding step t, layer l adds a conditional representation y^l_t ∈ R^h for the token y_t by attending to the output representations Y^{l-1}_{1:t-1} = y^{l-1}_1, ..., y^{l-1}_{t-1} generated so far through self-attention, and by attending to the input contextual representation X through encoder-decoder attention.", "The probability of predicting the next token y_t from a vocabulary V is p(y_t | Y_{1:t-1}, X; θ) = softmax(E y^L_t) (Eq. 1), where y^L_t is the representation from the final decoder layer L, E ∈ R^{|V| × h} is the embedding matrix, and θ are the model parameters.", "Parameters are trained by minimizing the cross-entropy at each decoding step: L_MLE(θ) = -(1/m) Σ_{t=1}^{m} log p(y_t | Y_{1:t-1}, X; θ), where Y_{1:m} is the human-written summary.", "Focus Attention MEchanism (FAME): It is challenging for a decoder to obtain all relevant information from the conditional representation y^L_t to learn the vocabulary output logits such that predictions y_t are consistent with the input.", "Other modeling factors, specifically the decoder language model, can overwhelm model predictions.", "FAME (Figure 2) addresses this by introducing a short-circuit from the source to the vocabulary output logits via a source-conditioned bias on vocabulary items.", "We take the encoder representation X = x_1, ..., x_n and learn a Token-level Vocabulary Distribution t_{x_i} = gelu(x_i W_1) W_2 E^T ∈ R^{|V|} for each token x_i in the input sequence X.", "t_{x_i} measures the contextual similarity of the input token x_i to the tokens in the vocabulary; W_1 ∈ R^{h × h'} and W_2 ∈ R^{h' × h} are the parameters of the newly introduced dense layers, and h' is the intermediate filter size.", "We define a Source-conditioned Vocabulary Distribution t_X = (1/n) Σ_{i=1}^{n} t_{x_i} ∈ R^{|V|} as the average of the token-level vocabulary distributions for tokens present in the input sequence X, capturing the similarity of X to the tokens in the vocabulary.", "Let a^L_t ∈ R^n be the encoder-decoder attention distribution over the source tokens for the output token y_t at the final decoder layer L.", "We use a^L_t to produce a weighted sum of the token-level vocabulary distributions, giving a dynamic vocabulary bias, or Focus Bias, f_t = Σ_{i=1}^{n} a^L_{t,i} t_{x_i} ∈ R^{|V|} at decoding step t.", "We modify the probability of predicting the next token y_t from a vocabulary V as p(y_t | Y_{1:t-1}, X; θ) = softmax(E y^L_t + f_t) (Eq. 2).", "We call this the Focused Probability Distribution; it modifies the output logits dynamically to put more focus on those tokens in the vocabulary which are similar to the attended tokens in X.", "The focus bias introduces a human-inspired control to the model: we do not generate the output in a fully abstractive manner (as in Eq. 1), but proactively generate output tokens that are similar to the input tokens (as in Eq. 2).", "For the focus bias to be effective, t_X should be representative of the topical content relevant for the task.", "We achieve this by using the human-written summary Y as a proxy for the topical content of the input and impose the following prior on the source-conditioned vocabulary distribution t_X: L_Topic(θ) = -(1/|V|) Σ_{i=1}^{|V|} ([v_i ∈ Y] log σ(t_{X,i}) + [v_i ∉ Y] log(1 - σ(t_{X,i}))) (Eq. 3), where σ is the sigmoid function.", "We further refine Eq. 3 by replacing Y with Y_c = Y \ F, where F is a set of the |F| most frequent tokens in the vocabulary [2], to improve the focus on content words.", "Our final loss function is then L = λ L_MLE + (1 - λ) L_Topic (Eq. 4), where λ is a hyperparameter. [3]", "By enforcing t_X to be a topic distribution for the input X, we encourage the focus bias f_t to promote topically relevant tokens, and subsequently generate topically consistent outputs.", "Importantly, our focus bias with the target-induced topic distribution is task-agnostic and less vulnerable to reference divergence issues (Dhingra et al., 2019; Maynez et al., 2020), and can learn any property embodied in the target that is relevant for the task.",
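To make Eqs. 1-4 concrete, the following is a minimal PyTorch sketch under toy dimensions; the random tensors standing in for the encoder/decoder states, the sizes, and the value of λ are illustrative assumptions, not the authors' released implementation.

```python
# A minimal PyTorch sketch of FAME's focus bias and joint loss (Eqs. 1-4),
# for a single toy example; all sizes and states are placeholders.
import torch
import torch.nn.functional as F

h, h_prime, V, n, m = 16, 32, 100, 7, 5    # hidden, filter, vocab, src/tgt lengths

E  = torch.randn(V, h)                     # shared embedding matrix
W1 = torch.randn(h, h_prime)
W2 = torch.randn(h_prime, h)

X   = torch.randn(n, h)                    # encoder outputs x_1..x_n
y_L = torch.randn(m, h)                    # final-layer decoder states y^L_t
a_L = torch.softmax(torch.randn(m, n), -1) # enc-dec attention a^L_t over source

# Token-level vocabulary distributions: t_{x_i} = gelu(x_i W1) W2 E^T
t_x = F.gelu(X @ W1) @ W2 @ E.T            # shape (n, |V|)

# Source-conditioned vocabulary distribution: t_X = mean over input tokens
t_X = t_x.mean(dim=0)                      # shape (|V|,)

# Focus bias: f_t = sum_i a^L_{t,i} * t_{x_i}
f = a_L @ t_x                              # shape (m, |V|)

# Eq. 2: focused probability distribution over the vocabulary
log_p = F.log_softmax(y_L @ E.T + f, dim=-1)

# Eq. 4: L = lambda * L_MLE + (1 - lambda) * L_Topic
gold = torch.randint(0, V, (m,))           # reference summary token ids
L_mle = F.nll_loss(log_p, gold)

in_summary = torch.zeros(V)                # [v_i in Y] indicator of Eq. 3
in_summary[gold] = 1.0
L_topic = F.binary_cross_entropy_with_logits(t_X, in_summary)

lam = 0.5                                  # hyperparameter value assumed
loss = lam * L_mle + (1 - lam) * L_topic
print(float(loss))
```

In training, t_x would come from the real encoder outputs and a_L from the final encoder-decoder attention; here they are random stand-ins so the snippet runs on its own.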
"For example, depending on the task, f_t can learn to favour input tokens (e.g., for mostly extractive summaries) or new tokens (e.g., for mostly abstractive summaries).", "This is in sharp contrast to models that introduce task-specific priors, e.g., the pointer-generator network (See et al., 2017), which can copy words from the source text but does not do well on extreme summarization, which is highly abstractive in nature (Narayan et al., 2018).", "Focus Sampling, Promoting Diversity in Faithful Generation: We introduce Focus Sampling with FAME to construct a subset V_k ⊆ V by sampling k tokens from the topic distribution t_X (Focus_{sample,k}).", "Then, we modify Eq. 2 as p(y_t = v_i | Y_{1:t-1}, X; θ) = softmax(E y^L_t + f_t)_i if v_i ∈ V_k ∪ F, and 0 otherwise (Eq. 5).", "For document summarization, the subset V_k will capture topically salient tokens necessary to generate a summary; F is always added to V_k to ensure that the model has access to function words. [2]", "[2] These are usually articles or other function words.", "By tuning the parameters of sampling, we can control the faithfulness or diversity of the outputs.", "Focus sampling has similarities to top-k (Div_{top,k}; Fan et al., 2018) and nucleus sampling (Div_{nucleus}; Holtzman et al., 2020), in that they all aim to promote diversity.", "At each decoding step, top-k sampling diversifies the generation process by sampling a token from the top k tokens in the final output distribution.", "Similarly, nucleus sampling samples from a dynamic nucleus of tokens containing the vast majority (a cumulative probability p) of the probability mass.", "Both top-k and nucleus sampling shorten the tail of the output distribution at each decoding step, whereas focus sampling constrains the decoder to use a fixed and topically relevant vocabulary V_k.", "Unlike the other two techniques, Focus_{sample,k} can also benefit from standard beam search decoding, leading to superior generation that is not only diverse, but also consistent with the input document.", "In this section we present our experimental setup to assess the ability of our FAME models to generate faithful summaries and to demonstrate that focus sampling is more effective in generating diverse and faithful summaries than other sampling-based decoding methods.", "We evaluate FAME models on extreme document summarization (XSUM; Narayan et al., 2018).", "The XSUM summaries are extreme in that the documents are summarized into single-sentence summaries.", "These summaries demonstrate a high level of abstractiveness, and generating them automatically requires document-level inference, abstraction, and paraphrasing.", "Due to their extreme nature, XSUM summaries are ideal for evaluating FAME models' ability to capture the theme of the document. [4]", "We use the original cased version consisting of 204,045/11,332/11,334 training/validation/test document-summary pairs.", "During training, the input documents are truncated to 512 tokens.", "[4] We further experiment with long-form story highlight generation (CNN/DM; Hermann et al., 2015) and two text editing tasks, Sentence Fusion (Geva et al., 2019) and Sentence Splitting (Botha et al., 2018); their results can be found in Appendices B and C, and our FAME models achieve SOTA on both text-editing tasks.",
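Continuing the toy setup above, the sketch below contrasts the focus sampling rule of Eq. 5 with the two per-step truncation baselines it is compared against (Div_{top,k} and Div_{nucleus}); the values of k and p, and the stand-in for the function-word set F, are illustrative assumptions.

```python
# Focus sampling (Eq. 5) versus per-step top-k and nucleus truncation;
# toy vocabulary, with F_ids standing in for frequent function words.
import torch

V, k_focus = 100, 20
t_X = torch.randn(V)                 # topic distribution logits
F_ids = torch.arange(10)             # assumed ids of function words

# Focus sampling: V_k is drawn ONCE per document from t_X ...
V_k = torch.multinomial(torch.softmax(t_X, -1), k_focus, replacement=False)
allowed = torch.zeros(V, dtype=torch.bool)
allowed[V_k] = True
allowed[F_ids] = True                # F is always added to V_k

def focus_mask(step_logits):
    """Eq. 5: zero probability outside V_k ∪ F, at every decoding step."""
    return step_logits.masked_fill(~allowed, float("-inf"))

# ... whereas top-k / nucleus re-truncate the distribution at EACH step.
def top_k_filter(step_logits, k=640):
    k = min(k, step_logits.numel())
    kth = torch.topk(step_logits, k).values[-1]
    return step_logits.masked_fill(step_logits < kth, float("-inf"))

def nucleus_filter(step_logits, p=0.95):
    sorted_logits, idx = torch.sort(step_logits, descending=True)
    cum = torch.softmax(sorted_logits, -1).cumsum(-1)
    cutoff = cum > p
    cutoff[1:] = cutoff[:-1].clone() # keep the first token that crosses p
    cutoff[0] = False
    mask = torch.zeros(V, dtype=torch.bool)
    mask[idx[cutoff]] = True         # map back to vocabulary order
    return step_logits.masked_fill(mask, float("-inf"))

step_logits = torch.randn(V)         # E y^L_t + f_t at one decoding step
probs = torch.softmax(focus_mask(step_logits), -1)
assert torch.all(probs[~allowed] == 0)
```

Because the focus mask is fixed for the whole document, ordinary beam search can be run on top of it, which is how Focus_{sample,k} is decoded in the experiments.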
"The length of the summaries is limited to 64 tokens.", "We introduce FAME into two popular seq2seq architectures: RoBERTa-initialized seq2seq (ROBERTA S2S; Rothe et al., 2020) and PEGASUS (Zhang et al., 2019).", "We refer to ROBERTA S2S models with FAME as ROBFAME and to PEGASUS with FAME as PEGFAME.", "We experiment with ROBERTA S2S-Large with a shared encoder and decoder; it has 24 layers, a hidden size of 1024, a filter size of 4096, 16 attention heads, and a vocabulary with 50K sentence pieces (Kudo and Richardson, 2018).", "ROBERTA S2S has around 455M parameters and ROBFAME has an additional 8M parameters.", "The best-performing PEGASUS model from Zhang et al. (2019) is not directly comparable with ROBERTA S2S.", "It does not share the encoder and decoder, it has only 16 layers, a hidden size of 1024, a filter size of 4096, and 16 attention heads, with a total of 568M parameters, and it also uses a much larger vocabulary with 91K sentence pieces.", "Hence, we trained our own PEGASUS model.", "We use the same architecture as ROBERTA S2S and pretrain it on a mixture of the C4 (Raffel et al., 2019) and HugeNews (Zhang et al., 2019) datasets with the original objective of generating salient GAP-sentences.", "Our experiments focus on this newly trained PEGASUS model, which has the same number of parameters and the same vocabulary as ROBERTA S2S.", "But in contrast to ROBERTA S2S, the encoder-decoder attention in PEGASUS is pretrained.", "This allows us to analyse how focus attention affects pretrained (PEGASUS) vs. randomly-initialized (ROBERTA S2S) encoder-decoder attention. [5]", "4.3 Evaluation Metrics, Lexical Overlap: We report ROUGE F1 scores (Lin and Hovy, 2003) against reference summaries; in particular, we report ROUGE-1 and ROUGE-2 for informativeness and ROUGE-L for fluency. [6]", "Semantic Similarity: We report BERTScore (Zhang et al., 2020), which computes the contextual similarity between a candidate and its reference summary.", "Faithfulness: ROUGE and BERTScore do not correlate well with the faithfulness of generated summaries (Maynez et al., 2020).", "Human evaluation is traditionally considered the gold standard for measuring faithfulness.", "But recent research has shown that even human evaluation has shortcomings (Schoch et al., 2020).", "Moreover, it is prohibitively expensive.", "This has led to the proposal of meta-evaluation metrics for various generation tasks (Durmus et al., 2020; Kryscinski et al., 2019; Sellam et al., 2020; Rei et al., 2020).", "We evaluate FAME models on semantic inference metrics such as textual entailment (Pasunuru and Bansal, 2018; Welleck et al., 2019b; Falke et al., 2019; Kryscinski et al., 2019) and question answering (Arumae and Liu, 2019; Wang et al., 2020a).", "In particular, we report the probability of a summary entailing (ent.) its input document (Maynez et al., 2020) and QA-based Feqa scores (Durmus et al., 2020).",
"For ent. scores, we train an entailment classifier by fine-tuning a BERT-Large pretrained model (Devlin et al., 2019) on the Multi-NLI dataset (Williams et al., 2018).", "For Feqa, we use a fine-tuned BART (Lewis et al., 2019) language model for question generation to generate questions from the summaries, and a BERT-base model fine-tuned on SQuAD (Rajpurkar et al., 2018) to answer the generated questions with the input document as context. [7]", "[7] We used the Feqa code available here: https://github.com/esdurmus/feqa/.", "In addition to ent. and Feqa, we train a scorer leveraging manually annotated document-summary pairs for faithfulness, as a surrogate for human evaluation, and call this metric BERTFaithful. [8]", "[8] A very similar scorer was used in the GEM benchmark (Gehrmann et al., 2021) to identify and extract the subset with faithful reference summaries from the XSum dataset (Narayan et al., 2018).", "In particular, we finetune a BERT-Base classifier on 500 manually annotated document and gold summary pairs for the XSum dataset from Maynez et al. (2020) to predict whether a summary is faithful to the input document or not. [9]", "[9] Out of 500, 90% of the document-summary pairs were used for training and the remaining 50 document-summary pairs were used for validation.", "We used the validation set to estimate Spearman's correlation coefficients of different metrics with the human assessment of faithfulness.", "We found that both entailment scores (ent.) and BERTFaithful are moderately correlated with faithfulness, with correlation coefficients of 0.4387 and 0.3889, respectively.", "As such, we believe that BERTFaithful works as an efficient proxy for expensive human evaluation of faithfulness for XSum summaries.", "More work is needed to understand whether BERTFaithful generalizes to other datasets.", "We report the percentage of summaries that were faithful, (1/N) Σ_i 1[p_i(faithful) > 0.5], and the model's confidence in generating faithful summaries, (1/N) Σ_i p_i(faithful); N is the total number of examples in the test set.", "Diversity: We report the number of times (out of n) a model is able to generate a completely new summary (Unique), and Distinct-N (Li et al., 2016a), measuring the lexical diversity of the generated summaries.", "Distinct-N is estimated as the number of distinct n-grams of order n divided by the total number of n-grams of the same order, in all generated summaries.", "Finally, we also report the average length of summaries (Len.), repetition errors (Rep., estimated as the percentage of summaries with at least one repetition of rare or content words), and ROUGE-1 precision against the input document (R1, P%), to better understand their quality.",
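For clarity, here is a small Python sketch of two of the aggregates defined above, Distinct-N and the BERTFaithful percentage/confidence; the whitespace tokenizer is a simplifying assumption.

```python
# Distinct-N over a set of generated summaries, plus the two BERTFaithful
# aggregates; whitespace tokenization stands in for the real tokenizer.
from typing import List

def distinct_n(summaries: List[str], n: int) -> float:
    """# distinct n-grams / # total n-grams, pooled over all summaries."""
    total, distinct = 0, set()
    for s in summaries:
        tokens = s.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        distinct.update(grams)
    return len(distinct) / total if total else 0.0

def bertfaithful_aggregates(p_faithful: List[float]):
    """% of summaries judged faithful (p > 0.5), and mean confidence."""
    n = len(p_faithful)
    pct = sum(p > 0.5 for p in p_faithful) / n
    conf = sum(p_faithful) / n
    return pct, conf

print(distinct_n(["the cat sat", "the cat ran"], 2))  # 3 distinct / 4 total
print(bertfaithful_aggregates([0.9, 0.4, 0.7]))
```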
"FAME Summaries are More Fluent, Informative and Faithful: Table 1 presents results comparing our FAME models, ROBFAME and PEGFAME, against their counterparts, ROBERTA S2S and PEGASUS, respectively.", "Both FAME models clearly outperform their vanilla counterparts in terms of generating summaries that are more fluent (see RL and Rep.), more informative (see R1, R2 and BERTSc.), and more faithful (see ent., Feqa and BERTFaithful).", "Among all four models, PEGFAME summaries are the most fluent, informative and faithful.", "We further performed pairwise comparisons for all measures in Table 1 and found that all differences are statistically significant, except for BERTScore and the faithfulness measures between PEGASUS and PEGFAME. [10]", "[10] All significance tests in this work are pairwise comparisons (one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01).", "These assessments demonstrate that FAME models aid both ROBERTA S2S and PEGASUS in generating fluent, faithful and relevant summaries, but are more effective in ROBERTA S2S than in PEGASUS for extreme summarization.", "Generating Diverse and Faithful Summaries with Focus Sampling: Table 2 presents results assessing focus sampling (Focus_{sample,k}), top-k sampling (Div_{top,k}) and nucleus sampling (Div_{nucleus}) for their abilities to generate diverse and faithful summaries.", "For Focus_{sample,k}, we choose k = 10,000.", "We follow Holtzman et al. (2020) and choose k = 640 and the nucleus probability p = 0.95 for Div_{top,k} and Div_{nucleus}, respectively.", "For Focus_{sample,k}, we decode with a beam size of 4.", "We also report Focus_{sample,k} combined with Div_{top,k} and Div_{nucleus}, to assess whether they can benefit one another.", "In each setting we sample 10 summaries for each input document.", "For all metrics, we report the average over all 10 samples. [11]", "[11] Feqa and BERTFaithful scores are dropped due to time constraints.", "Both Div_{top,k} and Div_{nucleus} almost always generate a new summary.", "In comparison, Focus_{sample,k} generates 1.61 and 2.77 unique summaries using the ROBFAME and PEGFAME models, respectively.", "Div_{nucleus} tends to generate the most distinct unigrams, bigrams, and trigrams.", "Interestingly, Focus_{sample,k} summaries have a more diverse collection of unigrams than Div_{top,k} summaries (3.5% vs 2.3% for ROBFAME and 2.4% vs 1.9% for PEGFAME).", "The high diversity of Div_{top,k} and Div_{nucleus} comes at the cost of faithfulness; summaries generated with these sampling techniques have poor entailment scores.", "Focus_{sample,k}, on the other hand, generates summaries which entail their documents the most.", "It also has the highest ROUGE scores across the board.", "Some of the generated examples can be seen in Figure 1.", "More predictions from other models can be found in Appendix E.", "Augmenting Div_{top,k} and Div_{nucleus} with Focus_{sample,k} is not desirable because, though it increases diversity in terms of Uniqueness and Distinct-3 scores, faithfulness suffers again.", "Comparing the results in Table 2 to the results in Table 1, it is clear that diversity comes at the cost of quality (e.g., the RL/ent. scores for ROBFAME and ROBFAME Focus_{sample,k} are 34.81/41.3 and 31.0/34.3, respectively).", "However, Focus_{sample,k} is superior to both Div_{top,k} and Div_{nucleus} in generating summaries that are diverse yet faithful.", "Focus Attention and Sampling Work Differently in ROBFAME and PEGFAME: Since both the encoder-decoder and focus attention parameters of ROBFAME are randomly initialized, they learn to complement each other and learn a peaky topic distribution.", "On the other hand, since PEGFAME's encoder-decoder attention is pre-trained, there is a push-pull effect between it and focus attention.", "This results in a smoother topic distribution, as seen in Figure 3. [12]", "[12] This difference in topic distributions is consistent across the whole test set.", "We compute the peakiness score of a topic distribution as the slope of the line connecting the logits of the top-1st token to the top-100th token.", "The average peakiness scores across the XSUM test set for ROBFAME and PEGFAME are 1.25 (51 degrees) and 0.45 (24.3 degrees), respectively.",
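One possible reading of the peakiness score in footnote 12, as the slope of the straight line from the top-1st to the top-100th logit of t_X (reported with its angle in degrees), is sketched below; this is our interpretation of the description, not released code.

```python
# Peakiness of a topic distribution, read as the slope of the line joining
# the 1st and 100th largest logits (an interpretation of footnote 12).
import math
import torch

def peakiness(t_X):
    top = torch.topk(t_X, 100).values
    slope = float((top[0] - top[-1]) / 99)  # rise over a run of 99 ranks
    return slope, math.degrees(math.atan(slope))

t_X = torch.randn(50_000)
print(peakiness(t_X))  # a slope of 1.25 corresponds to roughly 51 degrees
```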
"Although we see that both models' token sets capture the target intent well, the peaky distribution of ROBFAME enables more accurate predictions than that of PEGFAME, in a controlled generation setting.", "A comparison is presented in Figure 4, where we show how ROUGE-1 scores vary when we use only the top-k tokens from t_X for generation. [13]", "[13] Additional results and model predictions for these experiments can be found in Appendix D.", "We observe that ROBFAME consistently outperforms PEGFAME for the lower values of k in {50, 100, 200, 500, 1000}.", "Further, we observe in Table 2 that, with Focus_{sample,k}, ROBFAME generates fewer unique summaries (1.61 vs 2.77) but has higher Distinct-N scores (3.5/22.4/43.9 vs 2.4/16.5/34.2) than PEGFAME.", "This can again be attributed to how FAME works differently in ROBFAME and PEGFAME.", "When V_k is sampled from ROBFAME's peaky distribution, beam search decoding often tends to generate similar summaries (leading to a lower Uniqueness score), as the sampled V_k's do not diverge much from each other.", "But when it does diverge, the decoder tends to generate completely new summaries (leading to higher Distinct-N scores).", "Currently, we set k = 10,000 for our focus sampling experiments, following our observations in Figure 4.", "Future work will focus on how to better exploit the trade-off between diversity and faithfulness by controlling the peakiness of the topic distribution t_X.", "Ablations and SOTA Comparisons: We emphasize that FAME and focus sampling do not aim to improve state-of-the-art results in terms of ROUGE, but to generate more faithful or diverse summaries while maintaining their quality.", "For completeness, we compare our ROBFAME and PEGFAME models to their ablations and to other state-of-the-art models on XSUM in Table 3.",
"We report ROUGE scores for FAME in the ideal scenario (ORACLE), where it focuses on all the correct tokens in the input, i.e., the topic distribution t_X is identical to the distribution observed in the reference summary.", "These models generate summaries with very high ROUGE scores when given the correct tokens to focus on.", "The gap between the ORACLE and FAME scores suggests that there is still a lot of work to be done in this space.", "Focus attention without any topical supervision (models w/o Eq. 3) is not significantly better than the baselines.", "But ROBFAME and PEGFAME (trained with the joint supervision of Eq. 4) significantly outperform ROBERTA S2S and PEGASUS, respectively.", "Our best model, PEGFAME, performs better than PtGen (See et al., 2017), ConvS2S (Narayan et al., 2018), MMN (Kim et al., 2019), MASS (Song et al., 2019) and BART (Lewis et al., 2019), but worse than the original PEGASUS (Zhang et al., 2019).", "This is to be expected, as the number of parameters in PEGFAME is far smaller than in the original PEGASUS.", "We introduced FAME, a new attention mechanism which dynamically biases the decoder to proactively generate tokens that are topically similar to the input.", "FAME enhances the faithfulness of existing state-of-the-art abstractive summarization models while improving their overall ROUGE scores.", "Finally, our newly introduced focus sampling technique is a better alternative to top-k or nucleus sampling for generating a diverse set of faithful summaries.", "We thank Sebastian Gehrmann, Slav Petrov, the reviewers, and the action editor for their invaluable feedback.", "The nature of text generation leads to multiple ethical considerations when applied in applications.", "The main failure mode is that the model can learn to mimic undesirable target properties in the training data.", "Faithfulness and Factuality: Since models create new text, there is the danger that they may be neither faithful to the source material nor factual.", "This can be exacerbated when the data itself has highly abstractive targets, which require the model to generate words not seen in the source material during training.", "This often leads the model to generate content inconsistent with the source material (Kryscinski et al., 2020; Maynez et al., 2020; Gabriel et al., 2020).", "Trustworthy Data: If the data itself is not trustworthy (it comes from suspect or malicious sources), the model will naturally become untrustworthy, as it will ultimately learn the language and topics of the training data.", "For instance, if the training data is about Obama birther conspiracies, and the model is asked to generate information about the early life of Obama, there is a risk that such false claims will be predicted by the model.", "Bias in Data: Similarly, biases in the data around gender, race, etc., risk being propagated in the model predictions, which is common for most NLP tasks.", "This is especially true when the models are trained on non-contemporary data that do not represent current norms and practices (Blodgett et al., 2020).", "The above considerations are non-malicious, in that the model is merely learning to behave as its underlying source material.", "If users of such models are not aware of these issues and do not account for them, e.g., with better data selection, evaluation, etc., then the generated text can be damaging.", "Generation models can also be misused in malicious ways.", "These include generating fake news, spam, and other text meant to 
mislead large parts of the general population." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Non-task oriented dialogue systems have achieved great success in recent years due to largely accessible conversation data and the development of deep learning techniques.", "Given a context, current systems are able to yield a relevant and fluent response, but sometimes make logical mistakes because of weak reasoning capabilities.", "To facilitate the conversation reasoning research, we introduce MuTual, a novel dataset for Mu ltiTu rn di al ogue Reasoning, consisting of 8,860 manually annotated dialogues based on Chinese student English listening comprehension exams.", "Compared to previous benchmarks for non-task oriented dialogue systems, MuTual is much more challenging since it requires a model that can handle various reasoning problems.", "Empirical results show that state-of-the-art methods only reach 71%, which is far behind the human performance of 94%, indicating that there is ample room for improving reasoning ability.", "MuTual is available at https://github.", "com/Nealcly/MuTual .", "Building an intelligent conversational agent is one of the longest running goals in AI.", "Existing conversational agents can be categorized into task-oriented dialogue systems (Kannan et al., 2016) and non-task-oriented chatbot systems (Shum et al., 2018; Wu et al., 2019).", "Owing to the rise of deep learning techniques and the large amount of conversation data for training (Lowe et al., 2015; Wu et al., 2017; Zhang et al., 2018b), we are now witnessing promising results of chatbots both in academia and industry (Pan et al., 2019; Tao et al., 2019).", "Neural dialogue systems are trained over a large dialogue corpus and used to predict responses given a context.", "There are two lines of methods.", "Retrieve-based methods and generation based methods rely Contribution during internship at MSRA.", "on matching scores and perplexity scores, respectively.", "Due to the development of text matching and pre-training models (Devlin et al., 2019; Liu et al., 2019), a machine is able to achieve highly competitive results on these datasets, even close to human performance.", "For instance, ESIM (Chen et al., 2017) achieves 88 % on the Dialogue NLI (Welleck et al., 2019), and BERT achieves 85.8%, 93.1% and 98.5% in terms of R 10 @1 , R 10 @2 and R 10 @5 on the Ubuntu Corpus (Whang et al., 2019).", "However, there is still a huge gap between high performance on the leader-board and poor practical user experience.", "Chatbot engines often generate responses that are logically incorrect or violate commonsense knowledge (Shum et al., 2018).", "A likely reason is that current dialogue systems do not have strong reasoning skills, and most of the cases in previous benchmarks can be tackled by linguistic information matching.", "Previous work has demonstrated that neural encoders capture a rich hierarchy of syntactic and semantic information (Jawahar et al., 2019; Clark et al., 2019).", "However, reasoning capability and commonsense knowledge are not captured sufficiently (Young et al., 2018).", "One important research question is how we can evaluate reasoning ability in chatbots, which can potentially allow us to bridge the gap between high performance on leader-board and unsatisfactory practical performance.", "To this end, we develop dataset Task Reasoning Domain Manually Ubuntu (Lowe et al., 2015) Next Utterances Prediction $ Technique $ PERSONA-CHAT (Zhang et al., 2018a) Next Utterances Prediction $ Persona \" Dialogue NLI (Welleck et al., 2019) Next Utterances Prediction $ Persona $ CoQA (Reddy et al., 
2019) Conversational QA \" Diverse \" Douban (Wu et al., 2017) Next Utterances Prediction $ Open $ DREAM (Sun et al., 2019) Reading Comprehension \" Open \" WSC (Levesque et al., 2012) Coreference Resolution \" Open $ SWAG (Zellers et al., 2018) Plausible Inference \" Movie $ CommonsenseQA (Talmor et al., 2019) Reading Comprehension \" Open \" RACE (Lai et al., 2017) Reading Comprehension \" Open $ ARC (Clark et al., 2018) Reading Comprehension \" Science $ DROP (Dua et al., 2019) Reading Comprehension \" Open $ Cosmos (Huang et al., 2019) Reading Comprehension \" Narrative \" MuTual Next Utterances Prediction \" Open \" Table 1: Comparison between our dataset and other datasets.", "an open domain Mu ltiTu rn di al ogue reasoning dataset (MuTual) to facilitate conversation model reasoning capabilities.", "In particular, given a context, we prepare four response candidates, each of which is relevant to the context, but only one of them is logically correct.", "As shown in Figure 1, all responses follow the same topic, but only the first one is appropriated.", "It requires reasoning ability on social etiquette and relationship to make the correct choice, which is not considered by existing dialogue benchmarks.", "We build our dataset based on Chinese high school English listening comprehension test data, where students are excepted to select the best answer from three candidate options, given a multiturn dialogue and a question.", "The original data is formatted as (cid:104) dialogue, question, answer (cid:105) , which is not directly suitable for our goal since chatbots only concern about how to respond contexts instead of answering an additional question.", "Therefore, we ask human annotators to rewrite the question and answer candidates as response candidates.", "Then our dataset follows the traditional response selection setting (Lowe et al., 2015), where a model should recognize a correct response from others for a multi-turn dialogue.", "The resulting dataset, MuTual, consists of 8,860 challenge questions, in terms of almost all questions involving reasoning, which are designed by linguist experts and high-quality annotators.", "We evaluate state-of-the-art retrieval-based models and pre-training models on MuTual.", "The best method gives a R @1 of 71%, which significantly underper-forms human performance (94%).", "To the best of our knowledge, MuTual is the first human-labeled reasoning-based dataset for multi-turn dialogue.", "We provide detailed analysis to provide insights into developing potentially reasoning-based chitchat dialogue systems.", "Table 1 compares our dataset with prior dialogue and reasoning related benchmarks.", "Dialogue: The Ubuntu Dialogue Corpus is a large retrieval-based dataset (Lowe et al., 2015), extracted from Ubuntu chat logs.", "PERSONA-CHAT (Zhang et al., 2018a) considers consistent personality in dialogue.", "Crowd workers are required to act the part of a given provided persona, and chat naturally.", "Dialogue NLI (Welleck et al., 2019) is a natural language inference dataset modified from PERSONA-CHAT.", "It demonstrates that NLI can be used to improve the consistency of dialogue models.", "CoQA (Reddy et al., 2019) is collected by pairing two annotators to chat about a passage in the form of questions and answers.", "Each question is dependent on the conversation history.", "There are also several large-scale datasets in Chinese, such as Sina Weibo (Shang et al., 2015), Douban Conversation Corpus (Wu et al., 2017) and E-commerce Dialogue Corpus 
"As shown in Table 1, most of the existing conversation benchmarks do not focus on testing reasoning ability.", "One exception is CoQA, which considers pragmatic reasoning.", "The difference is that CoQA is a machine comprehension dataset, in which conversations are based on a given passage.", "Another related reading comprehension dataset is DREAM (Sun et al., 2019), which is designed specifically for challenging dialogue-based reading comprehension.", "It relies on an external question to test the model's understanding capability.", "In contrast to the above datasets, our dataset is a next utterance prediction task, which is the fundamental problem in retrieval-based chatbots.", "In addition, our dataset requires various specific reasoning abilities, such as algebraic reasoning, intention prediction and so on, which is the main characteristic of our dataset.", "Reasoning: Recently, efforts have been made to develop benchmarks and tasks to address reasoning for language understanding.", "The Winograd Schema Challenge (Levesque et al., 2012) is a reasoning-based coreference resolution task.", "Each pair of sentences differs by only one phrase.", "SWAG (Zellers et al., 2018) is derived from pairs of consecutive video captions, including 113k short contexts, each with four candidate endings.", "CommonsenseQA (Talmor et al., 2019) is a question answering dataset extracted from CONCEPTNET (Speer et al., 2016).", "Utilizing CONCEPTNET to construct the dataset ensures that the questions directly target commonsense reasoning.", "RACE is a machine reading comprehension dataset collected from English exams for Chinese students.", "The AI2 Reasoning Challenge (Clark et al., 2018) contains 7,787 genuine grade-school level science questions with a corpus of 14M science reference sentences.", "DROP (Dua et al., 2019) and COSMOS (Huang et al., 2019) focus on factual understanding and commonsense comprehension, respectively.", "Despite their success, these datasets can hardly help chatbots directly.", "Following the traditional dialogue response selection setting, we deeply modify English listening comprehension conversations to form an utterance prediction task.", "The original listening comprehension materials and question-answer pairs are designed by linguist experts.", "Students are required to choose the best answer from three options for a question, based on a piece of audio.", "To ensure students fully understand the audio, most of the questions need to be answered with reasoning capability.", "We crawled the listening exams from public websites. [1]", "Since the audio is either a conversation between two people or a simple passage, we only crawled data in the conversation format.", "The raw data is formatted as triples <Conversation (audio), Question and Choices (text), Answer (image)>.", "The following data pre-processing methods are applied to convert the raw data to the format shown in Figure 2.", "Step 1, Pre-processing: If the question and candidate choices in two problems are the same, we consider them duplicates and delete one of them.", "If there are more than three candidate options in one problem, we randomly drop incorrect options until three candidates are left.", "The answers are stored as images.", "We apply a commercial OCR system to convert the images to text.", "It is easy for the OCR system to recognize the printed alphabet answers.", "We manually correct all OCR outputs to ensure quality.", "[1] All the problems in our dataset are freely accessible online without copyright, as confirmed by consulting the legal adviser.",
"In the original listening comprehension test, the conversation is stored as audio.", "We adopt a commercial ASR system to convert speech to text, and further recruit experienced annotators to correct the transcription errors.", "To further ensure the quality of the transcripts, they are double-checked by the annotators in the next step.", "Step 2, Candidate Response Creation: Figure 2 illustrates the process of modifying a listening comprehension problem.", "At first, an annotator is required to segment the original conversation after the clues needed to answer the question have appeared.", "Then, they construct the positive response (Response A in Figure 2) and negative responses (Response C and Response D) by consulting the correct choice (Choice A) and the incorrect choices (Choice B and Choice C), respectively.", "To make MuTual more challenging, we further ask the annotator to construct one more negative response (Response B) based on the correct choice.", "Through these steps, MuTual not only keeps the reasoning tests designed by experts, but also introduces one more type of reasoning for each instance.", "As shown in Figure 2, Responses C and D can be excluded based on the relationship between the two speakers.", "But B is incorrect due to attitude reasoning.", "It is worth noting that all negative responses are logically correct if the context is not considered, but they are not appropriate responses once the context is taken into account.", "Therefore, our dataset focuses on multi-turn conversation reasoning rather than the logic of a single sentence.", "When framing a negative response, we encourage annotators to copy some phrases from the context, to discourage models that solve the problem by text matching.", "We further calculate the lexical overlap between responses and contexts.", "9.98% (10.63%) of the words in positive (negative) responses occur in the corresponding context, suggesting that MuTual is hard to solve by plain text matching.", "The annotators in Step 2 are all Chinese English-major graduate students, who are familiar with English language exams in China and fluent in English (they have passed the TEM-8 [2]).", "[2] The highest level test for English majors as a foreign language in China.", "Annotators are required to annotate 170 draft instances repeatedly, until their labeling is sufficiently accurate to provide useful annotation.", "Because not all conversations are suitable for constructing a reasoning-based response problem, the annotator has the right to skip a conversation.", "We employ five annotators to construct the responses, and two quality inspectors to check them.", "We discard an instance when the inspectors doubt the uniqueness or correctness of the answer.", "The detailed statistics of MuTual are summarized in Table 2.", "MuTual has an average of 4.73 turns.", "The vocabulary size is 11,343, which is smaller than in other dialogue datasets (Lowe et al., 2015; Wu et al., 2017).", "Because MuTual is modified from listening tests of English as a foreign language, the complexity of morphology and grammar is much lower than in other datasets.", "For human-annotated datasets, there is always a trade-off between the number of instances being annotated and the quality of the annotations (Kryscinski et al., 2019).", "Our dataset is smaller than previous crawling-based dialogue datasets (Lowe et al., 2015; Wu et al., 2017) due to the collection method.", "But it is comparable with high-quality reasoning-based datasets (Clark et al., 2018; Khashabi et al., 2018; Talmor et al., 2019) and the human-designed dialogue dataset (Zhang et al., 2018a).",
"Moreover, around 10k instances are sufficient to train a discriminative model (Nivre et al., 2019) or to fine-tune a pretrained model (Wang et al., 2019).", "To assess the distribution of different reasoning types, we annotate the specific types of reasoning involved in instances sampled from the test set and categorize them into six groups.", "The definition and ratio of each group are shown as follows.", "Attitude Reasoning: This type of instance tests whether a model knows the speaker's attitude towards an object.", "Algebraic Reasoning: This type requires simple numerical calculation based on facts in the context, such as computing a time difference.", "Intention Prediction: This type tests whether a model can predict what the speaker is going to do next.", "Situational Reasoning: Situational information (e.g., location, the relationship between the two speakers) is considered in this type of instance.", "A model should mine the implicit information from the previous context.", "Multi-fact Reasoning: In this type of instance, the correct response is related to multiple facts in the context, which requires the model to deeply understand the context rather than simply perform text matching.", "Others: There are 9% of instances that require other commonsense knowledge.", "For example, at the bottom of Figure 3, the model should know that a fully reserved restaurant is usually very popular.", "These six types of reasoning are considered the most relevant to real chatbots.", "For example, knowing the user's attitude enables a chatbot to make personal recommendations.", "The ability to predict intentions allows chatbots to respond more intelligently in a long conversation session.", "To further increase the difficulty, we use a safe response to replace one of the candidate responses for each instance in MuTual.", "To guarantee diversity, the safe response is sampled from a list including I'm afraid I didn't quite catch what you were saying., Could you repeat that?, I'm really sorry, I didn't catch that., etc.", "In particular, once an instance is chosen, we randomly select a response to replace.", "If the positive response is replaced, the correct one is the safe response.", "If a negative response is replaced, the original positive response is still the best one.", "The motivation for building MuTual plus is to evaluate whether a model is able to select a safe response when the other candidates are inappropriate.", "When we replace the positive response with a safe response, it simulates a scenario in which all the other candidates are incorrect.", "This phenomenon is common in retrieval-based chatbots, because a limited set of candidate responses cannot handle all cases in practice.", "Similarly, we can evaluate whether the model chooses the correct response instead of a safe response when a correct response exists.", "We split the data into training, development and test sets, with an 80%, 10% and 10% ratio.", "We pack instances constructed from the same conversation together during splitting, to avoid data leakage.", "Following the standard dialogue setting (Lowe et al., 2015; Wu et al., 2017), we treat our task as response selection and employ traditional information retrieval evaluation metrics, including recall at position 1 in 4 candidates (R@1), recall at position 2 in 4 candidates (R@2) and Mean Reciprocal Rank (MRR) (Voorhees, 2000); a sketch of these metrics follows below.", "We compare the performance of several response selection models as well as pre-training models.", "We briefly introduce these works as follows.",
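As promised above, here is a small sketch of the retrieval metrics used for MuTual; it assumes `rank` is the 1-based position of the correct response among the four candidates after sorting by model score.

```python
# R@k and MRR over a list of per-example ranks of the correct response.
from typing import List

def recall_at_k(ranks: List[int], k: int) -> float:
    """R@k: fraction of examples whose correct response is ranked in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks: List[int]) -> float:
    """Mean Reciprocal Rank of the correct response."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 1]          # hypothetical ranks over four test dialogues
print(recall_at_k(ranks, 1))  # R@1 = 0.5
print(recall_at_k(ranks, 2))  # R@2 = 0.75
print(mrr(ranks))             # (1 + 1/3 + 1/2 + 1) / 4 ≈ 0.708
```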
"4.1 Baselines: We evaluate individual scoring methods, multi-choice methods and human performance in our experiments.", "Given a context c and four candidates (r_1, r_2, r_3, r_4), an individual scoring method computes a score g(c, r_i) for each candidate independently, and selects the candidate with the highest score among the four.", "In contrast, a multi-choice method selects the best one by classification over all choices, formulated as h(c, r_1, r_2, r_3, r_4).", "TF-IDF: The correct response tends to share more words with the context than the incorrect ones.", "Following Lowe et al. (2015), we calculate TF-IDF vectors for the context and for each of the candidate responses, and then select the candidate response with the highest cosine similarity to the context as the model output.", "The IDF is calculated only on the training set.", "Dual LSTM (Lowe et al., 2015): Two LSTMs are used to encode the context and the response, respectively.", "The relevance between context and response is calculated by the similarity of the final hidden states of the two LSTMs.", "Sequential Matching Network (Wu et al., 2017): To avoid losing information in the context, SMN constructs word-word and sequence-sequence similarity matrices, instead of utilizing the last hidden state only, and then aggregates the similarity matrices into a matching score.", "Deep Attention Matching Network: Zhou et al. (2018) adopt a self-attention module (Vaswani et al., 2017) to encode the response and each utterance, respectively.", "To match utterance and response, DAM further applies a cross-attention module and 3D matching to obtain the final score.", "BERT (Devlin et al., 2019): Pre-training models have shown promising results on various multi-choice and reasoning tasks (Whang et al., 2019; Xu et al., 2019).", "Following Devlin et al. (2019), we concatenate the context (sentence A) and a candidate response (sentence B) as the BERT input.", "On top of BERT, a fully-connected layer is used to transform the [CLS] token representation into a matching score.",
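The TF-IDF baseline described above can be sketched with scikit-learn as follows; fitting the vectorizer on training contexts is our stand-in for computing IDF only on the training set, and the toy corpus is illustrative.

```python
# TF-IDF response selection: pick the candidate with the highest cosine
# similarity to the context; toy training corpus for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_texts = ["how are you today", "the meeting is at noon"]
vectorizer = TfidfVectorizer().fit(train_texts)   # IDF from training set

def tfidf_select(context: str, candidates: list) -> int:
    """Return the index of the candidate most similar to the context."""
    c = vectorizer.transform([context])
    r = vectorizer.transform(candidates)
    sims = cosine_similarity(c, r)[0]
    return int(sims.argmax())

print(tfidf_select("when is the meeting",
                   ["the meeting is at noon", "how are you"]))  # -> 0
```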
"RoBERTa: Liu et al. (2019) re-establish BERT's masked language model training objective by using more data and different hyper-parameters.", "We fine-tune RoBERTa in the same way as BERT.", "GPT-2 (Radford et al., 2019): Given a context, the positive response should have a higher probability than the negative responses.", "Motivated by this, we concatenate the context and a response as one sequence, and calculate the joint probability of the entire sequence.", "The response in the lowest-perplexity sequence is considered the positive response.", "Moreover, we fine-tune GPT-2 on [Context, Positive Response] pairs from the MuTual training set, denoted as GPT-2-FT.", "Multi-choice Method: Inspired by BERT for multiple choice (Devlin et al., 2019), the task is considered as picking the most suitable response by comparing the four candidate responses.", "In particular, we concatenate each candidate response with the corresponding context.", "Each input sequence is subsequently encoded to produce a [CLS] representation.", "The positive response is predicted based on the concatenation of all [CLS] representations, on which a fully connected layer with softmax is used.", "This method is denoted as BERT-MC.", "Similarly, we implement RoBERTa-MC as another multi-choice method.", "Human Performance: To obtain human performance, we employ 3 NLP experts to measure the ceiling performance on the test set.", "All models perform significantly worse than on other popular conversation datasets, such as the Ubuntu Corpus (Lowe et al., 2015) and the Dialogue NLI dataset (Welleck et al., 2019), while humans can address the reasoning problems easily.", "For example, BERT gives 85.8% R_10@1 on the Ubuntu Corpus, but RoBERTa only gives 71.3% R_4@1 on MuTual.", "TF-IDF is only slightly better than random guessing, which indicates that there is no obvious statistical clue between a context and its positive response.", "In contrast, TF-IDF achieves a 54.98% R@1 score on the Ubuntu Corpus, showing that on our dataset it is more difficult to get the correct answer by text overlap.", "We evaluate typical retrieval-based dialogue models' performance on MuTual.", "Table 3: Comparison of varying approaches on MuTual; each row reports Dev R@1/R@2/MRR followed by Test R@1/R@2/MRR.", "Human: -/-/- and 0.938/0.971/0.964.", "Random: 0.250/0.500/0.604 and 0.250/0.500/0.604.", "Individual scoring method (discrimination), TF-IDF: 0.276/0.541/0.541 and 0.279/0.536/0.542.", "Dual LSTM (Lowe et al., 2015): 0.266/0.528/0.538 and 0.260/0.491/0.743.", "SMN (Wu et al., 2017): 0.274/0.524/0.575 and 0.299/0.585/0.595.", "DAM (Zhou et al., 2018): 0.239/0.463/0.575 and 0.241/0.465/0.518.", "BERT (Devlin et al., 2019): 0.657/0.867/0.803 and 0.648/0.847/0.795.", "RoBERTa (Liu et al., 2019): 0.695/0.878/0.824 and 0.713/0.892/0.836.", "Individual scoring method (generation), GPT-2 (Radford et al., 2019): 0.335/0.595/0.586 and 0.332/0.602/0.584.", "GPT-2-FT (Radford et al., 2019): 0.398/0.646/0.628 and 0.392/0.670/0.629.", "Multi-choice method, BERT-MC (Devlin et al., 2019): 0.661/0.871/0.806 and 0.667/0.878/0.810.", "RoBERTa-MC (Liu et al., 2019): 0.693/0.887/0.825 and 0.686/0.887/0.822.", "From Table 3, we can see that the well-designed matching models do not give better performance than the simple Dual LSTM; moreover, they drop by more than 50 absolute R@1 points compared to their performance on the Ubuntu Corpus, indicating that text matching models cannot handle reasoning problems well.", "Both BERT and RoBERTa outperform the other models on MuTual, which is consistent with results in other literature (Talmor et al., 2019).", "This is mainly because these models learn reasoning capability during pre-training on a large corpus.",
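The GPT-2 scoring baseline described above can be approximated with the HuggingFace transformers API, scoring each context-response concatenation by its language-model loss (log-perplexity) and choosing the lowest; the model size and plain-space concatenation are assumptions for illustration.

```python
# GPT-2 perplexity ranking: lower LM loss on "context + response"
# means lower perplexity, so that candidate is chosen.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def select_response(context: str, candidates: list) -> int:
    losses = []
    for r in candidates:
        ids = tok(context + " " + r, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)   # mean cross-entropy over tokens
        losses.append(out.loss.item())
    return min(range(len(losses)), key=losses.__getitem__)

idx = select_response("Do you want some coffee?",
                      ["Yes, please.", "The moon orbits the Earth."])
print(idx)
```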
"Although RoBERTa gets only 71.3% on R@1, it achieves a surprising 89.2% on R@2, indicating that the model is able to rank the correct response into the top-2 positions.", "BERT-MC and RoBERTa-MC obtain results similar to BERT and RoBERTa, respectively.", "However, even RoBERTa is 23 points behind human performance on R@1, indicating that MuTual is indeed a challenging dataset, which opens the door for tackling new and complex reasoning problems in multi-turn conversations.", "GPT-2 and GPT-2-FT also perform undesirably on MuTual, even though the average perplexity on the MuTual test set is 10.40.", "This phenomenon illustrates that 1) the sentences in MuTual are fluent, and 2) current generative models still have plenty of room to improve their reasoning ability.", "As shown in Table 4, all models perform worse on MuTual plus, indicating that this dataset is more difficult than MuTual, which is consistent with our assumption.", "We find that the performance of multi-choice methods is significantly better than that of individual scoring methods.", "One possible explanation is that multi-choice methods consider the candidates together, so they can distinguish whether or not the safe response is the best one.", "In contrast, individual scoring methods are not robust, and safe responses easily confuse such methods in the training stage.", "Moreover, RoBERTa-MC outperforms the others by a large margin, showing its outstanding performance on reasoning problems.", "Furthermore, we conduct a transfer experiment, in which models are trained on MuTual but tested on MuTual plus without fine-tuning.", "This experiment investigates whether a model handles safe responses well if it has never seen them in the training corpus.", "As shown in Table 4, RoBERTa-MC and RoBERTa drop 24.1% and 6.8%, respectively, in the transfer setting, demonstrating the benefits of seeing safe responses during the training process.", "Moreover, the individual scoring RoBERTa outperforms RoBERTa-MC, showing that the individual scoring method is more robust when the safe response is not fed during training.", "[Figure 4: BERT-MC and RoBERTa-MC performance on different reasoning types.]", "Performance across different reasoning types: To analyze model performance across different reasoning types, we calculate BERT-MC and RoBERTa-MC performance on the various question types introduced in Section 3.2.", "As shown in Figure 4, we find that the trends of BERT-MC and RoBERTa-MC are similar across the different categories.", "RoBERTa-MC significantly outperforms BERT-MC on attitude reasoning and multi-fact reasoning.", "One potential reason is that RoBERTa-MC captures some common patterns between actions and attitudes, such as play football and excited.", "However, instances that involve algebraic and situational reasoning show poor performance.", "These two reasoning types heavily depend on commonsense reasoning.", "Taking Figure 5 as an example, it takes a simple subtraction step to derive the time difference (5:00 pm - 6 h = 11:00 am), but this turns out to be a significant challenge for RoBERTa-MC.", "In the second case, RoBERTa-MC fails to infer the dialogue situation, where the goal is to find a flat to rent.", "Performance across different context lengths: It is interesting that the performance of RoBERTa does not decrease significantly as the number of turns increases, which is different from the phenomenon observed on other datasets.", "As shown in Table 5, the performance drops by only 1.9 points R@1 from 2 turns to long turns (>6), and the performance with 5 turns is higher than with 4 turns.",
points R@1 from 2 turns to long turns (> 6), and the performance on 5 turns is higher than on 4 turns.", "[Figure 5: two example dialogues with candidate responses. (1) F: Good morning. What can I do for you? M: I am looking for a flat for 2 people near the university. F: Well, there are several places available and the rent ranges from $80 to $150 a month. What are your requirements? M: I think a flat for no more than $100 a month is good. I prefer to live in a quiet street and I need at least 2 bedrooms. Candidates: F: If you have any questions about enrollment, do not hesitate to ask me. / F: How about this flat? If you are satisfied, we can sign the contract tomorrow. / F: We have 2 floors in our supermarket. / F: You want only 1 bedroom, so we have three flats that meet your requirement. (2) F: Do you know what time it is right now in New York? M: Let me see. It's 5:00 pm now, and New York is 6 hours behind. Candidates: F: Let me see, 7 hours behind. It is 11:00 am now in New York. / F: 5 hours ahead. It is 11:00 pm now in New York. / F: Is it 5:00 pm as well? / F: It is 11:00 am now in New York.]", "The results also show that the difficulty of MuTual is attributed to reasoning instead of complex conversation history.", "Context ablation study: We further verify whether our dataset requires multi-turn understanding rather than degenerating into a single-turn reasoning problem.", "We evaluate RoBERTa and RoBERTa-MC performance when some utterances are manually removed.", "Figure 6 shows the performance when the earliest n utterances are removed in testing.", "As the number of ablated utterances increases, the performance of RoBERTa and RoBERTa-MC significantly decreases, which conforms to intuition.", "RoBERTa and RoBERTa-MC achieve only 43.7% and 47.7% after ablating all utterances in the context, respectively, indicating the importance of each utterance and the quality of the dataset.", "Moreover, if we shuffle the sequence of utterances, the performance of RoBERTa-MC drops by only 3.8%, showing that it is insensitive to the utterance sequence information.", "We introduced MuTual, a high-quality manually annotated multi-turn dialogue reasoning dataset, which contains 8,860 dialogues and aims to test the reasoning ability of dialogue models.", "We describe the process for generating MuTual, and perform a detailed analysis.", "We find that various state-of-the-art models show poor performance on MuTual.", "The best model, RoBERTa, obtains only 71.3% R@1.", "There is a large gap between model performance and human performance.", "We hope that this dataset facilitates future research on multi-turn conversation reasoning problems.", "We thank Yulong Chen, Duyu Tang, Zhiyang Teng and Sen Yang for their insightful discussions.", "We also thank all anonymous reviewers for their constructive comments.", "The corresponding author is Yue Zhang.", "We thank the support of a Bright-Dreams Robotics Westlake University research grant." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "method", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.", "We also introduce a strong pipeline model for parsing into the UDS graph structure, and show that our transductive parser performs comparably while additionally performing attribute prediction.", "By analyzing the attribute prediction errors, we find the model captures natural relationships between attribute groups.", "A structured account of compositional meaning has been longstanding goal for both natural language understanding and computational semantics.", "To this end, a number of efforts have focused on encoding semantic relationships and attributes in a semantic graphe.g. Abstract Meaning Representation (AMR; Banarescu et al., 2013), Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013), and Semantic Dependency Parsing (SDP; Oepen et al., 2014, 2015, 2016).", "In these formalisms, semantic information is typically encoded discretely, using nominal category labels for nodes and edges.", "This categorical encoding can make such formalisms brittle when presented with non-prototypical instances, and leads to challenges in coping with changing label ontologies and new datasets (White et al., 2019).", "Furthermore, they are difficult to annotate, often requiring trained linguists and large annotation manuals.", "The Decompositional Semantics framework presents an alternative to categorical formalisms that encodes semantic information in a feature-based schemeusing continuous scales rather than categorical labels.", "Starting with a feature-based semantic role representation rooted in Dowty 1991's (1991) proto-role theory (Reisinger et al., 2015; White et al., 2016), this framework has expanded to cover a wide variety of phenomena: event factuality (Rudinger et al., 2018b), genericity (Govindarajan et al., 2019), entity types (White et al., 2016), and temporal relations (Vashishtha et al., 2019).", "While this rich array of annotation types has been separately modeled, no system yet exists for its joint prediction, which has only recently been made feasible by the introduction of Universal Decompositional Semantics v1.0 (UDS1.0).", "Presented by White et al. (2019), UDS1.0 normalizes all of these annotations, and incorporates them as nodeand edge-level attributes in a single semantic graph whose structure is deterministically extracted from Universal Dependencies (UD; Nivre et al., 2015) syntactic parses via the PredPatt tool (White et al., 2016; Zhang et al., 2017).", "1 An example graph can be seen in Fig. 1.", "We present the first joint UDS parser, which learns to extract both graph structures and attributes from natural language input.", "This parser is a sequence-to-graph transductive model which takes as input a sentence and outputs a UDS graph complete with nodeand edge-level annotations.", "In contrast to the traditional semantic parsing paradigm, which shares its roots with syntactic parsing and rests on the assumption that the nodes in the graph correspond to tokens in the input i.e. 
the graph is lexicalized, the parsing-as-transduction paradigm treats parsing as a sequence-to-graph problem.", "Rather than generating one sequence conditional on another sequence (sequence-to-sequence), we generate the nodes in a graph conditional on an input sequence, dynamically adding their edges during generation.", "As in sequence-to-sequence modeling, the supports of the input and output distributions, i.e. the input and output", "vocabularies, are not constrained to be identical.", "This has two benefits: first, post-hoc methods of obtaining alignments between input sequences and graphs, common especially in AMR parsing, are no longer required; and second, we are able to produce semantic graphs from arbitrary input vocabularies, allowing for future extensions to cross-lingual parsing (Zhang et al., 2018).", "The parsing-as-transduction paradigm thus lends itself perfectly to UDS parsing, since the UDS protocol allows non-lexicalized (as well as cross-lingual) graphs, and these graphs may have nodes with multiple parents, i.e. re-entrant nodes, which pose problems for traditional tree-based methods but are handled natively by the transductive paradigm.", "We compare our end-to-end transductive parser against a strong pipeline system, finding that the parser slightly outperforms the pipeline while additionally learning to produce decompositional attribute scores.", "Our results are reflected in the UDS1.0 leaderboard at http://decomp.io/leaderboards/ .", "Datasets Reisinger et al. (2015) introduce the Decompositional Semantics framework in the context of a corpus-based verification of Dowty's seminal proto-role theory of semantic roles.", "This work was substantially expanded by White et al. (2016), who annotate for semantic proto-roles (SPR), word-sense, and temporal properties on top of semantic graphs extracted from English Web Treebank (EWT; Bies et al., 2012) UD parses using PredPatt (White et al., 2016; Zhang et al., 2017).", "White et", "al.'s EWT annotations are modeled by Teichert et al. (2017), who present a CRF-based multi-label classifier for proto-role labelling, and Rudinger et al. (2018a), who make use of an event-driven neural model.", "More recently, the annotation coverage for the same EWT data was expanded by Vashishtha et al. (2019), who annotate and model fine-grained temporal distinctions, and Govindarajan et al. (2019), who add annotations and models for genericity, i.e. the degree of generality of events and entities in linguistic expressions.", "All of these efforts coalesce in White et al. (2019), which presents the first unified Decompositional Semantics-aligned dataset, Universal Decompositional Semantics v1.0 (UDS1.0), containing all properties annotated on top of EWT parses with standardized train, validation, and testing splits and a native reader and query interface.", "Parsing In most work on decompositional semantics, models are tasked with learning to predict attribute values, but not the structure of the graph.", "Zhang et al.
(2018) develop the first model for performing both graph parsing and UDS attribute prediction in a cross-lingual setting, where Chinese input sentences were transduced into UDS graphs derived from UD parses of the input's English translation.", "This represents the first application of the parsing-as-transduction paradigm to a subset of UDS data, as well as the introduction of a novel graph evaluation metric, S, which we describe in further detail in Section 5.", "In contrast to the end-to-end approach presented here, Zhang et al. take a pipeline approach to parsing.", "Andreas et al. (2013) recast semantic parsing in a tree formalism as a sequence-to-sequence problem.", "Parsing-as-transduction, which extends this approach to directed acyclic graphs, has proven to be applicable in a variety of settings: Zhang et al. (2019a) use it to achieve state-of-the-art results in AMR parsing.", "These results are improved upon and shown to generalize to two other semantic formalisms (UCCA and SDP) by Zhang et al. (2019b), which sets new state-of-the-art benchmarks for AMR and UCCA.", "The former result was subsequently surpassed by Cai and Lam (2020), which applies a similar transductive approach, while the latter was surpassed by Jiang et al. (2019).", "Having both been subjects of SemEval tasks (May, 2016; May and Priyadarshi, 2017; Oepen et al., 2019; Hershcovich et al., 2019), there are a number of contrasting methods for both AMR and UCCA parsing.", "These include transition-based parsing systems for AMR (Wang et al., 2018; Goodman et al., 2016; Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017) and for UCCA (Hershcovich et al., 2017).", "In a similar vein to Zhang et al. (2019b), Hershcovich et al. (2018a) convert multiple formalisms into a unified formalism and use multitask learning for improved UCCA parsing; however, the latter does so at a loss to performance on the other formalisms, while Zhang et al. achieve state-of-the-art results in AMR and UCCA simultaneously.", "UCCA has also been shown to transfer to syntactic parsing: by converting UD parse trees into a format resembling UCCA, Hershcovich et al.
(2018b) are able to apply a UCCA parser to both standard UD parses as well as enhanced UD parses, which contain re-entrant nodes.", "The UDS1.0 dataset is built on top of the UD-EWT data with three layers of annotations: UD parses, PredPatt graph structure, and decompositional semantic annotations on the edge and node level.", "In addition to specifying the syntactic head and head relation of every token in the input, UD parses include lexical features, such as word form, word lemma, and part-of-speech (POS) tag.", "This forms the syntactic graph, which is lexicalized (each token is tied to a node in the graph).", "From these pieces of information, PredPatt outputs a set of predicates and their arguments.", "Each predicate and argument is tied via an instance edge to a particular node in the syntactic graph.", "Because both predicates and arguments can consist of multi-word spans, there can be multiple instance edges leaving a semantic node.", "The semantic graph contains edges between predicates and arguments; in the case of clausal embedding, there can also be argument-argument edges.", "UDS1.0 [Figure 2 arborescence node labels: asked, Hiller, Bush (1), name, leaders, the, of, Che]", "includes performative speaker/author and addressee nodes, which model discourse properties of the sentence.", "These nodes are structural placeholders for future discourse-level annotations; as these properties have not yet been annotated, we have opted to remove them from the graphs.", "The crowdsourced decompositional annotations tied to the semantic subgraph can be divided into node-level annotations and edge-level annotations.", "On the node level, annotations were collected for factuality, genericity, time, and entity type.", "Edge-level annotations are in the space of semantic proto-roles, which are designed to provide a nuanced higher-dimensional substrate for notions of agency and patienthood.", "These are summarized in Table 1, where purple indicates a high attribute score, while orange indicates a low score.", "For further details on attribute types and data annotation, see White et al. (2019) and the references therein.", "Arborescence Recall that the lowest level of the UDS graph (Fig. 1) is a syntactic dependency parse.", "Modeling this level is out of scope for this work, as we are interested in modeling the semantic structure and attributes.", "In order to train a parsing-as-transduction model, an arborescence, a hierarchical tree structure which has only edge and node annotations, is required.", "From the full UDS graph, we construct the arborescence by:", "(a) Assigning each semantic node a lexical label; this label is taken from the syntactic head that the semantic node dominates.", "The only exception to this is in the case of embedded clauses, where an argument node dominates an embedded predicate.", "Here, we follow PredPatt, assigning the label SOMETHING to the embedded argument (cf. Fig. 2).", "Since these placeholder nodes are currently added deterministically, recovering them is also a deterministic operation.", "(b) Retaining all edges between semantic nodes as argument edges, duplicating nodes in cases of re-entrancy (e.g. Bush (1) in Fig. 2).", "(c) Converting the deep syntactic structure into a shallow representation, where we introduce non-head edges from the syntactic head (attached to a semantic node) to each node it dominates, and remove all other syntax-semantics edges.", "This effectively linearizes the yield of each semantic node (see Fig.
2).", "Our model is based on the transductive broad-coverage parsing model presented in Zhang et al. (2019b), which can be consulted for further details on the encoder, decoder, and pointer-generator modules.", "The original parser is composed of six major modules: the encoder, the decoder embedding module, the target node module, the target label module, the head module, and the relation module.", "In this work we introduce two new modules: the node attribute module and the edge attribute module, as well a loss function for attributes.", "Encoder The encoder module takes a concatenation of multiple input features: GloVe token embeddings (Pennington et al., 2014), POS tag embeddings, character CNN embeddings, and BERT (Devlin et al., 2019) contextual embeddings (mean-pooled over subwords).", "These representations are passed through a stacked bidirectional LSTM encoder, which has the following definition: s lt = (cid:20) s lt s lt (cid:21) = (cid:34) LSTM ( s l 1 t , s tt 1 ) LSTM ( s l 1 t , s tt +1 ) (cid:35) where arrows denote the LSTM direction, t denotes the timestep, and l denotes the layer of the stack.", "Decoder embedding module In order to generate new semantic nodes and relationships, a method of embedding categorical semantic information is required.", "More formally, a semantic relation is given by a tuple (cid:104) u i , d ui , r i , v i , d vi (cid:105) , where u i denotes the head token of index i and v i denotes the token at index i .", "Note that these tokens are the labels of nodes in the arborescence (see Fig 2.) d ui and d vi are the indices of u i and v i , while r i is the relationship type between v i and u i .", "The decoder embedding module embeds these categorical variables into real space, producing a tuple of vectors (cid:104) u i , d ui , r i , v i , d vi (cid:105) .", "For node labels u i and v i , we take the concatenation of GloVe and CharCNN features.", "r i , d vi and d ui are randomly initialized.", "Target Node Module From the continuous embedding of a semantic relation (cid:104) u i , d ui , r i , v i , d vi (cid:105) we want to obtain a latent node representation z i .", "We initialize the hidden states of the 0 th layer and the hidden states of the 0 th state in each layer to h 0i = [ v i ; d vi ] h l0 = [ s l 1 ; s ln ] respectively.", "Further, let c i be a context vector over encoder states s l 1: n , defined as a (enc) i = softmax (cid:0) MLP (enc) ([ h li ; s l 1: n ]) (cid:1) c i = a Ti s l 1: n Let h li and z i be defined as follows: z i = MLP (relation) ([ h li ; c i ; r i ; u i ; d ui ]) h li = LSTM ( h l 1 i , h li 1 ) where z i can be thought as a representation of node i in the graph, conditioned on previous nodes (via h li as well as the input text via c i , the graph token (via u i and d ui ) and the relation type (via r i ).", "Using this representation z i , Zhang et al. 
(2019b) introduce an extended pointer-generator network (See et al., 2017) which computes the distribution over the next node label $v_{i+1}$: $[p_{\mathrm{gen}}, p_{\mathrm{enc}}, p_{\mathrm{dec}}] = \mathrm{softmax}\big(\mathrm{MLP}^{(\mathrm{switch})}(z_i)\big)$, $a_i^{(\mathrm{dec})} = \mathrm{softmax}\big(\mathrm{MLP}^{(\mathrm{dec})}([z_{1:i}])\big)$, $p_i^{(\mathrm{vocab})} = \mathrm{softmax}\big(\mathrm{MLP}^{(\mathrm{vocab})}(z_i)\big)$, $P(v_{i+1}) = [\,p_{\mathrm{gen}}\, p_i^{(\mathrm{vocab})};\ p_{\mathrm{enc}}\, a_i^{(\mathrm{enc})};\ p_{\mathrm{dec}}\, a_i^{(\mathrm{dec})}\,]$. From this last equation, we have that the generation of a new node is decomposed into three options: (1) generate a new node from a vocabulary of node labels, (2) copy a node label directly from the input sequence (lexicalization), or (3) copy a node label from a previously generated node (re-entrancy); a small numeric sketch of this mixture follows this record.", "Parsing modules To obtain a parse from the node states $h_{1:n}$, a head node and relation type must be assigned to each node in $1:n$.", "In order to assign a head node, we instantiate two multilayer perceptrons (MLPs): $\mathrm{MLP}^{(\mathrm{start})}$ and $\mathrm{MLP}^{(\mathrm{end})}$, where (start) denotes the starting node of the edge and (end) denotes its target.", "Using these MLPs, for node $i+1$ we obtain $h_{i+1}^{(\mathrm{start})} = \mathrm{MLP}^{(\mathrm{start})}(h_{i+1}^l)$, $h_{1:i}^{(\mathrm{end})} = \mathrm{MLP}^{(\mathrm{end})}(h_{1:i}^l)$, and $P(u_{i+1}) = \mathrm{softmax}\big(\mathrm{BIAFFINE}(h_{i+1}^{(\mathrm{start})}, h_{1:i}^{(\mathrm{end})})\big)$. The next relationship $r_{i+1}$ is computed in a similar fashion, also using two MLPs: $h_{i+1}^{(\mathrm{rel\text{-}src})} = \mathrm{MLP}^{(\mathrm{rel\text{-}src})}(h_j^l)$, $h_{i+1}^{(\mathrm{rel\text{-}tgt})} = \mathrm{MLP}^{(\mathrm{rel\text{-}tgt})}(h_{i+1}^l)$, and $P(r_{i+1}) = \mathrm{softmax}\big(\mathrm{BILINEAR}(h_{i+1}^{(\mathrm{rel\text{-}src})}, h_{i+1}^{(\mathrm{rel\text{-}tgt})})\big)$, where $j$ is the index of the head assigned to the node indexed by $i+1$.", "BIAFFINE is defined in Dozat and Manning (2016).", "$\mathrm{BILINEAR}(x_1, x_2) = x_1 A x_2 + b$, where $A$ and $b$ are learned parameters.", "Node attribute module As noted in previous UDS projects, an important step in decompositional attribute annotation is determining whether a property applies in a given context.", "For example, factuality typically applies only to predicate nodes.", "Since all nodes (predicate and argument) are treated identically w.r.t. their semantic relations $z_i$, this work introduces a two-fold node attribute model, which predicts whether a property $j$ applies to a node $i$ via a binary mask $\alpha_i^j$ as well as its value $\theta_i^j$.", "This module defines $\alpha_i^j$ and $\theta_i^j$ as follows: $P(\alpha_i^j) = \mathrm{sigmoid}\big(\mathrm{MLP}^{(\mathrm{node\text{-}mask})}(z_i)\big)$ and $\theta_i^j = \mathrm{MLP}^{(\mathrm{node\text{-}attr})}(z_i)$.", "Edge attribute module As in the case of node attributes, edge attributes do not apply in all cases.", "Therefore, a similar bifurcation strategy is pursued with edge attribute prediction: we predict a binary attribute mask $\beta_{s,e}^j$ for attribute $j$ on edge $s \to e$ as well as an attribute value $\psi_{s,e}^j$.", "These are given by: $m_{s,e}^{(\mathrm{mask})} = \mathrm{BILINEAR}^{(\mathrm{mask})}(h_s^l, h_e^l)$, $m_{s,e}^{(\mathrm{attr})} = \mathrm{BILINEAR}^{(\mathrm{attr})}(h_s^l, h_e^l)$, $P(\beta_{s,e}^j) = \mathrm{sigmoid}\big(\mathrm{MLP}^{(\mathrm{edge\text{-}mask})}(m_{s,e}^{(\mathrm{mask})})\big)$, and $\psi_{s,e}^j = \mathrm{MLP}^{(\mathrm{edge\text{-}attr})}(m_{s,e}^{(\mathrm{attr})})$.", "Training The nodes in the graph are linearized in a pre-order traversal over the arborescence, which ensures that at prediction time, we have seen the potential antecedent of a node for target-side copying (e.g. Bush (1) in Fig.
2), determining the order of semantic nodes in the graph.", "The syntactic children of these nodes are presented in the order they appear in the text.", "The loss functions for the node, head, and relation prediction modules are cross-entropy loss, while binary cross-entropy loss is used for the masks $\alpha$ and $\beta$, since each position in the mask is a separate classification decision.", "The loss function used for $K$ attributes $\theta^{1:K}$ on $N$ nodes/edges is given by: $\mathbb{1}(x) = \begin{cases} 0 & \text{if } x \leq 0 \\ 1 & \text{otherwise} \end{cases}$, $L_{\mathrm{MSE}}(\hat{\theta}, \theta) = \frac{1}{NK}\sum_{i=1}^{N}\sum_{j=1}^{K} c_i^j\,(\hat{\theta}_i^j - \theta_i^j)^2$, $L_{\mathrm{BCE}}(\hat{\theta}, \theta) = -\frac{1}{NK}\sum_{i=1}^{N}\sum_{j=1}^{K} \Big( \mathbb{1}(\theta_i^j)\,\log \sigma(\hat{\theta}_i^j) + \big(1 - \mathbb{1}(\theta_i^j)\big)\log\big(1 - \sigma(\hat{\theta}_i^j)\big) \Big)$, and $L(\hat{\theta}, \theta) = \gamma\,\frac{2\, L_{\mathrm{MSE}}(\hat{\theta}, \theta)\, L_{\mathrm{BCE}}(\hat{\theta}, \theta)}{L_{\mathrm{MSE}}(\hat{\theta}, \theta) + L_{\mathrm{BCE}}(\hat{\theta}, \theta)}$, where $\gamma$ is a scaling factor, $c_i^j$ is the annotator confidence for annotation $j$ on token $i$, $\hat{\theta}$ is the set of predicted attributes, and $\theta$ is the set of true attributes (a worked numeric sketch of this loss follows this record).", "Note that inclusion of the confidence mask $c_i^j$ means the model only incurs loss on attributes annotated for a given node, since $c_i^j = 0$ when an annotation is missing (i.e. no MSE loss is incurred for attributes which do not apply to a node or edge); in the binary experimental setting, we replace $c_i^j$ with $\mathbb{1}(c_i^j)$, removing the weighting but still masking out loss on un-annotated nodes.", "Also note that in the case of edges, the form of the loss is identical, but $\theta$ is replaced by $\psi$, and $\alpha$ by $\beta$.", "This loss encourages the predicted attribute value $\hat{\theta}_i^j$ to be close in value to the true value $\theta_i^j$ via the mean-squared error criterion while concomitantly encouraging the predicted and reference values to share a sign via the thresholded cross-entropy criterion.", "Both node and edge attribute models are trained to predict attribute values independently, and parameters are shared across attributes.", "This is central to our analysis in Section 7.", "Following Zhang et al. (2019b) we train the structural parsing modules with coverage loss (See et al., 2017).", "All models were trained to convergence using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001.", "Pipeline Model Recall from Section 3 that the semantic graph structure in UDS graphs is deterministically generated from PredPatt, which takes as input a UD parse and outputs a semantic graph structure.", "This leads to a strong pipeline model for the graph structure alone: running a high-performing UD parser, the Stanford UD parser (Chen and Manning, 2014), and passing its output through PredPatt to create a structure.", "(This structure is missing the core decompositional attributes but has both predicate and argument nodes.", "Additionally, the pipeline model fails to capture nominal heads of copular predicates (e.g. Jo is a doctor), which are not returned by PredPatt but are added to the dataset as a preprocessing step in the genericity annotation task.)", "For this baseline, the only source of error is the UD parsing model, which for English performs very highly.", "S Metric For evaluating the quality of output graph structures, Smatch (Cai and Knight, 2013), a hill-climbing approach to approximating the optimal matching between variables in two graphs, is commonly used.", "While Smatch can match categorical variables such as those found in meaning representations like AMR, it lacks a matching function for continuous variables such as decompositional attributes.", "To remedy this, Zhang et al.
(2018) introduce the S metric, an extension to Smatch that allows for attribute matching.", "Using hill-climbing, we are able to match instance and attribute nodes and edges; instance nodes are matched via string match, while attribute similarity is given by $1 - \left(\frac{\theta_i - \theta_j}{\Delta}\right)^2$, where $\Delta = 6$ is the maximum possible difference between attributes, which are bounded on $[-3, 3]$ (a short numeric sketch of this similarity follows this record).", "Results Table 5 shows the Pearson's correlation coefficient ($\rho$) and the F1 score computed on binarized responses for each node and edge attribute under the oracle decoding setting, where a gold graph structure is provided to the model.", "An asterisk denotes that $p < 0.05$, where $p$ is determined by a Student's t-test.", "F1 scores are obtained by binarizing continuous attribute predictions into positive and negative, following from the original UDS motivation found in Dowty (1991), where binary proto-role features were introduced.", "The binarization threshold was tuned per attribute on the validation set.", "The baseline column in Table 5 shows the binarized F1 score for the baseline attribute model, given by predicting the median attribute value for each attribute type at each position.", "Pearson's $\rho$ is undefined for this approach, as the variance of the predicted distribution is 0.", "The thresholds were similarly tuned on validation data for this baseline.", "Table 2 shows S metric (cf. Section 5) precision, recall, and F1 score as computed on full arborescences (this similarity function was found to produce more matches on UDS1.0 than the $e^{-\mathrm{MAE}}$ function used by Zhang et al. (2018))", "with both semantics and syntax nodes.", "Our parser slightly outperforms the pipeline, with higher performance in the binary setting, where we exclude annotator confidence from the loss.", "Table 3 shows precision, recall, and F1 score on semantics nodes alone.", "The first parser setting (syntax) reflects a parsing model trained on full graphs, and evaluated only on the semantic subgraphs of the produced graphs.", "The second parser (semantics) is directly trained on semantic subgraphs, with no syntactic nodes in the training graphs.", "The full parser performs comparably to the pipeline, while the parser trained specifically on semantics-only graphs outperforms the pipeline.", "However, the mean attribute $\rho$ of the syntactic parser (0.3433) exceeded that of the semantics-only parser (0.3151).", "Table 4 gives the S metric results on full graphs predicted by the model, including attribute matching.", "The pipeline model is unable to perform this task because it predicts structure alone, without attributes.", "We see that training the parser with shared MLP and BILINEAR modules (i.e.
$\mathrm{MLP}^{(\mathrm{mask})} = \mathrm{MLP}^{(\mathrm{attr})}$ and $\mathrm{BILINEAR}^{(\mathrm{mask})} = \mathrm{BILINEAR}^{(\mathrm{attr})}$) for both the attribute mask and attribute value heavily degrades the performance, while removing annotator confidence increases it slightly.", "Table 2 suggests that the structural quality of the parses obtained by the parsing model presented here is slightly superior to that of the pipeline model's parses, with Table 3 indicating that the semantic component of the graph can be parsed significantly more accurately by our model.", "Taken together with Table 5, we can conclude that the model is able to learn to jointly predict the graph structure and attributes.", "This is further reinforced by Table 4.", "Note that the numbers reported in Tables 2 and 4 are not directly comparable, as the scores in Table 4 additionally include attribute matching.
Table 4: Test set precision, recall, and F1 computed via S score with attributes (syntactic nodes included)
Method             P      R      F1
Shared             79.52  32.48  46.12
Separate           83.46  82.27  82.86
Separate (binary)  84.19  84.19  84.19
Table 5: Pearson's ρ, baseline F1, and model F1 for each UDS attribute given gold test-set graph structures (* marks p < 0.05)
Property                        ρ (model)  F1 (baseline)  F1 (model)
node level: factuality
factuality-factual              0.6479*    75.15          84.46
node level: genericity
arg-abstract                    0.3392*    40.04          48.05
arg-kind                        0.2145*    67.61          67.54
arg-particular                  0.3347*    83.10          84.62
pred-dynamic                    0.2469*    72.49          71.19
pred-hypothetical               0.3442*    44.16          50.21
pred-particular                 0.1887*    77.47          78.16
node level: time
dur-centuries                   0.1336*    10.14          12.30
dur-days                        0.1802*    68.72          68.21
dur-decades                     0.2383*    29.89          34.19
dur-forever                     0.2524*    37.93          38.58
dur-hours                       0.2227*    73.66          73.61
dur-instant                     0.1761*    55.98          51.90
dur-minutes                     0.3409*    86.28          87.05
dur-months                      0.3204*    63.25          64.42
dur-seconds                     0.2751*    65.33          64.75
dur-weeks                       0.2475*    54.02          55.41
dur-years                       0.4239*    65.03          66.19
node level: word sense
supersense-noun.Tops            0.4660*    7.34           40.00
supersense-noun.act             0.6007*    27.37          56.39
supersense-noun.animal          0.3773*    5.60           25.64
supersense-noun.artifact        0.5617*    23.12          52.79
supersense-noun.attribute       0.4505*    10.81          29.27
supersense-noun.body            0.4543*    1.53           42.86
supersense-noun.cognition       0.5692*    21.17          50.56
supersense-noun.communication   0.6182*    30.60          62.12
supersense-noun.event           0.4233*    5.80           33.61
supersense-noun.feeling         0.2404*    2.74           5.45
supersense-noun.food            0.6773*    7.15           67.72
supersense-noun.group           0.5650*    15.57          55.22
supersense-noun.location        0.5118*    7.81           55.64
supersense-noun.motive          0.3447*    0.62           50.00
supersense-noun.object          0.2276*    2.04           19.05
supersense-noun.person          0.6091*    15.74          61.25
supersense-noun.phenomenon      0.2955*    2.04           8.85
supersense-noun.plant           0.0358     0.21           13.33
supersense-noun.possession      0.5247*    6.67           47.62
supersense-noun.process         0.1292*    1.13           3.96
supersense-noun.quantity        0.4403*    4.92           36.11
supersense-noun.relation        0.2089*    2.34           11.94
supersense-noun.shape           0.0659*    0.31           1.55
supersense-noun.state           0.4877*    11.36          36.17
supersense-noun.substance       0.2411*    1.43           3.64
supersense-noun.time            0.5175*    10.99          51.43
edge level: protoroles
awareness                       0.6715*    68.20          81.99
change-of-location              0.1061*    38.98          36.90
change-of-possession            0.0452     14.93          20.00
change-of-state                 0.0448     42.59          37.21
change-of-state-continuous      0.0793     31.47          27.69
existed-after                   0.3910*    93.33          95.58
existed-before                  0.4802*    91.60          92.31
existed-during                  0.3247*    98.31          98.61
instigation                     0.3820*    74.48          76.77
partitive                       0.0213     31.91          34.64
sentient                        0.6494*    64.67          82.81
volition                        0.5501*    63.79          79.86
was-for-benefit                 0.2389*    59.87          62.11
was-used                        0.1608*    86.64          89.00
macro-average                   0.3433     37.20          50.66", "Table 3 shows that a parser trained on semantic subgraphs better recovers the subgraphs than a
parser trained on full graphs whose outputs are postprocessed to remove syntactic nodes.", "However, the fact that the parser trained on full graphs achieves a higher Pearson's $\rho$ score indicates that the inclusion of syntactic nodes may provide additional information for predicting UDS attributes.", "In examining instances with an S score below 50, we observe two trends: the input sentences are often ungrammatical, and for 63.82% (on the validation set) the model predicts no output nodes.", "While the pipeline system does well on modeling semantic graph structure, it is by its definition unable to perform attribute parsing.", "In contrast, the results presented in Tables 4 and 5 show that the parser can jointly learn to produce semantic graphs and annotate them with attributes.", "Finally, we find that while weighting the loss with the confidence scores has a small benefit in the semantics-only setting, it hurts overall attribute and structure prediction performance.", "This may be due to the relatively small size of the UDS dataset, which makes a strategy that effectively weakens the loss signal at training time less effective.", "Figs.", "3a-3c show the correlational strength coefficient between the true and predicted attributes under a forced decode of the graph structure.", "It is defined over property type indices $j, k$ with predicted attribute values $\hat{\theta}^j$ and true values $\theta^j$ as: $\rho(j, k) = \tanh\left(1 - \frac{|\mathrm{corr}(\hat{\theta}^j - \theta^j,\ \hat{\theta}^k - \theta^k)|}{|\mathrm{corr}(\theta^j,\ \theta^k)|}\right)$, where $\mathrm{corr}(\cdot, \cdot)$ is Pearson's correlation coefficient (a numeric sketch follows this record).", "Further details are given in Appendix A. $\rho(i, j)$ reflects how well the model captures the strength of the correlations (either positive or negative) between two attribute types in the dataset: a positive value indicates that the model captures the correlation to some extent, with values closer to 1 implying better performance; a value of 0 indicates that the model does not capture the correlation at all, or that no significant interaction was present; a negative value indicates that the model makes systematic mistakes while predicting the two variables, e.g. when the model under-predicts the value of property $i$, it also under-predicts property $j$'s value.", "A Bonferroni-corrected non-parametric bootstrap test (1000 replicants) was used for significance testing, with failing pairs being said to not be reliably different from 0 correlation.", "[Figure 3(a): $\rho$ between argument-node attribute pairs.]", "[Subset of wordsenses used for readability.]", "Fig. 3a shows the $\rho$ values of argument-node attributes, with most values close to -1.", "However, we do see positive correlations between some of the genericity annotations, [Table 6 excerpt: sentence (A): She was untrained and, in one botched job, killed a client; awareness values: (A) 3 vs. ours 3.04, (B) 1 vs. ours 3.69, (C) 5 vs. ours 3.68]", "as well as between genericity-arg-abstract, which rates how conceptually abstract an argument is, and the cognition wordsense, which applies to abstract terms such as doubts and thoughts.", "In Fig. 3b, we again observe several negative values; however, some positive correlations can be seen between certain time properties, such as duration-days, duration-weeks, and duration-months, as well as more strongly positive $\rho$'s between certain genericity annotations.", "The positive $\rho$ between factuality and genericity-hypothetical indicates the model has captured the commonalities between predicates with these annotations.", "In contrast to the node attributes, Fig.
3c shows stronger results for edge attribute prediction, with all significant $\rho$'s being positive, and related attributes falling into clusters (e.g. volition, awareness, sentience, or the existed attributes). Qualitative examples Table 6 lists three sentences from Reisinger et al. (2015) along with a relevant subset of their original SPR properties and values; the scale in Reisinger et al. was ordinal from 1-5, with 1 corresponding to very unlikely, 5 to very likely, and 3 to neutral.", "Our model's predictions for the same sentences and properties are given as well, mapped onto $[1, 5]$.", "We first note that the structural component of the model is sufficiently strong that the correct predicate-argument edges were extracted during parsing, allowing for a direct comparison between the annotations by Reisinger et al. and the parser's predictions.", "We see that while for sentence (C), the model captures at least the correct direction of the protorole annotations, it overgeneralizes these results to (B), where a more nuanced analysis is required.", "For (A), we see that on most attributes the model captures the desired binary direction of the inferences, but that it fails on sentience.", "Overall, the model's predictions are weaker than the desired output, even when the prediction is on the correct side of the midpoint, 3.", "This might help explain the disparity between Pearson and F1 scores in Table 5, and represents a direction for future work.", "Note that to obtain attributes for (A) and (B), the threshold for the masks was dropped; ideally, this would not be required.", "The scalar-valued, multi-attribute nature of UDS provides for a distinct structured prediction problem as compared to other existing representations.", "We have demonstrated how a transductive parsing paradigm that has achieved state-of-the-art results on other representations can be adapted to UDS1.0 structures and attributes, and have provided procedures for analysis, with the fine-grained nature of UDS allowing for investigating novel correlations and aspects of meaning.", "While UDS structures and various attribute types have been modeled separately (Vashishtha et al., 2019; Govindarajan et al., 2019; White et al., 2016; Rudinger et al., 2018a,b; Zhang et al., 2018), this work represents the first time all of these attributes and structures have been modeled jointly, and establishes a baseline for future efforts on UDS1.0.", "We envision future efforts exploring the interactions between improving the underlying graph-structure prediction and ever-better correlations to human judgements on individual properties.", "This work was supported by NSF Awards #1749025 and #1763705, DARPA LORELEI and AIDA, and IARPA BETTER.", "We thank the anonymous reviewers for their constructive feedback.", "The views and conclusions expressed herein are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, IARPA, or the U.S. Government." ]
[ "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "result", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
[ "(cid:58)(cid:82)(cid:85)(cid:71) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:72)(cid:80)(cid:83)(cid:82)(cid:90)(cid:72)(cid:85)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:68)(cid:71)(cid:71)(cid:76)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:75)(cid:68)(cid:89)(cid:72) (cid:69)(cid:72)(cid:72)(cid:81) (cid:90)(cid:76)(cid:71)(cid:72)(cid:79)(cid:92) (cid:86)(cid:87)(cid:88)(cid:71)(cid:76)(cid:72)(cid:71) (cid:68)(cid:81)(cid:71) (cid:83)(cid:85)(cid:82)(cid:89)(cid:72)(cid:71) (cid:87)(cid:82) (cid:82)(cid:88)(cid:87)(cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80) (cid:87)(cid:85)(cid:68)(cid:71)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81)(cid:68)(cid:79) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86)(cid:17) (cid:38)(cid:88)(cid:85)(cid:85)(cid:72)(cid:81)(cid:87) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86) (cid:80)(cid:68)(cid:76)(cid:81)(cid:79)(cid:92) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86) (cid:82)(cid:81) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:73)(cid:82)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:90)(cid:75)(cid:76)(cid:79)(cid:72) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:73) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:85)(cid:72)(cid:71) (cid:87)(cid:82) (cid:68)(cid:86) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86)(cid:12) (cid:68)(cid:85)(cid:72) (cid:71)(cid:76)(cid:86)(cid:70)(cid:68)(cid:85)(cid:71)(cid:72)(cid:71) (cid:68)(cid:73)(cid:87)(cid:72)(cid:85) (cid:87)(cid:75)(cid:72) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:90)(cid:82)(cid:85)(cid:78) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:86) (cid:68) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:82) (cid:77)(cid:82)(cid:76)(cid:81)(cid:87)(cid:79)(cid:92) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:69)(cid:82)(cid:87)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:69)(cid:92) (cid:76)(cid:81)(cid:70)(cid:82)(cid:85)(cid:83)(cid:82)(cid:85)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:80)(cid:82)(cid:85)(cid:16) (cid:83)(cid:75)(cid:82)(cid:79)(cid:82)(cid:74)(cid:76)(cid:70)(cid:68)(cid:79)(cid:15) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:87)(cid:76)(cid:70)(cid:15) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:16) (cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) 
(cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:79)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:74)(cid:72)(cid:86) (cid:68)(cid:81) (cid:76)(cid:81)(cid:81)(cid:82)(cid:16) (cid:89)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:73)(cid:76)(cid:81)(cid:72)(cid:16)(cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:83)(cid:76)(cid:83)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:76)(cid:81)(cid:87)(cid:72)(cid:74)(cid:85)(cid:68)(cid:87)(cid:72)(cid:86) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:83)(cid:79)(cid:72) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:68)(cid:81)(cid:71) (cid:83)(cid:85)(cid:82)(cid:71)(cid:88)(cid:70)(cid:72)(cid:86) (cid:75)(cid:76)(cid:74)(cid:75)(cid:16) (cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:92) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:73)(cid:82)(cid:85) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:86)(cid:88)(cid:83)(cid:85)(cid:72)(cid:80)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:17) (cid:36) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:68)(cid:79)(cid:74)(cid:82)(cid:85)(cid:76)(cid:87)(cid:75)(cid:80) (cid:76)(cid:86) (cid:68)(cid:79)(cid:86)(cid:82) (cid:71)(cid:72)(cid:86)(cid:76)(cid:74)(cid:81)(cid:72)(cid:71) (cid:87)(cid:82) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:73)(cid:82)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:86) (cid:69)(cid:92) (cid:70)(cid:68)(cid:83)(cid:87)(cid:88)(cid:85)(cid:76)(cid:81)(cid:74) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:86) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75)(cid:76)(cid:81) (cid:72)(cid:68)(cid:70)(cid:75) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:68)(cid:87) (cid:76)(cid:86) (cid:86)(cid:75)(cid:68)(cid:85)(cid:72)(cid:71) (cid:68)(cid:70)(cid:85)(cid:82)(cid:86)(cid:86) (cid:87)(cid:75)(cid:72)(cid:80)(cid:17) (cid:40)(cid:91)(cid:83)(cid:72)(cid:85)(cid:76)(cid:80)(cid:72)(cid:81)(cid:87)(cid:68)(cid:79) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:82)(cid:73) (cid:79)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:68)(cid:81)(cid:71) (cid:71)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:81)(cid:68)(cid:87)(cid:88)(cid:85)(cid:68)(cid:79) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72) (cid:83)(cid:85)(cid:82)(cid:16) (cid:70)(cid:72)(cid:86)(cid:86)(cid:76)(cid:81)(cid:74) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:76)(cid:79)(cid:79)(cid:88)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:70)(cid:68)(cid:81) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:69)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71) 
(cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:16) (cid:71)(cid:76)(cid:81)(cid:74)(cid:86)(cid:17) (cid:52)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:86)(cid:75)(cid:82)(cid:90) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:79)(cid:92) (cid:70)(cid:68)(cid:83)(cid:87)(cid:88)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:16) (cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17)", "(cid:39)(cid:76)(cid:86)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:72)(cid:71) (cid:90)(cid:82)(cid:85)(cid:71) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:68)(cid:79)(cid:86)(cid:82) (cid:81)(cid:68)(cid:80)(cid:72)(cid:71) (cid:68)(cid:86) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:15) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:86) (cid:72)(cid:68)(cid:70)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:86) (cid:68) (cid:89)(cid:72)(cid:70)(cid:87)(cid:82)(cid:85) (cid:76)(cid:81) (cid:68) (cid:70)(cid:82)(cid:81)(cid:87)(cid:76)(cid:81)(cid:88)(cid:82)(cid:88)(cid:86) (cid:89)(cid:72)(cid:70)(cid:87)(cid:82)(cid:85) (cid:86)(cid:83)(cid:68)(cid:70)(cid:72)(cid:17) (cid:39)(cid:88)(cid:72) (cid:87)(cid:82) (cid:76)(cid:87)(cid:86) (cid:86)(cid:87)(cid:85)(cid:82)(cid:81)(cid:74) (cid:68)(cid:69)(cid:76)(cid:79)(cid:16) (cid:76)(cid:87)(cid:92) (cid:82)(cid:73) (cid:72)(cid:81)(cid:70)(cid:82)(cid:71)(cid:76)(cid:81)(cid:74) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:76)(cid:86) (cid:88)(cid:86)(cid:72)(cid:73)(cid:88)(cid:79) (cid:76)(cid:81) (cid:80)(cid:68)(cid:81)(cid:92) (cid:71)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:49)(cid:47)(cid:51) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:87)(cid:72)(cid:91)(cid:87) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:58)(cid:76)(cid:72)(cid:87)(cid:76)(cid:81)(cid:74) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:30) (cid:60)(cid:76)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:25)(cid:12)(cid:15) (cid:81)(cid:68)(cid:80)(cid:72)(cid:71) (cid:72)(cid:81)(cid:87)(cid:76)(cid:87)(cid:92) (cid:85)(cid:72)(cid:70)(cid:82)(cid:74)(cid:81)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81) (cid:11)(cid:49)(cid:40)(cid:53)(cid:12) (cid:11)(cid:38)(cid:82)(cid:79)(cid:79)(cid:82)(cid:69)(cid:72)(cid:85)(cid:87) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:20)(cid:30) (cid:54)(cid:88)(cid:81) (cid:72)(cid:87) 
(cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:24)(cid:12)(cid:15) (cid:72)(cid:87)(cid:70)(cid:17) (cid:38)(cid:79)(cid:68)(cid:86)(cid:16) (cid:86)(cid:76)(cid:70) (cid:68)(cid:83)(cid:83)(cid:85)(cid:82)(cid:68)(cid:70)(cid:75)(cid:72)(cid:86) (cid:80)(cid:68)(cid:76)(cid:81)(cid:79)(cid:92) (cid:87)(cid:85)(cid:72)(cid:68)(cid:87)(cid:72)(cid:71) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:68)(cid:86) (cid:68)(cid:87)(cid:82)(cid:80)(cid:76)(cid:70) (cid:87)(cid:82)(cid:16) (cid:78)(cid:72)(cid:81)(cid:86)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:58)(cid:82)(cid:85)(cid:71)(cid:57)(cid:72)(cid:70) (cid:11)(cid:48)(cid:76)(cid:78)(cid:82)(cid:79)(cid:82)(cid:89) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:22)(cid:69)(cid:15)(cid:68)(cid:12) (cid:68)(cid:81)(cid:71) (cid:42)(cid:79)(cid:82)(cid:57)(cid:72) (cid:11)(cid:51)(cid:72)(cid:81)(cid:81)(cid:76)(cid:81)(cid:74)(cid:87)(cid:82)(cid:81) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:23)(cid:12)(cid:17) (cid:53)(cid:72)(cid:70)(cid:72)(cid:81)(cid:87)(cid:79)(cid:92)(cid:15) (cid:80)(cid:68)(cid:81)(cid:92) (cid:85)(cid:72)(cid:86)(cid:72)(cid:68)(cid:85)(cid:70)(cid:75)(cid:72)(cid:85)(cid:86) (cid:76)(cid:81)(cid:87)(cid:85)(cid:82)(cid:71)(cid:88)(cid:70)(cid:72)(cid:71) (cid:86)(cid:88)(cid:69)(cid:90)(cid:82)(cid:85)(cid:71) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)", "(cid:87)(cid:82) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:68)(cid:71)(cid:89)(cid:68)(cid:81)(cid:70)(cid:72)(cid:71) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:73)(cid:82)(cid:85) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:79)(cid:68)(cid:81)(cid:74)(cid:88)(cid:68)(cid:74)(cid:72)(cid:86)(cid:15) (cid:76)(cid:81)(cid:70)(cid:79)(cid:88)(cid:71)(cid:76)(cid:81)(cid:74) (cid:40)(cid:81)(cid:74)(cid:79)(cid:76)(cid:86)(cid:75) (cid:11)(cid:37)(cid:82)(cid:77)(cid:68)(cid:81)(cid:82)(cid:90)(cid:86)(cid:78)(cid:76) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:12) (cid:68)(cid:81)(cid:71) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:11)(cid:60)(cid:88) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:26)(cid:30) (cid:38)(cid:68)(cid:82) (cid:72)(cid:87) (cid:68)(cid:79)(cid:17)(cid:15) (cid:21)(cid:19)(cid:20)(cid:27)(cid:12)(cid:17) (cid:44)(cid:81) (cid:87)(cid:75)(cid:76)(cid:86) (cid:83)(cid:68)(cid:83)(cid:72)(cid:85)(cid:15) (cid:90)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85) (cid:87)(cid:82) (cid:86)(cid:88)(cid:69)(cid:90)(cid:82)(cid:85)(cid:71) (cid:87)(cid:92)(cid:83)(cid:72)(cid:86) (cid:68)(cid:86) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:86) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:76)(cid:81)(cid:74) (cid:86)(cid:88)(cid:69)(cid:16) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86) (cid:68)(cid:86) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:86)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72)(cid:15) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:79)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) 
(cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71) (cid:90)(cid:76)(cid:86)(cid:71)(cid:82)(cid:80) (cid:76)(cid:86) (cid:62)(cid:90)(cid:15) (cid:76)(cid:15) (cid:86)(cid:15) (cid:71)(cid:15) (cid:82)(cid:15) (cid:80)(cid:64)(cid:17) (cid:40)(cid:79)(cid:16) (cid:72)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86) (cid:76)(cid:81) (cid:87)(cid:75)(cid:72) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:68)(cid:85)(cid:72) (cid:79)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:86) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:68)(cid:85)(cid:72) (cid:73)(cid:85)(cid:82)(cid:80) (cid:87)(cid:75)(cid:72) (cid:79)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:89)(cid:82)(cid:70)(cid:68)(cid:69)(cid:88)(cid:79)(cid:68)(cid:85)(cid:92)(cid:15) (cid:76)(cid:17)(cid:72)(cid:17)(cid:15) (cid:87)(cid:75)(cid:72) (cid:68)(cid:79)(cid:83)(cid:75)(cid:68)(cid:69)(cid:72)(cid:87) (cid:87)(cid:68)(cid:69)(cid:79)(cid:72)(cid:17) (cid:41)(cid:76)(cid:74)(cid:17) (cid:20) (cid:86)(cid:75)(cid:82)(cid:90)(cid:86) (cid:80)(cid:82)(cid:85)(cid:72) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72)(cid:86) (cid:82)(cid:73) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)", "(cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:86)(cid:17) (cid:43)(cid:82)(cid:90)(cid:72)(cid:89)(cid:72)(cid:85)(cid:15) (cid:87)(cid:75)(cid:82)(cid:88)(cid:74)(cid:75) (cid:75)(cid:88)(cid:74)(cid:72) (cid:83)(cid:85)(cid:82)(cid:74)(cid:85)(cid:72)(cid:86)(cid:86) (cid:75)(cid:68)(cid:86) (cid:69)(cid:72)(cid:72)(cid:81) (cid:68)(cid:70)(cid:75)(cid:76)(cid:72)(cid:89)(cid:72)(cid:71)(cid:15) (cid:87)(cid:75)(cid:72)(cid:85)(cid:72) (cid:68)(cid:85)(cid:72) (cid:80)(cid:68)(cid:81)(cid:92) (cid:70)(cid:75)(cid:68)(cid:79)(cid:79)(cid:72)(cid:81)(cid:74)(cid:72)(cid:86) (cid:82)(cid:85) (cid:79)(cid:76)(cid:80)(cid:76)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:73)(cid:82)(cid:85) (cid:73)(cid:88)(cid:79)(cid:79)(cid:92) (cid:72)(cid:91)(cid:83)(cid:79)(cid:82)(cid:76)(cid:87)(cid:76)(cid:81)(cid:74) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:182) (cid:83)(cid:82)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:68)(cid:79) (cid:82)(cid:81) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:68)(cid:71)(cid:89)(cid:68)(cid:81)(cid:70)(cid:72)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:70)(cid:75)(cid:68)(cid:79)(cid:16) (cid:79)(cid:72)(cid:81)(cid:74)(cid:72) (cid:76)(cid:86) (cid:83)(cid:85)(cid:82)(cid:71)(cid:88)(cid:70)(cid:76)(cid:81)(cid:74) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:80)(cid:72)(cid:68)(cid:81)(cid:76)(cid:81)(cid:74)(cid:73)(cid:88)(cid:79) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:16) (cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:73)(cid:82)(cid:85) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86)(cid:17) (cid:54)(cid:88)(cid:70)(cid:75) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:85)(cid:72)(cid:79)(cid:92) (cid:82)(cid:81) 
(a) broad linguistic fields and (b) high-quality grain sequences. For fields, only morphological fields have been studied, such as letter in English (Bojanowski et al., 2017) and component in Chinese (Yu et al., 2017). However, linguistic studies revealed that phonetic and syntactical fields also contain rich semantic information (Beaver et al., 2007), whose utility was not fully studied before. For grain sequences, current methods only produce coarse grain sequences, whose grains seldom carry information associated with the original word. For example, the grains from [w, i, s, d, o, m], the letter grain sequence of the word wisdom, are simple and less meaningful letters.

The second challenge is to model both the uniqueness of each field and the relationships among fields, given the increasingly available linguistic fields as new information. Besides their uniqueness, fields have strong correlations with each other. We can find many morpheme-syntax pairs, such as -tion:Noun and -ious:Adj, and morpheme-phoneme pairs, such as the sounds of 妈 (mother) and 码 (code), which are derived from the pair 马 (horse):ma. However, past methods might fail to capture the holistic information carried across fields (Chen et al., 2015b; Cao et al., 2018). They simply put all fields together and directly migrated classical word2vec algorithms, which ignores such inter-field information.

Furthermore, the value of grain embeddings on NLP tasks has not been comprehensively evaluated. Analogous to word embeddings, we introduce grain embeddings, which represent each grain
(cid:90)(cid:76)(cid:87)(cid:75) (cid:68) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:89)(cid:72)(cid:70)(cid:87)(cid:82)(cid:85)(cid:17) (cid:51)(cid:68)(cid:86)(cid:87) (cid:90)(cid:82)(cid:85)(cid:78) (cid:73)(cid:82)(cid:70)(cid:88)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:69)(cid:88)(cid:87) (cid:83)(cid:68)(cid:76)(cid:71) (cid:79)(cid:76)(cid:87)(cid:87)(cid:79)(cid:72) (cid:68)(cid:87)(cid:87)(cid:72)(cid:81)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:68)(cid:81)(cid:71) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:73)(cid:82)(cid:85) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:17) (cid:58)(cid:75)(cid:72)(cid:87)(cid:75)(cid:72)(cid:85) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:70)(cid:68)(cid:81) (cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:92) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70)(cid:86) (cid:68)(cid:81)(cid:71) (cid:69)(cid:72)(cid:81)(cid:72)(cid:73)(cid:76)(cid:87) (cid:49)(cid:47)(cid:51) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:76)(cid:86) (cid:86)(cid:87)(cid:76)(cid:79)(cid:79) (cid:81)(cid:82)(cid:87) (cid:86)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80)(cid:68)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:86)(cid:87)(cid:88)(cid:71)(cid:76)(cid:72)(cid:71)(cid:17)", "(cid:55)(cid:82) (cid:86)(cid:82)(cid:79)(cid:89)(cid:72) (cid:87)(cid:75)(cid:72) (cid:68)(cid:69)(cid:82)(cid:89)(cid:72) (cid:70)(cid:75)(cid:68)(cid:79)(cid:79)(cid:72)(cid:81)(cid:74)(cid:72)(cid:86)(cid:15) (cid:90)(cid:72) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72) (cid:68) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:87)(cid:82) (cid:77)(cid:82)(cid:76)(cid:81)(cid:87)(cid:79)(cid:92) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:86)(cid:76)(cid:80)(cid:88)(cid:79)(cid:87)(cid:68)(cid:81)(cid:72)(cid:82)(cid:88)(cid:86)(cid:79)(cid:92)(cid:17) (cid:44)(cid:87) (cid:70)(cid:68)(cid:81) (cid:73)(cid:79)(cid:72)(cid:91)(cid:16) (cid:76)(cid:69)(cid:79)(cid:92) (cid:76)(cid:81)(cid:87)(cid:72)(cid:74)(cid:85)(cid:68)(cid:87)(cid:72) (cid:68)(cid:81)(cid:92) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:17) (cid:50)(cid:88)(cid:85) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:76)(cid:69)(cid:88)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:68)(cid:85)(cid:72) (cid:73)(cid:82)(cid:79)(cid:79)(cid:82)(cid:90)(cid:86)(cid:29)", "(cid:11)(cid:20)(cid:12) (cid:36) (cid:73)(cid:76)(cid:81)(cid:72)(cid:16)(cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) 
(cid:83)(cid:76)(cid:83)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72) (cid:11)(cid:68)(cid:12) (cid:87)(cid:68)(cid:78)(cid:72)(cid:86) (cid:68)(cid:81)(cid:92) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:16) (cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:89)(cid:68)(cid:85)(cid:76)(cid:82)(cid:88)(cid:86) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:15) (cid:76)(cid:81)(cid:70)(cid:79)(cid:88)(cid:71)(cid:76)(cid:81)(cid:74) (cid:80)(cid:82)(cid:85)(cid:16) (cid:83)(cid:75)(cid:72)(cid:80)(cid:72)(cid:15) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:80)(cid:72)(cid:15) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:91)(cid:15) (cid:68)(cid:86) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87)(cid:15) (cid:68)(cid:81)(cid:71) (cid:11)(cid:69)(cid:12) (cid:76)(cid:81)(cid:70)(cid:79)(cid:88)(cid:71)(cid:72)(cid:86) (cid:81)(cid:16)(cid:74)(cid:85)(cid:68)(cid:80) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:76)(cid:81)(cid:74) (cid:87)(cid:82) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:16) (cid:68)(cid:87)(cid:72) (cid:75)(cid:76)(cid:74)(cid:75)(cid:16)(cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:92) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:68)(cid:86) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:80)(cid:72)(cid:68)(cid:81)(cid:76)(cid:81)(cid:74)(cid:73)(cid:88)(cid:79) (cid:68)(cid:81)(cid:71) (cid:70)(cid:82)(cid:80)(cid:83)(cid:79)(cid:72)(cid:87)(cid:72) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:73)(cid:82)(cid:85) (cid:76)(cid:81)(cid:83)(cid:88)(cid:87) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86)(cid:17)", "(cid:11)(cid:21)(cid:12) (cid:36) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:68)(cid:79)(cid:74)(cid:82)(cid:85)(cid:76)(cid:87)(cid:75)(cid:80) (cid:76)(cid:86) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72) (cid:80)(cid:82)(cid:16) (cid:87)(cid:76)(cid:89)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:82)(cid:73) (cid:88)(cid:69)(cid:76)(cid:84)(cid:88)(cid:76)(cid:87)(cid:82)(cid:88)(cid:86) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:83)(cid:75)(cid:72)(cid:81)(cid:82)(cid:80)(cid:72)(cid:81)(cid:68)(cid:17) (cid:44)(cid:87)(cid:86) (cid:79)(cid:82)(cid:86)(cid:86) (cid:73)(cid:88)(cid:81)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:72)(cid:86) (cid:87)(cid:90)(cid:82) (cid:78)(cid:76)(cid:81)(cid:71)(cid:86) (cid:82)(cid:73) (cid:74)(cid:85)(cid:68)(cid:71)(cid:76)(cid:72)(cid:81)(cid:87)(cid:86) (cid:87)(cid:82) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75)(cid:76)(cid:81) (cid:72)(cid:68)(cid:70)(cid:75) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:68)(cid:87) (cid:86)(cid:75)(cid:68)(cid:85)(cid:72)(cid:71) (cid:68)(cid:70)(cid:85)(cid:82)(cid:86)(cid:86) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:83)(cid:79)(cid:72) 
(cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:86)(cid:72)(cid:83)(cid:68)(cid:85)(cid:68)(cid:87)(cid:72)(cid:79)(cid:92)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:69)(cid:85)(cid:76)(cid:81)(cid:74)(cid:86) (cid:75)(cid:82)(cid:79)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:76)(cid:80)(cid:83)(cid:85)(cid:82)(cid:89)(cid:72) (cid:87)(cid:75)(cid:72) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:16) (cid:71)(cid:76)(cid:81)(cid:74) (cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:92)(cid:17)", "(cid:11)(cid:22)(cid:12) (cid:40)(cid:91)(cid:87)(cid:72)(cid:81)(cid:86)(cid:76)(cid:89)(cid:72) (cid:72)(cid:91)(cid:83)(cid:72)(cid:85)(cid:76)(cid:80)(cid:72)(cid:81)(cid:87)(cid:68)(cid:79) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:76)(cid:79)(cid:79)(cid:88)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:92)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:86)(cid:88)(cid:83)(cid:85)(cid:72)(cid:80)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:89)(cid:68)(cid:85)(cid:76)(cid:82)(cid:88)(cid:86) (cid:49)(cid:47)(cid:51) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:17) (cid:50)(cid:88)(cid:85) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:86) (cid:69)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:87)(cid:75)(cid:68)(cid:81) (cid:83)(cid:85)(cid:72)(cid:89)(cid:76)(cid:82)(cid:88)(cid:86) (cid:80)(cid:72)(cid:87)(cid:75)(cid:16) (cid:82)(cid:71)(cid:86) (cid:76)(cid:81) (cid:69)(cid:82)(cid:87)(cid:75) (cid:79)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:68)(cid:81)(cid:71) (cid:71)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:17) (cid:48)(cid:82)(cid:85)(cid:72)(cid:82)(cid:89)(cid:72)(cid:85)(cid:15) (cid:82)(cid:88)(cid:85) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:72)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:88)(cid:87)(cid:83)(cid:72)(cid:85)(cid:16) (cid:73)(cid:82)(cid:85)(cid:80) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:71)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:87)(cid:72)(cid:91)(cid:87) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:49)(cid:40)(cid:53)(cid:17) (cid:41)(cid:88)(cid:85)(cid:87)(cid:75)(cid:72)(cid:85)(cid:80)(cid:82)(cid:85)(cid:72)(cid:15) (cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:16)", "(cid:44)(cid:87) (cid:76)(cid:86) (cid:87)(cid:75)(cid:72) (cid:73)(cid:76)(cid:85)(cid:86)(cid:87) (cid:87)(cid:82)(cid:82)(cid:79)(cid:78)(cid:76)(cid:87) (cid:87)(cid:75)(cid:68)(cid:87) (cid:70)(cid:68)(cid:81) 
(cid:80)(cid:72)(cid:68)(cid:86)(cid:88)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:16) (cid:87)(cid:76)(cid:89)(cid:72)(cid:81)(cid:72)(cid:86)(cid:86) (cid:82)(cid:73) (cid:89)(cid:68)(cid:85)(cid:76)(cid:82)(cid:88)(cid:86) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:68)(cid:81)(cid:71) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:76)(cid:81) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86)(cid:17) (cid:44)(cid:87)(cid:86) (cid:86)(cid:76)(cid:80)(cid:16) (cid:83)(cid:79)(cid:76)(cid:70)(cid:76)(cid:87)(cid:92) (cid:68)(cid:81)(cid:71) (cid:70)(cid:82)(cid:80)(cid:83)(cid:68)(cid:87)(cid:76)(cid:69)(cid:76)(cid:79)(cid:76)(cid:87)(cid:92) (cid:86)(cid:83)(cid:68)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:79)(cid:68)(cid:69)(cid:82)(cid:85)(cid:76)(cid:82)(cid:88)(cid:86) (cid:68)(cid:81)(cid:71) (cid:87)(cid:76)(cid:80)(cid:72)(cid:16)(cid:70)(cid:82)(cid:81)(cid:86)(cid:88)(cid:80)(cid:76)(cid:81)(cid:74) (cid:71)(cid:72)(cid:89)(cid:72)(cid:79)(cid:82)(cid:83)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86) (cid:68)(cid:81)(cid:71) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:82)(cid:73) (cid:81)(cid:72)(cid:90) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:16)(cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72) (cid:70)(cid:82)(cid:71)(cid:72) (cid:68)(cid:81)(cid:71) (cid:71)(cid:68)(cid:87)(cid:68) (cid:90)(cid:76)(cid:79)(cid:79) (cid:69)(cid:72) (cid:85)(cid:72)(cid:79)(cid:72)(cid:68)(cid:86)(cid:72)(cid:71) (cid:82)(cid:81) (cid:42)(cid:76)(cid:87)(cid:75)(cid:88)(cid:69)(cid:17)", "(cid:41)(cid:76)(cid:72)(cid:79)(cid:71) (cid:68)(cid:81)(cid:71) (cid:42)(cid:85)(cid:68)(cid:76)(cid:81) (cid:36) (cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:81)(cid:70)(cid:72) (cid:76)(cid:86) (cid:68) (cid:79)(cid:76)(cid:86)(cid:87) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86)(cid:17) (cid:36) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:71)(cid:72)(cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72)(cid:86) (cid:82)(cid:81)(cid:72) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:68)(cid:86)(cid:83)(cid:72)(cid:70)(cid:87) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71)(cid:86)(cid:17) (cid:41)(cid:82)(cid:85) (cid:72)(cid:91)(cid:68)(cid:80)(cid:83)(cid:79)(cid:72)(cid:15) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72)(cid:15) (cid:70)(cid:82)(cid:80)(cid:83)(cid:82)(cid:81)(cid:72)(cid:81)(cid:87) (cid:68)(cid:81)(cid:71) (cid:86)(cid:87)(cid:85)(cid:82)(cid:78)(cid:72) (cid:68)(cid:85)(cid:72) (cid:80)(cid:82)(cid:85)(cid:83)(cid:75)(cid:82)(cid:79)(cid:82)(cid:74)(cid:76)(cid:70)(cid:68)(cid:79) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:71)(cid:72)(cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:86)(cid:75)(cid:68)(cid:83)(cid:72)(cid:15) (cid:83)(cid:76)(cid:81)(cid:92)(cid:76)(cid:81) (cid:15) (cid:68) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:15) 
(cid:71)(cid:72)(cid:86)(cid:70)(cid:85)(cid:76)(cid:69)(cid:72)(cid:86) (cid:83)(cid:85)(cid:82)(cid:81)(cid:88)(cid:81)(cid:16) (cid:70)(cid:76)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:17) (cid:55)(cid:75)(cid:72)(cid:86)(cid:72) (cid:68)(cid:85)(cid:72) (cid:86)(cid:88)(cid:69)(cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:15) (cid:90)(cid:75)(cid:76)(cid:70)(cid:75) (cid:68)(cid:85)(cid:72) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:16) (cid:80)(cid:76)(cid:81)(cid:72)(cid:71) (cid:72)(cid:91)(cid:70)(cid:79)(cid:88)(cid:86)(cid:76)(cid:89)(cid:72)(cid:79)(cid:92) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71)(cid:17) (cid:44)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:85)(cid:68)(cid:86)(cid:87)(cid:15) (cid:75)(cid:92)(cid:16) (cid:83)(cid:72)(cid:85)(cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85) (cid:87)(cid:82) (cid:87)(cid:75)(cid:72) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:71)(cid:72)(cid:87)(cid:72)(cid:85)(cid:80)(cid:76)(cid:81)(cid:72)(cid:71) (cid:69)(cid:92) (cid:69)(cid:82)(cid:87)(cid:75) (cid:87)(cid:75)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:76)(cid:87)(cid:86) (cid:70)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:83)(cid:68)(cid:85)(cid:87)(cid:16)(cid:82)(cid:73)(cid:16) (cid:86)(cid:83)(cid:72)(cid:72)(cid:70)(cid:75) (cid:11)(cid:51)(cid:50)(cid:54)(cid:12) (cid:15) (cid:68) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:17) (cid:36) (cid:86)(cid:76)(cid:80)(cid:83)(cid:79)(cid:72) (cid:68)(cid:81)(cid:71) (cid:72)(cid:73)(cid:73)(cid:76)(cid:70)(cid:76)(cid:72)(cid:81)(cid:87) (cid:90)(cid:68)(cid:92) (cid:87)(cid:82) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:76)(cid:86) (cid:88)(cid:86)(cid:16) (cid:76)(cid:81)(cid:74) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:82)(cid:73) (cid:86)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:86)(cid:17) (cid:54)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:86) (cid:68)(cid:85)(cid:72) (cid:85)(cid:72)(cid:73)(cid:72)(cid:85)(cid:85)(cid:72)(cid:71) (cid:87)(cid:82) (cid:68)(cid:86) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:86) (cid:68)(cid:81)(cid:71) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:68)(cid:86) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:90)(cid:76)(cid:86)(cid:71)(cid:82)(cid:80) (cid:182)(cid:86) (cid:90)(cid:82)(cid:85)(cid:71) (cid:85)(cid:82)(cid:82)(cid:87) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:86)(cid:72)(cid:84)(cid:88)(cid:72)(cid:81)(cid:70)(cid:72) (cid:76)(cid:86) (cid:62)(cid:90)(cid:76)(cid:86)(cid:16)(cid:15) (cid:16) (cid:71)(cid:82)(cid:80)(cid:64)(cid:17)", "(cid:54)(cid:87)(cid:68)(cid:87)(cid:76)(cid:70) (cid:40)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:54)(cid:87)(cid:68)(cid:87)(cid:76)(cid:70) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) 
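To make the field and grain notions concrete, here is a minimal Python sketch of how a word could be mapped to per-field grain sequences. The `ROOTS` and `FIELD_TABLES` lookups are hypothetical toy resources, not part of the paper's pipeline, and hyperfields such as POS would additionally need the sentence context:

```python
# Toy illustration of subfields: each field maps a word to its grain
# sequence G_f(w). Real systems would back these with linguistic resources.
ROOTS = {"wisdom": ["wis-", "-dom"]}  # hypothetical morphological lookup

FIELD_TABLES = {
    "letter":    lambda w: list(w),            # wisdom -> [w, i, s, d, o, m]
    "word_root": lambda w: ROOTS.get(w, [w]),  # wisdom -> [wis-, -dom]
}

def grain_sequence(word, field):
    """Return the grain sequence of `word` in the given linguistic field."""
    return FIELD_TABLES[field](word)

print(grain_sequence("wisdom", "letter"))     # ['w', 'i', 's', 'd', 'o', 'm']
print(grain_sequence("wisdom", "word_root"))  # ['wis-', '-dom']
```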
Static Embeddings. Static embeddings represent each word (or grain) with a semantic vector independent of its contexts. This work focuses on exploiting linguistic potential for learning static word and grain embeddings. Past works achieved huge progress in introducing linguistic fields for advanced static word embeddings. For English, one typical linguistic field is letter, which has been exploited to improve word embeddings (Bojanowski et al., 2017). For Chinese, subword fields, including character, component, and stroke, convey fruitful semantic information (Wieting et al., 2015; Liu et al., 2017) and have been studied by CWE (Chen et al., 2015a), JWE (Yu et al., 2017), and cw2vec (Cao et al., 2018) separately. The above methods adopted shallow but efficient structures. In contrast, many methods introduced deep neural networks for learning static embeddings (Kim et al., 2018; Cao and Lu, 2017). However, they cost huge computational resources and yielded limited improvements. To keep the framework straightforward and efficient, this work adopts a shallow structure.

Figure 2: Field embedding cbow structure.

Dynamic Embeddings. Dynamic embeddings represent each word within a whole sentence, with contextual information. Dynamic embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) achieved state-of-the-art performances in many NLP tasks. This work compares our grain embeddings with them on downstream tasks. However, we did not evaluate them on lexical tasks, as they cannot directly produce word vectors without contexts.

We use $C$ to denote the training corpus, $V$ the word
vocabulary, and $F$ the set of selected linguistic fields. For a field $f \in F$, we generate its n-gram grain vocabulary $V_f$ by scanning the words in the whole corpus. $G_f(w) = [g_1, g_2, \dots, g_n]$ is the grain sequence of word $w$ in field $f$. Data points fed into embedding models are pairs of a target word $w_t$ and its context word set $S(w_t)$, or $S$ for simplicity. Take the sentence "The fox runs after cats" as an example and suppose runs is the target word. Applying cbow, the data point $(S, w_t)$ will be ([The, fox, after, cats], runs). Applying skip-gram, there are four $(S, w_t)$s: ([ctx], runs) for each ctx in [The, fox, after, cats].
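This data-point construction is easy to express in code; the following sketch assumes a simplified full-sentence context window rather than whatever window size the experiments actually use:

```python
def cbow_points(sentence):
    """One (S, w_t) pair per target word; S holds all other words."""
    return [(sentence[:i] + sentence[i + 1:], w_t)
            for i, w_t in enumerate(sentence)]

def skipgram_points(sentence):
    """One ([ctx], w_t) pair per (context word, target word) combination."""
    return [([ctx], w_t)
            for i, w_t in enumerate(sentence)
            for j, ctx in enumerate(sentence) if j != i]

sent = ["The", "fox", "runs", "after", "cats"]
# cbow yields (['The', 'fox', 'after', 'cats'], 'runs') for the target 'runs';
# skip-gram yields (['The'], 'runs'), (['fox'], 'runs'),
# (['after'], 'runs'), and (['cats'], 'runs').
```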
The cbow structure is shown in Fig. 2. It contains three layers: the input, projection, and prediction layers. For a data point $(S, w_t)$, the multi-field inputs of $S$ in the input layer are fed to the projection layer and become projection vectors $P_0$ to $P_{|F|}$. Each projection enters the prediction layer to predict $w_t$ and receives a prediction loss; the summation of these losses is the total loss. The model's parameters include several grain embedding matrices in the projection layer and a word embedding matrix in the prediction layer. We represent the word embeddings as $E_W$ of size $|V| \times d$, where $d$ is the embedding size.

Figure 3: Fine-grained pipeline for extracting high-quality grain sequences and projections.

For each field $f$, its grain embedding matrix $E^f_G$ is of size $|V_f| \times d$. The word embeddings $E_W$ vectorize the target word $w_t$, while the grain embeddings $E^f_G$ only vectorize grains from the context words in $S(w_t)$.
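A minimal sketch of this per-field prediction-and-summation step, assuming a plain softmax over the word vocabulary as the prediction objective (the paper's actual loss, described later, is a customized one):

```python
import numpy as np

rng = np.random.default_rng(0)
d, V = 50, 1000                        # embedding size, word-vocabulary size
E_W = rng.normal(0, 0.1, (V, d))       # word embedding matrix (prediction layer)

def total_loss(projections, w_t_index):
    """Sum the prediction losses of the projection vectors P_0 ... P_|F|."""
    loss = 0.0
    for P_f in projections:            # one projection vector per field
        logits = E_W @ P_f             # score every word in the vocabulary
        log_probs = logits - np.log(np.exp(logits).sum())
        loss += -log_probs[w_t_index]  # cross-entropy against the target word
    return loss

projections = [rng.normal(0, 0.1, d) for _ in range(3)]  # e.g., three fields
print(total_loss(projections, w_t_index=42))
```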
In the projection layer, we design a fine-grained pipeline that consists of n-gram generation and grain dropping to produce high-quality grain sequences. Fig. 3 shows its mechanism.

N-Gram. Compared to the word vocabulary $V$, the sizes of $V_f$ and $E^f_G$ are small, which may lead to underfitting. For each field $f$, we generate n-gram grains to enlarge its grain vocabulary $V_f$. Increasing the grain vocabulary size enlarges each word's grain sequence, giving it a higher capacity for carrying linguistic knowledge. As an instance, the word 智 (wisdom) contains the components 矢 (arrow), 口 (mouth), and 日 (day). These components are not relevant to the semantics of 智 (wisdom). Without n-gram grains, $G_f(w)$ is a short sequence which hardly captures enough information, whereas n-gram grains introduce more relevant grains which carry rich semantic information. For example, after including 2-gram grains, we have 矢口 and 口日. The new grain 矢口 can be regarded as 知 (knowledge), whose semantics is similar to 智 (wisdom).
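A minimal sketch of the n-gram step, assuming contiguous n-grams over the original grain sequence:

```python
def add_ngram_grains(grains, n=2):
    """Enlarge a grain sequence with its contiguous n-gram grains."""
    ngrams = ["".join(grains[i:i + n]) for i in range(len(grains) - n + 1)]
    return grains + ngrams

# [矢, 口, 日] gains the 2-grams 矢口 (resembling 知) and 口日.
print(add_ngram_grains(["矢", "口", "日"]))  # ['矢', '口', '日', '矢口', '口日']
```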
Grain Dropping. While n-gram generation introduces meaningful grains, it also generates many low-frequency and meaningless grains. For example, 口日 in 智 carries almost no information and seldom appears in the corpus. We filter out such noise by dropping extremely low-frequency grains. This improves the quality of the training data, reduces the model parameters, and thus accelerates the training process. Moreover, motivated by dropout (Srivastava et al., 2014) and subsampling (Mikolov et al., 2013b), during the training phase we randomly drop some high-frequency grains. At the same time, this further accelerates the training.
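The two dropping rules could look like the following sketch; the frequency threshold and the word2vec-style subsampling formula are illustrative assumptions, not the paper's reported settings:

```python
import random

def drop_grains(grains, counts, min_count=5, t=1e-4):
    """Drop extremely low-frequency grains, then randomly subsample
    high-frequency ones. `counts` maps each grain to its corpus frequency."""
    total = sum(counts.values())
    kept = []
    for g in grains:
        freq = counts.get(g, 0)
        if freq < min_count:              # rule 1: filter low-frequency noise
            continue
        p_drop = max(0.0, 1.0 - (t * total / freq) ** 0.5)
        if random.random() < p_drop:      # rule 2: drop frequent grains
            continue
        kept.append(g)
    return kept
```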
As shown in Fig. 3, after the pipeline process, the word's coarse grain sequence is updated to one which carries semantically meaningful new grains for the word. The case of the English word wisdom is also the same. In short, n-gram and grain dropping help generate a better grain sequence $G_f(w)$, which we will show to be crucial for enhancing the quality of the embeddings. Afterwards, we can obtain projection vectors. For field $f$, we can represent a word $w$ by averaging all of its grain vectors to get its word projection vector:

$$p_f(w) = \frac{1}{|G_f(w)|} \sum_{g \in G_f(w)} E_{fG}(g),$$

where $E_{fG}(g)$ denotes the vector of grain $g$ in $E_{fG}$. Then, we calculate the context projection vector $P_f(S)$ by averaging the word projection vectors $p_f(w)$ over the context word set $S$.
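The projection step itself is a plain average, sketched below with `E_fG` as a mapping from grain to numpy vector; treating $P_f(S)$ as the cbow-style average over context words follows the description above.

```python
import numpy as np

def word_projection(grains, E_fG):
    """p_f(w): average the embeddings of all grains of w in field f."""
    return np.mean([E_fG[g] for g in grains], axis=0)

def context_projection(context_words, grain_seqs, E_fG):
    """P_f(S): average the word projections p_f(w) over the context
    word set S (grain_seqs maps word -> its grain sequence G_f(w))."""
    return np.mean(
        [word_projection(grain_seqs[w], E_fG) for w in context_words],
        axis=0,
    )
```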
In the prediction layer, our proposed customized algorithm contains a novel loss function which is motivated by the following linguistic phenomena. First, each field represents a linguistic attribute, such as morphemes describing shape while phonemes indicate sound. Therefore, each field contains its corresponding unique linguistic information. Moreover, fields have strong connections with each other.
We can easily find many morpheme-syntax pairs, such as -tion:Noun and -ious:Adj. Morpheme-phoneme pairs are also ubiquitous: the sounds of (sketch) and (sir) derive from the pair (apply):shen. To better model the above phenomena, we design a novel loss function to learn the linguistic information contained within each field and shared across multiple fields. Next, we show the loss function design and how it learns this information.
After calculating the field projections $P_f(S)$ for all $f \in F$, we obtain the content projection of $S$, defined as $P_0(S) = \frac{1}{|F|} \sum_{f \in F} P_f(S)$. The objective is to minimize the negative log-likelihood of the conditional predictive probability for a target word $w_t$ given its context word set $S(w_t)$:

$$\mathcal{L}_{fe}(w_t) = \ell\big(w_t \mid P_0(S)\big) + \sum_{f \in F} \ell\big(w_t \mid P_f(S)\big), \quad (1)$$

where each term is the negative log-likelihood of the softmax prediction

$$p(w_t \mid P) = \frac{\exp\big(P^{\top} E_W(w_t)\big)}{\sum_{j=1}^{|V|} \exp\big(P^{\top} E_W(w_j)\big)}, \quad (2)$$

where $P$ represents the corresponding context projection and $E_W$ the output word embeddings. In practice, we adopt an optimization method based on negative sampling and standard gradient descent. Negative sampling replaces the expensive denominator in Eq. (2) with a set of negatively sampled words drawn from a frequency distribution:
$$\ell(w_t \mid P) = -\Big[\log \sigma\big(P^{\top} E_W(w_t)\big) + \lambda \, \mathbb{E}_{w_{ng} \sim D}\, \log \sigma\big(-P^{\top} E_W(w_{ng})\big)\Big], \quad (3)$$

where $\sigma$ is the sigmoid function, $\lambda$ is the number of negative samples, $\mathbb{E}_{w_{ng} \sim D}[\cdot]$ represents expectation, and the negatively sampled words $w_{ng}$ are drawn from the word frequency distribution $D$. Given a specific corpus $C$, the objective likelihood is $\mathcal{L}(C) = \sum_{w_t \in C} \mathcal{L}_{fe}(w_t)$.
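A minimal numpy sketch of Eq. (3) for a single (context, target) pair, approximating the expectation term with $\lambda$ negatives sampled from $D$; the variable names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(P, w_t, E_W, neg_ids):
    """Eq. (3): negative-sampling loss.
    P       -- context projection vector (P_0 or some P_f), shape (d,)
    w_t     -- target word id
    E_W     -- output word embedding matrix, shape (|V|, d)
    neg_ids -- lambda word ids sampled from the frequency distribution D;
               their sum approximates the lambda-scaled expectation."""
    pos = np.log(sigmoid(E_W[w_t] @ P))
    neg = np.log(sigmoid(-(E_W[neg_ids] @ P))).sum()
    return -(pos + neg)
```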
Our loss function $\mathcal{L}_{fe}$ in Eq. (1) contains two terms. For the first term, the gradient of $P_0$, $\mathrm{grad}_0$, is back-propagated to every grain across all fields. This gradient $\mathrm{grad}_0$ can be interpreted as updated linguistic information shared by all the fields. For the second term, the gradient of the field projection $P_f$, $\mathrm{grad}_f$, only updates the grain vectors within the specific field $f$. This gradient $\mathrm{grad}_f$ can be interpreted as the unique linguistic information of field $f$. In this case, the gradient that updates each grain in field $f$ is $\mathrm{grad}_f + \frac{1}{|F|}\mathrm{grad}_0$, which contains both unique and shared linguistic information.
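The decomposition can be verified mechanically. In the toy PyTorch sketch below (all sizes and names illustrative), the loss of Eq. (1) is built from field projections; autograd then delivers to each grain of field $f$ exactly $\mathrm{grad}_f + \frac{1}{|F|}\mathrm{grad}_0$, scaled by the within-field averaging factor $1/|G_f|$.

```python
import torch

num_fields, d = 3, 8
# Four grain vectors per field, as trainable leaves.
grains = [torch.randn(4, d, requires_grad=True) for _ in range(num_fields)]
target = torch.randn(d)

P_f = [g.mean(dim=0) for g in grains]    # field projections
P_0 = torch.stack(P_f).mean(dim=0)       # shared content projection

def ell(P):
    # Stand-in for the per-projection loss of Eq. (3).
    return -torch.nn.functional.logsigmoid(P @ target)

loss = ell(P_0) + sum(ell(P) for P in P_f)   # Eq. (1)
loss.backward()

# grad_0 flows through P_0 into every field (scaled by 1/|F|);
# grad_f flows only into field f. Each grain row of field f thus
# receives (grad_f + grad_0 / |F|) / |G_f|.
print(grains[0].grad.shape)  # torch.Size([4, 8])
```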
Existing methods mainly use only (a) the first term, such as $\mathcal{L}_{w2v} = \ell(w_t \mid P_0)$ in word2vec, or (b) the second term, $\mathcal{L}_{jwe} = \sum_{f \in F} \ell(w_t \mid P_f)$ in JWE. These methods choose either shared or field-specific information, which might not be complete enough for learning embeddings.

4 Training and Evaluations

4.1 Training Setups

Training Corpus We adopt benchmark corpora, Chinese and English Wikipedia data, to train embeddings with our framework.
For Chinese, the segmentation tool is jieba, which is widely used in Chinese NLP work (Li et al., 2019). We set the minimal word frequency to 10, obtaining 390,106 unique words. We set the n-gram order of character and POS to 1 and of the other fields to 4. The first 10,000 grains ordered by frequency are kept. For the English corpus, we set the minimal word frequency to 30, which yields 649,068 unique words. We extend letter and phoneme to 3-grams and set POS to 1-gram. The first 20,000 common grains are kept.
Baselines To assess the effectiveness of our framework, we compare it with several state-of-the-art algorithms. word2vec only uses the word itself as a field and is an effective and efficient toolkit for learning word embeddings; it adopts $\mathcal{L}_{w2v}$ to train the embeddings. CWE combines both word and character as fields and trains embeddings in a cbow structure; it also adopts $\mathcal{L}_{w2v}$. JWE incorporates word, character, and component with the cbow structure and adopts $\mathcal{L}_{jwe}$. cw2vec uses n-gram strokes to train embeddings in a skip-gram manner and adopts $\mathcal{L}_{w2v}$.
Hyperparameters For a fair comparison, each word and grain embedding has 200 dimensions for all algorithms. We set the window size and the number of iterations to 5, the initial learning rate to 0.025, and the number of negative samples to 10.
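For reference, the shared setup can be collected into one configuration; the dictionary layout is illustrative, while the values are the ones reported in this section and in the corpus description above.

```python
config = {
    "embedding_dim": 200,          # word and grain embeddings
    "window_size": 5,
    "iterations": 5,
    "initial_lr": 0.025,
    "negative_samples": 10,
    "min_word_freq": {"zh": 10, "en": 30},
    "ngram_order": {"zh": {"character": 1, "pos": 1, "others": 4},
                    "en": {"letter": 3, "phoneme": 3, "pos": 1}},
    "kept_grains": {"zh": 10_000, "en": 20_000},
}
```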
4.2 Evaluation Tasks

In the task evaluation part, lexical tasks are conducted to evaluate the word embeddings and the effectiveness of the new linguistic fields, the fine-grained pipeline, and our novel loss function. Downstream tasks are conducted to evaluate the performance of grain embeddings compared to word embeddings. Qualitative analysis is used to validate the semantic information in grain embeddings. All task datasets are widely used in previous word embedding works.

Lexical Evaluations Lexical evaluations include word similarity and word analogy, which are widely applied to evaluate the quality of word embeddings.
(a) Word Similarity This task evaluates the model's ability to capture the semantic relevance between given word pairs. We adopt the datasets Sim240 and Sim297 from (Chen et al., 2015b) to evaluate Chinese word embeddings, and use Sim353 from (Mikolov et al., 2013a) for English word embeddings. For each pair of words in each dataset, a human-labeled score is provided. We compute the cosine similarity of each word pair and use the Spearman correlation to measure the quality of the word embeddings.
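A minimal sketch of this protocol, assuming `emb` maps each word to its vector and using scipy's Spearman correlation:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(pairs, human_scores, emb):
    """Cosine similarity per word pair, summarized against the
    human-labeled scores by Spearman correlation (reported in %)."""
    cos = []
    for w1, w2 in pairs:
        v1, v2 = emb[w1], emb[w2]
        cos.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    rho, _ = spearmanr(cos, human_scores)
    return 100 * rho
```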
(b) Word Analogy This task evaluates whether the word embeddings capture the linguistic relationship between word pairs. Given three words such as Berlin, Germany, Paris, the model should infer that the word vector most similar to vec(Germany) - vec(Berlin) + vec(Paris) is vec(France). We adopt the Chinese dataset provided by (Chen et al., 2015b), and the English datasets from Google (Mikolov et al., 2013a) and MSR (Mikolov et al., 2013c).
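A sketch of the analogy query, assuming unit-normalized vectors in `emb` and the usual convention of excluding the three query words from the candidates:

```python
import numpy as np

def solve_analogy(a, b, c, emb):
    """Return the word whose vector is most cosine-similar to
    vec(b) - vec(a) + vec(c), e.g. Germany - Berlin + Paris -> France."""
    query = emb[b] - emb[a] + emb[c]
    query /= np.linalg.norm(query)
    best, best_sim = None, -np.inf
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = float(query @ v)
        if sim > best_sim:
            best, best_sim = w, sim
    return best
```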
Downstream Task In downstream tasks, we use our embeddings to represent the words in a text or sentence as input features. Both word embeddings and grain embeddings can construct a word representation for a word $w$. By word embeddings $E_W$, a word $w$ can be represented as $E_W(w)$. By grain embeddings, the representation of word $w$ can be constructed by concatenating all the word projections $p_f(w)$ together, where $f \in F$ are the available fields. In downstream tasks, the learned embeddings are frozen and not updated during the training phase.
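A sketch of the grain-based representation, assuming per-field grain sequences and per-field grain embedding tables as plain mappings:

```python
import numpy as np

def grain_representation(word, grain_seqs, field_embs):
    """Concatenate the word projections p_f(w) of all available fields
    into one frozen feature vector. grain_seqs[f][word] is G_f(word);
    field_embs[f] maps grain -> vector for field f."""
    parts = [
        np.mean([field_embs[f][g] for g in grain_seqs[f][word]], axis=0)
        for f in sorted(field_embs)
    ]
    return np.concatenate(parts)
```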
(a) Text Classification We follow cw2vec and select five topics in the Chinese dataset FudanCorpus, obtaining 5,885 texts. For English, we use NewsGroup, obtaining 20 topics and 18,756 texts. We average the word representations of the words in a text as its input feature vector. We build the classifier using the SVM in sklearn and use five-fold cross-validation to obtain accuracy scores.
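A minimal sketch of this protocol with sklearn; the SVC defaults and the handling of out-of-vocabulary words are assumptions where not specified:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def classify_texts(texts, labels, word_repr):
    """Average the word representations in each text as its feature
    vector, then score an SVM with five-fold cross-validation.
    word_repr maps word -> vector (e.g. grain_representation above)."""
    X = np.stack([
        np.mean([word_repr[w] for w in text if w in word_repr], axis=0)
        for text in texts
    ])
    scores = cross_val_score(SVC(), X, np.asarray(labels), cv=5)
    return scores.mean()
```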
(b) Named Entity Recognition For Chinese, we adopt Boson, which contains 19,214 sentences and five entity categories, and randomly split it into train, validation, and test parts in an 8:1:1 ratio. For English, we use CoNLL2003, which contains 16,477 sentences, 4 entity categories, and its own dataset split. We develop a CRF model (Lafferty et al., 2001) based on PyTorch as the classifier. We adopt a simple Embed-CRF structure to evaluate the embedding quality and a more complicated BiLSTM-CRF to validate both static and dynamic embeddings.

Qualitative Evaluation This evaluation is based on the vectors of selected character, component, and pinyin grains from the learned field grain embeddings. We evaluate their top similar words from the learned embeddings, which are retrieved based on cosine similarity.
5 Experimental Results

The fields and their abbreviations used in this work are: word W, POS Pos; for Chinese, character H, component C, stroke S, pinyin P; for English, letter C, phoneme P.

Chinese
Method      Sim240  Sim297  Analogy
word2vec    48.16   58.03   72.4
CWE         50.8    55.33   33.37
JWE         53.44   58.95   66.4
cw2vec      49.88   48.42   39.35
W.Pos       58.08   59.24   78.8
W.C.Pos     56.66   59.6    79.43
W.C.P.Pos   56.53   60.88   79.12

English
Method      Sim353  AnaMSR  AnaGoogle
word2vec    65.14   57.46   65.12
CWE         65.47   57.35   64.81
W.C         68.38   62.16   70.09
W.P.Pos     68.32   61.51   70.02
W.C.P.Pos   68.51   59.01   69.5

Table 1: Performance on lexical tasks achieved by word embeddings generated by the proposed method and state-of-the-art methods. The Spearman correlation coefficient is presented in percentage (%).
(cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:68)(cid:70)(cid:75)(cid:76)(cid:72)(cid:89)(cid:72)(cid:71) (cid:69)(cid:92) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:72)(cid:71) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71) (cid:68)(cid:81)(cid:71) (cid:86)(cid:87)(cid:68)(cid:87)(cid:72)(cid:16)(cid:82)(cid:73)(cid:16)(cid:87)(cid:75)(cid:72)(cid:16)(cid:68)(cid:85)(cid:87) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86)(cid:17) (cid:54)(cid:83)(cid:72)(cid:68)(cid:85)(cid:80)(cid:68)(cid:81) (cid:70)(cid:82)(cid:85)(cid:85)(cid:72)(cid:79)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:70)(cid:82)(cid:72)(cid:73)(cid:73)(cid:76)(cid:16) (cid:70)(cid:76)(cid:72)(cid:81)(cid:87) (cid:76)(cid:86) (cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:72)(cid:71) (cid:76)(cid:81) (cid:83)(cid:72)(cid:85)(cid:70)(cid:72)(cid:81)(cid:87)(cid:68)(cid:74)(cid:72) (cid:11)(cid:8)(cid:12)(cid:17) (cid:24)(cid:17)(cid:20) (cid:47)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79) (cid:40)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:55)(cid:75)(cid:72) (cid:79)(cid:72)(cid:91)(cid:76)(cid:70)(cid:68)(cid:79) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86) (cid:72)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:72) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:87)(cid:72)(cid:85)(cid:80)(cid:86) (cid:82)(cid:73) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:70)(cid:82)(cid:80)(cid:69)(cid:76)(cid:81)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:15) (cid:88)(cid:86)(cid:76)(cid:81)(cid:74) (cid:83)(cid:76)(cid:83)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72) (cid:82)(cid:85) (cid:81)(cid:82)(cid:87)(cid:15) (cid:68)(cid:81)(cid:71) (cid:79)(cid:82)(cid:86)(cid:86) (cid:73)(cid:88)(cid:81)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86) (cid:87)(cid:82) (cid:89)(cid:72)(cid:85)(cid:76)(cid:73)(cid:92) (cid:87)(cid:75)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:81)(cid:72)(cid:86)(cid:86) (cid:82)(cid:73) (cid:82)(cid:88)(cid:85) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:80)(cid:72)(cid:87)(cid:75)(cid:82)(cid:71)(cid:86)(cid:17) (cid:53)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:76)(cid:81) (cid:55)(cid:68)(cid:69)(cid:79)(cid:72) (cid:20) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:82)(cid:88)(cid:85) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:68)(cid:70)(cid:75)(cid:76)(cid:72)(cid:89)(cid:72)(cid:86) (cid:69)(cid:72)(cid:86)(cid:87) (cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:81)(cid:70)(cid:72)(cid:86) (cid:73)(cid:82)(cid:85) (cid:71)(cid:76)(cid:73)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:87) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:17) (cid:41)(cid:82)(cid:85) (cid:76)(cid:81)(cid:86)(cid:87)(cid:68)(cid:81)(cid:70)(cid:72)(cid:15) (cid:76)(cid:81) (cid:38)(cid:75)(cid:76)(cid:81)(cid:72)(cid:86)(cid:72) (cid:54)(cid:76)(cid:80)(cid:21)(cid:23)(cid:19) (cid:87)(cid:68)(cid:86)(cid:78)(cid:15) (cid:82)(cid:88)(cid:85) 
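The similarity scores in Table 1 are Spearman correlations between human ratings and model cosine similarities. A minimal sketch of that computation, assuming a dataset of (word1, word2, rating) triples and a {token: vector} dictionary:

    import numpy as np
    from scipy.stats import spearmanr

    def similarity_score(pairs, emb):
        model, human = [], []
        for w1, w2, rating in pairs:
            if w1 in emb and w2 in emb:   # skip out-of-vocabulary pairs
                v1, v2 = emb[w1], emb[w2]
                model.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
                human.append(rating)
        rho, _ = spearmanr(model, human)
        return 100.0 * rho                # reported in percent, as in Table 1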
Results in Table 1 show that our proposed framework achieves the best performance across the different tasks. For instance, on the Chinese Sim240 task, our model W.Pos gets the best similarity score, 58.08%, a 4.64% increase over the best baseline method, JWE. Next, we analyze the sources of the improvements.

Linguistic Field
In Table 1, for both Chinese and English, our models which integrate the phoneme and syntax fields perform better on three different tasks and outperform other existing models. For example, W.C.P.Pos achieves the best performance on both Chinese Sim297 (a 1.93% increase over the best baseline, JWE) and English Sim353 (a 3.04% increase over the best baseline, CWE). This supports that the phoneme and syntax fields carry new linguistic information which the previously widely used morphological fields do not contain. Moreover, putting all fields together to train embeddings does not guarantee that the learned word embeddings achieve the best performance on every task, as with the Chinese W.C.P.Pos model in Table 1. This indicates that some linguistic fields bring more noise than semantic information for the corresponding task. Instead, finding the best field combination to train embeddings for a specific task is more important. Our proposed framework makes it easy to explore the best field combination for each specific task.
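As a concrete picture of that exploration, the candidate configurations are simply subsets of the optional fields layered on top of the word field W; a hypothetical grid for a three-field Chinese setup:

    from itertools import combinations

    optional = ["C", "P", "Pos"]          # illustrative subset of the fields above
    configs = ["W"] + [".".join(("W", *c))
                       for r in range(1, len(optional) + 1)
                       for c in combinations(optional, r)]
    # ['W', 'W.C', 'W.P', 'W.Pos', 'W.C.P', 'W.C.Pos', 'W.P.Pos', 'W.C.P.Pos']

Each configuration then corresponds to one training-plus-evaluation run, i.e., one row of Table 1.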
Method      NewsGroup (English)    Fudan (Chinese)
            E_W      E_G           E_W      E_G
Word2Vec    69.48    -             93.88    -
W.C         72.98    75.36         94.24    94.55
W.C.P       73.03    76.82         93.92    94.61
W.C.P.Pos   77.04    80.12         94.26    95.00

Table 2: Comparison of the proposed method with Word2Vec on the text classification tasks. Accuracy is presented in percentage (%).

Fine-Grained Pipeline
Fig. 4 verifies the fine-grained pipeline's effectiveness. Given the fields and loss function of the model, leveraging the whole fine-grained pipeline, including n-grams and grain dropping, leads to better word embeddings. For example, in the W.H.C model, the fine-grained pipeline yields a 9.13% increase on the Chinese Analogy task. Beyond this, Fig. 5 further validates the effectiveness of grain dropping. In the W.H.P model, grain dropping achieves a 1.57% improvement on the Analogy task. The improvements demonstrate that our fine-grained pipeline does produce high-quality grain sequences and can successfully capture more linguistic information that previous works missed.
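To make the two pipeline ingredients concrete, the sketch below shows n-gram grain generation for one field and a random grain-dropping step; the n-gram range and drop rate here are illustrative assumptions, not the paper's tuned values.

    import random

    def char_ngrams(word, n_min=1, n_max=3):
        # Enumerate the n-gram grains of one field (characters here).
        return [word[i:i + n]
                for n in range(n_min, n_max + 1)
                for i in range(len(word) - n + 1)]

    def drop_grains(grains, p=0.2):
        # Grain dropping: randomly discard grains during training so the
        # composed word vector cannot over-rely on any single grain.
        kept = [g for g in grains if random.random() > p]
        return kept if kept else grains   # never drop everything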
Loss Function
Fig. 6 asserts the effectiveness of our loss function. The L_fe loss function outperforms the other loss functions on all tasks. For example, on the Sim240 task, L_fe achieves a 4.49% increase compared to L_w2v and a 3.04% increase compared to L_jwe; the same holds for Sim297 and Analogy. Compared with L_w2v and L_jwe, L_fe considers prominent linguistic phenomena such as morpheme-phoneme pairs. Therefore, it successfully captures both the within-field and the crossing-field information.
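The exact form of L_fe is given in the method section; purely as an illustration of the within-field versus crossing-field idea, a negative-sampling loss can add a term that ties each grain directly to the context, alongside the usual composed-center term. All names below are our own, not the paper's:

    import torch
    import torch.nn.functional as F

    def illustrative_field_loss(word_vec, grain_vecs, ctx_vec, neg_vecs):
        # Within-field: compose the center from the word and its grains.
        center = word_vec + torch.stack(grain_vecs).mean(dim=0)
        loss = -F.logsigmoid(center @ ctx_vec)
        loss = loss - F.logsigmoid(-neg_vecs @ center).sum()
        # Crossing-field: each grain also predicts the context directly,
        # so grain embeddings are trained in their own right.
        for g in grain_vecs:
            loss = loss - F.logsigmoid(g @ ctx_vec)
        return loss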
5.2 Downstream Tasks

We conduct both Chinese and English downstream tasks, including text classification and NER, to test the performance of word embeddings and grain embeddings.

Text Classification
Table 2 shows that with more linguistic fields, both word and grain embeddings gain performance improvements on the text classification tasks. This reveals that the phoneme and syntax fields can improve the quality of both word and grain embeddings. Moreover, the grain embeddings E_G always outperform the word embeddings E_W. On the NewsGroup task, E_G of W.C.P.Pos exceeds E_W by 3.08%. Furthermore, compared with word embeddings, grain embeddings make larger improvements as more fields are added. For example, from W.C to W.C.P.Pos, E_W achieves a 4.06% increase while E_G achieves a 4.76% increase on the NewsGroup task. Such additional improvement derives from additional linguistic information that is not included in the word embeddings. This strongly indicates that grain embeddings carry more linguistic information than the associated word embeddings and can be a better alternative to word embeddings.

Figure 4: Fine-grained pipeline performance on lexical tasks.
Figure 5: Grain dropping performance on lexical tasks.
Figure 6: Loss function performance on lexical tasks.

Method      CoNLL2003 (English)    Boson (Chinese)
            E_W      E_G           E_W      E_G
Word2Vec    77.04    -             64.13    -
W.C         77.19    80.35         64.96    65.08
W.C.P       78.06    82.01         65.08    65.34
W.C.P.Pos   78.44    86.92         65.40    71.75

Table 3: Comparison of the proposed method with the Embed-CRF structure and Word2Vec on NER. F1 score is presented in percentage (%).
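The E_W and E_G columns in Tables 2 and 3 differ only in which vectors feed the downstream model. A minimal stand-in for the text-classification setup averages token vectors into a document feature (with E_G, the tokens would be grains rather than words); the classifier choice here is our assumption:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def doc_vector(tokens, emb, dim):
        # Average the vectors of the tokens we have embeddings for.
        vecs = [emb[t] for t in tokens if t in emb]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    def train_classifier(docs, labels, emb, dim):
        X = np.stack([doc_vector(d, emb, dim) for d in docs])
        return LogisticRegression(max_iter=1000).fit(X, labels)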
Named Entity Recognition
As to the NER performance in Table 3, a pattern similar to that of text classification is observed. It demonstrates that more linguistic fields benefit the NER tasks. For example, from W.C.P to W.C.P.Pos, grain embeddings gain 4.91% and 6.41% improvements on the CoNLL2003 and Boson tasks. The reason for the improvements is that the hyperfield POS carries part of a sentence's syntactical information, which is crucial in sequence labeling tasks. This illustrates the hyperfield's significance in learning embeddings. Moreover, it shows that grain embeddings outperform word embeddings. For instance, in the W.C.P.Pos model, the F1 scores of grain embeddings exceed those of word embeddings by 8.48% and 6.35% on the CoNLL2003 and Boson tasks. To further prove that our embeddings are effective with complicated neural networks, we adopt Embed-BiLSTM-CRF and conduct experiments on CoNLL2003. Besides the static embedding Word2Vec, dynamic embedding methods, such as ELMo and BERT, are also listed as baselines for comparison, as shown in Table 4.

Method                              F1 Score (%)
W.C                                 91.82
W.C.P                               92.16
W.C.P.Pos                           92.34
Word2Vec (Mikolov et al., 2013b)    90.72
ELMo (Peters et al., 2018)          92.22
BERT (Devlin et al., 2018)          92.80

Table 4: Different embeddings with the BiLSTM-CRF structure on the CoNLL2003 NER task.
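A skeleton of the Embed-BiLSTM(-CRF) setup used for this comparison, with the pretrained vectors loaded and frozen so that only embedding quality varies across rows of Table 4; the hidden size is arbitrary, and the CRF layer that would sit on top of the emission scores is omitted here for brevity:

    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, pretrained, num_tags, hidden=256):
            super().__init__()
            # Frozen pretrained vectors: only the tagger itself is trained.
            self.emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
            self.lstm = nn.LSTM(pretrained.size(1), hidden // 2,
                                batch_first=True, bidirectional=True)
            self.out = nn.Linear(hidden, num_tags)

        def forward(self, token_ids):
            h, _ = self.lstm(self.emb(token_ids))
            return self.out(h)            # per-token emission scores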
In terms of static embeddings, additional fields still benefit the task, with around a 0.54% increase per field from Word2Vec to W.C.P.Pos. This shows that, even in complex neural networks, grain embeddings are superior to word embeddings, and the phoneme and syntax fields are useful. For dynamic embeddings, though it is marginally inferior to BERT, our W.C.P.Pos is better than ELMo by 0.12%. This indicates that our framework exploits the potential of static embeddings with multiple fields, which can surpass the relatively shallow dynamic embeddings. It also suggests the potential of introducing multi-fields to dynamic embeddings. Our static embeddings also enjoy other advantages. The model's structure is simple and straightforward, and its parameter size is small. It requires less corpus and fewer resources to train compared to BERT, which is a complicated deep neural network. In downstream tasks, though BERT outperforms our model, it bears the expensive costs of model complexity and computational resources. Moreover, the dynamic embeddings cannot represent an independent word or grain, whereas our model yields high-quality representations for them and achieves the best performance on lexical tasks.
5.3 Qualitative Evaluation

We evaluate the embeddings' abilities to uncover the semantic relatedness of words, characters, and components through case studies based on a model trained with character, component, and pinyin. Taking (apply) as an example, which is also an ancient state name and a popular last name in China, it can be a Chinese character or a word. We list its closest words from the word embeddings in Table 5, where we treat it both as a character and as a word. When it is a character, most of the closest words are semantically related to apply. When it acts as a word, the closest words are related to its country-name and last-name meanings. For example, (Zhao) and (Yin) are ancient state names and last names, and (Duke Ding) and (Duke Zhao of Jin) are dukes in ancient China. This reveals that grain embeddings can supplement the word embeddings for a more complete semantic representation. We further take the component (illness) as an example, and Table 5 shows its closest characters and words. All of the closest characters and words are semantically related to the component. Most of them are related to diseases, symptoms, and other medical terms, such as the words (disease), (infection) and the characters (symptom), (score). Most of them contain the component, but (gangrene), (suffer), and (infection), which are without it, also share similar semantics. Moreover, we study the pinyin téng, the sound of (pain), and list its closest words and characters in Table 5, where we observe a similar phenomenon. These closest characters have similar semantic meanings to (pain), whose pinyin is téng, such as the words (pain), (headache) and the characters (sore), (paralysis). The qualitative analysis shows that our proposed models leverage both external context co-occurrence information and internal morphological and phonetic information. The medical information stored in the above grain embeddings could be utilized for clinical NER tasks.
(cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:16) (cid:90)(cid:82)(cid:85)(cid:78) (cid:87)(cid:82) (cid:77)(cid:82)(cid:76)(cid:81)(cid:87)(cid:79)(cid:92) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81) (cid:69)(cid:82)(cid:87)(cid:75) (cid:90)(cid:82)(cid:85)(cid:71) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:16) (cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:69)(cid:92) (cid:76)(cid:81)(cid:70)(cid:82)(cid:85)(cid:83)(cid:82)(cid:85)(cid:68)(cid:87)(cid:76)(cid:81)(cid:74) (cid:80)(cid:82)(cid:85)(cid:83)(cid:75)(cid:82)(cid:79)(cid:82)(cid:74)(cid:76)(cid:70)(cid:68)(cid:79)(cid:15) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:87)(cid:76)(cid:70)(cid:15) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:86)(cid:76)(cid:80)(cid:88)(cid:79)(cid:87)(cid:68)(cid:81)(cid:72)(cid:82)(cid:88)(cid:86)(cid:79)(cid:92)(cid:17) (cid:50)(cid:88)(cid:85) (cid:83)(cid:85)(cid:82)(cid:83)(cid:82)(cid:86)(cid:72)(cid:71) (cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:79)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:74)(cid:72)(cid:86) (cid:68)(cid:81) (cid:76)(cid:81)(cid:81)(cid:82)(cid:89)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:73)(cid:76)(cid:81)(cid:72)(cid:16)(cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:83)(cid:76)(cid:83)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72)(cid:15) (cid:76)(cid:81)(cid:70)(cid:79)(cid:88)(cid:71)(cid:76)(cid:81)(cid:74) (cid:81)(cid:16)(cid:74)(cid:85)(cid:68)(cid:80)(cid:86) (cid:68)(cid:81)(cid:71) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:71)(cid:85)(cid:82)(cid:83)(cid:83)(cid:76)(cid:81)(cid:74)(cid:15) (cid:68)(cid:86) (cid:90)(cid:72)(cid:79)(cid:79) (cid:68)(cid:86) (cid:68) (cid:81)(cid:82)(cid:89)(cid:72)(cid:79) (cid:79)(cid:82)(cid:86)(cid:86) (cid:73)(cid:88)(cid:81)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81) (cid:87)(cid:82) (cid:70)(cid:68)(cid:83)(cid:16) (cid:57)(cid:72)(cid:70)(cid:87)(cid:82)(cid:85) (cid:40)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:55)(cid:82)(cid:83) (cid:24) (cid:38)(cid:79)(cid:82)(cid:86)(cid:72)(cid:86)(cid:87) (cid:53)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) EW EW (cid:11)(cid:61)(cid:75)(cid:68)(cid:82)(cid:12)(cid:15) (cid:11)(cid:60)(cid:76)(cid:81)(cid:12)(cid:15) (cid:11)(cid:39)(cid:88)(cid:78)(cid:72) (cid:39)(cid:76)(cid:81)(cid:74)(cid:12)(cid:15) (cid:11)(cid:38)(cid:75)(cid:88)(cid:12)(cid:15) (cid:11)(cid:39)(cid:88)(cid:78)(cid:72) (cid:61)(cid:75)(cid:68)(cid:82) (cid:82)(cid:73) (cid:45)(cid:76)(cid:81)(cid:12) E charG EW (cid:11)(cid:68)(cid:83)(cid:83)(cid:79)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:12)(cid:15) (cid:11)(cid:68)(cid:83)(cid:83)(cid:79)(cid:76)(cid:70)(cid:68)(cid:81)(cid:87)(cid:12)(cid:15) (cid:11)(cid:89)(cid:76)(cid:86)(cid:68)(cid:12)(cid:15) (cid:11)(cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:12)(cid:15) (cid:11)(cid:72)(cid:81)(cid:87)(cid:85)(cid:92)(cid:12) E compG EW (cid:11)(cid:71)(cid:76)(cid:86)(cid:72)(cid:68)(cid:86)(cid:72)(cid:12)(cid:15) (cid:11)(cid:76)(cid:81)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:12)(cid:15) 
(cid:11)(cid:83)(cid:68)(cid:76)(cid:81)(cid:12)(cid:15) (cid:11)(cid:79)(cid:88)(cid:81)(cid:74)(cid:12)(cid:15) (cid:11)(cid:86)(cid:92)(cid:80)(cid:83)(cid:87)(cid:82)(cid:80)(cid:12) E charG (cid:11)(cid:86)(cid:92)(cid:80)(cid:83)(cid:87)(cid:82)(cid:80)(cid:12)(cid:15) (cid:11)(cid:86)(cid:82)(cid:85)(cid:72)(cid:12)(cid:15) (cid:11)(cid:72)(cid:83)(cid:76)(cid:79)(cid:72)(cid:83)(cid:86)(cid:92)(cid:12)(cid:15) (cid:11)(cid:86)(cid:90)(cid:82)(cid:79)(cid:79)(cid:72)(cid:81)(cid:12)(cid:15) (cid:11)(cid:74)(cid:68)(cid:81)(cid:74)(cid:85)(cid:72)(cid:81)(cid:72)(cid:12) E pinyinG (cid:87)(cid:16)(cid:122)(cid:81)(cid:74) EW (cid:11)(cid:83)(cid:68)(cid:76)(cid:81)(cid:12)(cid:15) (cid:11)(cid:87)(cid:72)(cid:68)(cid:85)(cid:12)(cid:15) (cid:11)(cid:68)(cid:81)(cid:91)(cid:76)(cid:72)(cid:87)(cid:92)(cid:12)(cid:15) (cid:11)(cid:83)(cid:68)(cid:76)(cid:81)(cid:12)(cid:15) (cid:11)(cid:89)(cid:82)(cid:80)(cid:76)(cid:87)(cid:12) E charG (cid:11)(cid:83)(cid:68)(cid:76)(cid:81)(cid:12)(cid:15) (cid:11)(cid:85)(cid:72)(cid:80)(cid:82)(cid:85)(cid:86)(cid:72)(cid:12)(cid:15) (cid:11)(cid:83)(cid:68)(cid:79)(cid:83)(cid:76)(cid:87)(cid:68)(cid:87)(cid:72)(cid:12)(cid:15) (cid:11)(cid:71)(cid:76)(cid:86)(cid:72)(cid:68)(cid:86)(cid:72)(cid:12)(cid:15) (cid:11)(cid:86)(cid:81)(cid:72)(cid:72)(cid:93)(cid:72)(cid:12) (cid:55)(cid:68)(cid:69)(cid:79)(cid:72) (cid:24)(cid:29) (cid:52)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:68)(cid:81)(cid:68)(cid:79)(cid:92)(cid:86)(cid:76)(cid:86)(cid:17) (cid:41)(cid:82)(cid:85) (cid:68) (cid:57)(cid:72)(cid:70)(cid:87)(cid:82)(cid:85)(cid:15) (cid:73)(cid:85)(cid:82)(cid:80) (cid:68)(cid:81) (cid:40)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74) (cid:87)(cid:82)(cid:83) (cid:24) (cid:70)(cid:79)(cid:82)(cid:86)(cid:72)(cid:86)(cid:87) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:68)(cid:85)(cid:72) (cid:79)(cid:76)(cid:86)(cid:87)(cid:72)(cid:71)(cid:17) (cid:87)(cid:88)(cid:85)(cid:72) (cid:87)(cid:75)(cid:72) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:70)(cid:82)(cid:81)(cid:87)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:90)(cid:76)(cid:87)(cid:75)(cid:76)(cid:81) (cid:72)(cid:68)(cid:70)(cid:75) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:16) (cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71) (cid:68)(cid:81)(cid:71) (cid:86)(cid:75)(cid:68)(cid:85)(cid:72)(cid:71) (cid:68)(cid:70)(cid:85)(cid:82)(cid:86)(cid:86) (cid:80)(cid:88)(cid:79)(cid:87)(cid:76)(cid:83)(cid:79)(cid:72) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86)(cid:17) (cid:37)(cid:92) (cid:76)(cid:81)(cid:16) (cid:87)(cid:85)(cid:82)(cid:71)(cid:88)(cid:70)(cid:76)(cid:81)(cid:74) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:87)(cid:76)(cid:70) (cid:68)(cid:81)(cid:71) (cid:86)(cid:92)(cid:81)(cid:87)(cid:68)(cid:70)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:79)(cid:76)(cid:81)(cid:74)(cid:88)(cid:76)(cid:86)(cid:87)(cid:76)(cid:70) (cid:73)(cid:76)(cid:72)(cid:79)(cid:71)(cid:86) (cid:68)(cid:81)(cid:71) (cid:79)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:74)(cid:76)(cid:81)(cid:74) (cid:82)(cid:88)(cid:85) (cid:73)(cid:76)(cid:81)(cid:72)(cid:16)(cid:74)(cid:85)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71) (cid:83)(cid:76)(cid:83)(cid:72)(cid:79)(cid:76)(cid:81)(cid:72) (cid:68)(cid:81)(cid:71) (cid:79)(cid:82)(cid:86)(cid:86) (cid:73)(cid:88)(cid:81)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:15) (cid:82)(cid:88)(cid:85) 
(cid:73)(cid:85)(cid:68)(cid:80)(cid:72)(cid:90)(cid:82)(cid:85)(cid:78) (cid:76)(cid:86) (cid:70)(cid:68)(cid:83)(cid:68)(cid:69)(cid:79)(cid:72) (cid:82)(cid:73) (cid:79)(cid:72)(cid:68)(cid:85)(cid:81)(cid:76)(cid:81)(cid:74) (cid:69)(cid:72)(cid:87)(cid:16) (cid:87)(cid:72)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:87)(cid:72)(cid:85)(cid:80)(cid:86) (cid:82)(cid:73) (cid:90)(cid:82)(cid:85)(cid:71) (cid:86)(cid:76)(cid:80)(cid:76)(cid:79)(cid:68)(cid:85)(cid:76)(cid:87)(cid:92) (cid:68)(cid:81)(cid:71) (cid:68)(cid:81)(cid:68)(cid:79)(cid:82)(cid:74)(cid:92)(cid:17) (cid:41)(cid:88)(cid:85)(cid:87)(cid:75)(cid:72)(cid:85)(cid:80)(cid:82)(cid:85)(cid:72)(cid:15) (cid:90)(cid:72) (cid:86)(cid:92)(cid:86)(cid:87)(cid:72)(cid:80)(cid:76)(cid:70)(cid:68)(cid:79)(cid:79)(cid:92) (cid:76)(cid:81)(cid:89)(cid:72)(cid:86)(cid:16) (cid:87)(cid:76)(cid:74)(cid:68)(cid:87)(cid:72) (cid:87)(cid:75)(cid:72) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:81)(cid:72)(cid:86)(cid:86) (cid:82)(cid:73) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:68)(cid:81)(cid:71) (cid:83)(cid:85)(cid:82)(cid:89)(cid:76)(cid:71)(cid:72)(cid:86) (cid:87)(cid:75)(cid:72) (cid:72)(cid:89)(cid:76)(cid:71)(cid:72)(cid:81)(cid:70)(cid:72) (cid:87)(cid:75)(cid:68)(cid:87) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:70)(cid:68)(cid:81) (cid:69)(cid:72) (cid:68) (cid:69)(cid:72)(cid:87)(cid:87)(cid:72)(cid:85) (cid:68)(cid:79)(cid:87)(cid:72)(cid:85)(cid:81)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:87)(cid:82) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:73)(cid:82)(cid:85) (cid:90)(cid:82)(cid:85)(cid:71) (cid:85)(cid:72)(cid:83)(cid:85)(cid:72)(cid:86)(cid:72)(cid:81)(cid:87)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:17) (cid:40)(cid:91)(cid:83)(cid:72)(cid:85)(cid:76)(cid:80)(cid:72)(cid:81)(cid:87)(cid:68)(cid:79) (cid:85)(cid:72)(cid:86)(cid:88)(cid:79)(cid:87)(cid:86) (cid:86)(cid:75)(cid:82)(cid:90) (cid:87)(cid:75)(cid:68)(cid:87) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:82)(cid:88)(cid:87)(cid:83)(cid:72)(cid:85)(cid:73)(cid:82)(cid:85)(cid:80) (cid:90)(cid:82)(cid:85)(cid:71) (cid:72)(cid:80)(cid:69)(cid:72)(cid:71)(cid:16) (cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:76)(cid:81) (cid:86)(cid:72)(cid:89)(cid:72)(cid:85)(cid:68)(cid:79) (cid:71)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:49)(cid:47)(cid:51) (cid:87)(cid:68)(cid:86)(cid:78)(cid:86)(cid:15) (cid:86)(cid:88)(cid:70)(cid:75) (cid:68)(cid:86) (cid:87)(cid:72)(cid:91)(cid:87) (cid:70)(cid:79)(cid:68)(cid:86)(cid:86)(cid:76)(cid:73)(cid:76)(cid:70)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81) (cid:68)(cid:81)(cid:71) (cid:81)(cid:68)(cid:80)(cid:72)(cid:71) (cid:72)(cid:81)(cid:87)(cid:76)(cid:87)(cid:92) (cid:85)(cid:72)(cid:70)(cid:82)(cid:74)(cid:81)(cid:76)(cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:55)(cid:75)(cid:72) (cid:84)(cid:88)(cid:68)(cid:79)(cid:76)(cid:87)(cid:68)(cid:87)(cid:76)(cid:89)(cid:72) (cid:68)(cid:81)(cid:68)(cid:79)(cid:92)(cid:86)(cid:76)(cid:86) (cid:76)(cid:79)(cid:79)(cid:88)(cid:86)(cid:87)(cid:85)(cid:68)(cid:87)(cid:72)(cid:86) 
(cid:87)(cid:75)(cid:68)(cid:87) (cid:74)(cid:85)(cid:68)(cid:76)(cid:81) (cid:72)(cid:80)(cid:16) (cid:69)(cid:72)(cid:71)(cid:71)(cid:76)(cid:81)(cid:74)(cid:86) (cid:70)(cid:68)(cid:81) (cid:72)(cid:73)(cid:73)(cid:72)(cid:70)(cid:87)(cid:76)(cid:89)(cid:72)(cid:79)(cid:92) (cid:70)(cid:68)(cid:83)(cid:87)(cid:88)(cid:85)(cid:72) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70) (cid:76)(cid:81)(cid:73)(cid:82)(cid:85)(cid:80)(cid:68)(cid:16) (cid:87)(cid:76)(cid:82)(cid:81)(cid:17) (cid:36)(cid:70)(cid:78)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:80)(cid:72)(cid:81)(cid:87)(cid:86) (cid:58)(cid:72) (cid:90)(cid:82)(cid:88)(cid:79)(cid:71) (cid:79)(cid:76)(cid:78)(cid:72) (cid:87)(cid:82) (cid:87)(cid:75)(cid:68)(cid:81)(cid:78) (cid:51)(cid:85)(cid:82)(cid:73)(cid:17) (cid:45)(cid:82)(cid:85)(cid:71)(cid:68)(cid:81) (cid:37)(cid:82)(cid:92)(cid:71)(cid:16)(cid:42)(cid:85)(cid:68)(cid:69)(cid:72)(cid:85) (cid:73)(cid:85)(cid:82)(cid:80) (cid:56)(cid:81)(cid:76)(cid:89)(cid:72)(cid:85)(cid:86)(cid:76)(cid:87)(cid:92) (cid:82)(cid:73) (cid:48)(cid:68)(cid:85)(cid:92)(cid:79)(cid:68)(cid:81)(cid:71) (cid:68)(cid:81)(cid:71) (cid:39)(cid:85)(cid:17) (cid:60)(cid:72)(cid:73)(cid:72)(cid:81)(cid:74) (cid:61)(cid:75)(cid:72)(cid:81)(cid:74) (cid:73)(cid:85)(cid:82)(cid:80) (cid:55)(cid:72)(cid:81)(cid:70)(cid:72)(cid:81)(cid:87) (cid:73)(cid:82)(cid:85) (cid:87)(cid:75)(cid:72)(cid:76)(cid:85) (cid:75)(cid:72)(cid:79)(cid:83)(cid:73)(cid:88)(cid:79) (cid:86)(cid:88)(cid:74)(cid:74)(cid:72)(cid:86)(cid:87)(cid:76)(cid:82)(cid:81)(cid:86)(cid:17) (cid:55)(cid:75)(cid:76)(cid:86) (cid:90)(cid:82)(cid:85)(cid:78) (cid:90)(cid:68)(cid:86) (cid:86)(cid:88)(cid:83)(cid:83)(cid:82)(cid:85)(cid:87)(cid:72)(cid:71) (cid:69)(cid:92) (cid:87)(cid:75)(cid:72) (cid:46)(cid:72)(cid:92)(cid:16)(cid:36)(cid:85)(cid:72)(cid:68) (cid:53)(cid:72)(cid:16) (cid:86)(cid:72)(cid:68)(cid:85)(cid:70)(cid:75) (cid:68)(cid:81)(cid:71) (cid:39)(cid:72)(cid:89)(cid:72)(cid:79)(cid:82)(cid:83)(cid:80)(cid:72)(cid:81)(cid:87) (cid:51)(cid:85)(cid:82)(cid:74)(cid:85)(cid:68)(cid:80) (cid:82)(cid:73) (cid:42)(cid:88)(cid:68)(cid:81)(cid:74)(cid:71)(cid:82)(cid:81)(cid:74) (cid:51)(cid:85)(cid:82)(cid:89)(cid:76)(cid:81)(cid:70)(cid:72) (cid:62)(cid:21)(cid:19)(cid:21)(cid:19)(cid:37)(cid:19)(cid:20)(cid:19)(cid:20)(cid:22)(cid:24)(cid:19)(cid:19)(cid:19)(cid:20)(cid:64)(cid:17) (cid:53)(cid:72)(cid:73)(cid:72)(cid:85)(cid:72)(cid:81)(cid:70)(cid:72)(cid:86) (cid:39)(cid:68)(cid:89)(cid:76)(cid:71) (cid:44) (cid:37)(cid:72)(cid:68)(cid:89)(cid:72)(cid:85)(cid:15) (cid:37)(cid:85)(cid:68)(cid:71)(cid:92) (cid:38)(cid:79)(cid:68)(cid:85)(cid:78)(cid:15) (cid:40)(cid:71)(cid:90)(cid:68)(cid:85)(cid:71) (cid:54)(cid:87)(cid:68)(cid:81)(cid:87)(cid:82)(cid:81) (cid:41)(cid:79)(cid:72)(cid:80)(cid:16) (cid:80)(cid:76)(cid:81)(cid:74)(cid:15) (cid:55) (cid:41)(cid:79)(cid:82)(cid:85)(cid:76)(cid:68)(cid:81) (cid:45)(cid:68)(cid:72)(cid:74)(cid:72)(cid:85)(cid:15) (cid:68)(cid:81)(cid:71) (cid:48)(cid:68)(cid:85)(cid:76)(cid:68) (cid:58)(cid:82)(cid:79)(cid:87)(cid:72)(cid:85)(cid:86)(cid:17) (cid:21)(cid:19)(cid:19)(cid:26)(cid:17) (cid:58)(cid:75)(cid:72)(cid:81) (cid:86)(cid:72)(cid:80)(cid:68)(cid:81)(cid:87)(cid:76)(cid:70)(cid:86) (cid:80)(cid:72)(cid:72)(cid:87)(cid:86) (cid:83)(cid:75)(cid:82)(cid:81)(cid:72)(cid:87)(cid:76)(cid:70)(cid:86)(cid:29) (cid:36)(cid:70)(cid:82)(cid:88)(cid:86)(cid:87)(cid:76)(cid:70)(cid:68)(cid:79) (cid:86)(cid:87)(cid:88)(cid:71)(cid:76)(cid:72)(cid:86) 
" ]
[ "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "other" ]
[ "Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG).", "However, neural generators are prone to make mistakes, e.g., neglecting an input slot value and generating a redundant slot value.", "Prior works refer this to hallucination phenomenon .", "In this paper, we study slot consistency for building reliable NLG systems with all slot values of input dialogue act (DA) properly generated in output sentences.", "We propose Iterative Rectification Network (IRN) for improving general NLG systems to produce both correct and fluent responses.", "It applies a bootstrapping algorithm to sample training candidates and uses reinforcement learning to incorporate discrete reward related to slot inconsistency into training.", "Comprehensive studies have been conducted on multiple benchmark datasets, showing that the proposed methods have significantly reduced the slot error rate (ERR) for all strong baselines.", "Human evaluations also have confirmed its effectiveness.", "Natural Language Generation (NLG), as a critical component of task-oriented dialogue systems, converts a meaning representation, i.e., dialogue act (DA), into natural language sentences.", "Traditional methods (Stent et al., 2004; Konstas and Lapata, 2013; Wong and Mooney, 2007) are mostly pipeline-based, dividing the generation process into sentence planing and surface realization.", "Despite their robustness, they heavily rely on handcrafted rules and domain-specific knowledge.", "In addition, the generated sentences of rule-based approaches are rather rigid, without the variance of human language.", "More recently, neural network based models (Wen et al., 2015a,b; Dusek and Jurccek, 2016; Equal contributions.", "Input DA inform( NAME = pickwick hotel , PRICERANGE = moderate ) Reference the hotel named pickwick hotel is in a moderate price range Missing this is a moderate hotel [ NAME ] Misplace the pickwick hotel in fort mason is a moderate price range [ AREA ] Table 1: An exmaple (including mistaken generations) extracted from SF Hotel (Wen et al., 2015b) dataset.", "Tran and Nguyen, 2017a) have attracted much attention.", "They implicitly learn sentence planning and surface realisation end-to-end with cross entropy objectives.", "For example, Dusek and Jurccek (2016) employ an attentive encoder-decoder model, which applies attention mechanism over input slot value pairs.", "Although neural generators can be trained end-to-end, they suffer from hallucination phenomenon (Balakrishnan et al., 2019).", "Examples in Table 1 show a misplacement error of an unseen slot AREA and a missing error of slot NAME by an end-to-end trained model, when compared against its input DA.", "Motivated by this observation, in this paper, we define slot consistency of NLG systems as all slot values of input DAs shall appear in output sentences without misplacement.", "We also observe that, for task-oriented dialogue systems, input DAs are mostly with simple logic forms, therefore enabling retrieval-based methods e.g. 
"Furthermore, there exists a discrepancy between the training criterion of cross entropy loss and the evaluation metric of slot error rate (ERR), similar to that observed in neural machine translation (Ranzato et al., 2015).", "Therefore, it is beneficial to use training methods that integrate the evaluation metrics into their objectives.", "In this paper, we propose the Iterative Rectification Network (IRN) to improve slot consistency for general NLG systems.", "IRN consists of a pointer rewriter and an experience replay buffer.", "The pointer rewriter iteratively rectifies slot-inconsistent generations from KNN or data-driven NLG systems.", "The experience replay buffer, of a fixed size, collects candidates, which consist of mistaken cases, for training IRN.", "Leveraging the above observations, we further introduce retrieval-based bootstrapping to sample pseudo mistaken cases as candidates for enriching the training data.", "To foster consistency between the training objective and evaluation metrics, we use REINFORCE (Williams, 1992) to incorporate slot consistency and other discrete rewards into the training objectives.", "Extensive experiments show that the proposed model, KNN + IRN, significantly outperforms all previous strong approaches.", "When applying IRN to improve the slot consistency of prior NLG baselines, we observe large reductions in their slot error rates.", "Finally, the effectiveness of the proposed methods is further confirmed using BLEU scores, case analysis, and human evaluations.", "Inputs to NLG are structured meaning representations, i.e., DAs, each consisting of an act type and a list of slot-value pairs.", "Each slot-value pair represents the type of information and its content, while the act type controls the style of the sentence.", "To improve generalization capability, the delexicalization technique (Wen et al., 2015a,b; Dusek and Jurccek, 2016; Tran and Nguyen, 2017a) is widely used to replace all values in the reference sentence by their corresponding slots in the DA, creating pairs of delexicalized input DAs and output templates.", "Hence the most important step in NLG is to generate templates correctly given an input DA.", "However, this step can introduce missing and misplaced slots, because of modeling errors or unaligned training data (Balakrishnan et al., 2019; Nie et al., 2019; Juraska et al., 2018).", "Lexicalization follows after a template is generated, replacing slots in the template with the corresponding values from the DA.", "Formally, we denote a delexicalized input DA as a set x = { x_1, x_2, ..., x_N } that consists of an act type and some slots.", "The universal set S contains all possible slots.", "The output template y = [ y_1, y_2, ..., y_M ] from an NLG system f(x) is a sequence of tokens (words and slots).", "We define a slot extraction function g as g(z) = { t | t ∈ z; t ∈ S }, (1) where z can be the DA x or the template y.", "A slot-consistent NLG system f(x) satisfies the constraint g(f(x)) = g(x). (2)", "To avoid trivial solutions, we require that f(x) ≠ x.", "However, due to the hallucination phenomenon, it is possible to miss or misplace slot values in generated templates (Wen et al., 2015a), which is hard to avoid in neural-based approaches.", "A KNN-based NLG system f_KNN is composed of a distance function and a template set Y = { y^1, y^2, ..., y^Q }, which is collected from the Q delexicalized sentences in the training corpus.", "Given an input DA x, the distance to a template y^i is defined as ∆(x, y^i) = #({ s | s = t; t ∈ y^i; s ∈ x }), (3) where the function # computes the size of a set.",
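To make Eqs. (1)-(3) concrete, here is a minimal Python sketch of the slot extraction function, the slot-consistency check, and the KNN overlap score. The $...$ slot markers and the small slot universe are illustrative assumptions, not the paper's actual preprocessing.

```python
# Hypothetical slot universe S; in practice it is collected from the corpus.
SLOT_UNIVERSE = {"$NAME$", "$PRICERANGE$", "$AREA$", "$PHONE$"}

def extract_slots(tokens):
    """Eq. (1): g(z) -- the set of slot tokens appearing in a DA or template."""
    return {t for t in tokens if t in SLOT_UNIVERSE}

def is_slot_consistent(da_tokens, template_tokens):
    """Eq. (2): g(f(x)) == g(x) -- every input slot appears, none is misplaced."""
    return extract_slots(da_tokens) == extract_slots(template_tokens)

def knn_score(da_tokens, template_tokens):
    """Eq. (3): number of slots shared by the input DA x and a template y^i.
    Templates are ranked by this overlap (larger means closer)."""
    return len(extract_slots(da_tokens) & extract_slots(template_tokens))

# Usage on the delexicalized Table 1 example:
da = ["inform", "$NAME$", "$PRICERANGE$"]
good = ["the", "hotel", "named", "$NAME$", "is", "in", "a", "$PRICERANGE$", "price", "range"]
bad = ["this", "is", "a", "$PRICERANGE$", "hotel"]  # missing $NAME$
assert is_slot_consistent(da, good) and not is_slot_consistent(da, bad)
```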
"During evaluation, the system f_KNN first ranks the templates in set Y by the distance function and then selects the top k (beam size) templates.", "Figure 1 shows the architecture of the Iterative Rectification Network.", "It consists of two components: a pointer rewriter to produce templates with improved performance metrics, and an experience replay buffer to gather and sample training data.", "The improvements in slot consistency are obtained via an iterative rewriting process.", "Assume, at iteration k, we have a template y^(k) that is not slot consistent with the input DA, i.e., g(y^(k)) ≠ g(x).", "Then, the pointer rewriter rewrites it as y^(k+1) = PR(x, y^(k)).", "The above recursion ends once g(y^(k)) = g(x) or a certain number of iterations is reached.", "The pointer rewriter PR is trained to iteratively correct the candidate y^(k) given a DA x.", "This correction operation is conducted time-recurrently.", "At each position j of rewriting a template, there is a state h_j representing the past history of the pointer rewriter and an action a_j to take according to a policy π.", "State: the state h_j is computed from the concatenation [x; c_j], where the DA x is represented by a one-hot representation (Wen et al., 2015a,b), c_j is a context representation over the input template y^(k) (to be described in Eq. (6)), and the operation [;] means vector concatenation.", "Action: for position j in the output template, the action a_j falls into one of two categories: template copy, c(i), to copy the token at position i of the template y^(k), and word and slot generation, w, to generate a word or a slot at the position.", "For a length-M input template y^(k), the action a_j is therefore in the set { w, c(1), ..., c(M) }.", "The action sequence a for a length-N output template is [ a_1, ..., a_N ].", "Template Copy: the model PR uses an attentive pointer to decide, for position j, which token to copy from the candidate y^(k).", "Each token y_i^(k) in the candidate y^(k) is represented by an embedding y_i^(k).", "For position j in the output template, the model utilizes the hidden state h_j and computes attentive weights over all tokens in y^(k): α_PR(h_j, y_i^(k)) = v_a^T tanh(W_h h_j + W_y y_i^(k)), p_ij^PR = Softmax(α_PR(h_j, y_i^(k))), c_j = Σ_{1≤i≤M} p_ij^PR y_i^(k), (6) where v_a, W_h, and W_y are learnable parameters.", "Word and Slot Generation: the other candidate for position j is a word or a slot key from a predefined vocabulary.", "The action w draws from a distribution over words and slot keys, p_j^Vocab = Softmax(W_v h_j), (7) which depends on the state h_j; the matrix W_v is learnable.", "Policy: the probabilities of the above actions are computed as π(c(i) | h_j) = λ_j · p_ij^PR and π(w | h_j) = 1 − λ_j, (8) where π(c(i) | h_j) is the probability of copying the i-th token from the input template y^(k) to position j, and π(w | h_j) is the probability of generating a word or slot key from the distribution p_j^Vocab in Eq. (7).", "The weight λ_j is a real value between 0 and 1, computed with a Sigmoid operation as λ_j = Sigmoid(v_h h_j).",
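The copy-versus-generate decision can be sketched in a few lines; the following is a minimal numpy illustration of Eqs. (6)-(8), assuming randomly initialized stand-ins for the learnable parameters v_a, W_h, W_y, W_v, v_h. The tanh nonlinearity and the greedy mixing rule in the final comment are assumptions made for illustration rather than details confirmed by the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes: hidden d_h, token embedding d, attention d_a,
# vocabulary V, candidate length M. All weights stand in for learned ones.
d_h, d, d_a, V, M = 8, 6, 5, 20, 4
params = {
    "v_a": rng.normal(size=d_a), "W_h": rng.normal(size=(d_a, d_h)),
    "W_y": rng.normal(size=(d_a, d)), "W_v": rng.normal(size=(V, d_h)),
    "v_h": rng.normal(size=d_h),
}

def policy_step(h_j, cand_emb, p):
    """One rewriting position j: attentive pointer over the candidate
    template (Eq. 6), vocabulary distribution (Eq. 7), and the gate lambda_j."""
    scores = np.array([p["v_a"] @ np.tanh(p["W_h"] @ h_j + p["W_y"] @ y_i)
                       for y_i in cand_emb])
    p_copy = softmax(scores)            # p^PR_ij over candidate positions
    c_j = p_copy @ cand_emb             # context c_j fed back into the state
    p_vocab = softmax(p["W_v"] @ h_j)   # word/slot generation distribution
    lam = 1.0 / (1.0 + np.exp(-(p["v_h"] @ h_j)))  # lambda_j = Sigmoid(v_h . h_j)
    return p_copy, p_vocab, lam, c_j

h_j = rng.normal(size=d_h)
cand_emb = rng.normal(size=(M, d))      # embeddings of the candidate y^(k)
p_copy, p_vocab, lam, c_j = policy_step(h_j, cand_emb, params)
# Greedy decoding (assumed): copy argmax(p_copy) if lam * p_copy.max()
# exceeds (1 - lam) * p_vocab.max(); otherwise generate from the vocabulary.
```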
"With the policy, the pointer rewriter performs greedy search to decide whether to copy or to generate a token.", "The experience replay buffer aims at providing training samples for IRN.", "It has three sources of samples.", "The first is from off-the-shelf NLG systems.", "The second is from the pointer rewriter in the last iteration.", "Both of them are real mistaken samples.", "They are stored in a case set C in the buffer.", "These samples are off-policy, as the case set C can contain samples from many iterations before.", "The third source is sampled from a bootstrapping algorithm (Algorithm 2: Bootstrapping via Retrieval; input: template database T, total sample number V, maximum tolerance ε with default 2).", "These samples are stored in a separate bootstrapped set.", "Iterative Data Aggregation: The replay experiences should be progressive, reflecting improvements in the iterative training of IRN.", "Therefore, we design an iterative data aggregation algorithm in Algorithm 1.", "In the algorithm, the experience replay buffer B is defined as a fixed-size set combining the case set C and the bootstrapped set.", "For a total epoch number of E, it randomly provides mistaken samples for training the pointer rewriter PR at each epoch.", "Importantly, the contents of both C and the bootstrapped set vary from epoch to epoch.", "The case set C initially consists of real mistaken samples from the baseline system (lines 3 to 8).", "Later on, it is gradually filled by samples from the IRN (lines 14 to 19).", "The bootstrapped samples reflect a general distribution of training samples from a template database T (line 10).", "Finally, the algorithm aggregates these two groups of mistaken samples (line 11) and uses them to train the model PR (line 12).", "Bootstrapping via Retrieval: Relying solely on the real mistaken samples exposes the system to a data scarcity problem.", "It is easy to observe that real samples are heavily biased towards certain slots, and the number of real mistaken samples can be small.", "(Figure 2: Correcting a candidate given a reference template; the mistaken and reference templates are aligned token by token with copy positions and generation labels, annotated for extractive slots, function words, noun phrases, and ambiguity.)", "To address this problem, we introduce a bootstrapping algorithm, described in Algorithm 2.", "It uses a template database T, built from the delexicalized NLG training corpus and organized as pairs of DA and reference template (x, z).", "At each turn of the algorithm, it first randomly samples (line 3) a pair (x, z) from the training template database T.", "Then, for every pair (x', z') in T, it measures whether (x', z') is slot-inconsistent with respect to (x, z), and adds each pair that is within a certain distance ε (a hyperparameter) to a set Z (lines 5 to 11).", "ε is usually set to a small number so that the selected samples are close enough to (x, z).", "In practice, we set it to 2.",
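A sketch of the retrieval-based bootstrapping of Algorithm 2, under the assumption that a database entry is a (DA, reference template) pair and that slot inconsistency is measured by the symmetric difference of slot sets; the function names and toy data are hypothetical.

```python
import random

SLOTS = {"$NAME$", "$PHONE$", "$AREA$"}  # assumed slot inventory

def slot_set(tokens):
    return {t for t in tokens if t in SLOTS}

def bootstrap(template_db, total, eps=2):
    """Sample pseudo mistaken cases: pick a gold pair (x, z), collect pairs
    whose slot sets differ from z by at most eps (but not zero) slots, and
    emit one of their templates as a pseudo-mistaken candidate for (x, z)."""
    out = []
    while len(out) < total:
        x, z = random.choice(template_db)
        Z = [(xp, zp) for (xp, zp) in template_db
             if 0 < len(slot_set(zp) ^ slot_set(z)) <= eps]
        if Z:
            _, zp = random.choice(Z)
            out.append({"da": x, "candidate": zp, "reference": z})
    return out

db = [(["inform", "$NAME$", "$PHONE$"], ["$NAME$", "phone", "is", "$PHONE$"]),
      (["inform", "$NAME$", "$AREA$"], ["$NAME$", "is", "in", "$AREA$"])]
print(bootstrap(db, total=2))
```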
"Finally, it performs random sampling (line 12) on Z and inserts the result into the output set.", "This bootstrapping process stops when the number of generated samples reaches a certain limit K.", "These samples, which we refer to as pseudo samples in the following, represent a wider coverage of training samples than the real mistaken samples.", "Because they are sampled from the general distribution of the templates, some of their semantics are not seen in the real mistaken cases.", "We will demonstrate through experiments that this effectively addresses the data scarcity problem.", "One key idea behind the proposed IRN model is to conduct distant supervision on the actions of template copy and generation.", "We diagram its motivation in Figure 2.", "During training, only the candidate y and its reference z are given.", "The exact actions that convert the template y to z have to be inferred from the two templates.", "Here we use simple rules for the inference.", "Firstly, the rules check whether each reference token z_j exists in the candidate y.", "The output is a label d^c consisting of 1s and 0s, representing whether tokens in the reference template are present or absent in the candidate.", "Secondly, the rules locate the original position d_j^l in the candidate for each token j in the reference template if d^c = 1, and use -1 for d^c = 0.", "Finally, the action label d for the policy is inferred, with w for d_j^l = -1 and c(i) for d_j^l = i.", "We may use the extracted tags to do supervised learning.", "The loss to be minimized is J_SL = -Σ_{j=1}^{L} log π(d_j | h_j) (9), where L is the length of the ground truth and π(d_j | h_j) computes the likelihood of action d_j at position j given state h_j.", "However, the following issues arise when attempting to utilize the labels produced by distant supervision for training.", "Firstly, the importance of every token in the candidate is different.", "For example, the noun phrase (colored in blue in Figure 2) is critical and should be copied.", "Function words (colored in red) are of little relevance and can be generated by IRN itself.", "However, distant supervision treats them the same.", "Secondly, rule-based matching may cause semantic ambiguity (dashed line colored in black).", "Lastly, the training criterion of cross entropy is not directly relevant to the evaluation metric of slot error rate.", "To address these issues, we use reinforcement learning to obtain the optimal actions.", "In this section, we describe another method to train IRN.", "We apply policy gradient (Williams, 1992) to optimize models with discrete rewards.", "Slot Consistency: This reward is related to the correctness of output templates.", "Given the set of slot-value pairs g(y) from the output template generated by IRN and the set of slot-value pairs g(x) extracted from the input DA, the reward is zero when they are equal; otherwise, it is negative, with its magnitude set to the cardinality of the difference between the two sets: r_SC = -|g(y) Δ g(x)| (10).", "(Table 2: BLEU and slot error rate (ERR) of all models, including HLSTM (Wen et al., 2015a), on the SF Restaurant, SF Hotel, Laptop, and Television datasets.)", "Language Fluency: This reward is related to the naturalness of the realized surface form from a response generation method.", "Following (Wen et al., 2015a,b), we first train a backward language model on the reference texts from the training data.", "Then, the perplexity (PPL) of the surface form after lexicalization of the output template y is measured using the language model.", "This PPL is used for the language fluency reward, r_LM = -PPL(y) (11).",
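The rule-based label inference of Section 4 and the slot-consistency reward of Eq. (10) can be sketched as follows; first-occurrence matching is a simplifying assumption (the semantic ambiguity noted above is exactly what it glosses over).

```python
SLOTS = {"$NAME$", "$PHONE$", "$AUDIO$"}  # assumed slot inventory

def infer_actions(candidate, reference):
    """Distant supervision: for each reference token, emit c(i) if it can be
    copied from position i of the candidate, else the generate action w."""
    actions = []
    for tok in reference:
        if tok in candidate:
            actions.append(("c", candidate.index(tok)))  # copy from position i
        else:
            actions.append(("w", tok))                   # generate word/slot
    return actions

def r_sc(da, template):
    """Eq. (10): negative size of the symmetric difference of slot sets."""
    g = lambda z: {t for t in z if t in SLOTS}
    return -len(g(template) ^ g(da))

cand = ["the", "hotel", "-s", "phone", "number", "is", "$PHONE$"]
ref = ["the", "phone", "number", "of", "the", "$NAME$", "is", "$PHONE$"]
print(infer_actions(cand, ref))
print(r_sc(["inform", "$NAME$", "$PHONE$"], cand))  # -1: $NAME$ missing
```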
"Distant Supervision: We also derive a reward from the distant supervision of Section 4.", "For a length-L reference template, the reward is r_DS = Σ_{j=1}^{L} log π(d_j | h_j) (12), where d_j is the inferred action label.", "The final reward combines the three: r(a) = λ_SC r_SC + λ_LM r_LM + λ_DS r_DS (13), where λ_SC + λ_LM + λ_DS = 1.", "We set them to equal values in this work.", "A reward is observed after the last token of the utterance is generated.", "We utilize supervised learning in Eq. (9) to initialize our model with the labels extracted from distant supervision.", "After its convergence, we continuously tune the model using the policy gradient described in this section.", "The policy model in PR itself generates a sequence of actions a, not necessarily the same as d, and this produces an output template y used to compute the slot consistency reward in Eq. (10) and the language fluency reward in Eq. (11).", "With these rewards, the final reward is computed as in Eq. (13).", "The gradient to back-propagate is estimated using REINFORCE as ∇J_RL(θ) = (r(a) - b) Σ_{j=1}^{N} ∇ log π(a_j | h_j) (14), where θ denotes the model parameters.", "r(a) - b is the advantage function per REINFORCE, and b is a baseline.", "Through experiments, we find that b = BLEU(y, z) performs better (Weaver and Tao, 2001) than tricks such as simple averaging of the likelihood, 1/N Σ_{j=1}^{N} log π(a_j | h_j).", "We assess the model performances on four NLG datasets of different domains.", "The SF Hotel and SF Restaurant benchmarks were collected in (Wen et al., 2015a), while the Laptop and TV benchmarks were released by (Wen et al., 2016).", "Each dataset is evaluated with five strong baseline methods, including HLSTM (Wen et al., 2015a), SC-LSTM (Wen et al., 2015b), TGen (Dušek and Jurčíček, 2016), ARoA (Tran and Nguyen, 2017b) and RALSTM (Tran and Nguyen, 2017a).", "Following these prior works, the evaluation metrics are BLEU and slot error rate (ERR), computed as ERR = (p + q) / N (15), where N is the total number of slots in the DA, and p and q are the numbers of missing and redundant slots in the generated template, respectively.", "We follow all baseline performances reported in (Tran and Nguyen, 2017b) and use the open-source toolkits RNNLG (https://github.com/shawnwun/RNNLG) and TGen to build the NLG systems HLSTM, SC-LSTM and TGen.", "We reimplement the baselines ARoA and RALSTM since their source code is not available.", "We first compare our model, i.e., IRN + KNN, with all the strong baselines mentioned above.", "Table 2 shows that the proposed model significantly outperforms previous baselines on both BLEU score and ERR.", "Compared with the current state-of-the-art model, RALSTM, it achieves ERR reductions of 1.45, 1.38, 1.45 and 1.80 times for the SF Restaurant, SF Hotel, Laptop, and Television datasets, respectively.", "Furthermore, it improves BLEU scores by 3.59%, 1.45%, 2.29% and 3.33% on these datasets, respectively.", "These improvements in BLEU score can be attributed to the language fluency reward r_LM.", "To verify whether IRN helps improve the slot consistency of general NLG models, we further equip strong baselines, including HLSTM, TGen and RALSTM, with IRN.", "We evaluate their performances on the SF Restaurant and Television datasets.", "As shown in Table 3, the methods consistently reduce ERRs and also improve BLEU scores for all baselines.", "In conclusion, our model, IRN (+ KNN), has not only achieved state-of-the-art performance but can also contribute to improvements in slot consistency for general NLG systems.",
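A small sketch of the ERR metric of Eq. (15) and the mixed reward of Eq. (13), with the λ weights set equal as stated above; the numeric inputs are placeholders, not results from the paper.

```python
def slot_error_rate(n_slots, missing, redundant):
    """Eq. (15): ERR = (p + q) / N."""
    return (missing + redundant) / n_slots

def combined_reward(r_sc, r_lm, r_ds, lams=(1 / 3, 1 / 3, 1 / 3)):
    """Eq. (13): r(a) = lam_SC * r_SC + lam_LM * r_LM + lam_DS * r_DS."""
    return lams[0] * r_sc + lams[1] * r_lm + lams[2] * r_ds

print(slot_error_rate(n_slots=4, missing=1, redundant=1))  # 0.5
print(combined_reward(r_sc=-1.0, r_lm=-35.2, r_ds=-2.7))   # toy values
```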
"We perform a set of ablation experiments on the SC-LSTM + IRN models on the Laptop dataset to understand the relative contributions of the data aggregation algorithms in Sec. 3.2 and the rewards in Sec. 5.1.", "The results in Table 4 show that removing the slot consistency reward r_SC or the distant supervision reward r_DS from the advantage function dramatically degrades slot error rate performance.", "Language-fluency-related information, from the baseline BLEU and the reward r_LM, also has a positive impact on BLEU and slot error rate, though smaller than that of r_SC or r_DS.", "Using only candidates from the baselines degrades performance to approximately that of the baseline SC-LSTM.", "This shows that incorporating candidates from IRN is important.", "The model without bootstrapping, even when including candidates from IRN, performs worse than SC-LSTM in Table 3.", "This shows that bootstrapping to include generic samples from the template database is critical.", "We also asked human evaluators to score generated surface realizations from our model and other baselines in terms of informativeness and naturalness.", "Here informativeness measures whether the output utterance contains all the information specified in the DA, without inserting extra slots or missing an input slot.", "Naturalness is defined as whether the output mimics a response from a human (both ratings are out of 5).", "Table 5 shows that RALSTM + IRN notably outperforms RALSTM in informativeness, by 4.97% relative, from 4.63 to 4.86.", "In terms of naturalness, the improvement is from 4.01 to 4.07, or 1.50% relative.", "Meanwhile, IRN helps to improve the performance of TGen by 5.12% on informativeness and 3.23% on naturalness.", "Table 6 presents a sample on the TV dataset and shows the progress made by IRN.", "Given an input DA, the baseline HLSTM outputs, in the third row, a template that misses the slot $AUDIO$ but inserts the slot $PRICE$.", "The output template from the first iteration of IRN removes the inserted $PRICE$ slot.", "The second iteration improves language fluency but makes no progress on slot consistency.", "The third iteration achieves slot consistency, after which a natural-language utterance, though slightly different from the reference text, is generated via lexicalization.", "Conventional approaches for solving the NLG task are mostly pipeline-based, dividing it into sentence planning and surface realisation (Dethlefs et al., 2013; Stent et al., 2004; Walker et al., 2002).", "Oh and Rudnicky (2000) introduce a class-based n-gram language model and a rule-based reranker.", "Ratnaparkhi (2002) addresses the limitations of n-gram language models by using more sophisticated syntactic dependency trees.", "Mairesse and Young (2014) employ a phrase-based generator that learns from a semantically aligned corpus.", "Despite their robustness, these models are costly to create and maintain as they heavily rely on handcrafted rules.", "Recent works (Wen et al., 2015b; Dušek and Jurčíček, 2016; Tran and Nguyen, 2017a) build data-driven models based on end-to-end learning.", "Wen et al. (2015a) combine two recurrent neural network (RNN) based models with a CNN reranker to generate the required utterances.", "Wen et al. (2015b) introduce a novel SC-LSTM with an additional reading cell to jointly learn the gating mechanism and language model.",
"Dušek and Jurčíček (2016) present an attentive neural generator that applies an attention mechanism over the input DA.", "Tran and Nguyen (2017b,a) employ a refiner component to select and aggregate the semantic elements produced by the encoder.", "More recently, domain adaptation (Wen et al., 2016) and unsupervised learning (Bahuleyan et al., 2018) for NLG have also received much attention.", "We are also inspired by the post-edit paradigm (Xia et al., 2017), which uses a second-pass decoder to improve translation quality.", "A recent method in (Wu et al., 2019) defines an auxiliary loss that checks whether the object words exist in the expected system response of a task-oriented dialogue system.", "It would be interesting to apply this auxiliary loss in the proposed method.", "On the other hand, the REINFORCE (Williams, 1992) algorithm applied in this paper is more general than (Wu et al., 2019) in that it can incorporate other metrics, such as BLEU.", "Nevertheless, end-to-end neural generators suffer from the hallucination problem, and it is hard for them to avoid generating slot-inconsistent utterances (Balakrishnan et al., 2019).", "Balakrishnan et al. (2019) attempt to alleviate this issue by employing a tree-structured meaning representation and a constrained decoding technique.", "However, the tree-shaped structure requires additional human annotation.", "We have proposed the Iterative Rectification Network (IRN) to improve slot consistency of general NLG systems.", "In this method, a retrieval-based bootstrapping is introduced to sample pseudo mistaken cases from the training corpus to enrich the original training data.", "We also employ policy-based reinforcement learning to enable training the models with discrete rewards that are consistent with the evaluation metrics.", "Extensive experiments show that the proposed model significantly outperforms previous methods.", "These improvements include both correctness, measured with slot error rates, and naturalness, measured with BLEU scores.", "Human evaluation and case study also confirm the effectiveness of the proposed method.", "This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.", "This work was done while the first author did an internship at Ant Financial.", "We thank anonymous reviewers for valuable suggestions." ]
[ "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Multi-hop reasoning requires aggregation and inference from multiple facts.", "To retrieve such facts, we propose a simple approach that retrieves and reranks set of evidence facts jointly.", "Our approach first generates unsupervised clusters of sentences as candidate evidence by accounting links between sentences and coverage with the given query.", "Then, a RoBERTa-based reranker is trained to bring the most representative evidence cluster to the top.", "We specifically emphasize on the importance of retrieving evidence jointly by showing several comparative analyses to other methods that retrieve and rerank evidence sentences individually.", "First, we introduce several attentionand embedding-based analyses, which indicate that jointly retrieving and reranking approaches can learn compositional knowledge required for multi-hop reasoning.", "Second, our experiments show that jointly retrieving candidate evidence leads to substantially higher evidence retrieval performance when fed to the same supervised reranker.", "In particular, our joint retrieval and then reranking approach achieves new state-of-the-art evidence retrieval performance on two multi-hop question answering (QA) datasets: 30.5 Re-call@2 on QASC, and 67.6% F1 on MultiRC.", "When the evidence text from our joint retrieval approach is fed to a RoBERTa-based answer selection classifier, we achieve new state-of-the-art QA performance on MultiRC and second best result on QASC.", "Recent advances in question answering (QA) have achieved excellent performance on several benchmark datasets (Wang et al., 2019a), even when relying on partial (Gururangan et al., 2018), incorrect (Jia and Liang, 2017) or no supporting knowledge (Raffel et al., 2019).", "Specifically, black-box neural QA methods have shown to rely on spurious signals confirming unfaithful or non-explainable behavior Question: RNA is a small molecule that can squeeze through pores in (A) dermal & vascular tissue (B) space between (C) eukaryotic cells (D) jellyfish (E) (H) Gold evidence sentences: 1. RNA is a small molecule that can squeeze through pores in the nuclear membrane 2. Cells with a nuclear membrane are called eukaryotic.", "BM25 sentences: 1. RNA is a small molecule that can squeeze through pores in the nuclear membrane.", "2. RNA synthesis in eukaryotic cells is synthesized by three types of RNA polymerases 3. Eukaryotic cells have three different RNA polymerases.", "4. the molecule seems to have evolved specifically to parasitize eukaryotic cells WAIR Step-1 sentences: 1. RNA is a small molecule that can squeeze through pores in the nuclear membrane.", "2. RNA synthesis in eukaryotic cells is synthesized by three types of RNA polymerases WAIR Step-2 sentences: 1. Cells with a nuclear membrane are called eukaryotic 2. 
"Thus, justifying the underlying knowledge or evidence text has been deemed very important for the faithfulness and explainability of neural QA methods (DeYoung et al., 2019; Yang et al., 2018).", "Our work is also focused on improving the explainability of QA methods by means of evidence (or justification) sentence retrieval.", "Evidence retrieval for multi-hop QA is a challenging task, as it requires compositional-inference-based aggregation of multiple evidence sentences (Yang et al., 2018; Khashabi et al., 2018; Welbl et al., 2018; Khot et al., 2019a).", "For such compositional aggregation, we emphasize the importance of jointly handling the set of evidence facts within the QA pipeline.", "The motivation behind our work is simple: jointly handling evidence sentences gives access to the complete information together and thus enables compositional reasoning.", "On the other hand, handling evidence sentences individually leads to the selection of disconnected evidence that does not support compositional multi-hop reasoning (Jansen, 2018; Chen and Durrett, 2019).", "For retrieving compositional evidence, we propose a simple unsupervised retriever, the weighted alignment-based information retrieval algorithm (WAIR), which generates candidate evidence chains based on two key heuristics: coverage and associativity.", "Coverage denotes the proportion of the query covered by the evidence text, and associativity denotes links between individual evidence sentences.", "We show that WAIR evidence candidate chains lead to substantially higher retrieval performance when compared to other approaches that handle evidence sentences individually.", "In particular, we show that just feeding the candidate evidence chain from WAIR to a RoBERTa reranker achieves substantially better performance than when the same reranker is instead fed with individual candidate sentences.", "Further, we present several attention- and embedding-based analyses of the reranker RoBERTa model highlighting that WAIR-retrieved chains enable (a) learning of compositional reasoning and (b) complementary knowledge aggregation.", "Our overall QA approach operates in three steps.", "We first retrieve candidate evidence chains for a given query using WAIR.", "In 2 iterations, our unsupervised WAIR approach weighs down query terms that have already been covered by previously retrieved sentences, and increases the weights of reformulated query terms that have not been covered yet.", "In the second step of our QA framework, we jointly rerank the clusters of evidence sentences generated by WAIR.", "The reranking is implemented as a regression task, where the score assigned to each sentence cluster is the F1 score computed from the gold annotated evidence sentences.", "Lastly, the top reranked set of sentences is fed into an answer classification component.", "In particular, our key contributions are: (1) We introduce a simple, unsupervised and fast evidence retrieval approach, WAIR, for multi-hop QA that generates complete and associated candidate evidence chains.", "To show the multi-hop reasoning approximated within WAIR candidate evidence chains, we present several attention-weight and embedding-based analyses (code available at https://github.com/vikas95/WAIR_interpretability).", "Our attention analyses highlight that jointly retrieving and reranking candidate evidence chains using WAIR assists the reranker model in learning the contextual and compositional knowledge necessary for multi-hop reasoning.", "Specifically, our transformer-based reranker attends more to the linking terms necessary for combining multiple evidence facts.",
"Further, our embedding-based analysis shows that reranking WAIR evidence chains helps the reranker project the embedding representations of evidence facts differently, thus allowing the complementary knowledge aggregation during the QA stage that is necessary for multi-hop reasoning.", "(2) We show that just the simple construction of candidate evidence chains using WAIR leads to substantially higher evidence selection performance (10.2% Recall@2 on QASC (Khot et al., 2019a) and 3.6% F1 on MultiRC (Khashabi et al., 2018)) with the same RoBERTa reranker over the case when it is fed with individual candidate sentences.", "Specifically, we achieve new state-of-the-art evidence selection results on two multi-hop QA datasets (30.5% Recall@2 on QASC and 68.0% on MultiRC).", "Further, our simple candidate chain generation approach can be coupled with any reranker and QA method, and can be applied to different QA settings, e.g., large KB-based QA such as QASC, and reading comprehension and passage-based MCQA such as MultiRC.", "We also show that QA performance improves by 2.3% EM0 on MultiRC and 5.2% accuracy on QASC when the top reranked WAIR evidence chain is fed to the QA module, over the case of feeding individually reranked sentences.", "By just feeding the top reranked WAIR evidence chain, we achieve state-of-the-art QA performance on MultiRC and the second best QA results on QASC.", "Evidence retrieval has been shown to improve the explainability of complex-inference-based QA tasks (Qi et al., 2019).", "There are two potential ways to retrieve evidence sentences: individually or jointly.", "Retrieving individual evidence sentences: Most unsupervised information retrieval techniques, e.g., BM25 (Robertson et al., 2009), tf-idf (Ramos et al., 2003; Manning et al., 2008), or alignment-based methods (Kim et al., 2017), have been widely used to retrieve evidence texts for open-domain QA tasks (Joshi et al., 2017; Dunn et al., 2017).", "Although these approaches have been strong benchmarks for decades, they usually do not perform well on recent complex reasoning-based QA tasks (Yang et al., 2018; Khot et al., 2019a).", "More recently, supervised neural network (NN) based retrieval methods have achieved strong results on complex questions (Karpukhin et al., 2020; Nie et al., 2019; Tu et al., 2019).", "However, these approaches require annotated data for the initial retrieval and suffer from the same disadvantages at the reranking stage as the other methods that retrieve and rerank individual evidence sentences, i.e., the retrieval algorithm is not aware of what information has already been retrieved and what is missing, or of how individual facts need to be combined to explain the multi-hop reasoning (Khot et al., 2019b).", "Our proposed joint retrieval and reranking approach mitigates both these limitations.", "Jointly retrieving evidence sentences: Recently, several works have proposed the retrieval of evidence chains, which has led to stronger evidence retrieval performance (Yadav et al., 2019b; Khot et al., 2019a).", "Our WAIR approach aligns in the same direction and particularly utilizes coverage and associativity, which leads to higher performance.", "Importantly, our work focuses on highlighting the benefits of feeding evidence chains to transformer-based reranking methods.", "First, the evidence retrieval performance of the same reranker is substantially improved, resulting in state-of-the-art performance and thus outperforming all previous approaches.",
"Second, we show that the candidate evidence chains from WAIR assist the reranker method in learning compositional and aggregative reasoning.", "Other recent works have proposed supervised iterative and multi-task approaches for evidence retrieval (Feldman and El-Yaniv, 2019; Qi et al., 2019; Banerjee, 2019).", "But these supervised chain retrieval approaches are expensive in their runtime and do not scale well to large KB-based QA datasets.", "On the contrary, our retrieval approach does not require any labeled data and is faster because of its unsupervised nature.", "Further, our joint approach is much simpler, performs well, and scales to large KB-based QA such as QASC.", "In this work, we focus on analyzing multi-hop evidence reasoning via attention (Clark et al., 2019) and learned-embedding (Ethayarajh, 2019) analyses.", "Several works have presented attention-based analyses of pretrained transformer language models (Rogers et al., 2020) on various NLP tasks, including QA (van Aken et al., 2019).", "Our novel analyses are particularly focused on (a) evaluating attention scores on the linking terms that approximate multi-hop compositionality and (b) the complementary knowledge aggregation necessary for multi-hop QA.", "Importance of Evidence Retrieval for Question Answering: Several neural QA methods have achieved high performance without relying on evidence texts.", "Many of these approaches utilize external labeled training data (Raffel et al., 2019; Pan et al., 2019), which limits their portability to other domains.", "Others rely on pretraining, which tends to be computationally expensive but can be used to provide starting checkpoints (Devlin et al., 2019; Liu et al., 2019).", "More importantly, many of these directions lack an explanation of the selected answers for the end user.", "In contrast, QA methods that incorporate an evidence retrieval module can provide these evidence texts as human-readable explanations.", "Further, several works have demonstrated that retrieve-and-read approaches (similar to ours) tend to achieve higher performance than the former QA methods (Chen et al., 2017; Qi et al., 2019).", "Our work is inspired by these directions but mostly focuses on jointly retrieving and reranking clusters of evidence sentences, which leads to substantial QA performance improvements.", "We summarize the overall execution flow of our QA system in Figure 2.", "The four key components of the system are explained below.", "1. Initial evidence sentence retrieval: In the first step, we retrieve candidate evidence (or justification) sentences given a query.", "We propose a simple unsupervised approach, which, however, has been designed to bridge the lexical chasm inherent between multi-hop questions and their answers (Berger et al., 2000).", "We call our algorithm weighted alignment-based information retrieval (WAIR).", "WAIR operates in two steps, combining ideas from embedding-based alignment (Yadav et al., 2019a) and pseudo-relevance feedback (Bernhard, 2010) approaches.", "In the first step, WAIR forms a query Q = q_1, q_2, ..., q_n from the given question (concatenated with the candidate answer for multiple-choice QA).", "Using Q, WAIR retrieves k justification sentences (J_1, J_2, ..., J_k) with the alignment IR method of Yadav et al. (2019a); for larger KBs, BM25 is used to retrieve the initial pool of sentences.",
"In the second step, WAIR generates k new queries (Q_1, Q_2, ..., Q_i, ..., Q_k) by concatenating Q with each justification retrieved in the previous step.", "For each new query Q_i, WAIR assigns a weight of 2 to the original query tokens that are not retrieved in the corresponding justification sentence J_i.", "All the other, covered terms in Q_i receive a weight of 1.", "This simple idea encourages the algorithm to focus on terms that have not yet been retrieved in J_i.", "Also, weighing uncovered query terms higher encourages the retrieval approach to retrieve the remaining query terms, thus yielding higher query coverage scores, as shown in Table 1.", "(Table 1: Query term coverage — QASC (top 2): WAIR 78.85, BM25 61.42, Alignment 63.40, Gold Evidence 80.81; MultiRC (top 3): WAIR 55.92, BM25 39.86, Alignment 52.98, Gold Evidence 63.95.)", "Further, the concatenation of J_i with Q encourages the retrieval of sentences that are associated or linked with the previously retrieved sentences.", "The J_i terms are also weighted 1 to mitigate the semantic drift problem by helping the second retrieval iteration stay close to the original query (see the WAIR sentences in Fig. 1).", "In both iterations of WAIR, the score between a given query Q and a justification sentence J is calculated as s(Q, J) = Σ_{m=1}^{|Q|} idf(q_m) · align(q_m, J) (1), with align(q_m, J) = max_{k=1}^{|J|} cosSim(q_m, j_k) (2), where q_m and j_k are the m-th and k-th terms of the query Q and the justification sentence J, respectively.", "The inverse document frequency (idf) values are computed over the complete knowledge base of QASC (Khot et al., 2019a) and over all the paragraphs of the MultiRC dataset.", "The cosine similarity (cosSim) is computed over GloVe embeddings for simplicity.", "2. Generating candidate evidence sets: From the N sentences retrieved in the 2 iterations of the previous step, WAIR generates (N choose p) combinations, where p denotes the number of sentences in a candidate evidence chain.", "To reduce the overhead on the next supervised component, we implemented a beam filter strategy on these sets.", "We first rank each evidence set E_i by how many query terms are included in the set (referred to as coverage, which has been shown to be a strong retrieval indicator for multi-hop QA (Wang et al., 2019b), as also shown in Table 1): C(E_i) = (1 / |t(Q)|) Σ_{w ∈ t(Q) ∩ t(E_i)} idf(w) (3), where t(Q) and t(E_i) denote the unique terms in Q and in the evidence set E_i, respectively.", "We then keep the top n sets with the highest coverage score C.", "We implement an equivalent process for the SingleRR baseline: we compute the coverage C for individual evidence sentences, and keep the top n.",
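A sketch of the WAIR scoring of Eqs. (1)-(2) together with the second-iteration reweighting described above; the GloVe-like embeddings, idf values, and example terms are illustrative assumptions.

```python
import numpy as np

def align(q_vec, sent_vecs):
    """Eq. (2): max cosine similarity of a query term to the sentence terms."""
    sims = sent_vecs @ q_vec / (np.linalg.norm(sent_vecs, axis=1)
                                * np.linalg.norm(q_vec) + 1e-9)
    return float(sims.max())

def wair_score(query, sentence, emb, idf, weights=None):
    """Eq. (1), with optional per-term weights for the second iteration."""
    w = weights or {t: 1.0 for t in query}
    sent_vecs = np.stack([emb[t] for t in sentence])
    return sum(w[t] * idf.get(t, 1.0) * align(emb[t], sent_vecs) for t in query)

def reweight(query, justification):
    """Weight 2 for query terms not covered by the retrieved justification."""
    return {t: (1.0 if t in set(justification) else 2.0) for t in query}

rng = np.random.default_rng(0)
emb = {t: rng.normal(size=16) for t in ["rna", "pores", "eukaryotic", "membrane"]}
idf = {"rna": 2.1, "pores": 3.0, "eukaryotic": 2.5, "membrane": 2.2}
query, sent = ["rna", "pores", "eukaryotic"], ["rna", "pores", "membrane"]
print(wair_score(query, sent, emb, idf, weights=reweight(query, sent)))
```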
"3. Supervised evidence reranking: This component uses a supervised RoBERTa classifier to rerank evidence sets (for JointRR) or to classify individual justifications (for SingleRR).", "The latter scenario is modeled as binary classification of individual justification sentences.", "The former scenario (for JointRR) is modeled as a regression task, where the score of each evidence set is the F1 score computed from the gold evidence sentences.", "For example, an evidence set with 3 sentences, of which 2 are correct, has a precision of 2/3.", "Assuming 2 gold justifications are not included in the set, its recall is 2/4, and the F1 score used for regression is 0.57.", "Please note that we directly use the sets created in the previous step even during training, i.e., we do not insert gold sentences into the sets, to keep consistency between the training and test steps.", "For both classifiers, we used RoBERTa-base with a learning rate of 1e-5, a maximum sequence length of 256, a batch size of 8, and 4 epochs.", "For the SingleRR approach, all evidence sentences with probability larger than 0.5 are concatenated to create the final evidence text.", "For the JointRR approach, the evidence set with the highest regression score is selected.", "Similarly, all the sentences in this set are concatenated into a single text.", "4. Answer selection: The last component classifies candidate answers given the original question and the evidence text assembled in the previous step.", "Similar to previous works, we use the multiple-choice question answering (MCQA) architecture of RoBERTa for QASC (Khot et al., 2019a; Wolf et al., 2019), where a softmax is used to discriminate among the eight answer choices.", "The inputs to RoBERTa-MCQA consist of eight queries (from the eight candidate answers) and their corresponding eight evidence texts.", "The hyperparameters used were: RoBERTa-large, maximum sequence length = 128 (for each candidate answer), batch size = 8, and 3 epochs.", "For MultiRC, where questions have a variable number of candidate answers and multiple correct answers, a RoBERTa binary classifier is used for each candidate separately.", "We focus on complex, non-factoid, long-answer-span, explainable multi-hop datasets.", "Multi-Sentence Reading Comprehension (MultiRC): a reading comprehension dataset provided in the multiple-choice QA format (Khashabi et al., 2018).", "Every question is supported by one document, from which the answer and justification sentences must be extracted.", "WAIR retrieves n = 10 sentences (the recall of gold evidence sentences is approximately 94% at n = 10 on the MultiRC training set), which are separately considered as candidates in the downstream components of SingleRR.", "For the JointRR approach, we generate combinations of evidence texts with k ∈ {2, 3, 4} sentences, i.e., (n = 10 choose k ∈ {2, 3, 4}).", "We use the original MultiRC dataset, which includes the gold annotations for evidence text.", "Question Answering using Sentence Composition (QASC): a multiple-choice QA dataset (Khot et al., 2019a), where each question is provided with 8 answer candidates, of which 4 are hard adversarial choices.", "The evidence sentences are to be retrieved from a large KB of 17.2 million facts.",
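Looping back to step 3 above, the regression target for JointRR can be sketched as below; it reproduces the worked example above (precision 2/3, recall 2/4, F1 ≈ 0.57), with the sentence ids being placeholders.

```python
def evidence_set_f1(predicted_ids, gold_ids):
    """F1 of a candidate evidence set against the gold justification sentences,
    used as the regression score for the JointRR reranker."""
    pred, gold = set(predicted_ids), set(gold_ids)
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(round(evidence_set_f1({1, 2, 3}, {1, 2, 4, 5}), 2))  # 0.57
```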
"Similar to Khot et al. (2019a), WAIR first retrieves n = 10 sentences for each candidate answer, where the query concatenates the question and candidate answer texts.", "WAIR uses each of these retrieved sentences to reformulate and reweigh the query, retrieving 1 additional sentence in a second iteration.", "This results in a total of 20 candidate evidence sentences for a given question and candidate answer.", "We generate evidence chains using the same approach as the one used for MultiRC, except here we focus on k = 2, i.e., (n = 20 choose k = 2), because all questions in QASC are annotated with exactly two gold justification sentences.", "We report QA and evidence selection performance on both datasets using their standard evaluation measures (Khot et al., 2019a; Khashabi et al., 2018).", "Tables 2 and 4 list the main results for both question answering and evidence retrieval for the two datasets.", "Table 3 shows a more detailed analysis for QASC at different levels of recall, i.e., the percentage of gold evidence sentences found in the top N reranked evidence sentences (Recall@N); we found similar trends for MultiRC but present this analysis only on QASC (the large-KB setting) because of space constraints.", "We draw the following observations from the evidence retrieval experiments (answer selection results are discussed in the following subsection): (1) Unsupervised retrieval: Indicating the initial benefits of retrieving evidence chains, our alignment-based evidence retrieval approach (WAIR) outperforms the other IR benchmarks (BM25 and alignment), as shown in rows 10-11 vs. 12-13 in Table 4 and rows {1, 9, 10} vs. 11 in Table 2.", "WAIR also outperforms the two-step IR-based methods for evidence retrieval (rows 9, 10 vs. 11 in Table 2), highlighting the importance of query reweighing in iterative retrieval methods.", "(2) Supervised reranking: Reranking WAIR candidate evidence chains (JointRR) leads to an absolute 10.4% improvement on QASC (row 12 vs. row 13 in Table 2) and a 3.6% F1 improvement on MultiRC (row 14 vs. row 15 in Table 4) over the case where the same reranker is fed with individual sentences (SingleRR).", "This highlights the importance of feeding candidate evidence chains jointly to the reranker.", "(3) Recall comparison: As shown in Table 3, just feeding WAIR candidate chains results in higher performance for retrieving the complete evidence (the "Both found" columns) than SingleRR, especially in low-recall scenarios.", "Notably, SingleRR achieves marginally better performance on finding at least 1 evidence sentence but performs poorly on retrieving both evidence sentences, indicating an absence of compositional multi-hop reasoning.", "We observe similar gains on MultiRC, i.e., JointRR achieves 6% higher recall compared to SingleRR (row 14 vs. row 15 in Table 4).", "(4) (Pseudo) oracle JointRR: To investigate the ceiling of JointRR, we inserted the gold justification sentences within the WAIR-retrieved sentences and then created candidate evidence chains.", "These chains were then reranked by the same RoBERTa reranker.", "As shown in row 18a of Table 4 and row 14 of Table 2, the performance of the JointRR approach is substantially improved when the gold evidence sentences are present in the initial WAIR pool.", "The ceiling performance of JointRR is much higher than that of the current actual method (row 13 in Table 2 and row 15 in Table 4), which suggests there is considerable room for improvement in the initial retrieval step.",
"(Table 3 — QASC dev, SingleRR vs. JointRR, reported as Both found / Atleast 1 found / QA Accuracy: Recall@2: 20.1 / 65.3 / 73.8 vs. 30.5 / 65.1 / 78.6; Recall@4: 35.0 / 67.9 / 74.7 vs. 40.5 / 66.7 / 80.7; Recall@6: 40.2 / 69.0 / 77.9 vs. 44.1 / 68.2 / 80.0; Recall@8: 43.3 / 69.4 / 76.8 vs. 45.2 / 69.0 / 79.6; Recall@10: 44.4 / 69.6 / 79.7 vs. 45.3 / 69.4 / 81.7.)", "Table 3: Evidence retrieval and QA performance comparison of SingleRR and JointRR at different recall levels on the QASC development dataset.", "(5) State-of-the-art evidence retrieval performance: The top reranked WAIR chain achieves 30.5% Recall@2 on QASC (row 13, Table 2) and 67.6% F1 on MultiRC (row 15, Table 4).", "This establishes new state-of-the-art evidence retrieval performance on both datasets.", "(1) Impact of two-step evidence retrieval: Unsurprisingly, the two-step evidence retrieval process substantially impacts QA performance (e.g., row 1 vs. row 9 in Table 2), which is consistent with the observations of previous works (Khot et al., 2019a; Yadav et al., 2020b).", "The top reranked WAIR chain leads to higher QA performance (+5.2% on QASC (row 12 vs. 13, Table 2) and +2.3% F1 on MultiRC (row 14 vs. 15, Table 4)).", "(2) Impact of retrieval recall: As shown in Table 3, JointRR always achieves a higher Recall@N score for finding both (i.e., the complete) evidence sentences.", "As a result, it also achieves better QA accuracy when compared to SingleRR.", "On the other hand, SingleRR always achieves marginally better performance on finding at least 1 evidence sentence, indicating that retrieval of incomplete information leads to lower QA performance.", "Further, the best QA performance is also achieved at higher recalls (last row of Table 3 and row 15 in Table 4).", "(3) Ceiling performance: When coupled with the (pseudo) oracle retriever, the QA scores of JointRR approach human performance (row 18, Table 4).", "This emphasizes the importance of evidence retrieval for QA performance.", "(4) Top QA performance: A RoBERTa answer classifier that uses just the top reranked WAIR evidence achieves state-of-the-art QA performance on the MultiRC development and test sets.", "It also achieves the second and third best results on the QASC development and test sets, respectively.", "Notably, the approaches that score higher than JointRR use ensembling or additional labeled data.", "To better understand the differences in the learned features of the RoBERTa reranker trained on WAIR chains (JointRR) versus individual candidate evidence sentences (SingleRR), we performed several analyses of their attention weights.", "We focus on the attention score on the [CLS] token, whose representation is fed into the decision layer of the RoBERTa classifier (Wolf et al., 2019).", "We compute the attention score from a given token to [CLS] by summing up the attention scores from all 12 heads in each layer (Clark et al., 2019).", "Similar to Clark et al. (2019) and Rogers et al. (2020), we remove the attention scores from <s>, </s>, punctuation, and stopword tokens in our analysis.",
"Attention from semantically matching tokens in query and evidence: Retrieval tasks are often driven by the lexically matching query tokens in the retrieved document (Robertson et al., 2009; Manning et al., 2008).", "Thus, to understand the focus of the reranker on semantic matching, we compute the attention on [CLS] from all the tokens that are not lexically matched between the given question + candidate answer text and the retrieved evidence text (Yadav et al., 2020a).", "We refer to this as the Semantic Matching Attention (SMA) score.", "(Table 5 — attention scores, SingleRR vs. JointRR: SMA: 50.3 vs. 56.0 on QASC, 60.0 vs. 64.0 on MultiRC; Linking: 50.6 vs. 54.8 on QASC, 55.7 vs. 64.4 on MultiRC.)", "As shown in Table 5, the reranker fed with WAIR chains (the JointRR approach) attends more to the tokens requiring semantic matching when compared to SingleRR (50.3% vs. 56.0% on QASC and 60.0% vs. 64.0% on MultiRC), suggesting that it learns how to bridge the lexical chasm between questions and answers (Berger et al., 2000).", "Attention from linking tokens of evidence: Here, we focus only on the terms that are shared between sentences in the gold evidence texts (referred to as linking terms).", "As shown in Fig. 1, {nuclear, membrane} are examples of linking terms that compose the two justification sentences into a complete explanation.", "The remaining terms in the evidence text, i.e., terms that are uniquely present in only one of the evidence sentences, are referred to as non-linking terms.", "As shown in Table 5, JointRR attends considerably more to the linking terms (50.6 vs. 54.8 and 55.7 vs. 64.4), which suggests that it focuses more on the relevant compositional pieces after the retrieval training.",
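A sketch of the [CLS]-attention aggregation used in these analyses: sum the attention each selected token pays to [CLS] over all heads of a layer, then average over the token set of interest (e.g., non-lexically-matched tokens for SMA). The tensor layout and the random attention matrix are assumptions, not the paper's actual extraction code.

```python
import torch

def attention_to_cls(attn, token_positions, cls_pos=0):
    """attn: (heads, seq, seq) attention probabilities for one layer.
    Returns, per selected token, its total attention mass onto [CLS]."""
    return attn[:, token_positions, cls_pos].sum(dim=0)

def mean_attention_over(attn, positions):
    """Average [CLS]-attention over a token set (e.g., SMA or linking terms)."""
    if not positions:
        return 0.0
    return attention_to_cls(attn, positions).mean().item()

attn = torch.rand(12, 16, 16)
attn = attn / attn.sum(-1, keepdim=True)   # normalize rows to probabilities
sma_positions = [5, 7, 9]                  # hypothetical non-matched tokens
print(mean_attention_over(attn, sma_positions))
```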
row 15), this compositionality information is useful when the evidence reranking RoBERTa is transferred to the answer selection component i.e., we see a (small) QA performance improvement.", "On the other hand, SingleRR learns to consider both sentences similar, and this hurts the QA performance by 4.3% EM0 (row 16 vs. row 14, table 4).", "Recent works have shown importance of vector normalization (Kobayashi et al., 2020) for analyzing the transformer embeddings.", "In future works, normalized embedding analysis can be added to further study the behavior of trained retriever's across different layers.", "We introduced a simple unsupervised approach for retrieving candidate evidence chains that after reranking achieves state-of-the-art evidence retrieval performance on two multi-hop QA datasets: QASC and MultiRC.", "We highlight the importance of generating and feeding candidate evidence chains by showing several benefits over the widely followed approach that retrieves evidence sentences individually.", "Further, we introduced few attention and embedding analyses demonstrating that jointly retrieving and reranking chains assist in learning compositional information, which is also beneficial to the downstream QA task.", "Overall, our work highlights the strengths and potential of joint retrieval+reranking approaches for future works.", "This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the World Modelers program, grant number W911NF1810014.", "Mihai Surdeanu declares a fi-nancial interest in lum.ai.", "This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies." ]
[ "abstain", "objective", "objective", "abstain", "objective", "objective", "result", "objective", "objective", "abstain", "other", "objective", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "objective", "result", "result", "method", "result", "result", "objective", "method", "result", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "abstain", "other", "abstain", "other", "other", "abstain", "method", "method", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "result", "other", "other", "other" ]
[ "Cross-lingual retrieval aims to retrieve relevant text across languages.", "Current methods typically achieve cross-lingual retrieval by learning language-agnostic text representations in word or sentence level.", "However, how to learn phrase representations for cross-lingual phrase retrieval is still an open problem.", "In this paper, we propose XPR , a cross-lingual phrase retriever that extracts phrase representations from unlabeled example sentences.", "Moreover, we create a large-scale cross-lingual phrase retrieval dataset, which contains 65K bilingual phrase pairs and 4.2M example sentences in 8 English-centric language pairs.", "Experimental results show that XPR outperforms state-of-the-art baselines which utilize word-level or sentence-level representations.", "XPR also shows impressive zero-shot transferability that enables the model to perform retrieval in an unseen language pair during training.", "Our dataset, code, and trained models are publicly available at github.com/cwszz/XPR/ .", "Phrase retrieval aims to retrieve relevant phrases from a large phrase set, which is a critical part of information retrieval.", "Recent studies on phrase retrieval learn dense representations of phrases, and achieve promising results in entity linking, slot filling, and open-domain question answering tasks (Gillick et al., 2019; Lee et al., 2021a,b).", "Nonetheless, most of the studies focus on monolingual scenarios, leaving the cross-lingual phrase retrieval unexplored.", "Various methods have been proposed to perform cross-lingual text retrieval, which learns cross-lingual word or sentence representations shared Co-first authors with equal contributions.", "across languages.", "Cross-lingual word representation methods typically train word embeddings on each language separately, and then learn an embedding mapping between the embedding spaces of different languages (Mikolov et al., 2013; Dinu et al., 2014).", "Then, the bilingual word pairs can be retrieved between vocabularies using nearest neighbor search, which is also known as bilingual lexicon induction (Artetxe et al., 2018; Lample et al., 2018).", "Cross-lingual sentence retrieval is typically achieved by learning a sentence encoder on multilingual text corpora with self-supervised pretraining tasks (Conneau and Lample, 2019; Conneau et al., 2020), or large-scale parallel corpora (Artetxe and Schwenk, 2019), or both (Chi et al., 2021b).", "The trained sentence encoders produce language-agnostic sentence representations, which enables sentences to be retrieved across languages.", "Despite the effectiveness of word-level and sentence-level methods, how to learn phrase representations for cross-lingual phrase retrieval is still an open problem.", "Learning cross-lingual phrase representations is challenging in two aspects.", "First, a phrase is a conceptual unit containing multiple words, so it is necessary to model the interaction between words, which is not considered in word-level methods.", "Second, a phrase contains fewer words with less information compared to sentences, which prevents sentence encoders from taking the advantage of the ability of understanding full-length sentences.", "Thus, in this paper, we propose a novel cross-lingual phrase retriever named as XPR .", "Unlike previous cross-lingual retrieval methods that directly encode the input text, XPR produces phrase representations using example sentences, which can be collected from unlabeled text corpora.", "Initialized with a pretrained cross-lingual language 4193 
"Besides, we propose the cross-lingual phrase contrast (XPCO) loss for training XPR, where the model is trained to distinguish bilingual phrase pairs from negative examples.", "Furthermore, we create a cross-lingual phrase retrieval dataset, namely WikiXPR.", "WikiXPR contains 65K bilingual phrase pairs across eight language pairs, and provides example sentences for each phrase.", "We conduct a comprehensive evaluation of XPR on WikiXPR under four evaluation settings, i.e., unsupervised, supervised, zero-shot transfer, and multilingual supervised.", "Our XPR model substantially outperforms the retrieval baselines based on cross-lingual word embeddings and cross-lingual sentence encoders.", "XPR also shows impressive zero-shot transferability, enabling the model to be trained on one language pair and directly perform phrase retrieval on other language pairs.", "Moreover, we present an in-depth analysis of XPR, showing that using example sentences improves both the learned XPR model and the phrase representations.", "Our contributions are summarized as follows: We propose XPR, a novel cross-lingual phrase retriever that utilizes example sentences to produce phrase representations.", "We propose the cross-lingual phrase contrast loss for training XPR.", "We demonstrate the effectiveness of XPR on eight language pairs under four evaluation settings.", "We create a cross-lingual phrase retrieval dataset, which provides 65K bilingual phrase pairs with 4.2M example sentences in 8 language pairs.", "Cross-Lingual Retrieval: Current cross-lingual text retrieval methods focus on word-level and sentence-level scenarios.", "Word-level cross-lingual retrieval methods typically train word embeddings on each language separately, and then align the word embeddings across languages by learning a mapping function (Mikolov et al., 2013; Dinu et al., 2014; Artetxe et al., 2016, 2018; Lample et al., 2018; Doval et al., 2018; Joulin et al., 2018).", "Similarly, cross-lingual sentence retrieval can be achieved by aligning sentence representations across different languages.", "LASER (Artetxe and Schwenk, 2019) learns a multilingual auto-encoder on multilingual parallel corpora to produce language-agnostic sentence embeddings.", "Training on parallel corpora, cross-lingual sentence representations can also be learned with neural machine translation (Schwenk, 2018), contrastive learning (Chidambaram et al., 2019; Feng et al., 2020; Chi et al., 2021b), translation span corruption (Chi et al., 2021a), or knowledge distillation (Ham and Kim, 2021).", "Thanks to the recent language model pretraining technique (Devlin et al., 2019), sentence encoders can also be learned on multilingual unlabeled text corpora without using parallel corpora (Conneau and Lample, 2019; Conneau et al., 2020; Chi et al., 2021c; Goswami et al., 2021).", "Phrase Retrieval: Recent research on phrase retrieval typically learns phrase representations.", "Seo et al. (2019) propose to treat phrases as the smallest retrieval unit for open-domain question answering, where the phrases are encoded as indexable, query-agnostic representations.",
(2019) propose to treat phrases as the smallest retrieval unit for open-domain question answering, where the phrases are encoded as indexable query-agnostic representations.", "The retrieval methods can be further improved with self-supervised pretraining, leading to better performance on open-domain question answering (Lee et al., 2021a,b).", "Additionally, DEER (Gillick et al., 2019) formulates the entity linking task as an entity phrase retrieval problem.", "However, these works study phrase retrieval in a monolingual scenario while we focus on cross-lingual phrase retrieval.", "Contrastive Learning Contrastive learning learns representations by a contrastive loss that encourages the positive data pairs to be more similar than other data pairs.", "It has shown to be effective for learning representations of a wide range of modalities including visual representations (He et al., 2020; Chen et al., 2020a; Grill et al., 2020), sentence representations (Kong et al., 2020; Chi et al., 2021b; Gao et al., 2021), audio representations (Saeed et al., 2021), etc.", "Different from previous work that performs contrastive learning at sentence level, we introduce contrastive learning to learn phrase representations.", "Figure 1 shows the overview of XPR .", "In this section, we first introduce the model architecture of XPR , and then present the cross-lingual phrase contrast loss.", "Finally, we show the training procedure of XPR .", "The model architecture of XPR is a Transformer (Vaswani et al., 2017) encoder shared across different languages.", "XPR can be initialized with pretrained cross-lingual language models, which have shown to produce well-aligned sentence representations (Hu et al., 2020; Chi et al., 2021b).", "Given a phrase p and an example sentence x = w 1 , . . . , w n with n tokens that contain the phrase.", "We denote the start and end indices of p as s and e , i.e., p = w s , . . . , w e .", "XPR first encodes x into a sequence of contextualized token representations 1 h 1 , . . . , h n = Transformer ( w 1 , . . . , w n ) .", "In general, a phrase can have more than one example sentence.", "Considering m example sentences X = x 1 , . . . , x m for the phrase p , XPR encodes the sentences separately, and uses the average of the phrase representations as the final phrase representation, i.e., (cid:80) x X x /m .", "Notice that XPR does not introduce additional parameters beyond the original Transformer encoder.", "Thus, after the initialization with a pretrained cross-lingual language model, XPR can directly serve as an unsupervised cross-lingual phrase retriever.", "Recent work (Chen et al., 2020a; Kong et al., 2020) has demonstrated the effectiveness of contrastive learning framework for learning visual and text representations.", "To learn language-agnostic phrase representations, we propose the cross-lingual phrase contrast (XPCO ) loss, where the goal is to distinguish the bilingual phrase pairs from negative examples.", "Formally, consider a mini-batch B = {P , Q} of bilingual phrase pairs, where P = { p } N and Q = { q } N stand for N phrases in a language and their translations in another language, respectively.", "For each phrase p P , we sample example sentences X for p , and compute the phrase representation u as described in Section 3.1.", "Following Chen et al. 
(2020a), we apply a projection head over u that consists of two linear lay-4195 Algorithm 1 Training procedure of XPR Input: Bilingual phrase pair corpus D , unlabeled text corpus U , learning rate , momentum coefficient Output: XPR parameters 1: Initialize , m 2: while not converged do 3: ( P , Q ) D 4: for i = 1 , 2 , . . . , N do 5: X U s.t. p i x , x X 6: Y U s.t. q i y , y Y 7: p i = f ( p i , X ; ) 8: p i = f ( p i , X ; m ) 9: q i = f ( q i , Y ; ) 10: q i = f ( q i , Y ; m ) 11: end for 12: g LXPCO 13: g 14: m m + (1 ) 15: end while ers with a ReLU in between and a l 2 normalization followed.", "For simplicity, we denote the above operation that converts an input phrase p to a normalized vector as p = f ( p, X ; ) , where stands for the parameters of the encoder and the projection head.", "For each phrase q Q , we employ a momentum encoder (He et al., 2020) to encode q : q = f ( q, Y ; m ) , where Y represents the example sentences of q , and m represents the parameters of the momentum encoder.", "For the i -th phrase p i P , q i Q is its corresponding positive example and the other N 1 phrases are treated as negative examples.", "The contrastive loss in the direction of P Q is defined as L ( P Q ) = N (cid:88) i =1 log exp( p (cid:62) i q i /T ) (cid:80) Nj =1 exp( p (cid:62) i q j /T ) (3) Similarly, we employ an additional contrastive loss in the direction of Q P .", "The XPCO loss combines both directions, which is defined as LXPCO = L ( P Q ) + L ( Q P ) (4) where T is the softmax temperature.", "momentum encoder m with a pretrained cross-lingual language model.", "For each training step, we first sample a mini-batch of bilingual phrase pairs ( P , Q ) from the bilingual phrase pair corpus D , and then sample example sentences X and Y for P and Q , respectively.", "Each example sentence x X should contain the phrase p i , which is denoted as p i x .", "With the phrase representations produced by the two encoders, we compute the XPCO loss, and update with gradient descent.", "Notice that we do not perform back-propagation in the momentum encoder, which is learned by a momentum update (He et al., 2020) with a momentum coefficient of .", "Given a phrase set P = { p } N with N candidate phrases , the goal is to find p P with the same meaning of a query phrase q .", "With the trained XPR encoder , we first sample example sentences candidate phrases and then compute their representations { p } N with f ( ; ) .", "Then, for a query phrase q , we can find the corresponding phrase by: p = arg max p i { p (cid:62) i q } (5) In practice, the representations of candidate phrases can be pre-computed for reuse.", "Moreover, although the example sentence number is limited during training, we can use more example sentences to obtain better phrase representation for retrieval.", "To evaluate our model, we create WikiXPR, a cross-lingual phrase retrieval dataset extracted from Wikipedia.", "WikiXPR consists of bilingual phrase pairs in eight English-centric language pairs, and contains large-scale example sentences for the phrases, which enable models to leverage contextual information to better understand phrases.", "In what follows, we describe how we construct the WikiXPR dataset.", "Manually translating phrases is expensive when building a large-scale bilingual phrase pair corpus.", "Therefore, we leverage the link information within Wikipedia for mining bilingual phrase pairs.", "Specifically, we first extract inter-language 4196 ar-en de-en es-en fr-en ja-en ko-en ru-en zh-en Total Train 
4222 1931 1333 1315 14745 2138 5229 8326 39239 Dev 1408 644 445 438 4915 713 1743 2775 13081 Test 1407 644 445 438 4915 713 1743 2775 13080 Table 1: The number of bilingual phrase pairs for each language pair in WikiXPR.", "linked wiki entries from dbpedia 2 .", "We treat English as the pivot language, and choose a range of diverse languages to build our datasets, so that the models can be evaluated with different language families and scripts.", "We filter out time expressions, and the phrase pairs with low edit distance using ROUGE-L (Lin, 2004) as the distance measure.", "The phrase pairs with bidirectional ROUGE-L values higher than 0 .", "5 are removed.", "In addition to phrase pairs in diverse languages, XPR also provides example sentences for each phrase, which aims to facilitate the research on phrase representation learning with example sentences.", "For each phrase, we retrieve example sentences from an unlabeled text corpus.", "In specific, we first extract raw sentences from Wikipedia dumps as our unlabeled text corpus.", "Then, we build sentence indices with the Elasticsearch 3 searching engine.", "For each phrase, we retain the searched sentences with at least 10 more characters than the phrase as the results.", "Besides, we only retain 32 example sentences for each phrase to keep a reasonable size for the resulting example sentence corpus.", "As shown in Table 1, we present the number of bilingual phrase pairs for each language pair in WikiXPR.", "The resulting WikiXPR dataset consists of 65,400 phrase pairs in eight language pairs, and 4.2M example sentences in total.", "For each phrase, WikiXPR provides 32 example sentences extracted from Wikipedia text.", "WikiXPR is split into training, dev, and test sets by 3:1:1, so WikiXPR can be used for diverse evaluation settings including the supervised setting, cross-lingual zero-shot transfer, etc.", "See detailed statistics in Appendix A. 
2 downloads.dbpedia.org/2014/en/ 3 www.elastic.co 5 Experiments In this section, we first present four evaluation settings for cross-lingual phrase retrieval, and describe the models to be compared.", "Then, we present the experimental results.", "We conduct experiments on the cross-lingual phrase retrieval task on our WikiXPR dataset.", "Detailed description of WikiXPR can be found in Section 4.", "Since collecting or annotating parallel sentences can be expensive especially for low-resource languages, we only consider unlabeled text and the bilingual pairs provided by WikiXPR in our experiments.", "According to the difference in the training resource, we present the following four evaluation settings.", "Unsupervised Under the unsupervised setting, the retrieval model should not use any bilingual phrase pairs or other cross-lingual supervision such as bilingual dictionaries and parallel corpus.", "The language representations are typically learned from unlabeled text corpora.", "Supervised In the supervised setting, the retrieval model is trained on and tested on bilingual phrase pairs for each language pair separately, e.g., training and testing with English-French phrase pairs.", "Zero-Shot Transfer Zero-shot transfer is a widely-used setting in cross-lingual understanding tasks Conneau and Lample (2019); Wu and Dredze (2019), where models are trained in a source language but evaluated on other languages.", "We introduce this setting to the cross-lingual phrase retrieval task, e.g., training a model with English-French phrase pairs but performing retrieval between English and Chinese phrases.", "Multilingual Supervised In this setting, the retrieval model is able to use training data in multiple languages, e.g., training a model using a combined training set over all languages in WikiXPR and testing it for each language.", "Considering the lack of methods for cross-lingual phrase retrieval, we develop the following two baselines in our experiments:", "CLWE Cross-lingual word embeddings (CLWE ) encode words from various languages into a shared embedding space.", "For each word in a phrase, we first represent it with the pretrained fastText multilingual word vectors (Grave et al., 2018), and then map it to a shared embedding space via the VECMAP 4 (Artetxe et al., 2018) tool.", "Notice that VECMAP can be applied to both unsupervised and supervised scenarios.", "Finally, the retrieval is achieved by the nearest search using an average word vector as the phrase representation.", "CLSE Cross-lingual sentence encoders (CLSE ) produce language-agnostic sentence representations for the input text sequence.", "We use XLM-R base (Conneau et al., 2020) as the sentence encoder, which is pretrained on a large-scale multilingual text corpus.", "For the unsupervised setting, we use the averaged hidden vector from a specific middle layer as the phrase representation.", "For the other settings, we follow Wang et al. 
(2019), which learns an orthogonal mapping between the feature spaces of the training phrase pairs.", "As LASER (Artetxe and Schwenk, 2019) and LaBSE (Feng et al., 2020) utilize parallel corpora, we do not use them in our experiments.", "As for our model XPR described in Section 3, we initialize XPR with XLM-r base (Conneau et al., 2020) for a fair comparison.", "For each step, we use a batch of 256 phrase pairs and 4 example sen-4 github.com/artetxem/vecmap tences for each phrase.", "The model is optimized with the Adam (Kingma and Ba, 2015) optimizer with a learning rate of 2 10 5 for 100 epochs.", "The learning rate is scheduled with 1% warm-up steps and a linear decay during training.", "Unsupervised Results As present in Table 2, XPR obtains the best performance over all languages without any cross-lingual supervision, achieving an average accuracy@1 of 22.92.", "On the contrary, CLWE and CLSE only obtain 0.83 and 15.12, respectively.", "It indicates that XPR successfully leverage example sentences to produce better phrase representations.", "Besides, the performance varies in different languages.", "We observe that the retrieval between English and European languages can be easier than other language pairs when using CLSE and XPR .", "It is worth mentioning that CLWE and CLSE are proven to be effective for bilingual lexicon induction and cross-lingual sentence retrieval, respectively (Lample et al., 2018; Hu et al., 2020).", "Nonetheless, they do not perform as well as on word or sentence level tasks, indicating that they are not directly applicable to cross-lingual phrase retrieval.", "Supervised Results Under the supervised setting, XPR achieves an average accuracy of 83.94, largely outperforming the other two models over all evaluation language pairs.", "Comparing the results between the unsupervised and the supervised settings, all the three models greatly benefit from the training data.", "In particular, XPR pushes the average result from 7.34 to 87.32 for the en-ja phrase retrieval.", "The results suggest that the bilingual phrase pairs can help to learn cross-lingual alignment for both word-level and sentence-level representations.", "We find that using training data brings more gains for CLWE than CLSE , showing that the contextualized phrase representations in CLSE can be harder to align.", "Zero-shot Transfer In zero-shot transfer, the models are trained using an en-xx dataset but evaluated on all language pairs.", "The table only presents the results of the model trained on en-zh data.", "Detailed results of other transfer directions can be found in Appendix B. 
Although the XPR model only learns on en-zh training data, it performs surprisingly well on other languages.", "On en-es and en-ko, XPR even produces comparable results to the results in the supervised setting.", "Comparing the results to the unsupervised setting, XPR pushes the average accuracy from 22.92 to 76.99.", "This demonstrates the strong cross-lingual transferability of XPR , which allows our model to be applied to low-resource languages without training data.", "On the contrary, CLSE fails to leverage the en-zh training data for the retrieval in other languages, resulting in a consistent performance drop.", "Multilingual Supervised In the multilingual supervised setting, XPR obtains the best results over all models and settings, achieving an average accuracy of 88.57.", "Compared to the supervised setting, using the combined training data leads to consistent improvement over all languages, which demonstrates that XPR successfully leverage the supervision signals from both the same and different languages.", "We conduct ablation studies by removing main components from XPR .", "In specific, we compare three variants of XPR that are trained without example sentences, momentum update, or projection head, respectively.", "The evaluation results are shown in Table 3.", "Example Sentence We first investigate whether using example sentences helps cross-lingual phrase retrieval.", "During training, we remove the example sentences from XPR , i.e., the model extracts the phrase representation only from the input phrase itself.", "As shown in Table 3, removing example sentences substantially harms the performance of XPR for both the supervised and zero-shot transfer settings.", "Notice that example sentences are not parallel across languages, but they still make the resulting phrase representations from different languages better aligned.", "Besides, compared to the supervised setting, the gains are even larger for zero-shot transfer, improving the average accuracy from 60.48 to 78.33.", "The above results demonstrate that using example sentences not only learns better phrase representations, but also encourages cross-lingual alignment.", "age of the hidden vectors as the phrase representation.", "As shown in Table 3, the projection head provides consistent gains on the three language pairs, showing the effectiveness of the projection head in contrastive learning.", "The results also agree with the finding in visual representation learning (Chen et al., 2020a,b).", "Momentum Update We study the effects of momentum update used in XPR .", "It shows that the momentum update strategy slightly improves the results on all of the three evaluation language pairs, providing 0.84 accuracy improvement.", "We study the effects of the example sentence number used in XPR .", "We conduct an evaluation on the en-fr set of WikiXPR, under two settings where the example sentence number varies during training or inference: 1) Training and inference with various numbers of example sentences for each phrase, 2) Training with 32 example sentences for each phrase but inference with various numbers of example sentences.", "Figure 2 illustrates the evaluation results.", "It shows a trend that using more example sentences during inference notably improves the performance in both settings.", "The gain is larger when using fewer example sentences, demonstrating the effectiveness of using multiple example sentences for producing phrase representations.", "Comparing the results between the two settings, we find that the 
model moderately benefits from a large number of example sentences if we use a lower sentence number for inference.", "Although using more example sentences during training provides gains, the heavier computation load should be token into consideration.", "Recent work (Chi et al., 2021b,c) has shown that a middle layer can produce better-aligned sentence representations than the last layer, resulting in higher cross-lingual sentence retrieval performance.", "We investigate which hidden layer of XPR produces phrase representations that achieve higher retrieval accuracy.", "To this end, we evaluate XPR using representations from various hidden layers on the en-fr set of WikiXPR.", "As shown in Table 4, we present the evaluation results of XPR under both the unsupervised and the supervised settings.", "For the unsupervised XPR , we observe that Layer-11 produces the best results while the last layer even performs worse than the first layer.", "Differently, the supervised XPR obtains the best results on Layer-12, indicating that our XPCO loss encourages the model to fully utilize the last few layers.", "Moreover, it shows that using representations from higher layers of the supervised XPR leads to consistent improvement.", "We explore whether using momentum contrast (MOCO ; He et al. 2020) trains our XPR model better, which is proven to be effective for cross-lingual language model pretraining (Chi et al., 2021b).", "In specific, we train a variant of XPR with MOCO , which maintains more negative examples encoded by the momentum encoder with a queue with a length of 1024.", "The evaluation results are presented in Table 5.", "XPCO consistently outperforms MOCO on the three language pairs, suggesting that the negative examples stored in the queue can be out-of-date for contrastive learning.", "In this work, we propose a cross-lingual phrase retriever XPR , which outperforms the baseline retrievers on a range of diverse languages.", "Moreover, we create a cross-lingual phrase retrieval 4200 dataset that contains diverse languages with large-scale example sentences.", "For future work, we would like to improve XPR by: 1) extending XPR to asymmetric retrieval scenarios such as open-domain question answering, 2) exploring how to utilize parallel corpora for training XPR .", "XPR is designed as a cross-lingual phrase retriever that retrieve relevant phrases across different languages.", "We believe XPR would help the communication between the people who speak different languages.", "Besides, our work can facilitate the research on multilingual natural language processing (NLP), which helps to build NLP applications for low-resource languages.", "In addition, we construct the WikiXPR dataset using open-source data from Wikipedia and dbpedia.", "The work is supported by National Key R&D Plan (No. 2018YFB1005100), National Natural Science Foundation of China (No. U19B2020, 62172039, 61732005, 61602197 and L1924068), the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005), and in part by CCF-AFSG Research Fund under Grant No.RF20210005, and in part by the fund of Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL).", "We would like to acknowledge Qian Liu for the helpful discussions." ]
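A minimal code sketch of the training objective described in the record above: the bidirectional cross-lingual phrase contrast (XPCO) loss of Eqs. 3-4 and the momentum update of Algorithm 1. It assumes PyTorch; the function names, the default temperature, and the default momentum coefficient are illustrative assumptions, not taken from the released XPR code.

import torch
import torch.nn.functional as F

def xpco_loss(p, q, temperature=0.1):
    # p: (N, d) normalized phrase representations from the encoder theta;
    # q: (N, d) normalized representations of their translations from the
    # momentum encoder theta_m. Row i of p and row i of q are a positive
    # pair; the other N-1 rows in the batch serve as negatives.
    logits = p @ q.t() / temperature                 # p_i^T q_j / T for all i, j
    targets = torch.arange(p.size(0), device=p.device)
    loss_pq = F.cross_entropy(logits, targets)       # L(P -> Q), mean over the batch
    loss_qp = F.cross_entropy(logits.t(), targets)   # L(Q -> P), mean over the batch
    return loss_pq + loss_qp                         # Eq. 4, up to the mean/sum convention

@torch.no_grad()
def momentum_update(encoder, momentum_encoder, beta=0.999):
    # theta_m <- beta * theta_m + (1 - beta) * theta (He et al., 2020).
    for param, m_param in zip(encoder.parameters(), momentum_encoder.parameters()):
        m_param.data.mul_(beta).add_(param.data, alpha=1 - beta)

In the full procedure, p and q would be projection-head outputs averaged over each phrase's example sentences, and momentum_update would run once per training step after the gradient update.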
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "result", "objective", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "method", "method", "other", "other" ]
[ "The curse of knowledge can impede communication between experts and laymen.", "We propose a new task of expertise style transfer and contribute a manually annotated dataset with the goal of alleviating such cognitive biases.", "Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen descriptions using simple words.", "This is a challenging task, unaddressed in previous work, as it requires the models to have expert intelligence in order to modify text with a deep understanding of domain knowledge and structures.", "We establish the benchmark performance of five state-of-the-art models for style transfer and text simplification.", "The results demonstrate a significant gap between machine and human performance.", "We also discuss the challenges of automatic evaluation, to provide insights into future research directions.", "The dataset is publicly available at https://srhthu.github.", "io/expertise-style-transfer/ .", "The curse of knowledge (Camerer et al., 1989) is a pervasive cognitive bias exhibited across all domains, leading to discrepancies between an expert's advice and a layman's understanding of it (Tan and Goonawardene, 2017).", "Take medical consultations as an example: patients often find it difficult to understand their doctors' language.", "On the other hand, it is important for doctors to accurately disclose the exact illness conditions based on patients' simple vocabulary.", "Misunderstanding may lead to failures in diagnosis and prompt treatment, or even death.", "How to automatically adjust the expertise level of texts is critical for effective communication.", "In this paper, we propose a new task of text style transfer between expert language and layman language, namely Expertise Style Transfer , and contribute a manually annotated dataset in the medical Many cause dyspnea , pleuritic chest pain, or both.", "domain for this task.", "We show four examples in Figure 1, where the upper sentence is for professionals and the lower one is for laymen.", "On one hand, expertise style transfer aims at improving the readability of a text by reducing the expertise level, such as explaining the complex terminology dyspnea in the first example with a simple phrase shortness of breath .", "On the other hand, it also aims to improve the expertise level based on context, so that laymen's expressions can be more accurate and professional.", "For example, in the second pair, causing further damage is not as accurate as ulcerates , omitting the important mucous and disintegrative conditions of the sores.", "There are two related tasks, but neither serve as suitable prior art.", "The first is text style transfer (ST), which generates texts with different attributes but with the same content.", "However, although existing approaches have achieved a great success regarding the attributes of sentiment (Li et al., 2018) and formality (Rao and Tetreault, 2018) among others, expertise styling has not been explored yet.", "Another similar task is Text Simplification (TS), which rewrites a complex sentence with simple structures (Sulem et al., 2018b) while constrained by limited vocabulary (Paetzold and Specia, 2016).", "This task can be regarded as similar to our subtask: reducing the expertise level from expert to layman language without considering the opposing direction.", "However, most existing TS datasets are derived from Wikipedia, and contain numerous noise (misaligned instances) and inadequacies (in-stances having 
non-simplified targets) (Xu et al., 2015; Surya et al., 2019); in which further detailed discussion can be found in Section 3.2.", "In this paper, we construct a manually-annotated dataset for expertise style transfer in medical domain, named MSD, and conduct deep analysis by implementing state-of-the-art (SOTA) TS and ST models.", "The dataset is derived from human-written medical references, The Merck Manuals 1 , which include two parallel versions of texts, one tailored for consumers and the other for healthcare professionals.", "For automatic evaluation, we hire doctors to annotate the parallel sentences between the two versions (examples shown in Figure 1).", "Compared with both ST and TS datasets, MSD is more challenging from two aspects: Knowledge Gap.", "Domain knowledge is the key factor that influences the expertise level of text, which is also a key difference from conventional styles.", "We identify two major types of knowledge gaps in MSD: terminology, e.g., dyspnea in the first example; and empirical evidence.", "As shown in the third pair, doctors prefer to use statistics ( About 1/1000 ), while laymen do not ( quite small ).", "Lexical & Structural Modification.", "Fu et al. (2019) has indicated that most ST models only perform lexical modification, while leaving structures unchanged.", "Actually, syntactic structures play a significant role in language styles, especially regarding complexity or simplicity (Carroll et al., 1999).", "As shown in the last example, a complex sentence can be expressed with several simple sentences by appropriately splitting content.", "However, available datasets rarely contain such cases.", "Our main contributions can be summarized as: We propose the new task of expertise style transfer, which aims to facilitate communication between experts and laymen.", "We contribute a challenging dataset that requires knowledge-aware and structural modification techniques.", "We establish benchmark performance and discuss key challenges of datasets, models and evaluation metrics.", "Existing ST work has achieved promising results on the styles of sentiment (Hu et al., 2017; Shen et al., 2017), formality (Rao and Tetreault, 2018), offensiveness (dos Santos et al., 2018), politeness (Sennrich et al., 2016), authorship (Xu et al., 2012), gender and ages (Prabhumoye et al., 2018; Lample et al., 2019), etc.", "Nevertheless, only a few of them focus on supervised methods due to the limited availability of parallel corpora.", "Jham-tani et al. (2017) extract modern language based Shakespeare's play from the educational site, while Rao and Tetreault (2018) and Li et al. (2018) utilize crowdsourcing techniques to rewrite sentences from Yahoo Answers, Yelp and Amazon reviews, which are then utilized for training neural machine translation (NMT) models and evaluation.", "More practically, there is an enthusiasm for unsupervised methods without parallel data.", "There are three groups.", "The first group is Disentanglement methods that learn disentangled representations of style and content, and then directly manipulating these latent representations to control style-specific text generation.", "Shen et al. (2017) propose a cross-aligned autoencoder that learns a shared latent content space between true samples and generated samples through an adversarial clas-sifier.", "Hu et al. 
(2017) utilize neural generative model, Variational Autoencoders (VAEs) (Kingma and Welling, 2013), to represent the content as continuous variables with standard Gaussian prior, and reconstruct style vector from the generated samples via an attribute discriminator.", "To improve the ability of style-specific generation, Fu et al. (2018) utilize multiple generators, which are then extended by a Wasserstein distance regularizer (Zhao et al., 2018).", "SHAPED (Zhang et al., 2018a) learns a shared and several private encoderdecoder frameworks to capture both common and distinguishing features.", "Some variants further investigate the auxiliary tasks to better preserve contents (John et al., 2019), or domain adaptation (Li et al., 2019).", "Another line of work argues that it is difficult to disentangle style from content.", "Thus, their main idea is to learn style-specific translations, which are trained using unaligned data based on back-translation (Zhang et al., 2019; Prabhumoye et al., 2018; Lample et al., 2019), pseudo parallel sentences according to semantic similarity (Jin et al., 2019), or cyclic reconstruction (Dai et al., 2019), marked with Translation methods .", "The third group is Manipulation methods .", "Li et al. (2018) first identify the style words by their statistics, then replace them with similar retrieved sentences with a target style.", "Xu et al. (2018) jointly train the two steps with a neutralization module and a stylization module based on reinforcement learning.", "For better stylization, Zhang et al. (2018b) introduce a learned sentiment mem-ory network, while John et al. (2019) utilize hierarchical reinforcement learning.", "Earlier work on text simplification define a sentence as simple, if it has more frequent words, shorter length and fewer syllables per word, etc.", "This motivates a variety of syntactic rule-based methods, such as reducing sentence length (Chan-drasekar and Srinivas, 1997; Vickrey and Koller, 2008), lexical substitution (Glavas and Stajner, 2015; Paetzold and Specia, 2016) or sentence splitting (Woodsend and Lapata, 2011; Sulem et al., 2018b).", "Another line of work follows the success of machine translation (MT) (Klein et al., 2017), and regards TS as a monolingual translation from complex language to simple language (Zhu et al., 2010; Coster and Kauchak, 2011; Wubben et al., 2012).", "Zhang and Lapata (2017) incorporate reinforcement learning into the encoderdecoder framework to encourage three types of simplification rewards concerning language simplicity, relevance and fluency, while Shardlow and Nawaz (2019) improve the performance of MT models by introducing explanatory synonyms.", "To alleviate the heavy burden of parallel training corpora, Surya et al. (2019) propose an unsupervised model via adversarial learning between a shared encoder and separate decoders.", "The simplicity of language in the medical domain is particularly important.", "Terminologies are one of the main obstacles to understanding, and extracting their explanations could be helpful for TS (Shardlow and Nawaz, 2019).", "Deleger and Zweigenbaum (2008) detect paraphrases from comparable medical corpora of specialized and lay texts, and Kloehn et al. (2018) explore UMLS (Bo-denreider, 2004) and WordNet (Miller, 2009) with word embedding techniques.", "Furthermore, Van den Bercken et al. 
(2019) directly align sentences from medical terminological articles in Wikipedia and Simple Wikipedia 2 , which confines the edi-tors' vocabulary to only 850 basic English words.", "Then, they refine these aligned sentences by experts towards automatic evaluation.", "However, the Wikipedia-based dataset is still noisy (with misaligned instances) and inadequate (instances having non-simplified targets) with respect to both model training and testing.", "Besides, it is usually ignored that the opposite direction of TS improving the expertise levels of layman language for accuracy and professionality is also critical for better communication.", "To sum up, both tasks lack parallel data for training and evaluation.", "This prevents researchers from exploring more advanced models concerning the knowledge gap as well as linguistic modification of lexicons and structures.", "In this work, we define a more useful and challenging task of expertise style transfer with high-quality parallel sentences for evaluation.", "Besides, the two communities of ST and TS can shed lights to each other on sentence modification techniques.", "We describe our dataset construction that comprises three steps: data preprocessing, expert annotation and knowledge incorporation.", "We then give a detailed analysis.", "The Merck Manuals, also known as the MSD Manuals, have been the world's most trusted health reference for over 100 years.", "It covers a wide range of medical topics, and is written through a collaboration between hundreds of medical experts, supervised by independent editors.", "For each topic, it includes two versions: one tailored for consumers and the other for professionals.", "Step 1: Data Preprocessing.", "Although the two versions of documents refer to the same topic, they 2 https://simple.wikipedia.org/wiki/ Main_Page Pleural Effusion , Symptoms Expert Many cause dyspnea [C0013404] , pleuritic chest pain [C0008033] , or both.", "We first collect the raw texts from the MSD website 3 , and obtain 2601 professional and 2487 consumer documents with 1185 internal links among them.", "We then split each document into sentences, with the resultant distribution of medical topics as shown in Figure", "2. 
Finally, to alleviate the annotation burden, we find possible parallel groups of sentences by matching their document titles and subsection titles, which denote medical PCIO elements, such as the Diagnosis and Symptoms.", "Specifically, we first disambiguate the internal links by matching the document title and its accompanied ICD-9 code.", "Then, we manually align medical PCIO elements in the two versions to provide fine-grained internal links.", "For example, all sentences for Atherosclerosis.Symptoms in the professional MSD may be aligned with those for Atherosclerosis.Signs in the consumer MSD.", "We thus obtain 2551 linked sentence groups as candidates for experts to annotate.", "Each group contains 10.40 and 11.33 sentences on average for the professional and consumer versions, respectively.", "We then randomly sample 1000 linked groups for expert annotations in the next section 4 .", "Step 2: Expert Annotation.", "Given the aligned groups of sentences in professional and consumer 3 https://www.msdmanuals.com/ 4 The testing size is consistent with other ST datasets, and the rest of groups will be annotated for a larger dataset in the future.", "MSD, we develop an annotation platform to facilitate expert annotations.", "We hire three doctors to select sentences from each version of group to annotate pairs of sentences that have the same meaning but are written in different styles.", "The hired doctors are formally medically trained, and are qualified to understand the semantics of the medical texts.", "To avoid subjective judgments in the annotations, they are not allowed to change the content.", "Particularly, the doctors are Chinese who also know English as a second language.", "Thus, we provide the English content accompanied with a Chinese translation as assistance, which helps to increase the annotation speed while ensuring quality.", "We also conduct verification on each pair of parallel sentences with the help of another doctor.", "Note that each pairing may contain multiple professional and consumer sentences; i.e., multiple alignment is possible, the alignments are not necessarily one-to-one.", "The strict procedure also discards many aligned groups, leading to 675 annotations for testing, with distribution of medical PCIO elements as shown in Figure", "3. 
Figure 3: Distribution of testing set based on PCIO.", "Step 3: Knowledge Incorporation.", "To facilitate knowledge-aware analysis, we can utilize information extraction techniques (Cao et al., 2018a, 2019) to identify medical concepts in each sentence.", "Here, we use QuickUMLS (Soldaini and Goharian, 2016) to automatically link entity mentions to Unified Medical Language System (UMLS) (Boden-reider, 2004).", "Note that each mention may refer to multiple concepts, each for which we align to the highest ranked one.", "As shown in Table 1, the Metric MSD Train MSD Test SimpWiki Expert Layman Ratio Expert Layman Ratio Expert Layman Ratio #Annotation 0 0 675 675 -2,267 2,267 #Sentence 130,349 114,674 930 1,047 1.13 2,326 2,307 0.99 #Vocabulary 60,627 37,348 0.62 4,117 3,350 0.81 10,411 8,823 0.85 #Concept Vocabulary 24,153 15,060 0.62 1,865 1,520 0.81 2,899 2,458 0.85 FleshKincaid 12.61 9.97 0.79 12.05 9.53 0.79 12.10 9.63 0.80 Gunning 18.43 15.29 0.83 17.89 15.07 0.84 17.66 14.86 0.84 Coleman 12.66 10.41 0.82 12.26 9.74 0.79 10.89 9.70 0.89 Avg.", "Through this three step process, we obtain a large set of (non-parallel) training sentences in each style, and a small set of parallel sentences for evaluation.", "The detailed statistics as compared with other datasets can be found in Table 2 and Table", "3. 3.2 Dataset Analysis Let us compare our MSD dataset against both publicly available ST and TS datasets.", "SimpWiki (Van den Bercken et al., 2019) is a TS dataset derived from the linked articles between Simple Wikipedia and Normal Wikipedia.", "It focuses on the medical domain and extracts parallel sentences automatically by computing their BLEU scores.", "GYAFC (Rao and Tetreault, 2018) is the largest ST dataset on formality in the domains of Entertainment & Music (E&M) and Family & Relationships (F&R) from Yahoo Answers.", "It contains more than 50,000 training sentences (non-parallel) for each domain, and over 1,000 parallel sentences for testing, obtained by rewriting informal answers via Amazon Mechanical Turk.", "Yelp and Amazon (Li et al., 2018) are sentiment ST datasets by rewriting reviews based on crowdsourcing.", "They both contain over 270k training sentences (non-parallel) and 500 parallel sentences for evaluation.", "Authorship (Xu et al., 2012) aims at transferring styles between modern English and Shakespearean English.", "It contains 18,395 sentences for training (non-parallel) and 1,462 sentence pairs for testing.", "Dataset Statistics Table 2 presents the statistics of expertise and layman sentences in our dataset as well as SimpWiki.", "We split the sentences using NLTK, and compute the ratio of layman to expert in each metric to denote the gap between the two styles (a lower value implies a smaller gap expect that for #Sentence).", "Three standard readability indices are used to evaluate the simplicity levels: FleshKincaid (Kincaid et al., 1975), Gunning (Gunning, 1968) and Coleman (Coleman and Liau, 1975).", "The lower the indices are, the simpler the sentence is.", "Note that SimpWiki does not provide a train/test split, and thus we randomly sample 350 sentence pairs for evaluation.", "We follow the same strategy in our experiments.", "Compared with SimpWiki, we can see that: (1) MSD evaluates the structure modifications.", "As the layman language usually requires more simple sentences to express the same meaning as in the expert language, each expert sentence in MSD Test refers to 1.13 layman sentences on average, while the number in SimpWiki is only 0.99.", "(2) MSD 
is more distinct between the two styles, which is critical for style transfer.", "This is markedly demonstrated by the larger difference between their (con-cepts) vocabulary sizes (0.62/0.81 vs. 0.85 in ratio of layman to expert), and between the readability indices (0.81/0.81 vs. 0.84 on average).", "(3) we have more complex professional sentences in expert language (14.57/14.07 vs. 13.55 in the three readability indices on average) but comparatively simple sentences in laymen language (11.89/11.45 vs. 11.40).", "This is intuitive because both versions of Wikipedia are written by crowdsourcing editors, and MSD is written by experts in medical domain.", "Quality of Parallel Sentences One of the main concerns in ST is the limitations of parallel sentences towards automatic evaluation.", "On one hand, assuming that the parallel sentences have the same meaning, many datasets find the aligned sentences to have higher string overlap (as measured by BLEU).", "On the other hand, the two sentences should have different styles, and may vary a lot in expressions: and thus leading to a lower BLEU.", "Hence how to build a testing dataset that considers both criteria is critical.", "We analyze the quality of testing sentence pairs in each dataset.", "Table 3 presents the BLEU and edit distance (ED for short) scores.", "Note that each pair of parallel sentences is verified to convey the same meaning during annotation.", "We see that: (1) MSD has the lowest BLEU and highest ED.", "This implies that MSD is very challenging that requires both lexical and structural modifications.", "(2) TS datasets reflect more structural differences (with higher ED values) as compared to ST datasets.", "This means that TS datasets concerning the nature of language complexity (simplicity) are more complex to transfer.", "We reimplement five SOTA models from prior TS and ST studies on both MSD and SimpWiki datasets.", "A further ablation study gives a detailed analysis of the knowledge and structure impacts, and highlights the challenges of existing metrics.", "We choose the following methods to establish benchmark performance on the two datasets on expertise style transfer, because they: (1) achieve SOTA performance in their fields; (2) are typical methods (as grouped in Section 2); and (3) release codes for reimplementation.", "The TS models 5 selected are: (1) Supervised model OpenNMT+PT that incorporates a phrase table into OpenNMT (Klein et al., 2017), which provides guidance for replacing complex words with their simple synonym (Shardlow and Nawaz, 2019); and (2) Unsupervised model UNTS that utilizes adversarial learning (Surya et al., 2019).", "The models for ST task selected are: (1) Disentanglement method ControlledGen (Hu et al., 5 We only report TS models for expertise to laymen language, since they do not claim the opposite direction. 
2017) that utilizes VAEs to learn content representations following a Gaussian prior, and reconstructs a style vector via a discriminator; (2) Manipulation method DeleteAndRetrieve (Li et al., 2018) that first identifies style words with a statistical method, then replaces them with target style words derived from given corpus; and (3) Translation method StyleTransformer (Dai et al., 2019) that uses cyclic reconstruction to learn content and style vectors without parallel data.", "We use the pre-trained OpenNMT+PT model released by the authors 6 .", "Other models are trained using MSD and SimpWiki training data.", "We leave 20% of the training data for validation.", "The training settings follow the standard best practice; where all models are trained using Adam (Kingma and Ba, 2015) with mini-batch size 32 , and the hyper-parameters are tuned on the validation set.", "We set the shared parameters the same for baseline models: the maximum sequence length is 100, the word embeddings are initialized with 300-dimensional GloVe (Pennington et al., 2014), learning rate is set to 0 .", "001 , and adaptive learning rate decay is applied.", "We adopt early stopping and dropout rate is set to 0 .", "5 for both encoder and decoder.", "Following Dai et al. (2019), we make an automatic evaluation on three aspects:", "Style Accuracy (marked as Acc) aims to measure how accurate the model controls sentence style.", "We train two classifiers on the training set of each dataset using fasttext (Joulin et al., 2017).", "Fluency (marked as PPL) is usually measured by the perplexity of the transferred sentence.", "We fine-tune the state-of-the-art pretrained language model, Bert (Devlin et al., 2019), on the training set of each dataset for each style.", "Content Similarity measures how much content is preserved during style transfer.", "We calculate 4-gram BLEU (Papineni et al., 2002) between model outputs and inputs (marked as self-BLEU), and between outputs and gold human references (marked as ref-BLEU).", "thus also conduct human evaluation.", "To evaluate over the entire test set, only layman annotators are involved, but we ensure that the layman style sentences are accompanied as references to assist understanding.", "Each annotator is asked to rate the model output given both input and gold references.", "The rating ranges from 1 to 5, where higher values indicate that more semantic content is preserved.", "Text Simplification Measurement.", "The above metrics may not perform well regarding language simplicity (Sulem et al., 2018a).", "So, we also utilize a TS evaluation metrics: SARI (Xu et al., 2016).", "It compares the n-grams of the outputs against those of the input and human references, and considers the added, deleted and kept words by the system.", "Table 4 present the overall performance.", "Since each pair of parallel sentences has been verified during annotation, we did not report human scores to avoid repeated evaluations.", "We can see that: (1) Parallel sentences in MSD have higher quality than SimpWiki, because our gold references are more fluent (4.29 vs. 7.65 in perplexity on average) and more discriminable (91% vs. 60% on average style accuracy).", "(2) The transfer for L2E is more difficult (except in content similarity) than that for E2L: 39.55% vs. 42.50% in Acc on average, 11.50 vs. 10.33 in PPL on average and 2.80 vs. 
2.63 in human ratings on average.", "This is because the increase in expertise levels requires more contexts and knowledge, and is harder than simplification.", "(3) TS models perform similarly with ST models.", "Besides, supervised model OpenNMT+PT outperforms the unsupervised UNTS in fluency and content similarity due to the additional supervision signals.", "On the other hand, UNTS achieves higher Acc since it utilizes more non-parallel training data.", "(4) The style accuracy is the reverse to content similarity, making it more challenging to propose a comprehensive evaluation metric that can balance the two opposite directions.", "In terms of content similarity, even if both self-BLEU and ref-BLEU show a strong correlation with human ratings (over 0.98 Pearson coefficient with p-value < 0 . 0001 ), the higher scores of ControlledGen cannot demonstrate its superior performance, as it actually makes little modifications to styles.", "Instead, DeleteAndRetrieve, presents a strong ability to control styles (70% on average in Acc on MSD), but hardly preserves the contents.", "Style Transformer performs more stably.", "Next, we discuss key factors of MSD.", "We take the E2L as the exemplar for discussion, as we have observed similar results for the opposing direction.", "Figure 4a shows the performance curves of BLEU and style accuracy.", "We choose the concept range to ensure they contain similar number of sentences.", "Along with the increasing number of concepts, we can see a downward BLEU trend.", "This is because it becomes more difficult to preserve content when the sentence is more professional.", "As for style accuracy, DeleteAndRetrieve achieves the peak around [8,12) concepts, while the performance of other models drops gradually.", "Clearly, a lower number of concepts benefit the model for better understanding the sentences due to their correlated semantics, but a larger number of concepts requires knowledge-aware text understanding.", "Figure 4b presents the performance curves regarding the structure differences, where the edit distance is computed as mentioned in Section 3.2.", "Higher score denotes more heterogeneous structures.", "We see a similar trend with the curves of concepts.", "That is, existing models perform well", "in simple cases (fewer concepts and less structural differences), but becomes worse if the language is complex.", "We doubt that the encoder in each model is able to understand the domain-specific language sufficient well without considering knowledge.", "We thus propose a simple variant of ControlledGen by introducing terminology definitions, and observe some interesting findings in Section 4.10.", "The style of medical PCIO elements (e.g., symptoms) are slightly different.", "We separately evaluate each model and present the results in Figure 4c.", "Style accuracy remains similar among these medical PCIO elements, but there are significant differences among the models in their performance for preserving content.", "Specifically, models perform well for those sentences about treatment , but perform poorly for evaluation , because this type of sentences usually involve many rare terms, challenging understanding.", "Table 5 presents the performance based on the TS evaluation metric, SARI.", "We utilize the Python package 7 and follow the settings in the original paper.", "Surprisingly, SARI on MSD presents a relatively comprehensive evaluation that is consistent with the above analysis as well as our intuition.", "ControlledGen and OpenNMT+PT are ranked 
lower since they tend to simply repeat the input.", "DeleteAndRetrieve and UNTS are ranked in the middle due to the accurate style transfer but poor content preservation.", "StyleTransformer is ranked highest as it performs stably in Table 4 and Figure 4a, 4b, 4c.", "This inspires us to further investigate automatic evaluation metrics based on TS studies, which is our ongoing work.", "Even so, we still recommend necessary human evaluation in the current stage.", "Table 6 presents two examples of transferred sentences.", "In the first example, both OpenNMT+PT and UNTS make lexical changes: replacing progresses with goes .", "DeleteAndRetrieve transfers style successfully but also changes the content slightly.", "The other two output the original expert sentence, that is the reason why they achieve higher BLEU (also PPL) but fails in Acc.", "Manipulation method (i.e., DeleteAndRetrieve) is more progres-sive in changing the style, but disentanglement method, ControlledGen, prefers to stay the same.", "gies recurrent spontaneous pneumothorax , but the output sentence can be deemed correct.", "ControlledGen still outputs the original input sentence, and the other three fail by either simply cutting the long sentence off, or changing the complex words randomly.", "Besides, all of the above models still perform much worse than human, which motivates research into better models.", "We have two observations from the aspects of model and evaluation.", "For models, there is a huge gap between all of the above models and human references.", "MSD is indeed challenging to conduct language modifications considering both knowledge and structures.", "Most of the time, these models basically output the original sentences without any modifications, or simply cut off the complex long sentence.", "Therefore, it is exciting to combine the techniques in TS, such as syntactic revisions including sentence splitting and lexical substitutions, with the techniques in ST: style and content disentanglement or the unsupervised idea of alleviating the lack of parallel training data.", "For evaluation, human checking is necessary in the current stage, even though SARI seems to offer a good start for automatic evaluation.", "Based on our observations, it is actually easy to fool the three ST metrics simultaneously via a trick: output sentences by adding style-related words before the original inputs .", "This is demonstrated by a variant of ControlledGen.", "We incorporate into the generator an extra knowledge encoder, which encodes the definition of concepts in each sentence (as mentioned in Section 3.1).", "Surprisingly, such a simple model achieves a very high style accuracy (over 90%) and good BLEU scores (around 20).", "But the model does not succeed in the style transfer task, and simply learns to add the word doctors into layman sentences while almost keeping the other words unchanged; and adding the word eg into the expertise sentences.", "Thus, it achieves good performance on all of the three ST measures, but makes little useful modifications.", "We proposed a practical task of expertise style transfer and constructed a high-quality dataset, MSD.", "It is of high quality and also challenging due to the presence of knowledge gap and the need of structural modifications.", "We established benchmark performance of five SOTA models.", "The results shown a significant gap between machine and human performance.", "Our further discussion analyzed the challenges of existing metrics.", "In the future, we are interested in 
injecting knowledge into text representation learning (Cao et al., 2017, 2018b) for deeply understanding expert language, and will help to generate knowledge-enhanced questions (Pan et al., 2019) for laymen.", "This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative.", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore." ]
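A minimal sketch of the content-similarity metrics used in the MSD record above (4-gram self-BLEU against inputs and ref-BLEU against gold references), assuming NLTK. The function name and the smoothing choice are illustrative assumptions rather than the authors' exact evaluation script.

from statistics import mean
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def content_similarity(outputs, inputs, references):
    # Average 4-gram BLEU of outputs vs. inputs (self-BLEU) and vs. gold
    # human references (ref-BLEU), mirroring the protocol described above.
    smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
    def bleu(hyp, ref):
        return sentence_bleu([ref.split()], hyp.split(),
                             weights=(0.25, 0.25, 0.25, 0.25),
                             smoothing_function=smooth)
    self_bleu = mean(bleu(o, i) for o, i in zip(outputs, inputs))
    ref_bleu = mean(bleu(o, r) for o, r in zip(outputs, references))
    return self_bleu, ref_bleu

Style accuracy and fluency would come from the separately trained fastText classifier and the fine-tuned BERT language model, respectively.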
[ "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "result", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "method", "other", "other" ]
[ "Sentiment transfer is one popular example of a text style transfer task, where the goal is to reverse the sentiment polarity of a text.", "With a sentiment reversal comes also a reversal in meaning.", "We introduce a different but related task called positive reframing in which we neutralize a negative point of view and generate a more positive perspective for the author without contradicting the original meaning.", "Our insistence on meaning preservation makes positive reframing a challenging and semantically rich task.", "To facilitate rapid progress, we introduce a large-scale benchmark, POSITIVEPSYCHOLOGYFRAMES , with 8,349 sentence pairs and 12,755 structured annotations to explain positive reframing in terms of six theoretically-motivated reframing strategies.", "Then we evaluate a set of state-of-the-art text style transfer models, and conclude by discussing key challenges and directions for future work.", "To download the data, see https://github.", "com/GT-SALT/positive-frames 1 Introduction Gratitude is not only the greatest of virtues, but the parent of all the others.", "Marcus Tullius Cicero Text style transfer (TST) has received much attention from the language technologies community (Hovy, 1987; Jin et al., 2020), where the goal is to change some attribute, like the sentiment of the text, without changing any attribute-independent content (Mir et al., 2019; Fu et al., 2018; Logeswaran et al., 2018).", "Some TST applications such as de-biasing (Pryzant et al., 2020; Ma et al., 2020) and paraphrasing (den Bercken et al., 2019; Xu et al., 2012) require meaning-preserving transformations, while political leaning (Prabhumoye et al., 2018), sentiment (Shen et al., 2017; Hu et al., 2017), and topical transfer (Huang et al., 2020) allow for a change (cid:63) Equal contribution.", "in the underlying meaning.", "For instance, for a negative review, this was a bland dish , we can use a sentiment TST model to create a more positive this was a tasty dish , by swapping the word bland with tasty.", "Although the input's structure and attribute-independent content are preserved, the truth-conditional meaning is clearly altered.", "In this work, we introduce a closely related task positive reframingthat differs from sentiment TST in important ways.", "We effectively reframe negative text by inducing a complementary positive viewpoint (e.g. glass-half-full ), which nevertheless supports the underlying content of the original sentence.", "The reframe should implicate rather than contradict the source (see Figure 1), and the transformation should be motivated by theoretically justified strategies from from positive psychology (Harris et al. 
2007; see Section 3).", "To use the example from before, we could reframe this was a bland dish with the self-affir-mation strategy and say I've made dishes that are much tastier than this one .", "This reframed one still communicates the author's original intention by conversationally implicating that the dish was unsatisfying (Grice, 1975), but it shifts the focus away from the negative judgment and onto a positive and 3682 self-affirming perspective.", "Numerous studies have shown the positive effects of this and other reframing strategies on well-being and cognitive performance (Martens et al., 2006; Cohen et al., 2006; Good et al., 2003), which motivate this work.", "Our main contribution is the design and implementation of a new positive reframing task.", "To facilitate research in this space, we introduce a parallel corpus of 8,349 reframed sentence pairs and 12,755 structured annotations for six theoretically-motivated re-write strategies.", "This is a significant contribution, especially since rich parallel corpora are scarce in TST tasks.", "Some related datasets exist for politeness (Madaan et al., 2020) and sentiment transfer (Shen et al., 2017; He and McAuley, 2016), but they lack this parallel structure.", "With only unaligned corpora, researchers are limited to unsupervised training paradigms, which notoriously fail to disentangle style from content, and thus also fail to preserve meaning (Lample et al., 2019).", "Using our parallel corpus, we examine how current state-of-the-art neural models work for positive reframing.", "We find that, supervised transformer-based neural models appear capable of rewriting a negative text without contradicting the original premise of that text.", "However, these models still struggle to generate reasonable positive perspectives, suggesting that our dataset will serve as a useful benchmark for understanding psychologically well-motivated strategies for augmenting text with positive perspectives.", "There is a longstanding interest in style transfer, starting with the early days schema-based systems (McDonald and Pustejovsky, 1985; Hovy, 1987), and then syntax-based (Zhu et al., 2010; Xu et al., 2016) and phrase-based machine translation (Xu et al., 2012; Wubben et al., 2012), into the age of end-to-end neural models.", "Recent works include supervised seq2seq tasks on parallel data (Rao and Tetreault, 2018; Fu et al., 2018) or pseudo-parallel data (Jin et al., 2019; Zhang et al., 2020b), as well as unsupervised generative modeling on nonparallel data (Hu et al., 2017; Shen et al., 2017), and semi-supervised techniques (Shang et al., 2019).", "Other ideas include domain adaptation (Li et al., 2019) or multi-task learning (Niu et al., 2018), zero-shot translation (Korotkova et al., 2019), unsupervised delete and generate approaches (Li et al., 2018; Sudhakar et al., 2019; Malmi et al., 2020; Madaan et al., 2020), and reinforcement learning (Zhang and Lapata, 2017; Wang et al., 2016).", "Many existing datasets lack parallel structure, so the unsupervised setting is common in TST.", "Unfortunately, many of these methods still fail to disentangle style from content and adequately preserve the meaning of the original text (Lample et al., 2019).", "Autoencoders are particularly vulnerable to this shortcoming (Hu et al., 2017; Zhao et al., 2018), but some unsupervised machine translation techniques appear less vulnerable (Artetxe et al., 2018; Lample et al., 2018).", "In contrast, our positive reframing task requires source meaning-preservation and 
the introduction of new content and new perspectives, posing a unique challenge to unsupervised methods.", "We also provide a parallel corpus to train supervised models for this task.", "Positivity is contagious and can spread quickly across social networks (Coviello et al., 2014; Hat-field et al., 1993).", "Positive contagion in teams can reduce group conflict and improve group cooperation and even task performance (Barsade, 2002).", "Effective leaders also harness the power of positive reframing to promote company growth (Sy and Choi, 2013; Sy et al., 2005; Johnson, 2009; Masters, 1992) and beneficially shape negotiations (Filipowicz et al., 2011), customer relations (Dietz et al., 2004), decision making (Gchter et al., 2009; Druckman, 2001) and policy outcomes (Erisen et al., 2014).", "At an individual level, people who express optimism and gratitude are less likely to have depressive symptoms (Lambert et al., 2012) and more likely to experience emotional and psychological well-being (Carver et al., 1999; Watkins et al., 2008; Scheier et al., 2001).", "On the other hand, fake expressions of positivity are correlated with negative brain activity (Ekman et al., 1990) and may actually be more harmful than helpful (Fredrickson, 2000; Fredrickson and Losada, 2005; Gross, 2013; Logel et al., 2009).", "That is why in our task it is essential that any positively reframed rephrased text remain true to the original premise of the source.", "In this way, our task is most similar to meaning-preserving transformations via parallel corpora from domains such as political argumentation (Chakrabarty et al., 2021), de-biasing (Pryzant et al., 2020; Ma et al., 2020), politeness (Madaan et al., 2020), and paraphrasing 3683 (den Bercken et al., 2019; Xu et al., 2012).", "In this section, we present our psychologically-motivated taxonomy of positive reframing strategies.", "Instead of merely swapping antonyms for negative words or inserting unfounded positive language into a sentence, these strategies work to more fundamentally reconstruct the author's fixed, global, and ultimately harmful self-narratives, which are known in the literature as cognitive distortions (Burns, 1981; Abramson et al., 2002; Walton and Brady, 2020).", "Cognitive distortions include many exaggerated or irrational self-focused thoughts (Nalabandian and Ireland, 2019), such as dichotomous all-or-nothing thinking (Oshio, 2012), over-generalization (Muran and Motta, 1993), and catastrophizing (Sullivan et al., 2001).", "We can reconstruct these ideas using strategies from positive psychology (Harris et al., 2007).", "Each strategy is designed to promote a beneficial shift in perspective without distorting the underlying context of the author's situation.", "Growth Mindset or, alternatively, the incremental theory of personality (Yeager et al., 2014; Burnette and Finkel, 2012), is the belief that one's skills and abilities are not immutable but can instead be changed and improved over time (Dweck, 2016); that one's willpower is an abundant rather than limited or exhaustible resource (Job et al., 2010, 2015); and that apparent setbacks like stress can be enhancing rather than debilitating (Crum et al., 2013).", "Instead of saying I'm such a lazy procrastinator , a growth-mindset would say I'm determined to learn better time management .", "This mindset has demonstrable benefits like improved performance on school tests (Good et al., 2003; Blackwell et al., 2007; Dweck and Yeager, 2019; Yeager et al., 2014).", "Impermanence means 
understanding that negative experiences are finite and temporary, and that others have also experienced or even overcome similar forms of adversity.", "Someone might say since I failed this test, I must be too stupid for school .", "An impermanence reframe could be This wasn't the test score I hoped for, but everyone slips up now and then. This category is also related to those proposed by Walton and Brady (2020): (1) focus on the possibility of improvement, (2) recognize specific, normal causes, and (3) understand you're not the only one.", "Neutralizing involves removing or rewriting negative phrases and terms so they are more neutral (Pryzant et al., 2020).", "Someone might complain that Wendy's customer service is terrible .", "A neutralized reframe could be Wendy's customer service could use some improvement .", "Optimism does not mean to negate or deny the negative aspects of a situation, but instead to shift the emphasis to the more positive aspects of the situation, including expectations for a bright future (Carver et al., 2010).", "For example, if there is a negative emphasis, like in the sentence, I've completely worked myself to the bone this week, burning the candle at both ends... TGIF , we can use optimism to shift the emphasis towards the positive as follows: It's been a long week, but now I can kick back, relax, and enjoy my favorite shows because it's the weekend .", "Self-affirmation means to assert a more holistic or expansive version of oneself by listing one's values, skills, and positive characteristics (Cohen and Sherman, 2014; Silverman et al., 2013).", "Positive psychology gives many examples like love, courage, hope, gratitude, patience, forgiveness, creativity, and humor (Harris et al., 2007).", "Reflecting on these values can bolster one's sense of integrity (see Self-Affirmation Theory; Steele 1988), can reduce depressive affect (Enright and Fitzgibbons, 2000), and can translate to increased performance on measurable tasks like exams (Martens et al., 2006; Cohen et al., 2006; Sherman et al., 2009).", "Thankfulness can also be described more broadly as an attitude of gratitude (Emmons and Shelton, 2002).", "Adding more positive words that convey thankfulness or gratitude (e.g. appreciate, glad that, thankful for).", "For example, we can reframe the rhetorical question , Is it sad that I don't wanna be at home and wish that work could call me in early ? by expressing gratitude for career: I am thankful that I have a job that makes me want to get out of bed everyday . 
4 Data Collection We sourced all of our data from the Twitter API, filtering tweets according to the hashtag #stressed, for a few reasons.", "Note that at the time of data collection and annotation, there were no publicly available datasets with annotated", "cognitive distortions, and the literature on distortion classification was still relatively unexplored (Simms et al., 2017; Shickel et al., 2020).", "We instead chose the simple keyword #stressed to signal the anxiety, negative affect, and hopelessness that have been shown in prior work to accompany cognitive distortions (Sears and Kraus, 2009).", "Our decision to use Twitter was also motivated by the 280-character limit, which ensured that samples were short, focused expressions of relatively atomic ideas, as opposed to longer narrative-style texts from discussion platforms like Reddit's r/rant.", "Our filtered collection of negative texts comes from a collection of over 1 million #stressed tweets written between 2012 and 2021, and it excludes any replies and retweets, any insubstantial tweets shorter than 30 characters, and any text containing a URL, which is often associated with spam (Zhang et al., 2012; Grier et al., 2010).", "After we removed other hashtags or Twitter handles from the text, we used TextBlob (Loria, 2018) to exclude any overtly positive texts with a non-negative sentiment score.", "Finally, to reduce any confounds between cognitive distortions and hate speech, and to make the human annotation task more agreeable for crowd-workers, we excluded examples that were flagged as offensive with over 80% confidence according to HateSonar (Davidson et al., 2017).", "We also considered pet peeve, fml, and other keywords, but manual inspection revealed that these tweets were unlikely to contain cognitive distortions.", "In contrast, the #stressed hashtag provides high-precision data collection.", "We acknowledge this as a limitation and urge readers to keep this in mind when interpreting our findings.", "We recruited crowdworkers to reframe 8,687 randomly-sampled texts with two workers assigned to each task, so we had two unique reframe annotations for every tweet.", "The annotators were encouraged to decide independently which reframing strategy to use, and they could combine multiple strategies in the same reframe.", "We simply asked annotators to record the strategies they selected.", "Additionally, they gave us, on a scale from 1-5, a score indicating how positive the original text was, and separately, how positive the text had become after they reframed it.", "Finally, we asked workers to mark advertisements, spam, or any text they felt they could not understand or effectively reframe.", "These examples were later removed from the corpus (see Appendix A for details).", "In total, 204 workers participated in this task.", "Before they worked on the task, workers were asked to familiarize themselves with our task by reading our provided reframing examples for each of the six strategies (Section 3), along with detailed annotation instructions.", "Then they had to pass a qualification test to show they could recognize different strategies in different reframing examples, with at least 5 out of 6 multiple-choice questions answered correctly.", "We paid all annotators a fair wage above the federal minimum and both manually and programmatically inspected
their work for quality (see Appendix A).", "After removing any poor-quality data, we were left with 8,349 reframed sentences.", "The strategy label distribution is given on the left side of Table 1, where a single reframe can have more than one strategy label.", "To determine the reliability of the reframing strategy constructs, we randomly sampled 100 annotations from Section 4.1 and asked three annotators to consider both the original text and the reframed text, and then the annotators marked which of the six strategies were used in the given reframe.", "This allowed us to compute inter-annotator agreement scores for the strategy labels in Table 1.", "We observe the Intra-class Correlation for one-way random effects between the three raters and find moderate inter-rater agreement across these attribute categories (min 0.32; max 0.68).", "We also asked this second round of annotators to evaluate the genuineness of the reframes on a scale from 1-5.", "Our instructions explain that, with a more genuine reframe, it is more likely that someone in the original situation would say something similar.", "We find that, across all strategy labels, the average genuineness score is 4 out of 5, so we know the data conforms reasonably well to our task instructions.", "With POSITIVEPSYCHOLOGYFRAMES, we then examine how generative models work to automatically rewrite a negatively-oriented self-narrative with a more positive shift in perspective, without distorting any of the underlying meaning of that text.", "To do so, we will make use of encoder-decoder or conditional language models, as well as the six positive psychology strategies outlined in Section 3.", "Let $(s, t, \mathbf{t})$ be a single annotation tuple in POSITIVEPSYCHOLOGYFRAMES for original source text $s$ and positive reframe target $t$, which uses positive psychology strategies given by the multi-hot encoded vector $\mathbf{t}$.", "In the Positive Reframing task, our goal is to encode $s$ and, at decoding time, produce $t$, which makes use of the strategies $\mathbf{t}$ and preserves the underlying meaning of $s$.", "Therefore, we formulate the problem as conditional generation and, during training, we maximize the standard language modeling objective $\frac{1}{N} \sum_{i=0}^{N} \log p(g_i \mid g_{0:i-1})$ over the string $g = \{s, \mathbf{t}, t\} = \{\text{<BOS>}, s_1, s_2, \ldots, s_n, \text{<STRG>}, \text{grow}, \text{imp}, \ldots, \text{thank}, \text{<REFR>}, t_1, t_2, \ldots, t_m, \text{<EOS>}\}$, where $g_i$ is the $i$-th token in the string of length $N$, which contains the start token <BOS>, the tokenized source $s_{1:n}$, the tokenized reframe target $t_{1:m}$, and the binary tokens grow, imp, ..., thank indicating whether a particular strategy (e.g. growth mindset) was used in reframe $t$.", "At decoding time, we consider three settings: Unconstrained generation $p(t \mid s)$, Controlled generation $p(t \mid s, \mathbf{t})$, and a strategy Prediction form of generation $p(t, \mathbf{t} \mid s)$.", "Unlike in the Unconstrained setting, the Controlled generation is conditioned on the desired strategies $\mathbf{t}$.", "In the Prediction setting, the model will concurrently predict the strategies it used to generate its own reframe.", "Note that we introduce three different model settings here to capture how positive reframing assistance might be used by people in the real world.", "Specifically, the Unconstrained setting models reframing text directly, without being aware of any specific strategy to use.", "The Prediction setting extends the unconstrained mode, i.e., it produces the reframed text and also spontaneously outputs the reframing strategies used in the process.", "The Controlled setting simulates the scenario of producing a reframed text with the help of concrete positive reframing strategies.", "For ground truth training, development, and testing, we randomly partition the annotations using an 8:1:1 ratio, with 6,679 train, 835 development, and 835 test examples.", "We fine-tune the GPT and GPT-2 language models (Radford et al., 2019), as well as two Seq2Seq neural machine translation models, LSTM (Hochreiter and Schmidhuber, 1997) and CopyNMT (See et al., 2017), and finally two encoder-decoder models, BART (Lewis et al., 2020) and T5 (Raffel et al., 2020).", "For all models, we use greedy decoding.", "As an ablation in the Unconstrained setting, we also test a No-pretrain condition for GPT-2 in which we randomly initialize the model parameters before fine-tuning.", "Retrieval: We test two simple retrieval systems: Random retrieval of a reframed sentence from the training set, and SBERT (Reimers and Gurevych,", "2019) retrieval, which finds the most similar $t$ in train by cosine similarity and retrieves one of the corresponding ground-truth $r$ from the training set.", "Few-shot Learning: Brown et al.
(2020) shows the few-shot capabilities of language models, especially larger models like GPT-3.", "We evaluate the few-shot abilities of both GPT-3 and its open-source implementation, GPT-Neo (Black et al., 2021), using k = 5 exemplars (see Appendix C).", "Following other style transfer work with a parallel corpus (Jhamtani et al., 2017; Xu et al., 2012), we evaluate our models for semantic similarity with the ground truth using BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTScore (Zhang et al., 2020a).", "Since there are two ground truth annotations per tweet, we take the maximum of the two scores and report the average across these maxima.", "We also report ΔTextBlob, the average change in sentiment score according to TextBlob (Loria, 2018).", "Finally, we conduct a human evaluation in which 50 items are distributed to 3 raters who score the reframed sentences for three criteria, each on a scale from 1 to 5.", "The criteria include Meaning Preservation (Shang et al., 2019), our task-specific objective, as well as the Positivity and Fluency of the generated text, following the sentiment style transfer literature (Luo et al., 2019). 5.4 Results Automatic Evaluation Across these metrics (Table 2, left) in the unconstrained generation setting, the BART model provided the highest quality of positive reframes, while GPT provided the worst quality, with results similar to the No-pretrain version of GPT-2.", "The pre-trained version of GPT-2 was trained on English web text, while GPT was trained on works of fiction, so it appears that pretraining decisions can affect performance.", "We tested the two best-performing models, T5 and BART, on the controlled generation and strategy-prediction settings as well, and found that both models performed reasonably.", "Overall, controlled generation boosts performance, since the model can target the gold standard's strategies, but these improvements are only slight (see the Controlled part in Table 2).", "This warrants further investigation: in Section 5.6, we explore models' ability to identify the underlying strategies given an existing reframe, to understand whether models can make sense of these underlying constructs.", "Unsurprisingly, all supervised models outperformed our simple retrieval baselines.", "Most interestingly, few-shot GPT-3 and GPT-Neo also could not match the supervised models in terms of overlap with the ground truth (ROUGE, BLEU, BERTScore), but they still achieved a comparable positive shift in sentiment (ΔTextBlob).", "Human Evaluation Human judgments both support and elaborate on the automatic evaluation findings.", "For our best performing BART and T5 models, the average scores are very high, even surpassing the quality of the Human gold standard in all of the unconstrained, predictive, and controlled settings.", "These systems most effectively induce a natural-sounding positive reframe while also preserving the meaning of the original text.", "This is critical: the controlled BART model scored 4.07 in Positivity and 4.27 in Fluency while also achieving the winning Meaning preservation score.", "In contrast with BART, the few-shot systems fail to preserve the meaning of the original sentence, despite their ability to articulately induce a more positive sentiment (Positivity scores up to 4.17; Fluency scores up to 4.27).", "Meaning preservation is absolutely critical for this task.", "From these results, we can conclude that, at the present time, supervised learning may be the most viable option for achieving
reliable positive reframing results.", "POSITIVEPSYCHOLOGYFRAMES will facilitate ongoing efforts in this direction.", "Qualitative Investigation Table 3 shows example reframes generated by our best controlled BART model, with one example for each strategy (for a similar comparison between models , see Table 5 in Appendix D).", "We see that, even without explicit lexical overlap between the generation and ground truth, the model reframes can still shift the cognitive distortions and negative outlook to a more positive perspective.", "In each of these examples, the model does so without losing the underlying meaning of the original text.", "Transformer-based models appear to be capable of solving our task with reasonable success.", "However, success can be highly variable (as evidenced by Table 5), so there is still room for significant improvement.", "We manually go through 100 randomly sampled model generations by our best controlled BART model, and summarize the main error classes here.", "We manually investigated 100 randomly sampled model generations by our best controlled BART model, and summarize the four largest error classes here.", "First, 26% of generations contained (1) insubstantial changes .", "These were especially prominent in the neutralizing strategy where the model would swap only a few negative words, like changing the phrase I hate it to I don't like it. On the other hand, some reframed generations were so drastically modified they contained (2) contradictions to the premise (9% of instances).", "For example, \" Feel like crying, this math class is impossible to pass \" was transformed into \" This math class is hard, but I know I can pass it \" a failure of meaning preservation.", "More concerningly, the system can generate (3) self-contradictions (6%) like the phrase, \" I don't like opening up to people, but I'm glad I have the courage to do it. \" Finally, like many other NLG systems, our system can produce (4) hallucinations (2%) with unmotivated perspectives, like mentioning a good night sleep when the original post was about nosebleeds in the bath.", "In Section 5.4, we observed only slight performance gains when conditioning the generation based on the ground-truth reframing strategy ( Control section in Table 2).", "For this reason, we take a closer look at whether models can reliably understand and classify the reframe strategies underlying a given source-reframe text pair.", "We formulate this problem as a multi-label multi-class classification task over sentence pairs ( s, t ) .", "Given both the source text and positive reframe target in the annotation tuple ( s, t ) from POSITIVEPSYCHOLOGYFRAMES , we predict the multi-hot encoded strategy vector t = [ s grow ; s imp ; ... ; s thank ] using transformer models.", "We experiment with a set of state-of-the-art classifiers, including BERT (De-vlin et al., 2019), RoBERTA (Liu et al., 2019), and XLNet (Yang et al., 2019).", "As shown in Table 4, all of the classification models can learn to recognize the thankfulness, optimism, and growth mindset strategies with moderate reliability ( F 1 > 0 . 
60).", "Although the XLNet model cannot identify the neutralizing strategy very well, the BERT and RoBERTa models can achieve an", "F1 score of around 0.6.", "The impermanence and self-affirmation strategies appear more challenging for all three models to identify.", "Overall, the results here show that this task is tractable: reframe strategies are learnable by various classification models.", "This further supports the reliability of our Positive Psychology framework, confirming what we found with human reliability metrics in Section 4.2.", "Although we mainly treat this frame strategy classification as a robustness check and a deep dive into the role of framing strategies, this task can also be a novel NLP or computational social science application on its own, i.e., determining the positive reframing relation between a pair of sentences.", "This work introduces a new and challenging NLG task called positive reframing.", "The objective is to construct a more positive outlook as a way of rephrasing a negative source text such that the meaning of that source is preserved.", "Our parallel dataset, POSITIVEPSYCHOLOGYFRAMES, will serve as a benchmark that will enable sustained work on this task.", "We experiment with many of the leading style-transfer models and show that these models can learn to shift from a negative to a more positive perspective using a combination of strategies from positive psychology.", "Importantly, the best models are fluent and effective reframing systems that can learn to largely preserve the meaning of the original text, even under a perspective shift.", "However, these models still struggle to generate reasonable positive perspectives, and even the best models are still prone to errors.", "We discuss four key error classes: insubstantial changes, contradictions to the premise, self-contradictions, and hallucinations, as shown in the Error Analyses in Section 5.5.", "Overall, this suggests that our dataset can serve as a useful benchmark for understanding well-motivated positive reframing strategies and equipping natural language generation systems with positive perspectives.", "Future work can dive deeper into these issues by enforcing a stronger level of semantic equivalence between the generation and the source text (Nie et al., 2019).", "Even with semantic equivalence constraints, it would be necessary to also allow for the injection of new positive perspectives.", "Methods ranging from guided sequence generation (Krause et al., 2020) or semantic attention-guided decoding (Nie et al., 2019) to pragmatic reconstruction (Shen et al., 2019) and persona consistency (Kim et al., 2020) may all be applicable in follow-up studies.", "The authors would like to thank the reviewers for their helpful insights and feedback.", "CZ is supported by the NSF Graduate Research Fellowship under Grant No.", "DGE-2039655, and DY is supported by the Microsoft Research Faculty Fellowship.", "This work is funded in part by a grant from Amazon.", "Annotation.", "We followed the guidelines for ethical annotation practices and crowdsourcing that are outlined in (Sheehan, 2018), including paying workers a fair wage above the federal minimum.", "If workers contacted us with any questions or concerns, we responded promptly, within 24 hours.", "In the task interface, in the header, we warned annotators that the content might be upsetting, and we gave the following recommendation: if at any point you do not feel comfortable, please feel free to skip the HIT or take a break.", "Deployment.", "Although this data is designed for pro-social outcomes (i.e., increasing positivity in text), there may be unexpected use-cases for this data, such as obfuscating impolite or even hateful data to avoid detection (ElSherief et al., 2021).", "The parallel structure of the data means it is also possible to invert the direction of the seq2seq task to introduce more negative or pessimistic perspectives into a positive source.", "This is not a particularly new risk, since sentiment style transfer can accomplish a similar outcome in this direction.", "Still, we will require interested parties to sign a data-use agreement that encourages only ethical uses of POSITIVEPSYCHOLOGYFRAMES." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.", "Our approach builds on existing losses that encourage attention maps in neural sequence-to-sequence models to imitate the output of classical word alignment algorithms.", "Where past work has used word-level alignments, we focus on spans; borrowing ideas from phrase-based machine translation, we align subtrees in semantic parses to spans of input sentences, and encourage neural attention mechanisms to mimic these alignments.", "This method improves the performance of transformers, RNNs, and structured decoders on three benchmarks of compositional generalization.", "Semantic parsers translate natural language utterances ( e.g., Schedule a meeting with Jean ) into executable programs ( e.g., CreateEvent( attendees=Jean) ), and play a crucial role in applications such as question answering systems and conversational agents (Liang, 2016; Gupta et al., 2018; Wen et al., 2017).", "As in many language understanding problems, a central challenge in semantic parsing is compositional generalization (Finegan-Dollak et al., 2018; Keysers et al., 2020).", "Consider a personal digital assistant for which developers have assembled separate collections of annotated utterances for user requests involving their calendars ( e.g., Schedule a meeting with Jean ) and their contact books ( e.g., Who is Jean's manager? ).", "An effective model should learn from this data how to additionally handle requests like Schedule a meeting with Jean's manager , composing skills from the calendar and contacts domains, with little or no supervision for such combinations.", "parsers (Dong and Lapata, 2016; Yin and Neubig, 2017), tend to perform poorly at out-of-distribution generalization of this kind (Lake and Baroni, 2018; Furrer et al., 2020; Suhr et al., 2020).", "Methods have been proposed to bridge the generalization gap using meta-learning (Lake, 2019; Wang et al., 2020) or specialized model architectures (Russin et al., 2019; Li et al., 2019; Liu et al., 2020; Chen et al., 2020).", "These have registered impressive performance on small synthetic benchmark datasets, but it has proven difficult to effectively combine them with large-scale pre-training (Lewis et al., 2020; Raffel et al., 2020) and natural data (Furrer et al., 2020).", "In contrast to this extensive literature on data transformations and model architectures, the design of loss functions to encourage compositional generalization has been under-explored.", "This paper investigates attention supervision losses that encourage attention matrices in neural sequence models to resemble the output of word alignment algorithms (Liu et al. (2016); Mi et al. (2016); Arthur et al. (2016); Lyu and Titov (2018), inter alia ) as a source of inductive bias for compositional tasks.", "Previous work has found that aligning program tokens ( e.g., FindManager in Fig. 
1) to natural language tokens ( manager ) improves model performance (Misra et al., 2018; Rabinovich et al., 2017; Goldman et al., 2018; Richardson et al., 2018; Herzig and Berant, 2020; Oren et al., 2020).", "However, the token-level alignments derived from off-the-shelf aligners are often noisy, and the correspondence between natural language and program tokens is not always a many-to-one map of the kind returned by standard alignment algorithms.", "On the other hand, programs also have explicit hierarchical structure, which could be useful to induce better attention regularizers (Wang et al., 2019).", "Here we investigate the use of span-level alignments , identifying sub-programs that should be predicted as a unit and aligning all tokens in a sub-program to a CreateEvent CreateEvent( start=DateTime(date=Wednesday,time=NumberPM(2)), attendees=FindManager(recipient=Jean)) start DateTime date Wednesday time NumberPM 2 attendees FindManager recipient Jean A d d m ee ti n g w it h Je a n s m a n a g e r o n W e d n e s d a y a t 2 PM ?x0 directed_by ?x1 edited_by ?x1 ?x1 art_directed M1 gender female W h a t d i d M 1 s f e m a l e a r t d i r ec t o r d i r ec t a n d e d it Utterance Token Vectors [CLS] In which city did ... Piotr 's 2007 Bangkok 1st", "corresponding natural language span (Herzig and Berant, 2020).", "We present a simple algorithm to derive span-level alignments from token-level alignments.", "Our approach is compatible with multiple models (RNNs, transformers, and structured tree-based de-coders), pretrained or not.", "In experiments, span-based attention supervision consistently improves over token-level objectives, achieving strong results on three semantic parsing datasets featuring diverse formalisms and tests of generalization.", "Supervised Attention Existing token-level supervised attention approaches assume access to an alignment matrix A | u || z | with entries a i,j , where a i,j = 1 iff the i -th source (utterance) token u i is aligned to the j -th target (program) token z j .", "A | u || z | can be inferred using latent variable models (Brown et al., 1993; Och and Ney, 2003; Dyer et al., 2013).", "During training, when the decoder predicts a target token z j , supervised attention encourages the target-to-source attention distribution p att ( u i | z j ) to match the prior alignment distribution p prior ( u i | z j ) = a i,j (cid:80) k a k,j , which is normalized by the number of source tokens aligned to z j .", "We use a squared error loss (Liu et al., 2016): L sup _ att = 1 | z | | z | (cid:88) j =1 | u | (cid:88) i =1 (cid:0) p att ( u i | z j ) p prior ( u i | z j ) (cid:1) 2 .", "Neural Semantic Parsers A semantic parser maps a natural language (NL) utterance u to an executable program z .", "In this paper, we consider neural parsers using token-based attentive decoders, in which z is predicted as a sequence of consecutive tokens { z | z | j =1 } by attending to tokens in u = { u | u | i =1 } .", "Examples include sequence-to-sequence models based on recurrent networks (Dong and Lapata, 2016; Jia and Liang, 2016) or transformers (Vaswani et al., 2017; Raffel et al., 2020), as well as structured parsing methods that predict a program following its syntactic structure (Dong and Lapata (2018), see 3 for more details).", "Previous work has also used a cross entropy loss (Rabinovich et al., 2017; Oren et al., 2020).", "Sub-program-to-Span Alignment We present a simple heuristic algorithm to extract span-level alignments between programs and utterances from 
existing token-level results (Algo. 1).", "Fig. 1 illustrates example span-level alignments for two types of programs (LISP and simplified SPARQL ).", "Similarly to Dong and Lapata (2018), we assume each program can be decomposed into a top-level sketch and a set of sub-programs .", "1 For the LISP expression in Fig. 1a, the sketch contains the top-level function call ( CreateEvent( ? , ? ) ) and subprograms are named arguments paired with values 1 Unlike D&L, we allow sub-programs to include nonconsecutive (and possibly overlapping) spans of program tokens, e.g.,", "{?x0 {edited_by", "{?x1}} in Fig. 1b.", "We also permit non-disjoint sub-programs.", "( attendees=FindManager . . . ).", "For the SPARQL expression in Fig. 1b, sketches include the query form ( e.g., SELECT DISTINCT ) and sub-programs hold individual subject-relation-object assertions ( e.g., ?x0 edited_by ?x1 ).", "2 In this paper, we use these program decompositions to guide span-level alignment.", "The underlying intuition is that every token in a sketch or sub-program will get aligned to the same set of utterance tokens.", "Algo.", "1 extracts such set of utterance spans aligned to a sub-program z s from the set T z s of NL tokens that are aligned to tokens in z s (line 3).", "We present two variants of this approach, depending on the properties of the dataset (3).", "In the first case (lines 5-6), similar to bilingual phrase extraction in machine translation (MT; Och, 2002), we create a single consecutive utterance span u m : n via the outer bound of the aligned utterance tokens in T z s ( e.g., Block 1, Fig. 1a).", "In the second variant (lines 8-9), we find internally contiguous utterance spans (subsequences) in T z s and align them to z s .", "For instance, the sub-program ( ?x1 art_directed M1 ) in Block 2 of Fig. 
1b aligns to two utterance spans: M1 's and art director .", "While this case does not have an exact analog in MT, it is reminiscent of the model of Chiang (2005) which extracts translation rules with discontinuous phrase segments, and could be useful in capturing long-range alignments of utterance subsequences to sub-programs 2 As we explain in Appendix B, such program decomposition could be easily generated using off-the-shelf syntax analyzers provided by the programming language.", "as in Block 2 (Andreas et al., 2013).", "Span-level alignments for a sub-program are then generated by pairing its program spans z p : q (spans with consecutive program tokens) with all its aligned utterance spans (lines 11-12).", "Finally, we generate alignments for sketch spans in z by pairing them with any utterance tokens that have not yet been aligned to a sub-program (lines 13-14).", "Algo.", "1 leverages the explicit hierarchical structures of programs to generate alignments between sub-programs and utterance spans.", "Such an idea of using structural information for alignment extraction has deep roots in statistical syntax-based MT, which leverages the syntactic structure of sentences to generate alignments between parse trees and NL constituents (Galley et al., 2004; Chiang, 2005; Liu et al., 2006).", "Our approach is also broadly related to lexicon induction models in semantic parsers based on probabilistic CCG grammars (Kwiatkowski et al., 2011) or other formalisms (Jones et al., 2012), which learn mapping rules between logical form templates and utterance tokens.", "We evaluate span-level supervised attention on three benchmarks of compositional generalization.", "SMCALFLOW Compositional Skills (SMCALFLOW-CS) is a new dataset created in this study based on the task-oriented dialogue corpus SMCALFLOW (Semantic Machines et al., 2020), featuring real-world human-generated utterances about calendar management.", "Like the motivating story in 1, we create training data for skills S involving event creation ( e.g., Schedule a meeting with Adam ) and organization structure ( e.g., Who's on Adam's team? ), while evaluating on examples C featuring compositional skills ( e.g., Add meeting with Adam and his team ).", "Utterances are annotated with LISP -style programs (Fig. 1a).", "Since zero-shot compositional generalization is highly non-trivial due to novel language patterns ( e.g., Adam and his team ) and program structures ( e.g., usage of List( ) to specify multiple attendees) in compositional examples, we consider a few-shot learning scenario, where a handful of compositional examples are included in the training set.", "Readers are referred to Appendix A for details of dataset construction.", "challenging compositional generalization dataset of 130 K synthetic utterances with SPARQL queries (Fig. 1b).", "Training and evaluation splits are constructed such that they have different distributions of compositional structures, while the distributions of atomic language ( e.g., director ) and program ( e.g., film.director ) constructs remain similar (Keysers et al., 2020).", "ATIS Text-to-SQL is a dataset of 3,809 SQL-annotated utterances about flight querying ( e.g., Flights from Seattle to Austin. ).", "We follow Oren et al. 
(2020) and use the query split (Finegan-Dollak et al., 2018), where training and evaluation programs do not overlap at the template level.", "Models We apply span-level supervised attention to strong neural models on each dataset.", "We evaluate two systems on SMCALFLOW-CS: BERT2SEQ, a sequence-to-sequence model with a BERT encoder and an LSTM decoder using a copy mechanism, and COARSE2FINE (Dong and Lapata, 2018), which uses (a BERT encoder and) a structured decoder that factorizes the generation of a program into sketch and value predictions.", "On CFQ, we use T5-Base (Raffel et al., 2020), and apply attention supervision on all the cross-attention heads in the last decoder layer.", "For ATIS, we take the best system from Oren et al. (2020) that is tuned for better generalization on this dataset, which is a sequence-to-sequence model with an ELMo encoder and a coverage-based attention mechanism (See et al., 2017).", "We extract word alignments using IBM Model 4 in GIZA++ (Och and Ney, 2003), and canonicalize programs (e.g., remove parentheses) to improve alignment quality.", "We ran GIZA++ and extracted span-level alignments for each training split separately.", "To extract span-level alignments, we use consecutive alignments (Case 1) in Algo.", "1 for SMCALFLOW-CS and ATIS, as those datasets feature a simple one-to-one mapping between sub-programs and utterance spans.", "For CFQ, we use nonconsecutive alignments (Case 2) to handle assertions aligned to disjoint NL spans (Fig. 1b).", "We apply Eq.", "(1) during model optimization using either the token- or span-level alignment matrix, for token-level (+TS) and span-level (+SS) supervised attention, respectively.", "See Appendix B for details.", "Tab.", "1 lists the evaluation results on SMCALFLOW-CS with varying numbers of compositional examples in the training set ($C_{\text{train}}$).", "We report accuracies both on the in-domain single-skill examples ($S$) and on the generalized compositional-skill examples ($C$).", "Both methods improve compositional generalization for BERT2SEQ and COARSE2FINE, while span-level supervised attention is more effective.", "Intuitively, span-level alignments could better capture the correspondence between sub-structures in utterances and programs, helping the parser to correctly predict such sub-programs in compositionally novel contexts by focusing on the corresponding utterance span.", "Interestingly, in such a low-resource learning scenario with only a handful of compositional training samples, span-level supervised attention offers more gains in extreme low-resource settings ($|C_{\text{train}}| = 16$), outperforming the base BERT2SEQ model by 13% absolute (33.6% vs. 46.8% for BERT2SEQ).", "Indeed, we found that more alignment-like attentions are associated with more accurate model predictions.", "For a BERT2SEQ model with span-level supervision trained on $|C_{\text{train}}| = 64$, when predicting sub-programs for the attendees argument (e.g., attendees=FindManager(recipient=self)) on compositional samples in $C$, the model achieves 86% sub-program accuracy if it assigns a time-step average of at least 90% of its attention weights over the aligned utterance spans (e.g., with my manager) identified by Algo.", "1. Otherwise, the accuracy drops to 70% (more in Appendix C.1).", "Moreover, supervised attention may be a sufficient substitute for structured model architectures in some cases.", "Despite the unstructured BERT2SEQ model's generally inferior performance without supervised attention, it matches the accuracies of COARSE2FINE when both models are trained with span-level supervision.", "We also remark that span-based supervision maintains or improves performance on in-domain single-skill examples ($S$).", "For instance, the accuracy for BERT2SEQ increases from 82.8% to 83.9% when $|C_{\text{train}}| = 16$.", "Next, on CFQ (Tab. 2), we report breakdown results based on the syntactic types of questions: Recursive questions with chained multi-hop relations (e.g., $u_r$: Was M1 influenced by a German writer?), and Conjunctive ones with only conjunctions of entities and relations and without chained relations (e.g., $u_c$: Was M1 directed and edited by M2 and M3?).", "While supervised attention is effective on recursive questions, it struggles on conjunctive ones.", "This may be because the model learns to attend to discontinuous utterance spans (e.g., M1 directed and M2 and M3 in $u_c$) when predicting a relation (e.g., directed_by) in a conjunction, which could be more sensitive to alignment errors.", "We found that the sketch and sub-program decoders in COARSE2FINE do not achieve their best DEV accuracy at the same iteration during training,", "which could hurt performance in our few-shot learning setting.", "Additionally, utterance spans aligned to a sub-program in conjunctive questions are usually longer and more complex (e.g., having multiple conjunctive entity mentions like Did M1 write M2, M3, M4, and M5?), which might require more fine-grained supervision than uniformly treating every aligned utterance token equally as in Eq.", "(1).", "More analysis is in Appendix C.2.", "Finally, we present the results on the ATIS query splits in Tab.", "3, where span-level supervision is comparable with the token-level one, further improving upon an already-strong model that targets compositional generalization (ELMo with coverage-based attention).", "Interestingly, token-level supervised attention is slightly worse than the baseline model on the standard i.i.d. splits, while span-level supervision does not offer further improvements.", "Empirically, we observe that the utterance-SQL alignments in ATIS are much noisier than in the other two datasets, due to redundant structures in SQL queries (e.g., Join statements with intermediary tables), whose aligned NL constituents are often not well defined (see Appendix B for more details).", "This paper demonstrated the effectiveness of span-level supervised attention as a simple and flexible tool for improving neural sequence models in a diverse set of architectures and tests of generalization.", "Future work might explore applications to other prediction tasks and joint learning of alignments with sequence model parameters.", "We thank the Semantic Machines team and anonymous reviewers for their valuable feedback.", "Pengcheng Yin was supported in part by an IBM Ph.D. fellowship." ]
[ "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "objective", "abstain", "other", "other" ]
[ "Previous studies in multimodal sentiment analysis have used limited datasets, which only contain unified multimodal annotations.", "However, the unified annotations do not always reflect the independent sentiment of single modalities and limit the model to capture the difference between modalities.", "In this paper, we introduce a Chinese singleand multimodal sentiment analysis dataset, CH-SIMS, which contains 2,281 refined video segments in the wild with both multimodal and independent unimodal annotations.", "It allows researchers to study the interaction between modalities or use independent unimodal annotations for unimodal sentiment analysis.", "Furthermore, we propose a multi-task learning framework based on late fusion as the baseline.", "Extensive experiments on the CH-SIMS show that our methods achieve state-of-the-art performance and learn more distinctive unimodal representations.", "The full dataset and codes are available for use at https://github.com/ thuiar/MMSA .", "Sentiment analysis is an important research area in Natural Language Processing (NLP).", "It has wide applications for other NLP tasks, such as opinion mining, dialogue generation, and user behavior analysis.", "Previous study (Pang et al., 2008; Liu and Zhang, 2012) mainly focused on text sentiment analysis and achieved impressive results.", "However, using text alone is not sufficient to determine the speaker's sentimental state, and text can be misleading.", "With the booming of short video applications, nonverbal behaviors (vision and audio) are introduced to solve the above shortcomings (Zadeh et al., 2016; Poria et al., 2017).", "portant and challenging subtasks (Baltru saitis et al., 2018; Guo et al., 2019).", "For intra-modal representation, it is essential to consider the temporal or spatial characteristics in different modalities.", "The methods based on Convolutional Neural Network (CNN), Long Short-term Memory (LSTM) network and Deep Neural Network (DNN) are three representative approaches to extract unimodal features (Cambria et al., 2017; Zadeh et al., 2017, 2018a).", "For inter-modal fusion, numerous methods have been proposed in recent years.", "For example, concatenation (Cambria et al., 2017), Tensor Fusion Network (TFN) (Zadeh et al., 2017), Low-rank Multimodal Fusion (LMF) (Liu et al., 2018), Memory Fusion Network (MFN) (Zadeh et al., 2018a), Dynamic Fusion Graph (DFG) (Zadeh et al., 2018b), and others.", "In this paper, we mainly consider late-fusion methods that perform intra-modal representation learning first and then employ inter-modal fusion.", "An intuitive idea is that the greater the difference between inter-modal representations, the better the complementarity of inter-modal fusion.", "However, it is not easy for existing late-fusion models to learn the differences between different modalities, further limits the performance of fusion.", "The reason is that the existing multimodal sentiment datasets only contain a unified multimodal annotation for each multimodal segment, which is not always suitable for all modalities.", "In other words, all modalities share a standard annotation during intra-modal representation learning.", "Further, these unified supervisions will guide intra-modal representations to be more consistent and less distinctive.", "To validate the above analysis, in this paper, we propose a Chinese multimodal sentiment analysis dataset with independent unimodal annotations, CH-SIMS.", "Figure 1 shows an example of the annotation difference between our proposed 
dataset and the other existing multimodal datasets.", "SIMS has 2,281 refined video clips collected from different movies, TV serials, and variety shows with spontaneous expressions, various head poses, occlusions, and illuminations.", "The CHEAVD (Li et al., 2017) is also a Chinese multimodal dataset, but it only contains two modalities (vision and audio) and one unified annotation.", "In contrast, SIMS has three modalities and unimodal annotations except for multimodal annotations for each clip.", "Therefore, researchers can use SIMS to do both unimodal and multimodal sentiment analysis tasks.", "Furthermore, researchers can develop new methods for multimodal sentiment analysis with these additional annotations.", "Based on SIMS, we propose a multimodal multitask learning framework using unimodal and multimodal annotations.", "In this framework, the unimodal and multimodal tasks share the feature representation sub-network in the bottom.", "It is suitable for all multimodal models based on late-fusion.", "Then, we introduce three late-fusion models, including TFN, LMF, and Late-Fusion DNN (LF-DNN), into our framework.", "With unimodal tasks, the performance of multimodal task is significantly increased.", "Furthermore, we make a detailed discussion on multimodal sentiment analysis, unimodal sentiment analysis and multi-task learning.", "Lastly, we verify that the introduction of unimodal annotations can effectively expand the difference between different modalities and obtain better performance in inter-modal fusion.", "In this work, we provide a new perspective for multimodal sentiment analysis.", "Our main contributions in this paper can be summarized as follows: We propose a Chinese multimodal sentiment analysis dataset with more fine-grained annotations of modality, CH-SIMS.", "These additional annotations make our dataset available for both unimodal and multimodal sentiment analysis.", "We propose a multimodal multi-task learning framework, which is suitable for all late-fusion methods in multimodal sentiment analysis.", "Besides, we introduce three late-fusion models into this framework as strong baselines for SIMS.", "The benchmark experiments on the SIMS show that our methods learn more distinctive unimodal representations and achieve state-of-the-art performance.", "In this section, we briefly review related work in multimodal datasets, multimodal sentiment analysis, and multi-task learning.", "To meet the needs of multimodal sentiment analysis and emotion recognition, researchers have proposed various of multimodal datasets, including IEMOCAP (Busso et al., 2008), YouTube (Morency et al., 2011), MOUD (Perez-Rosas et al., 2013), ICT-MMMO (Wollmer et al., 2013), MOSI (Zadeh et al., 2016), CMU-MOSEI (Zadeh et al., 2018b) and so on.", "In addition, Li et al. (2017) proposed a Chinese emotional audio-visual dataset and Poria et al. (2018) proposed a multi-party emotional, conversational dataset containing more than two speakers per dialogue.", "However, these existing multimodal datasets only contain a unified multimodal annotation for each multimodal corpus.", "In contrast, SIMS contains both unimodal and multimodal annotations.", "Multimodal sentiment analysis has become a major research topic that integrates verbal and nonverbal behaviors.", "Cambria et al. 
"Cambria et al. (2017) proposed a general multimodal sentiment analysis framework composed of representation learning within modalities and feature concatenation across modalities.", "Based on this framework, many studies have focused on designing new fusion networks to capture better multimodal representations and achieve better performance.", "Zadeh et al. (2017) proposed the Tensor Fusion Network, which obtains a new tensor representation by computing the outer product between unimodal representations.", "Liu et al. (2018) used a low-rank multimodal fusion method to decompose the weight tensor and decrease the computational complexity of tensor-based methods.", "Zadeh et al. (2018a) designed the Memory Fusion Network with a special attention mechanism for cross-view interactions.", "Tsai et al. (2019) proposed crossmodal transformers, which reinforce a target modality from another source modality by learning the attention across the two modalities' features.", "Tsai et al. (2018) learned meaningful multimodal representations by factorizing representations into two sets of independent factors: multimodal discriminative factors and modality-specific generative factors.", "Different from the above methods, we aim to learn more distinctive unimodal representations by introducing independent unimodal annotations.", "Multi-task learning aims to improve the generalization performance of multiple related tasks by utilizing useful information contained in these tasks (Zhang and Yang, 2017).", "A classical approach is to let different tasks share the first several layers and then have task-specific parameters in the subsequent layers (Liu et al., 2015; Zhang et al., 2016b).", "Based on this approach, we design a multimodal multi-task learning framework to verify the practicality and feasibility of independent unimodal annotations.", "In this section, we introduce a novel Chinese multimodal sentiment analysis dataset with independent unimodal annotations, CH-SIMS.", "In the following subsections, we explain the data acquisition, annotation, and feature extraction in detail.", "Compared with unimodal datasets, the requirements for multimodal datasets are relatively high.", "A fundamental requirement is that the speaker's face and voice must appear in the picture at the same time and remain for a certain period of time.", "In this work, to acquire video clips as close to real life as possible, we collect target fragments from movies, TV series, and variety shows.", "After obtaining the raw videos, we use a video editing tool, Adobe Premiere Pro (https://www.adobe.com/products/premiere.html), to crop target segments at the frame level, which is very time-consuming but accurate.", "Moreover, during data collection and cropping, we enforce the following constraints.", "We only consider Mandarin and are cautious when selecting materials with accents.", "The length of each clip is no less than one second and no more than ten seconds.", "For each video clip, no faces other than the speaker's face appear.", "Finally, we collect 60 raw videos and acquire 2,281 video segments.", "(We consulted a legal office to verify that the academic usage and distribution of very short videos fall under the fair-use category.)", "SIMS has rich character backgrounds, a wide age range, and high quality.", "Table 1 shows the basic statistics for SIMS.", "Annotation: we make one multimodal annotation and three unimodal annotations for each video clip.", "Apart from the increased workload, the main difficulty is the mutual interference between the different modalities.", "To avoid this problem as much as possible, we require that every labeler can only see the information in the current modality when annotating.",
"Besides, conducting four annotations at the same time is not permitted.", "More precisely, every labeler makes the unimodal annotations first and then performs the multimodal annotation; the order is text first, audio second, then silent video, and multimodal last.", "For each clip, every annotator labels its sentimental state as -1 (negative), 0 (neutral), or 1 (positive).", "We have five independent students in this field making annotations.", "Then, in order to support both regression and multi-class classification tasks, we average the five labeled results.", "Therefore, the final labeling results are one of { -1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 }.", "We further divide these values into 5 classes: negative { -1.0, -0.8 }, weakly negative { -0.6, -0.4, -0.2 }, neutral { 0.0 }, weakly positive { 0.2, 0.4, 0.6 }, and positive { 0.8, 1.0 }.", "The histogram on the left of Figure 2 shows the distribution of sentiment over the entire dataset for the four annotations.", "We can see that negative segments outnumber positive segments.", "The main reason is that actors in film and television dramas are more expressive in negative sentiments than in positive ones.", "The confusion matrix on the right of Figure 2 indicates the annotation difference between modalities, which is computed as $D_{ij} = \\frac{1}{N}\\sum_{n=1}^{N}(A_{ni} - A_{nj})^{2}$ (1), where $i, j \\in \\{m, t, a, v\\}$, $N$ is the number of all samples, and $A_{ni}$ is the $n$-th label value in modality $i$.", "From the matrix, the difference between V and T is maximal, which is in line with expectations: audio contains textual information and is therefore closer to the multimodal annotation, while the connection between video and text is sparse.", "Furthermore, we provide other attribute annotations, including the speakers' age and gender.", "We use the sentimental annotations only in our following experiments.",
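To make the annotation-difference measure in Equation 1 above concrete, here is a minimal sketch of how the matrix D could be computed from the four annotation tracks; the dictionary layout and the use of NumPy are illustrative assumptions rather than part of the CH-SIMS release.

import numpy as np

def annotation_difference(labels):
    # labels: dict mapping each modality in {"m", "t", "a", "v"} to a
    # length-N array of averaged sentiment values in [-1.0, 1.0].
    modalities = ["m", "t", "a", "v"]
    A = np.stack([np.asarray(labels[k], dtype=float) for k in modalities])  # (4, N)
    # D[i, j] = (1/N) * sum_n (A[i, n] - A[j, n])**2, as in Equation 1.
    D = ((A[:, None, :] - A[None, :, :]) ** 2).mean(axis=-1)
    return D  # (4, 4) matrix, rows/columns ordered as (m, t, a, v)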
"The extracted features for all modalities are as follows (we use the same basic features in all experiments).", "Text: all videos have manual transcriptions, including Chinese and English versions.", "We use the Chinese transcriptions only.", "We add two special tokens to indicate the beginning and the end of each transcript.", "Then, pre-trained Chinese BERT-base word embeddings are used to obtain word vectors from the transcripts (Devlin et al., 2018).", "It is worth noting that we do not use word segmentation tools, due to the characteristics of BERT.", "Eventually, each word is represented as a 768-dimensional word vector.", "Audio: we use the LibROSA (McFee et al., 2015) speech toolkit with default parameters to extract acoustic features at 22050 Hz.", "In total, 33-dimensional frame-level acoustic features are extracted, including the 1-dimensional logarithmic fundamental frequency (log F0), 20-dimensional Mel-frequency cepstral coefficients (MFCCs), and 12-dimensional Constant-Q chromatogram (CQT) features.", "These features are related to emotions and the tone of speech according to Li et al. (2018).", "Vision: frames are extracted from the video segments at 30 Hz.", "We use the MTCNN face detection algorithm (Zhang et al., 2016a) to extract aligned faces.", "Then, following Zadeh et al. (2018b), we use the MultiComp OpenFace 2.0 toolkit (Baltrušaitis et al., 2018) to extract the set of 68 facial landmarks, 17 facial action units, head pose, head orientation, and eye gaze.", "Lastly, 709-dimensional frame-level visual features are extracted in total.", "In this section, we describe our proposed multimodal multi-task learning framework.", "As shown in Figure 3, based on the late-fusion multimodal learning framework (Cambria et al., 2017; Zadeh et al., 2017), we add independent output units for the three unimodal representations: text, audio, and vision.", "Therefore, these unimodal representations not only participate in feature fusion but are also used to generate their own predictive outputs.", "For convenience in the following introduction, for text, audio, and vision we assume that $L_u$, $D_{u_i}$, and $D_{u_r}$, where $u \\in \\{t, a, v\\}$, represent the sequence length, the initial feature dimension extracted in Section 3.3, and the representation dimension learned by the unimodal feature extractor, respectively.", "The batch size is $B$.", "The unimodal subNets aim to learn intra-modal representations from the initial feature sequences.", "A universal feature extractor can be formalized as $R_u = S_u(I_u)$ (2), where $I_u \\in \\mathbb{R}^{B \\times L_u \\times D_{u_i}}$ and $R_u \\in \\mathbb{R}^{B \\times D_{u_r}}$.", "In this work, following Zadeh et al. (2017) and Liu et al. (2018), we use a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network, a deep neural network with three hidden layers of weights $W_a$, and a deep neural network with three hidden layers of weights $W_v$ to extract the textual, acoustic, and visual embeddings, respectively.", "The feature fusion network aims to learn an inter-modal representation from the three unimodal representations, formulated as $R_m = F(R_t, R_a, R_v)$ (3), where $R_t, R_a, R_v \\in \\mathbb{R}^{B \\times D_{u_r}}$ are the unimodal representations, $F(\\cdot)$ is the feature fusion network, and $R_m$ is the fused representation.", "In this work, for a full comparison with existing works, we try three fusion methods: LF-DNN, TFN (Zadeh et al., 2017), and LMF (Liu et al., 2018).", "In addition to the training losses of the different tasks, we sparsify the shared parameters via the L2 norm, which aims to select intra-modal features.", "Therefore, our optimization objective is $\\min \\frac{1}{N_t}\\sum_{n=1}^{N_t}\\sum_{i}\\alpha_i L(y_i^n, \\hat{y}_i^n) + \\sum_{j}\\beta_j \\lVert W_j \\rVert_2^2$ (4), where $N_t$ is the number of training samples, $i \\in \\{m, t, a, v\\}$, and $j \\in \\{t, a, v\\}$.", "$L(y_i^n, \\hat{y}_i^n)$ is the training loss of the $n$-th sample in modality $i$.", "$W_j$ denotes the parameters shared between the subNet of modality $j$ and the multimodal task.", "$\\alpha_i$ is a hyperparameter that balances the different tasks, and $\\beta_j$ is the weight-decay coefficient of subNet $j$.", "Lastly, we use a three-layer DNN to generate the outputs of the different tasks.", "In this work, we treat these tasks as regression problems and use the L1 loss as the training loss in Equation 4.",
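As a rough illustration of the objective in Equation 4, the PyTorch-style snippet below sums the weighted L1 losses of the multimodal task and the three unimodal tasks and adds L2 decay on the shared sub-network parameters; all names (preds, shared_params, alpha, beta) are hypothetical and not taken from the authors' code.

import torch.nn.functional as F

def multitask_loss(preds, targets, shared_params, alpha, beta):
    # preds/targets: dicts over the tasks {"m", "t", "a", "v"};
    # shared_params: dict over {"t", "a", "v"} listing each subNet's
    # parameters shared with the multimodal task; alpha/beta: weights.
    task_loss = sum(alpha[i] * F.l1_loss(preds[i], targets[i])
                    for i in ("m", "t", "a", "v"))
    # Second term of Equation 4: sparsify the sharing parameters via L2.
    reg = sum(beta[j] * sum(w.pow(2).sum() for w in shared_params[j])
              for j in ("t", "a", "v"))
    return task_loss + reg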
"5 Experiments: in this section, we mainly explore the following problems using SIMS.", "(1) Multimodal sentiment analysis: we evaluate the performance of the multimodal multi-task learning methods against the other methods; the aim is to validate the advantages of multi-task learning.", "Table 2: (%) Results for sentiment analysis on the CH-SIMS dataset (we read the paired numbers as mean ± std over the five runs; each Δ row gives the change of the multi-task model relative to its single-task counterpart).
Model    | Acc-2     | Acc-3     | Acc-5     | F1        | MAE       | Corr
EF-LSTM  | 69.37±0.0 | 51.73±2.0 | 21.02±0.2 | 81.91±0.0 | 59.34±0.3 | -4.39±2.8
MFN      | 77.86±0.4 | 63.89±1.9 | 39.39±1.8 | 78.22±0.4 | 45.19±1.2 | 55.18±2.0
MULT     | 77.94±0.9 | 65.03±2.1 | 35.34±2.9 | 79.10±0.9 | 48.45±2.6 | 55.94±0.6
LF-DNN   | 79.87±0.6 | 66.91±1.2 | 41.62±1.4 | 80.20±0.6 | 42.01±0.9 | 61.23±1.8
MLF-DNN  | 82.28±1.3 | 69.06±3.1 | 38.03±6.0 | 82.52±1.3 | 40.64±2.0 | 67.47±1.8
Δ        | 2.41      | 2.15      | 3.59      | 2.32      | 1.37      | 6.24
LMF      | 79.34±0.4 | 64.38±2.1 | 35.14±4.6 | 79.96±0.6 | 43.99±1.6 | 60.00±1.3
MLMF     | 82.32±0.5 | 67.70±2.2 | 37.33±2.5 | 82.66±0.7 | 42.03±0.9 | 63.13±1.9
Δ        | 2.98      | 3.32      | 2.19      | 2.70      | 1.96      | 3.13
TFN      | 80.66±1.4 | 64.46±1.7 | 38.38±3.6 | 81.62±1.1 | 42.52±1.1 | 61.18±1.2
MTFN     | 82.45±1.3 | 69.02±0.3 | 37.20±1.8 | 82.56±1.2 | 40.66±1.1 | 66.98±1.3
Δ        | 1.79      | 4.56      | 1.18      | 0.94      | 1.86      | 5.80", "(2) Unimodal sentiment analysis: we analyze the performance on the unimodal tasks when trained with either unimodal or multimodal annotations only; the aim is to validate the necessity of multimodal analysis and to set unimodal baselines for SIMS.", "(3) Representation differences: we use t-SNE to visualize the unimodal representations of models trained with or without independent unimodal annotations; the aim is to show that the learned unimodal representations are more distinctive after using unimodal annotations.", "Early Fusion LSTM: the Early Fusion LSTM (EF-LSTM) (Williams et al., 2018) first concatenates the initial inputs of the three modalities and then uses an LSTM to capture long-distance dependencies in the sequence.", "Late Fusion DNN: in contrast to EF-LSTM, the Late-Fusion DNN (LF-DNN) learns unimodal features first and then concatenates these features before classification.", "Memory Fusion Network: the Memory Fusion Network (MFN) (Zadeh et al., 2018a) accounts for view-specific and cross-view interactions, continuously modeling them through time with a special attention mechanism and summarizing them through time with a Multi-view Gated Memory.", "Table 3: Dataset splits in SIMS.
Item   | Total | NG  | WN  | NU | WP | PS
#Train | 1,368 | 452 | 290 | 207 | 208 | 211
#Valid | 456   | 151 | 97  | 69  | 69  | 70
#Test  | 457   | 151 | 97  | 69  | 69  | 71", "MFN needs word-level alignment across the three modalities.", "However, this is not easy for SIMS because we have not found a reliable alignment tool for Chinese corpora.", "In this work, we follow Tsai et al. (2019) and use CTC (Graves et al., 2006) as an alternative.",
"Low-rank Multimodal Fusion: the Low-rank Multimodal Fusion (LMF) (Liu et al., 2018) model learns both modality-specific and cross-modal interactions by performing efficient multimodal fusion with modality-specific low-rank factors.", "Tensor Fusion Network: the Tensor Fusion Network (TFN) (Zadeh et al., 2017) explicitly models view-specific and cross-view dynamics by creating a multi-dimensional tensor that captures unimodal, bimodal, and trimodal interactions across the three modalities.", "Multimodal Transformer: the Multimodal Transformer (MULT) (Tsai et al., 2019) uses directional pairwise crossmodal attention to realize interactions between multimodal sequences across distinct time steps and to latently adapt streams from one modality to another.", "In this part, we describe the experimental settings in detail, including the dataset splits, hyper-parameter selection, and our evaluation metrics.", "Dataset splits: we first shuffle all video clips at random and then divide them into train, valid, and test splits according to the multimodal annotations.", "The detailed split results are shown in Table 3.", "Hyper-parameter selection: due to the different sequence lengths of different segments, it is necessary to fix the sequence length for each modality.", "Empirically, we choose the average length plus three times the standard deviation as the maximum sequence length.", "Besides, for all baselines and our methods, we tune the hyper-parameters using grid search with binary classification accuracy as the criterion.", "For a fair comparison, in each experiment we use the same five random seeds (1, 12, 123, 1234, and 12345) and report the average performance over the five runs.", "Evaluation metrics: following Liu et al. (2018) and Zadeh et al. (2018b), we report our experimental results in two forms, multi-class classification and regression.", "For multi-class classification, we report the weighted F1 score and the multi-class accuracy Acc-$k$, where $k \\in \\{2, 3, 5\\}$.", "For regression, we report the Mean Absolute Error (MAE) and the Pearson correlation (Corr).", "Except for MAE, higher values denote better performance for all metrics.",
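For reference, a small sketch of how these metrics could be computed is given below; the bin edges used to map values to 3 and 5 classes follow the class definitions in Section 3.2 for the averaged gold labels, but applying the same edges to model predictions (and treating Acc-2 as negative vs. non-negative) is our own assumption.

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

def to_classes(values, k):
    values = np.asarray(values, dtype=float)
    if k == 2:
        return (values >= 0).astype(int)  # negative vs. non-negative (assumption)
    edges = {3: [-0.1, 0.1], 5: [-0.7, -0.1, 0.1, 0.7]}[k]
    return np.digitize(values, edges)

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {f"Acc-{k}": float((to_classes(y_true, k) == to_classes(y_pred, k)).mean())
           for k in (2, 3, 5)}
    out["F1"] = f1_score(to_classes(y_true, 2), to_classes(y_pred, 2), average="weighted")
    out["MAE"] = float(np.abs(y_true - y_pred).mean())
    out["Corr"] = pearsonr(y_true, y_pred)[0]
    return out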
"In this section, we present and discuss the experimental results for the research questions introduced in Section 5.", "Comparison with baselines: for a fair comparison, we report only the multimodal evaluation results, even though the new methods are multi-task.", "Results are shown in Table 2.", "Compared with the single-task models, the multi-task models achieve better performance on most evaluation metrics.", "In particular, all three improved models (MLF-DNN, MLMF, and MTFN) show significant gains over the corresponding original models (LF-DNN, LMF, and TFN) on all evaluation metrics except Acc-5.", "The above results demonstrate that introducing independent unimodal annotations into multimodal sentiment analysis can significantly improve the performance of existing methods.", "We also find that some methods, such as MULT, perform well on existing public datasets but are not satisfactory on SIMS.", "This further illustrates that designing a robust, cross-lingual multimodal sentiment analysis model is still a challenging task, which is also one of our motivations for proposing this dataset.", "Thanks to the independent unimodal annotations in SIMS, we conducted two sets of experiments for unimodal sentiment analysis.", "In the first set of experiments, we use the real unimodal labels to verify the model's ability to perform unimodal sentiment analysis.", "In the second set of experiments, we use multimodal labels instead of unimodal labels to verify the ability to predict the true emotions of speakers when only unimodal information is available.", "Results are shown in Table 4.", "Firstly, on the same unimodal task, the results under unimodal labels are better than those under multimodal labels.", "However, the former cannot reflect the actual sentimental state of the speakers.", "Secondly, under multimodal annotations, the performance with unimodal information only is lower than that with multimodal information in Table 2.", "Hence, it is inadequate to perform sentiment analysis using unimodal information alone, due to the inherent limitations of unimodal information.", "Another motivation for proposing CH-SIMS is our belief that the differences between unimodal representations will be greater with independent unimodal annotations.", "We use t-SNE (Maaten and Hinton, 2008) to visualize the intra-modal representations learned by the original models (LF-DNN, TFN, and LMF) and the new models (MLF-DNN, MTFN, and MLMF), as shown in Figure 4 (Figure 4: Visualization of the unimodal representations for LF-DNN, MLF-DNN, TFN, MTFN, LMF, and MLMF).", "It is fairly clear that the new models learn more distinctive unimodal representations compared to the original models.", "Therefore, unimodal annotations can help the model obtain more differentiated information and improve the complementarity between modalities.",
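The visualization described above can be reproduced along the following lines; the function signature and plotting details are our own assumptions, not the authors' code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_unimodal_representations(reps, path):
    # reps: dict mapping a modality name ("t", "a", "v") to an (N, d)
    # array of unimodal representations taken from a trained model.
    X = np.concatenate(list(reps.values()))
    X2 = TSNE(n_components=2, random_state=1).fit_transform(X)
    start = 0
    for name, R in reps.items():
        plt.scatter(X2[start:start + len(R), 0],
                    X2[start:start + len(R), 1], s=4, label=name)
        start += len(R)
    plt.legend()
    plt.savefig(path)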
"In this section, we compare the effects of combining different unimodal tasks on multimodal sentiment analysis.", "We aim to further explore the influence of the different unimodal tasks on the multimodal task.", "Furthermore, we reveal the relationship between multi-task learning and multimodal sentiment analysis.", "We conducted multiple combination experiments to analyze the effects of the different unimodal subtasks on the main multimodal task.", "In this part, we only report the results for MLF-DNN.", "Results are shown in Table 5.", "Table 5: (%) Results for multimodal sentiment analysis with different task combinations using MLF-DNN.
Tasks      | Acc-2 | F1    | MAE   | Corr
M          | 80.04 | 80.40 | 43.95 | 61.78
M, T       | 80.04 | 80.25 | 43.11 | 63.34
M, A       | 76.85 | 77.28 | 46.98 | 55.16
M, V       | 79.96 | 80.38 | 43.16 | 61.87
M, T, A    | 80.88 | 81.10 | 42.54 | 64.16
M, T, V    | 80.04 | 80.87 | 42.42 | 60.66
M, A, V    | 79.87 | 80.32 | 43.06 | 62.95
M, T, A, V | 82.28 | 82.52 | 40.64 | 64.74", "The results show that when only some of the three unimodal subtasks are present, the performance of the multimodal task does not improve significantly and is sometimes even harmed.", "Two factors may cause an adverse effect in multimodal learning: the consistency between different unimodal representations and the asynchrony of learning across different tasks.", "The former means that unified annotations guide the representations to be similar and to lack complementarity across modalities.", "The latter means that the learning processes of different tasks are inconsistent.", "Taking the task combination M, A as an example, the sub-network of subtask A is supervised by both the multimodal loss and its unimodal loss.", "In contrast, the sub-networks of subtasks T and V are supervised by the multimodal loss only.", "This means that A is learned twice while T and V are learned only once during a training epoch.", "Therefore, introducing unimodal tasks reduces the consistency of the representations and strengthens their complementarity, but also causes asynchrony.", "As more unimodal tasks are introduced, the positive effect of the former gradually increases and the negative effect of the latter gradually decreases.", "Finally, when all unimodal tasks are added, the negative effect of the latter almost disappears.", "In this paper, we propose a novel Chinese multimodal sentiment analysis dataset with independent unimodal annotations, together with a multimodal multi-task learning framework based on late-fusion methods.", "We hope that the introduction of CH-SIMS will provide a new perspective for research on multimodal analysis.", "Furthermore, we conduct extensive experiments discussing unimodal, multimodal, and multi-task learning.", "Lastly, we summarize our overall findings as follows: multimodal labels do not always reflect unimodal sentimental states.", "The unified multimodal annotations may mislead the model in learning the inherent characteristics of unimodal representations.", "With the help of unimodal annotations, models can learn more differentiated information and improve the complementarity between modalities.", "When performing multi-task learning, the asynchrony of learning across subtasks may have an adverse effect on multimodal sentiment analysis.", "In the future, we will further explore the connection between multimodal analysis and multi-task learning and incorporate more fusion strategies, including early- and middle-fusion.", "This paper is funded by the National Natural Science Foundation of China (Grant No. 61673235) and the National Key R&D Program of China (Grant No. 2018YFC1707605).", "We would like to thank the anonymous reviewers for their valuable suggestions." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "objective", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "A recently proposed lattice model has demonstrated that words in a character sequence can provide rich word boundary information for character-based Chinese NER models.", "In this model, word information is integrated into a shortcut path between the start and end characters of the word.", "However, the existence of shortcut paths may cause the model to degenerate into a partial word-based model, which suffers from word segmentation errors.", "Furthermore, the lattice model cannot be trained in batches due to its DAG structure.", "In this paper, we propose a novel word-character LSTM (WC-LSTM) model that adds word information to the start or end character of the word, alleviating the influence of word segmentation errors while still obtaining word boundary information.", "Four different strategies are explored in our model to encode word information into a fixed-sized representation for efficient batch training.", "Experiments on benchmark datasets show that our proposed model outperforms other state-of-the-art models.", "Named Entity Recognition (NER) is a basic task of many NLP systems, including Information Retrieval (Virga and Khudanpur, 2003), Relationship Extraction (Miwa and Bansal, 2016), and Question Answering (Molla et al., 2006).", "The main task of NER is to identify named entities such as persons, locations, organizations, etc. in a given text.", "Various methods have been proposed to tackle this problem, including Hidden Markov Models (HMMs) (Saito and Nagata, 2003), Maximum Entropy models (ME) (Chieu and Ng, 2003), Support Vector Machines (SVM) (Ekbal and Bandyopadhyay, 2010), and Conditional Random Fields (CRF) (Feng et al., 2006).", "With the development of deep learning, neural networks (Huang et al., 2015; Lample et al., 2016; Habibi et al., 2017) have been introduced to the NER task.", "(Figure 1: an example character sequence, glossed 'Yangtze River water rises', with its candidate lexicon words and character-level B/M/E-LOC labels, illustrating how path choices affect tagging.)", "To avoid segmentation errors, most neural Chinese NER models are character-based.", "Although the character-based approach achieves good performance, it does not exploit the word information in the character sequence.", "Entity boundaries usually coincide with some word boundaries, which suggests that words in the character sequence can provide rich boundary information for character-based models.", "To integrate word information into a character-based model, Zhang and Yang (2018) propose a lattice-structured LSTM model that encodes a sequence of input characters as well as all potential words that match a lexicon.", "Their model is an extension of the character-based LSTM-CRF model and uses extra shortcut paths to link the memory cells between the start and end characters of a word in order to utilize word information.", "A gated recurrent unit is used to control the contributions of the shortcut paths and of the path between adjacent characters.", "However, as the study of Yang et al. (2018) shows, the gate mechanism sometimes fails to choose the right path.", "As shown in Figure 1, wrong choices may cause the lattice model to degenerate into a partial word-based model, which suffers from word segmentation errors.", "In addition, due to the variable length of words, the length of the whole path is not fixed.", "Besides, each character is bound to a variable-sized candidate word set, which means the number of incoming and outgoing paths is not fixed either.", "In this case, the lattice LSTM model is deprived of the ability to train in batches, and hence it is highly inefficient.",
"To address the above problems, we propose a novel word-character LSTM (WC-LSTM) to integrate word information into a character-based model.", "To prevent our model from degenerating into a partial word-based model, we assign word information to a single character and ensure that there are no shortcut paths between characters.", "Specifically, word information is assigned to a word's end character in the forward WC-LSTM and to its start character in the backward WC-LSTM.", "We introduce four strategies to extract fixed-sized useful information from different words, which ensures that our proposed model can perform batch training without losing word information.", "We demonstrate the effectiveness of our architecture on four widely used datasets.", "Experimental results show that our proposed model outperforms other state-of-the-art models on the four datasets.", "Our contributions in this paper can be summarized as follows: we propose a novel word-character LSTM (WC-LSTM) to incorporate word information into a character-based model.", "We explore four different strategies to encode word information into a fixed-sized vector, which enables our proposed model to be trained in batches and adapted to various application scenarios.", "Our proposed model outperforms other models and achieves new state-of-the-art results on four Chinese NER datasets.", "We release the source code for further research at https://github.com/liuwei1206/CCW-NER .", "Neural networks have been shown to achieve impressive results on the NER task (Gregoric et al., 2018; Lin and Lu, 2018).", "Based on the level of granularity, most models can be divided into three categories: word-based models, character-based models, and hybrid models.", "Word-based models: Collobert and Weston (2008) propose one of the first word-based models for NER, with features constructed from orthographic features, dictionaries, and lexicons (Yadav and Bethard, 2018).", "Collobert et al. (2011) replace the hand-crafted features with word embeddings.", "Huang et al. (2015) propose a BiLSTM-CRF model for NER that achieves good performance.", "Ma and Hovy (2016) and Chiu and Nichols (2016) use a CNN to capture spelling characteristics, while Lample et al. (2016) use an LSTM instead.", "When applied to Chinese NER, all of the above models suffer from segmentation errors, since Chinese word segmentation is compulsory for them.", "Character-based models: Peng and Dredze (2015) propose adding segmentation features for better recognition of entity boundaries.", "Dong et al. (2016) integrate radical-level features into a character-based model.", "To eliminate the ambiguity of characters, Sun and He (2017) take the position of the character into account.", "Although the above models have achieved good results, they all ignore the word information in the character sequence.", "Hybrid models: some efforts have been made to integrate word boundary information into character-based models.", "Motivated by the success of multi-task learning for Natural Language Processing (Liu et al., 2016, 2017; Zhang et al., 2018), Peng and Dredze (2016) first proposed jointly training Chinese NER with the Chinese word segmentation (CWS) task.", "Cao et al. (2018) apply an adversarial transfer learning framework to integrate task-shared word boundary information into the Chinese NER task.", "Another way to obtain word boundary information is proposed by Zhang and Yang (2018), who use a lattice LSTM to integrate word information into a character-based model, which is similar to what is proposed in this paper.", "(Figure 2: The architecture of our unidirectional model.)",
(2018) apply adversarial transfer learning framework to integrate the task-shared word boundary information into Chinese NER task.", "Another way to obtain word boundary information is proposed by (Zhang and Yang, 2018), using a lattice LSTM to integrate word information into character-based model, which is similar to what is proposed in 1 https://github.com/liuwei1206/CCW-NER (cid:9) Up Rise (cid:131) Of ~ Long ^ River 3 Water D 5(cid:214) D 6(cid:214) D 7(cid:214) D :(cid:214) D 8(cid:214) D 9(cid:214) T 5(cid:214) T 6(cid:214) T 7(cid:214) T :(cid:214) T 8(cid:214) T 9(cid:214) T 5(cid:230) T 6(cid:230) T 7(cid:230) T :(cid:230) T 8(cid:230) T 9(cid:230) ~ ^ Yangtze River (cid:9) Rise <PAD> <PAD> <PAD> ^3 River Water T Stgy T 65 Stgy T Stgy T Stgy T 95 Stgy T :5 Stgy 1 5(cid:214) 1 6(cid:214) 1 7(cid:214) 1 :(cid:214) 1 8(cid:214) 1 9(cid:214) O O OB-LOC E-LOCOS SSSSS Word emb Char emb Strategy WC-LSTM CRF Figure 2: The architecture of our unidirectional model.", "this paper.", "The main differences are as follows.", "Firstly, they exploit word information by a DAG-structured LSTM, while we use a chain-structured LSTM.", "Secondly, instead of integrating to the hidden state of LSTM, our model add word information into the input vector.", "Finally, our model can be trained in batches and is more efficient.", "The architecture of our proposed model is shown in Figure 2.", "Same as the widely used neural Chinese NER model, we use LSTM-CRF as our main network structure.", "The differences between our model and a standard LSTM-CRF model are mainly on the embedding layer and LSTM and can be summarized as follows.", "First, we represent a Chinese sentence as a sequence of character-words pairs to integrate word information into each character.", "Second, to enable our model to train in batches and to meet different application requirements, we introduce four encoding strategies to extract fixed-sized but different information from words.", "Finally, a chain-structured word-character LSTM is used to extract features from both character and word for better predicting.", "Next, we will explain the main ideas for each component, including word-character embedding layer, word encoding strategy, and word-character LSTM.", "Formally, we denote a Chinese sentence as s = { c 1 , c 2 , ..., c n } , where c i denotes the i th character.", "We use c b,e to denote a character subsequence in s , which begins with b th character and ends with e th character.", "Take the sentence in Figure 2 for example, c 1 , 2 is (Rise).", "We use ws i to denote words assigned to i th character in forward WC-LSTM, which are a set of character subsequences c b,i , where b < i and c b,i matches a word in lexicon D .", "The lexicon D is the same as the one used in (Zhang and Yang, 2018), which is built by using automatically segmented large raw text.", "Similarly, we use ws i to denote the words for i th character in backward WC-LSTM, which are a set of character subsequences c i,e , where e > i and c i,e matches a word in lexicon D .", "Finally, the sentence s is represented as rs = { ( c 1 , ws 1 ) , ( c 2 , ws 2 ) , ..., ( c n , ws n ) } in our model, and its reverse representation is rs = { ( c n , ws n ) , ( c n 1 , ws n 1 ) , ..., ( c 1 , ws 1 ) } .", "In our model, Each position i in rs consists of two parts: i th character c i and the assigned words ws i .", "The origin number of words in ws i is s ti , and words are sorted by their length.", "We ensure each ws i has the same number s pi 2 in the whole batch by 
"We embed each character $c_i$ into a distributed representation $x_i^c = e^c(c_i)$ (1), where $e^c$ denotes a pre-trained character embedding lookup table.", "Similarly, for each $ws_i = \\{w_{i1}, ..., w_{is_i^p}\\}$, the $l$-th word $w_{il}$ in $ws_i$ is represented as $x_{il}^w = e^w(w_{il})$ (2), where $e^w$ denotes a pre-trained word embedding lookup table.", "As a result, the distributed representation of the words $ws_i$ is $\\{x_{i1}^w, ..., x_{is_i^p}^w\\}$.", "Although the number of assigned words $s_i^p$ per character $c_i$ is the same within one batch, the number varies from batch to batch.", "As a result, the size of the input to the model is not fixed, which is not conducive to batch training.", "To acquire fixed-sized input, we introduce four different encoding strategies in this section.", "We use $x_i^{ws}$ to denote the final representation of the word information at position $i$ in the following sections.", "Shortest Word First: for each word set $ws_i = \\{w_{i1}, ..., w_{is_i^p}\\}$, we simply select the word whose length is the shortest, i.e., $w_{i1}$; then $x_i^{ws} = x_{i1}^w$ (3).", "Longest Word First: contrary to shortest word first, we select the word whose length is the longest, i.e., $w_{is_i^t}$.", "Note that $s_i^t$ may be 0, in which case we set it to 1.", "Then $x_i^{ws} = x_{is_i^t}^w$ (4).", "Average: while the first two strategies use the information of only one word, we introduce an average strategy to utilize all word information.", "As its name indicates, the average strategy computes the centroid of the embeddings of all non-padding elements of the word set, i.e., $\\{w_{i1}, ..., w_{is_i^t}\\}$.", "If $s_i^t = 0$, we simply average all the padding values in the word set.", "Then $x_i^{ws} = \\frac{1}{s_i^t}\\sum_{l=1}^{s_i^t} x_{il}^w$ if $s_i^t > 0$, and $x_i^{ws} = \\frac{1}{s_i^p}\\sum_{l=1}^{s_i^p} x_{il}^w$ if $s_i^t = 0$ (5).", "Self-Attention: inspired by the self-attention mechanism applied to sentence embedding (Lin et al., 2017), we exploit self-attention to better capture useful information from the assigned words.", "For simplicity, we denote all the $x_{il}^w$ together as $W_i$, a matrix of size $s_i^p$-by-$d_w$, where $d_w$ denotes the dimensionality of the word embedding $e^w$.", "We use the self-attention mechanism to obtain a linear combination of the $s_i^p$ word embeddings in $W_i$.", "The attention mechanism takes $W_i$ as input and generates a weight vector $a_i = \\mathrm{softmax}(w_2 \\tanh(W_1 W_i^{\\top}))$, following Lin et al. (2017).", "$W_1$ is a weight matrix of size $d_a$-by-$d_w$ and $w_2$ is a $d_a$-dimensional vector, where $d_a$ is a hyperparameter; both are trainable parameters.", "If $s_i^t > 0$, we use a mask to exclude the padding values; otherwise we retain them.", "Finally, we use $a_i$ to compute the weighted sum of all words, $x_i^{ws} = a_i W_i$ (Equations 6-8 in the original numbering).",
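The self-attention strategy (the a_i and weighted-sum formulas above) could be implemented along these lines; the module is a sketch, and the masking convention is our own assumption.

import torch
import torch.nn as nn

class WordSelfAttention(nn.Module):
    # Self-attentive pooling over the candidate word embeddings of one
    # character: a = softmax(w2 tanh(W1 W^T)), x_ws = a W.
    def __init__(self, d_w, d_a):
        super().__init__()
        self.W1 = nn.Linear(d_w, d_a, bias=False)
        self.w2 = nn.Linear(d_a, 1, bias=False)

    def forward(self, W, mask=None):   # W: (B, s_p, d_w)
        scores = self.w2(torch.tanh(self.W1(W))).squeeze(-1)  # (B, s_p)
        if mask is not None:           # exclude padded words when s_t > 0
            scores = scores.masked_fill(~mask, float("-inf"))
        a = torch.softmax(scores, dim=-1)
        return torch.einsum("bs,bsd->bd", a, W)  # fixed-size x_ws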
"Inspired by the way character bigrams are integrated into sequence labeling models (Chen et al., 2015; Yang et al., 2017), we concatenate each $x_i^c$ with $x_i^{ws}$ to utilize the word information.", "This is quite different from the approach of Zhang and Yang (2018), who use extra shortcut paths to integrate word information into the hidden layer of the LSTM.", "With concatenation there is no shortcut path in our model, and information can only flow between adjacent characters, which ensures that our model will not degenerate into a partial word-based model.", "The WC-LSTM functions are then: $\\begin{bmatrix} \\tilde{c}_i \\\\ o_i \\\\ i_i \\\\ f_i \\end{bmatrix} = \\begin{bmatrix} \\tanh \\\\ \\sigma \\\\ \\sigma \\\\ \\sigma \\end{bmatrix} \\left( W^p \\begin{bmatrix} x_i \\\\ h_{i-1} \\end{bmatrix} + b^p \\right)$ (9), with $x_i = x_i^c \\oplus x_i^{ws}$, $c_i = \\tilde{c}_i \\odot i_i + c_{i-1} \\odot f_i$, $h_i = o_i \\odot \\tanh(c_i)$ (10), where $o_i$, $i_i$, and $f_i$ denote the output, input, and forget gates, respectively.", "$W^p$ and $b^p$ are the parameters of the affine transformation, $\\sigma$ denotes the logistic sigmoid function, $\\oplus$ denotes the concatenation operation, and $\\odot$ denotes elementwise multiplication.", "The bidirectional WC-LSTM is applied in our model to leverage information from both the past and the future.", "To get the future information, we use a second WC-LSTM that reads the reverse representation $\\{(c_n, ws_n), (c_{n-1}, ws_{n-1}), ..., (c_1, ws_1)\\}$.", "The operations used to obtain each backward hidden vector $\\overleftarrow{h}_i$ are the same as in the forward WC-LSTM.", "Finally, the update of each bidirectional WC-LSTM unit can be written as: $\\overrightarrow{x}_i = x_i^c \\oplus \\overrightarrow{x}_i^{ws}$, $\\overleftarrow{x}_i = x_i^c \\oplus \\overleftarrow{x}_i^{ws}$, $\\overrightarrow{h}_i = \\mathrm{WCLSTM}(\\overrightarrow{h}_{i-1}, \\overrightarrow{x}_i)$, $\\overleftarrow{h}_i = \\mathrm{WCLSTM}(\\overleftarrow{h}_{i+1}, \\overleftarrow{x}_i)$, $h_i = \\overrightarrow{h}_i \\oplus \\overleftarrow{h}_i$ (11), where $\\overrightarrow{h}_i$ and $\\overleftarrow{h}_i$ are the hidden states at position $i$ of the forward and backward WC-LSTM, respectively, and $\\oplus$ denotes concatenation.", "Considering the dependencies between successive labels, we use a CRF layer for sequence tagging.", "We define the matrix $O$ of scores computed from the output $H = \\{h_1, h_2, ..., h_n\\}$: $O = W^o H + b^o$ (12).", "For a label sequence $y = \\{y_1, y_2, ..., y_n\\}$, we define its probability as $p(y|s) = \\frac{\\exp(\\sum_i (O_{i,y_i} + T_{y_{i-1},y_i}))}{\\sum_{\\tilde{y}} \\exp(\\sum_i (O_{i,\\tilde{y}_i} + T_{\\tilde{y}_{i-1},\\tilde{y}_i}))}$ (13), where $W^o$ and $b^o$ are the parameters used to compute $O$, $T$ is a transition score matrix, and $\\tilde{y}$ ranges over all possible tag sequences.", "During decoding, we use the Viterbi algorithm to find the label sequence with the highest score: $y^* = \\arg\\max_{\\tilde{y}} \\sum_i (O_{i,\\tilde{y}_i} + T_{\\tilde{y}_{i-1},\\tilde{y}_i})$ (14).", "Given $N$ manually labeled examples $\\{(s^j, y^j)\\}_{j=1}^{N}$, we minimize the sentence-level negative log-likelihood loss to train the model: $L = -\\sum_j \\log p(y^j | s^j)$ (15).",
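As an illustration of the decoding step in Equation 14, a plain Viterbi sketch is given below; start/stop transitions and batching are omitted, and the tensor layout is an assumption.

import torch

def viterbi_decode(O, T):
    # O: (n, L) emission scores from Equation 12; T: (L, L) transition
    # scores with T[prev, cur]; returns the argmax label sequence.
    n, L = O.shape
    score = O[0].clone()
    back = []
    for i in range(1, n):
        total = score.unsqueeze(1) + T + O[i].unsqueeze(0)  # (L, L)
        score, idx = total.max(dim=0)   # best previous label per current
        back.append(idx)
    best = [int(score.argmax())]
    for idx in reversed(back):
        best.append(int(idx[best[-1]]))
    return best[::-1]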
"Table 1: Statistics of the datasets.
Dataset        | Train sent | Dev sent | Test sent
OntoNotes      | 15,724     | 4,301    | 4,346
MSRA           | 46,364     | -        | 4,365
Weibo NER      | 1,350      | 270      | 270
Chinese resume | 3,821      | 463      | 477", "4 Experiments - 4.1 Experimental Settings.", "Datasets: we evaluate our model on four datasets, including OntoNotes 4 (Weischedel et al., 2011), MSRA (Levow, 2006), Weibo NER (Peng and Dredze, 2015), and a Chinese resume dataset (Zhang and Yang, 2018).", "Both the OntoNotes 4 and MSRA datasets consist of news in simplified Chinese.", "The Weibo NER dataset is social media data drawn from Sina Weibo.", "The Chinese resume dataset consists of resumes of senior executives and is annotated by Zhang and Yang (2018).", "For OntoNotes, we use the same training, development, and test splits as Che et al. (2013).", "The other datasets have already been split, and we do not change them.", "We summarize the datasets in Table 1.", "Implementation details: we utilize the character and word embeddings used in Zhang and Yang (2018), both of which are pre-trained on Chinese Giga-Word using the word2vec model.", "Following Zhang and Yang (2018), we use the word embedding dictionary as the lexicon $D$ in our model.", "For characters and words that do not appear in the pre-trained embeddings, we initialize them with a uniform distribution over $[-\\sqrt{3/\\mathrm{dim}}, +\\sqrt{3/\\mathrm{dim}}]$, where dim denotes the embedding size.", "When training the model, character embeddings and word embeddings are updated along with the other parameters.", "For the hyper-parameter configuration, we mostly follow the settings in Zhang and Yang (2018).", "We set both the character embedding size and the word embedding size to 50.", "The dimensionality of each unidirectional multi-input LSTM hidden state is 100 for Weibo NER and Chinese Resume, and 200 for OntoNotes 4 and MSRA.", "For the self-attention strategy, we set $d_a$ to 50.", "To avoid overfitting, we apply dropout to both the embeddings and the LSTM with a rate of 0.5.", "We use SGD to optimize all trainable parameters.", "The learning rate is set to 0.015 initially and decays during training.", "For evaluation, we use precision (P), recall (R), and F1 score as metrics in our experiments.", "OntoNotes: Table 2 shows the experimental results on the OntoNotes 4 dataset.", "The Input column shows the representation of the input sentence, where 'Gold seg' means a sequence of words with gold-standard segmentation and 'No seg' means a sequence of characters without any segmentation.", "The first block in Table 2 shows the results of word-based models (Wang et al., 2013; Che et al., 2013; Yang et al., 2016).", "By using gold-standard segmentation and external labeled data, all of them achieve good performance.", "In contrast, the only resources used in our model are pre-trained character and word embeddings.", "The first two rows in the second block show the performance of the lattice model and the character-based model.", "The character baseline denotes the original character-based BiLSTM-CRF model.", "Zhang and Yang (2018) propose a lattice LSTM to exploit the word information in the character sequence, giving an F1 score of 73.88%.", "Compared with the character baseline, the lattice model gains 8.92% in F1 score, which shows the importance of the word information in the character sequence.", "In the last four rows, we list the results of our proposed model.", "The results show that all of our models outperform the other character-based models, and the one with the self-attention strategy achieves the best result.", "Without gold-standard segmentation or external labeled data, our model gives results competitive with the word-based models on this dataset.", "Compared with the character baseline, our model with self-attention obtains a 9.48% improvement in F1 score, which proves the effectiveness of our way of integrating word information.", "Compared with the lattice model, all of our models achieve better results, which shows that our approach to integrating word information is more reasonable.", "Table 3: Results on MSRA.
Model                          | P     | R     | F1
Zhang et al. (2006)            | 92.20 | 90.18 | 91.18
Zhou et al. (2013)             | 91.86 | 88.75 | 90.28
Dong et al. (2016)             | 91.28 | 90.62 | 90.95
Cao et al. (2018)              | 91.73 | 89.58 | 90.64
Lattice (Zhang and Yang, 2018) | 93.57 | 92.79 | 93.18
Character baseline             | 89.61 | 86.98 | 88.37
WC-LSTM + shortest             | 93.97 | 92.59 | 93.28
WC-LSTM + longest              | 94.33 | 93.11 | 93.71
WC-LSTM + average              | 94.58 | 92.91 | 93.74
WC-LSTM + self-attention       | 94.36 | 92.38 | 93.36", "Table 4: Results on Weibo NER.
Model                          | NE    | NM    | Overall
Peng and Dredze (2015)         | 51.96 | 61.05 | 56.05
Peng and Dredze (2016)         | 55.28 | 62.97 | 58.99
Sun and He (2017)              | 54.50 | 62.17 | 58.23
He and Sun (2017)              | 50.60 | 59.32 | 54.82
Cao et al. (2018)              | 54.34 | 57.35 | 58.70
Lattice (Zhang and Yang, 2018) | 53.04 | 62.25 | 58.79
Character baseline             | 47.98 | 57.94 | 52.88
WC-LSTM + shortest             | 52.99 | 65.75 | 59.20
WC-LSTM + longest              | 52.55 | 67.41 | 59.84
WC-LSTM + average              | 53.19 | 64.17 | 58.67
WC-LSTM + self-attention       | 49.86 | 65.31 | 57.51", "MSRA: Table 3 shows the results on the MSRA dataset.", "Zhang et al. (2006) and Zhou et al. (2013) use statistical models with rich hand-crafted features.", "Dong et al. (2016) exploit radical features of Chinese characters.",
"Cao et al. (2018) jointly train the Chinese NER task with Chinese word segmentation, applying adversarial learning and a self-attention mechanism for better performance.", "We can observe that our proposed models outperform the above models, and the one with the average strategy achieves new state-of-the-art performance.", "Weibo: Table 4 shows the results on the Weibo dataset (the results of Peng and Dredze (2015, 2016) are taken from Peng and Dredze (2017)).", "The NE, NM, and Overall columns denote the F1 scores for named entities, nominal entities (excluding named entities), and both, respectively.", "We can see that the WC-LSTM model with the longest-word-first strategy achieves new state-of-the-art performance.", "Multi-task learning (Peng and Dredze, 2015, 2016; Cao et al., 2018) and semi-supervised learning (Sun and He, 2017; He and Sun, 2017) are the most common methods for the Weibo NER task, due to the small amount of training data.", "All of the above models require additional cross-domain or semi-supervised data.", "Compared with those models, our model does not need additional labeled data; we only exploit pre-trained character and word embeddings.", "Table 5: Results on Chinese Resume.
Model                          | P     | R     | F1
Lattice (Zhang and Yang, 2018) | 94.81 | 94.11 | 94.46
Character baseline             | 93.26 | 93.44 | 93.35
WC-LSTM + shortest             | 94.97 | 94.91 | 94.94
WC-LSTM + longest              | 95.27 | 95.15 | 95.21
WC-LSTM + average              | 95.09 | 94.97 | 95.03
WC-LSTM + self-attention       | 95.14 | 94.79 | 94.96", "Table 6: Training time per epoch.
Model                               | Time(s)/epoch
Character baseline (batch size = 1) | 880
Character baseline (batch size = 8) | 253
Lattice                             | 2,245
WC-LSTM (batch size = 1)            | 980
WC-LSTM (batch size = 8)            | 350", "Resume: Table 5 shows the results on the Chinese resume dataset.", "Consistent with the previous results, our models outperform the lattice model (Zhang and Yang, 2018).", "The above experimental results strongly verify that our method of utilizing word information is more effective than the lattice model.", "Our proposed model achieves state-of-the-art results across domains such as news, social media, and Chinese resumes.", "To further explore the efficiency of our model, we conduct comparative experiments on training time and convergence speed.", "The lattice model proposed in Zhang and Yang (2018) is our principal object of comparison, since it also utilizes the word information in the character sequence.", "Our model is an extension of the character-based model, so we also report results for the character-based model as the character baseline.", "We conduct these experiments only on the OntoNotes dataset, due to space limitations.", "We choose the model with the self-attention strategy for the comparative experiments, as it outperforms the other strategies on OntoNotes.", "The training time per epoch for all models is shown in Table 6.", "The lattice model needs the most training time per epoch, since it can only be trained with batch size = 1 due to its complex DAG structure.", "(Figure 3: Convergence curves of the models.)", "Compared with it, our model with batch size = 1 needs only half the training time, which shows that our model is more efficient.", "With batch size = 8, our model is nearly 6 times faster than the lattice model, which further demonstrates its efficiency.", "Compared with the character baseline, our model adds only a small amount of training time but greatly improves performance.", "All experiments are conducted on a single NVIDIA Tesla K40m GPU.", "Figure 3 shows the learning curves of the models in Table 6.",
"As we can see from the figure, whether with batch size 1 or 8, our model converges within the same number of epochs as the lattice model.", "But compared with the lattice model, our model with batch size = 8 takes only about 1/7 of its training time per epoch.", "Besides, we can observe from Figure 3 that both our model and the lattice model significantly outperform the character baseline, which again shows the importance of the word information.", "Case study: word information is very useful for the Chinese NER task, since it can provide rich word boundary information.", "To verify that our model can better utilize this boundary information, we analyze an example from the OntoNotes dataset.", "Table 7: An example showing that our models mitigate the influence of wrong boundary information while utilizing word information (Chinese characters were lost in extraction; English glosses are kept).
Sentence (truncated): New Northeast Asian Continental Bridge
Latent words: Northeast, Northeast Asia, North Asia, Second largest, Subcontinent, Continent, Continental bridge, Land bridge
Gold labels:    O O B-LOC M-LOC E-LOC O O O
Character:      O O O O O O O O
Lattice:        O O B-LOC M-LOC M-LOC M-LOC M-LOC E-LOC
Shortest:       O O B-LOC M-LOC E-LOC O O O
Longest:        O O B-LOC M-LOC E-LOC O O O
Average:        O O B-LOC M-LOC E-LOC O O O
Self-attention: O O B-LOC M-LOC E-LOC O O O", "As shown in Table 7, the character-based model cannot detect the entity 'Northeast Asia' without word information.", "The lattice model incorrectly recognizes 'Northeast Asian Continental Bridge' as an entity, which is caused by a wrong selection of paths.", "Different from the lattice model, our models are not disturbed by the wrong boundary information and make the correct predictions.", "Strategy analysis: in this part, we analyze the differences between the strategies.", "The application scenarios of shortest word first and longest word first can be explained via nested named entity recognition (Ju et al., 2018; Sohrab and Miwa, 2018).", "Shortest word first is good at identifying inner nested entities thanks to the short-word information, while longest word first tends to identify flat entities with the help of long-word information.", "Taking 'Yangtze River Delta' as an example, shortest word first recognizes 'Yangtze' and 'Delta' as entities, while longest word first tends to treat them as parts of the entity 'Yangtze River Delta'.", "Both results are reasonable, and the right one depends on the specific needs.", "The average and self-attention strategies combine the information of all words and can thus use more information.", "Intuitively, they should outperform shortest word first and longest word first.", "But the results on Weibo NER (Table 4) and Resume (Table 5) show the opposite effect.", "We conjecture that this is caused by the small amount of training data, since more word information combined with a small dataset leads to overfitting.", "The average strategy is a special case of the self-attention strategy in which all weights are the same, so we would expect the latter to outperform the former when the training data are sufficient.", "Table 8: An example of our model applied to informal text with the average and self-attention strategies (Chinese characters were lost in extraction; romanized glosses are kept).
Sentence (truncated): Fang Weizhong and Jing Shuping
Latent words: Fang Wei, Zhongjing, Jing Shuping, Jingshu, Shuping
Gold labels:    B-PER M-PER E-PER B-PER M-PER E-PER
Average:        B-PER M-PER E-PER B-PER M-PER E-PER
Self-attention: B-PER M-PER M-PER E-PER B-PER E-PER", "Surprisingly, the average strategy achieves a higher F1 score than the self-attention strategy on the MSRA dataset (Table 3).",
"We carefully analyzed the experimental results and found that there is a large amount of informal text in the MSRA test set.", "Specifically, the MSRA test set contains some very long sentences in which series of Chinese person names appear without delimiters.", "As shown in Table 8, when applied to such informal text, the self-attention strategy sometimes fails to determine the entity boundaries, while the average strategy correctly recognizes the entities.", "We conjecture that, with more trainable parameters, the self-attention strategy can better fit the formal text in the training set but cannot adapt well to the informal data in the test set, so it performs worse than the average strategy.", "Finally, the application scenarios of the different strategies can be summarized as follows.", "If the training data are sufficient, we recommend using self-attention for formal text and the average strategy for informal text.", "If there is only a very small amount of annotated data, we recommend using shortest word first for inner nested entities and longest word first for flat entities.", "Lexicon and embeddings: to further analyze the contributions of the word lexicon and the pre-trained word embeddings, we conduct comparative experiments using the same word lexicon with and without pre-trained embeddings.", "We choose the strategy that achieves the best performance for each dataset.", "We estimate the contribution of the lexicon by replacing the pre-trained word embeddings with randomly initialized embeddings (using the same initialization strategy as in the implementation details).", "As shown in Table 9, both the lexicon and the pre-trained word embeddings are useful to our model.", "However, different from the finding for the lattice model (Yang et al., 2018), the pre-trained word embeddings contribute more than the lexicon in our model.", "Taking the result on OntoNotes as an example, the contribution of the pre-trained embeddings can be estimated as $9.48\\% - 2.86\\% = 6.62\\%$, which is higher than the contribution of the lexicon, $2.86\\%$.", "These results show that our model relies more on the pre-trained embeddings than on the lexicon, which explains its excellent performance across different domains.", "In this paper, we propose a novel method to utilize the word information in the character sequence for Chinese NER.", "Four encoding strategies are introduced to extract fixed-sized but different information for batch training.", "By using the WC-LSTM to extract features from both the character and word vectors, our model can effectively exploit word boundary information and mitigate the influence of word segmentation errors.", "Experiments on datasets from different domains show that our model is more efficient and faster than the lattice model and also outperforms other state-of-the-art models.", "In the future, we plan to further improve the proposed method, for example by exploring strategies to handle OOV words.", "The proposed method can also be extended to other Chinese NLP tasks, such as CWS, text classification, and sentiment analysis.", "This research is supported by the No. BHKX-17-07 project of the Hefei Innovation Research Institute, Beihang University.", "We would like to thank Jie Yang for his open-source toolkit NCRF++ (Yang and Zhang, 2018).", "Moreover, we sincerely thank all anonymous reviewers for their valuable comments." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "objective", "abstain", "other", "other", "other", "other" ]
[ "Role-oriented dialogue summarization is to generate summaries for different roles in the dialogue, e.g. , merchants and consumers.", "Existing methods handle this task by summarizing each role's content separately and thus are prone to ignore the information from other roles.", "However, we believe that other roles' content could benefit the quality of summaries, such as the omitted information mentioned by other roles.", "Therefore, we propose a novel role interaction enhanced method for role-oriented dialogue summarization.", "It adopts cross attention and decoder self-attention interactions to interactively acquire other roles' critical information.", "The cross attention interaction aims to select other roles' critical dialogue utterances, while the decoder self-attention interaction aims to obtain key information from other roles' summaries.", "Experimental results have shown that our proposed method significantly outperforms strong baselines on two public role-oriented dialogue summarization datasets.", "Extensive analyses have demonstrated that other roles' content could help generate summaries with more complete semantics and correct topic structures.", "1 1 Introduction Dialogue summarization aims at compressing the main content of a long conversation into a short text.", "With the development of online conversation tools, the amount and length of conversation are growing up rapidly.", "Since a dialogue often contains complicated structure and ellipsis, it is time-consuming to read the whole dialogue.", "Dialogue summarization thus becomes valuable since it could extract the key point of a conversation and greatly reduce the time cost.", "This technique is widely used in customer service (Liu et al., 2019), meeting (McCowan et al., 2005), online chatting (Gliwa et al., 2019), etc.", "In a dialogue, each role has its own opinion and goal, and different roles exchange information or reach a consensus through interactions.", "Therefore, in addition to summarizing the whole dialogue, we could summarize the main content for each role.", "Lin et al. (2021) first define the role-oriented dialogue summarization task and provide a related dataset, CSDS.", "They define role-oriented dialogue summarization as grasping the main viewpoint of a given role from dialogue and mention the usage of role-oriented summaries in the customer service domain, e.g. , reflecting the user's requirements and evaluating agent service quality.", "Besides, role-oriented summarization is beneficial to other dialogue domains such as medical inquiry (Song et al., 2020) and court debate (Duan et al., 2019).", "labeling process (Song et al., 2020).", "They ignore the strong relativeness among summaries for different roles and thus fail to utilize the information from other roles to enhance the summaries.", "However, information from other roles is also crucial for this task.", "We summarize two cases where other roles' information helps: (1) Other roles' dialogue utterances could help enhance the informativeness of summaries.", "In Figure 1, utterance 7 ( Yes, it is OK normally. 
"However, extracting it alone makes the agent summary ambiguous, since it lacks the object being confirmed ('JD can pay via WeChat', in blue).", "In this case, the agent summary needs to integrate the content from the user's utterance (utterance 6, in yellow) to enhance its informativeness.", "(2) Other roles' summaries could help judge the key content in the dialogue.", "In a dialogue, different roles often discuss the same topic.", "Therefore, considering the key content of the other role could help grasp the key content of a given role.", "As shown in Figure 1, the user summary contains a question about the payment (in red), and the agent summary contains the response to the payment question (in blue).", "If the model struggles to judge whether the discussion about payment should be included in one role's summary, it can refer to the other role's summary and be more confident about including this information.", "Although we notice the importance of other roles' information, it is difficult to extract the key information from other roles through a simple multi-task framework.", "The main issue is that such a framework cannot judge which information from other roles is useful without modeling the interaction between different roles.", "Thus, in this work, we propose two interaction methods to obtain key information from other roles for enhancing role-oriented summarization.", "First, we apply a cross attention interaction to let each role decoder select the most useful dialogue utterances from other roles.", "Specifically, we calculate the cross attention for different roles' utterances separately and add a new Attention Divergence Loss to interactively share the cross attention distributions between different roles.", "Second, we apply a decoder self-attention interaction to let each role decoder obtain other roles' summary information.", "We develop an interactive mechanism between decoders to consider other roles' summary information embedded in the decoder states.", "A new Role Attention module is added to each role decoder, where the attention objects are the hidden states of the other role's decoder.", "Finally, we use the role attention result and multiple context attention results to predict the word probability distribution of the summary.", "Through these two modules, the model can acquire more precise information from other roles and provide better role-oriented summaries.", "To examine the effectiveness of our method, we conduct experiments on two dialogue summarization datasets (Lin et al., 2021; Song et al., 2020) with role-oriented summaries in different domains (customer service, medical inquiry).", "We apply our method to two widely-used summarization frameworks (RNN-based and Transformer-based).", "The results have shown that, compared with baseline systems and naive multi-task approaches, applying role interactions could significantly improve the quality of role-oriented summaries.", "Further analyses verify that our proposed method can help the model correctly attend to other roles' key information and generate summaries with more complete semantics and correct topic structures.", "The main contributions of this paper include: (1) We are the first to enhance role-oriented dialogue summarization by focusing on other roles' key information.", "(2) We innovatively design two role interaction methods to obtain other roles' key information useful for generating summaries.",
"(3) Experimental results on two datasets have shown that our method could lead to considerable improvements.", "Besides, our method has good generalizability since it works on multiple baseline frameworks.", "Dialogue summarization has been studied in various domains, e.g., meetings (McCowan et al., 2005; Janin et al., 2003), daily chatting (Gliwa et al., 2019; Chen et al., 2021), customer service (Liu et al., 2019; Zou et al., 2021), and medical inquiry (Song et al., 2020; Krishna et al., 2021).", "Considering the particularity of dialogue, many studies try to improve dialogue summarization performance by focusing on dialogue-specific features (Feng et al., 2021), such as topic information (Chen and Yang, 2020), discourse structure (Chen and Yang, 2021), coreference information (Liu et al., 2021), and speaker information (Lei et al., 2021; Zhu et al., 2020).", "However, all the above studies focus on summarizing the whole dialogue.", "Only a few studies pay attention to role-oriented summarization, which aims to summarize the main content of a single role in the dialogue.", "A related task is focused meeting summarization (Wang and Cardie, 2013; Mehdad et al., 2014; Zhong et al., 2021).", "It aims to summarize a specific part of the meeting dialogue, while role-oriented summarization focuses on a single role, where the relationship between different roles is much closer.", "Tamura et al. (2011) focus on contact center dialogue summarization, but they only extract salient sentences from the dialogue and do not summarize for different roles.", "Due to the lack of labeled data, Zhang et al. (2021) propose an unsupervised method to generate summaries for the customer and the agent under a variational auto-encoder framework.", "As for supervised methods, there are only two datasets available for training.", "Lin et al. (2021) propose a customer service domain dataset named CSDS, where each dialogue has an overall summary and two role-oriented summaries for the user and the agent.", "They train two separate models for generating user summaries and agent summaries.", "Song et al. (2020) provide a medical inquiry dialogue summarization dataset where each dialogue has two extractive summaries, for the patient and the doctor.", "They train a sequence labeling model to extract summaries for these two roles.", "Compared with these approaches, to the best of our knowledge, we are the first to enhance role-oriented summarization by explicitly considering other roles' critical information.", "Interactive decoding is a mechanism to share information between different decoders in the decoding process.", "Zhou et al. (2019) propose this mechanism and use it on machine translation to simultaneously decode from both left-to-right and right-to-left.", "Wang et al. (2019) and Liu et al. (2020) further utilize it on more complex machine translation tasks, including multilingual translation and speech translation.",
"In this work, we first apply the interactive decoding mechanism to the summarization task to decode summaries for different roles, aiming at utilizing other roles' summary information for summarization.", "Besides, we also propose an interaction method on cross attention to utilize other roles' critical dialogue utterance information.", "Given a dialogue D containing m utterances {u_1, ..., u_m} and p speakers S = {s_1, ..., s_p}, the role-oriented summarization task aims to generate a summary y_k for each speaker s_k.", "Each utterance u_k consists of a speaker role r_k ∈ S and the related content.", "By concatenating all the utterances and their speaker roles, we obtain the final input {x_1, ..., x_n}.", "Note that since both datasets used in this work have two speakers, one asking questions and one giving answers, we use y^user and y^agent to represent the two role-oriented summaries in the following illustration (footnote 2).", "In a traditional encoder-decoder framework for dialogue summarization, the encoder hidden states represent the semantic information of the input dialogue utterances, and the decoder hidden states contain the information used to generate summaries.", "To fully exploit the information from other roles, we apply two role interactions on the attention modules over both types of hidden states.", "We present the structure of our method in Figure 2 and introduce the details of the interactions in the following paragraphs.", "Our method is constructed on a multi-task framework where an encoder is used to encode dialogue utterances and two role decoders (user decoder and agent decoder) are used to decode the user summary and the agent summary.", "First, the input {x_1, ..., x_n} is sent to an encoder (omitted in the figure for simplicity), and the encoder outputs the context hidden representations {h_1, ..., h_n}.", "In the decoding phase, to calculate the cross attention results for different roles separately, we use a User Mask and an Agent Mask to split the context information into a user context H^enc_u and an agent context H^enc_a.", "H^enc_u contains the hidden representations of all user utterances, and H^enc_a contains those of all agent utterances.", "The cross attention module extracts the most useful information from the context according to the current decoder state.", "Here we modify the module to attend to the different role contexts separately.", "(Footnote 2: Our method could also apply to dialogues with more than two speakers.)", "Taking the user decoder as an example, at step k, we use its hidden state h^user_k to attend to the user context H^enc_u and the agent context H^enc_a, obtaining two attention distributions att_{uu,k}, att_{ua,k} and context attention results c_{uu,k}, c_{ua,k}.", "Both context results are involved in generating summaries.", "The process is the same for the agent decoder, where the two attention distributions are denoted att_{au,k}, att_{aa,k}.", "Existing models are poor at extracting important information from other roles, which is reflected in incorrect cross-role attentions att_{au,k} (agent decoder to user context) and att_{ua,k} (user decoder to agent context).", "Meanwhile, the same-role attentions att_{uu,k} (user decoder to user context) and att_{aa,k} (agent decoder to agent context) are learned better, since most information in role-oriented summaries comes from the given role's utterances.", "Thus we want to use the same-role attention to guide the cross-role attention.",
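The role-masked cross attention just described can be sketched in a few lines of PyTorch. This is a simplified single-head, dot-product version under assumed tensor shapes; the actual model may use a different attention form (e.g., multi-head or additive attention).

```python
import torch
import torch.nn.functional as F

def role_cross_attention(dec_state, enc_states, user_mask, agent_mask):
    """Attend to user and agent utterances separately.
    dec_state: (batch, d) current decoder hidden state (e.g. h^user_k)
    enc_states: (batch, n, d) encoder outputs {h_1, ..., h_n}
    user_mask/agent_mask: (batch, n), 1 where the token belongs to that role."""
    scores = torch.einsum("bd,bnd->bn", dec_state, enc_states)  # dot-product scores

    def masked_attend(mask):
        s = scores.masked_fill(mask == 0, float("-inf"))
        att = F.softmax(s, dim=-1)                         # e.g. att_{uu,k}
        ctx = torch.einsum("bn,bnd->bd", att, enc_states)  # e.g. c_{uu,k}
        return att, ctx

    att_same, c_same = masked_attend(user_mask)    # same-role part for the user decoder
    att_cross, c_cross = masked_attend(agent_mask) # cross-role part for the user decoder
    return att_same, c_same, att_cross, c_cross
```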
"As different roles often discuss the same topic in one dialogue, the accumulated cross attention distributions of the user decoder and the agent decoder over the same role's utterances should be similar.", "A new Attention Divergence Loss is added to constrain this attention similarity: L_att^user = KL(Avg(att_au) || Avg(att_uu)) and L_att^agent = KL(Avg(att_ua) || Avg(att_aa)). By minimizing these two losses, the agent decoder attends to user utterances as the user decoder does, and the user decoder attends to agent utterances as the agent decoder does.", "The two role decoders interactively learn to focus on the key information of the other role in the dialogue utterances.", "Since the decoder calculates the hidden states that help predict summaries, these hidden states must contain much of the important information in the summaries.", "We thus try to exploit the information embedded in the other role's decoder.", "Specifically, for the user decoder, at time step t, we obtain the decoder hidden state h^user_t and use a Role Attention module to weigh the last t hidden states of the agent decoder {h^agent_1, ..., h^agent_t} (footnote 3).", "The role context information r^user_t is obtained by adding all the agent hidden states with their weights, and it helps generate the probability of the next word y^user_t for the user summary.", "The calculation formulas are given as: r^user_t = Attn(h^user_t, h^agent_{1:t}) and p(y^user_t) = F(h^user_t, r^user_t, c_{uu,t}, c_{ua,t}). The function F includes an MLP layer to fuse the different information and a softmax layer to predict the vocabulary probability distribution.", "The process is the same for the agent decoder, and the two decoders decode interactively.", "In the training phase, we use the teacher-forcing method to jointly train the two role decoders and use the Negative Log-Likelihood (NLL) loss to optimize.", "(Footnote 3: Since the two decoders decode simultaneously, at step t, the other decoder can only provide the states from step 1 to t.) The loss is calculated as:", "L_nll = −(λ · Σ_{i=1}^{|y^user|} log P(y^user_i | y^user_{<i}, y^agent_{<i}, D) + (1 − λ) · Σ_{i=1}^{|y^agent|} log P(y^agent_i | y^agent_{<i}, y^user_{<i}, D))", "λ is a hyper-parameter for balancing the weights of the different summarization tasks.", "Besides, we add the attention divergence loss to constrain the attention distribution, and the total loss is calculated as L = L_nll + γ(L_att^user + L_att^agent), where γ is a hyper-parameter for balancing the weights of the different loss functions.", "In the inference phase, we also make some adjustments to beam search for our proposed method.", "We maintain two beams, one for the user summary and one for the agent summary.", "At each step of decoding, the k-th sequence of the user summary beam considers the states of the k-th sequence of the agent summary beam for role attention.", "Once one beam has finished decoding, we keep it fixed and continue searching for the other one.", "The beam search finishes when both beams have finished searching.", "There are two dialogue summarization datasets with role-oriented summarization tasks.", "Thus, we evaluate the effectiveness of our proposed method on both datasets.", "First, we experiment on a Chinese fine-grained customer service summarization dataset named CSDS (footnote 4) (Lin et al., 2021).", "It provides separate summaries for the user and the agent, and both may contain multiple topics.", "The other one is a Chinese medical inquiry summarization dataset, MC (footnote 5) (Song et al., 2020).", "Each dialogue has a summary of the patient's description and a summary of the doctor's suggestion.",
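To make the two loss terms above concrete, here is a small PyTorch sketch of the attention divergence loss and the combined objective. It is a simplified reading of the formulas, with assumed shapes and default λ = γ = 0.5; the real implementation likely differs in reduction and batching details.

```python
import torch
import torch.nn.functional as F

def attention_divergence_loss(att_cross, att_same, eps=1e-9):
    """KL(Avg(att_cross) || Avg(att_same)): the accumulated (step-averaged)
    cross-role attention is pulled toward the better-learned same-role one.
    att_cross, att_same: (steps, n) attention over one role's utterances."""
    p = att_cross.mean(dim=0) + eps  # average over decoding steps, e.g. Avg(att_au)
    q = att_same.mean(dim=0) + eps   # e.g. Avg(att_uu), used as the guidance
    # F.kl_div(input=log q, target=p) computes sum(p * (log p - log q)) = KL(p || q).
    return F.kl_div(q.log(), p, reduction="sum")

def total_loss(nll_user, nll_agent, l_att_user, l_att_agent, lam=0.5, gamma=0.5):
    """L = L_nll + gamma * (L_att^user + L_att^agent), where L_nll is the
    lambda-weighted combination of the two role decoders' NLL terms."""
    l_nll = lam * nll_user + (1.0 - lam) * nll_agent
    return l_nll + gamma * (l_att_user + l_att_agent)
```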
"We note them as the user summary and the agent summary as well.", "Most of the summaries in MC are extractive, and only a few differ from the dialogue scripts.", "Moreover, most dialogues in MC have only one topic.", "(Footnote 4: https://github.com/xiaolinAndy/CSDS)", "(Footnote 5: https://github.com/cuhksz-nlp/HET-MC.", "We use the official crawling script to acquire the dataset and divide some data from the training set as the validation set.", "Due to the website update, the data may differ slightly from the version in the original paper.)", "Comparing the two datasets, MC is easier to summarize, while CSDS is more specific to role-oriented summarization and more challenging.", "The detailed statistics of the two datasets are given in Table 1.", "We apply the role interaction methods to two widely-used seq2seq models in the summarization community: PGN (See et al., 2017) and BERTAbs (Liu and Lapata, 2019).", "Therefore, we will introduce these two backbone models and how we apply the role interactions to them.", "PGN is an LSTM-based seq2seq model with a copy mechanism to copy words from the input and a coverage mechanism for constraining context attention.", "We set two PGN-based baselines for comparison.", "PGN-single trains two separate PGN models for generating the user summary and the agent summary, while PGN-multi jointly trains two PGN models sharing the same encoder.", "Both baselines adopt the full dialogue context as input.", "To apply the role interactions, we choose the output of the LSTM cell in the decoder as the query to calculate cross attention and role attention.", "The attention object in role attention is the output of the LSTM cell of the other decoder.", "Since we calculate the cross attention for the different roles separately, we use a learnable gate p_role to control the weights of the different cross attentions and add them together accordingly to obtain the overall context attention distribution (see the sketch below).", "This overall distribution is also used for the copy and coverage mechanisms.", "We set PGN-cross as adding the cross attention interaction, PGN-self as adding the decoder self-attention interaction, and PGN-both as adding both interactions.", "Transformer has been widely used in language understanding and generation models due to its strong representation ability and concurrency, especially in pretrained models (Devlin et al., 2019; Lewis et al., 2020).", "Here we choose BERTAbs (Liu and Lapata, 2019) as the backbone structure since it performs well on many summarization datasets and is available for non-English languages such as Chinese.", "It adopts a pretrained BERT model as the encoder and a Transformer decoder structure to decode summaries.", "Both the encoder and the decoder contain six layers, and each layer contains three sub-layers (self-attention, encoder-decoder attention, feed-forward).", "Similar to the PGN-based methods, we set BERT-single and BERT-multi as two baselines.", "We apply both interactions to each layer in BERTAbs.", "For the cross attention interaction, we change the encoder-decoder attention sub-layer into two separate cross attention modules; for the decoder self-attention interaction, we add the role attention module in parallel with the encoder-decoder attention module.", "The query, key, and value of the role attention module are all the output from the self-attention sub-layer.", "BERT-cross, BERT-self, and BERT-both follow the same settings as the PGN-based methods.", "We add the role information to the front of the utterance in each turn and concatenate all the utterances in the dialogue sequentially as the input of the model.",
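Below is a minimal sketch of the learnable gate p_role mentioned for the PGN variant, which merges the two role-wise cross attention distributions into the single overall distribution reused by PGN's copy and coverage mechanisms. The gating input and shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class RoleAttentionGate(nn.Module):
    """Merge user-side and agent-side cross attention distributions with a
    learnable scalar gate p_role computed from the decoder state."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, dec_state, att_user, att_agent):
        # dec_state: (batch, hidden_dim); att_user/att_agent: (batch, n)
        p_role = torch.sigmoid(self.gate(dec_state))       # (batch, 1)
        # Weighted mixture; the result is still a distribution over input
        # tokens, so it can feed the copy and coverage mechanisms directly.
        return p_role * att_user + (1.0 - p_role) * att_agent
```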
"Both the PGN (footnote 6) and BERTAbs (footnote 7) baseline methods are adopted from publicly available code.", "For the PGN-based methods, we use pretrained Chinese word vectors provided by Tencent (footnote 8), and the vocabulary size is 10,000.", "For the BERTAbs-based methods, we use the base version of Chinese BERT-wwm (footnote 9).", "The best checkpoint is chosen based on the validation set loss, and we use beam search with beam size 5 to decode summaries from the best checkpoint.", "For the hyper-parameters, since the agent summary is more complex than the user summary in MC, we set λ to 0.2 to give the agent summary more weight.", "λ is set to 0.5 for CSDS.", "γ is set to 0.5 for PGN and 0.25 for BERTAbs.", "The hyper-parameter settings are chosen by experimenting on the validation set.", "More details are given in Appendix A. (Footnote 6: https://github.com/atulkum/pointer_summarizer; Footnote 7: https://github.com/nlpyang/PreSumm; Footnote 8: https://ai.tencent.com/ailab/nlp/en/embedding.html; Footnote 9: https://github.com/ymcui/Chinese-BERT-wwm) We adopt six common automatic evaluation metrics to evaluate summary quality.", "The metrics include traditional n-gram overlap metrics, such as ROUGE (Lin and Hovy, 2002) and BLEU (Papineni et al., 2002), and distributed representation matching metrics, including BERTScore (Zhang et al., 2020) and MoverScore (Zhao et al., 2019).", "We use the files2rouge toolkit to calculate the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L.", "More details of the evaluation scripts are given in Appendix A. In addition to the automatic metrics, we also compare summary quality at a fine-grained level through human evaluation.", "Following the human evaluation process in Lin et al. (2021), we recruit several volunteers and let them evaluate the summaries in the following aspects: (1) Informativeness: Does the generated summary correctly cover the information in the ground truth summary?", "(2) Non-redundancy: Does the generated summary avoid repeated, meaningless, or unnecessary information?", "(3) Fluency: Is the generated summary well-formed, semantically complete, and easy to understand?", "All three aspects are evaluated at the sub-summary level (footnote 10) on a three-point scale: 0 for the worst, 1 for the medium, and 2 for the best.", "First, we present the results of the automatic metrics, with Student's t-test as the significance test, in Tables 2 and 3.", "The results are similar on the two datasets.", "First, the multi-task mechanism brings some improvement over separate training on most of the metrics.", "However, the improvement is limited, especially for the PGN model on CSDS.", "After adding the enhancement of other roles' information, the performance is significantly boosted.", "On CSDS, PGN-single and BERT-single are two strong baselines provided in Lin et al. (2021) (footnote 11).", "For the PGN-based methods, the best method, PGN-both, utilizes both interactions and achieves 2.84 and 1.53 points higher ROUGE-L for the user summary and the agent summary, respectively, than PGN-single.", "(Footnote 10: We split summaries into different topic segments, and each segment is a sub-summary, the same as the process in Lin et al. (2021).)", "(Footnote 11: Note that we do not mention the baseline Fast-RL (Chen and Bansal, 2018) in Lin et al. (2021).",
(2021).", "It first extracts salient utterances and then generates summary sentences from each utterance separately, which is not available to add our proposed interaction methods.", "based methods, the improvements are even greater, which are 4.73 and 2.69.", "We also conduct ablation studies by only applying one interaction ( -cross or -self ).", "Both settings show promising improvement over the single and multi baselines on nearly all the metrics, demonstrating the effectiveness of each interaction method.", "In comparison, applying two interactions together yields the best result on the majority of metrics.", "The circumstance is similar on MC.", "User summarization is relatively simple on MC, and the baseline methods could achieve high performance (5.35 points of ROUGE-2 higher than the best performance in the original paper (Song et al., 2020)).", "Despite this, both cross attention interaction and decoder self-attention interaction could still increase the performance of user summary a bit.", "Additionally, the improvement on agent summary is more significant.", "PGN-both method achieves 0.90 points of ROUGE-2 and 1.29 points of MoverScore improvement, while BERT-both achieves 0.76 points of ROUGE-2 and 0.66 points of MoverScore improvement.", "PGN-both also beats the best result in the original paper on most of the metrics, which uses additional information such as hospital department and disease name.", "In conclusion, our proposed two interaction methods could bring remarkable improvement on different backbone structures CSDS Info Non-Red Flu Overall PGN-multi 0.69 /0.65 0.54/0.55 0.70/0.79 0.64/0.66 PGN-both 0.66/ 0.69 0.58 / 0.59* 0.73 / 0.81 0.66 / 0.70* BERT-multi 0.58/0.56 0.66 / 0.61 0.84/ 0.87 0.69/0.68 BERT-both 0.62* / 0.60* 0.62/0.60 0.85 / 0.87 0.70 / 0.69 Table 4: The human evaluation results for CSDS.", "To evaluate the summary quality at a more fine-grained level, we compare the summaries from different models according to the pre-defined three aspects: informativeness, non-redundancy, fluency.", "Since the multi-task framework works better than the single baseline, we directly compare it with applying both interactions.", "As CSDS is more challenging for this task, we randomly select 100 samples from the test set and obtain the outputs of two baseline methods (PGN-multi and BERT-multi) and two interaction methods (PGN-both and BERT-both).", "We recruit three volunteers and train them on the evaluation rules 12 .", "Then we let them evaluate the generated summaries according to the ground 12 More details are in Appendix C with ethical concerns.", "truth and the original dialogue in the three aspects.", "We run the inter-annotator agreement study on three volunteers' scores, and obtain a reasonable kappa score, 0.48 on average.", "We also calculate an Overall metric by averaging the results of all three aspects to represent the summary quality in general.", "We normalize the result into 0 to 1 and present it in Table 4.", "The result shows different trends on two backbone structures.", "For the PGN model, applying interactions could largely reduce the redundancy of both user and agent summary, with a comparable performance of informativeness.", "Besides, it also improves the fluency of the two summaries.", "For the BERTAbs model, the interaction method significantly improves the informativeness while the redundancy also increases a bit.", "The difference exists because BERTAbs prefers to generate short summaries.", "Thus, considering information from other roles 
"In contrast, PGN tends to generate lengthy summaries.", "When considering information from other roles, it first tries to discard redundant text and retain only the more important parts.", "The fluency improvement for both methods shows that other roles' information helps generate more semantically complete summaries.", "Considering the Overall metric, we conclude that our proposed interaction method is also shown to be effective by human evaluation.", "Agent Summary Completeness Analysis: The agent summary often suffers from semantic incompleteness due to missing key information from other roles (Lin et al., 2021).", "Since our proposed role interactions aim at extracting other roles' key information, we examine whether they work on these incomplete cases.", "Following the settings in Lin et al. (2021), we separately compare the summary quality of samples that need to integrate other roles' information and those that do not (footnote 13).", "The result in Table 5 shows that the interaction method indeed helps improve performance on samples that need such integration.", "Besides, samples that do not need it also improve.", "We believe this is because considering other roles' information also helps extract critical content from the role's own utterances.", "Topic Structural Summary Analysis: Since we assume that role interactions could help generate better summaries through the shared discussion topics, we examine whether the summaries generated by our methods include the correct topic structure.", "More specifically, we want to find out how our methods perform on summarizing each topic.", "Following the evaluation method in Lin et al. (2021), we treat each sentence in the summary as a sub-summary for a single topic and calculate the number of sub-summaries matching the reference by a ROUGE-L-based matching algorithm (sketched after this section).", "We calculate the precision, recall, and F1 scores of the correctly matched sub-summary ratios and present them in Table 6.", "The results show that the two role interaction methods achieve higher recall and F1 scores on sub-summary matching.", "This shows that role interactions could help the model grasp the discussion topics in the dialogue and generate a more accurate summary for each topic.", "We also present an example in Appendix B to illustrate the effectiveness of our proposed role interaction method.", "In this paper, we focus on the role-oriented dialogue summarization task.", "(Footnote 13: It is judged by considering whether the summary needs to refer to other roles' utterances, which is already labeled in CSDS.)", "To fully exploit the information from other roles, we propose two role interaction methods, on cross attention and decoder self-attention.", "The cross attention interaction calculates the context information for different roles separately and uses the same-role attention to guide the cross-role attention.", "The decoder self-attention interaction adds a role attention module to attend to the other role's decoder states interactively.", "Experiments on two dialogue summarization datasets show that both interactions perform significantly better than strong baseline methods.", "Adding role interactions also helps generate summaries with complete semantics and correct topic structures.", "In the future, we will try to apply this method to other dialogue-related tasks and conduct more experiments on stronger summarization methods.", "We thank all volunteers for their great help on the evaluation, and all anonymous reviewers' suggestions are very appreciated.",
"The research work described in this paper has been partially supported by the National Key Research and Development Program of China under Grant No. 2020AAA0108600 and the Natural Science Foundation of China under Grant No. 62106263." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "result", "objective", "abstain", "result", "abstain", "method", "method", "abstain", "objective", "objective", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other" ]
[ "As part of growing NLP capabilities, coupled with an awareness of the ethical dimensions of research, questions have been raised about whether particular datasets and tasks should be deemed off-limits for NLP research.", "We examine this question with respect to a paper on automatic legal sentencing from EMNLP 2019 which was a source of some debate, in asking whether the paper should have been allowed to be published, who should have been charged with making such a decision, and on what basis.", "We focus in particular on the role of data statements in ethically assessing research, but also discuss the topic of dual use, and examine the outcomes of similar debates in other scientific disciplines.", "NLP tools are increasingly being deployed in the wild with potentially profound societal implications.", "Alongside the rise in technical capabilities has been a growing awareness of the moral obligation of the field to self-assess issues including: dataset and system bias (Zhao et al., 2017), dataset ethics (Bender and Friedman, 2018), and dual use (Hovy and Spruit, 2016).", "More recently, there has also been vigorous debate on whether it is ethical for the community to work on certain topics or data types.", "This paper aims to investigate this issue, focused around the examination of a paper recently published at EMNLP 2019 on automatic prison term prediction by Chen et al. (2019).", "Specifi-cally, the paper in question proposes a neural model which performs structured prediction of the individual charges laid against an individual, and the prison term associated with each, which can provide an overall prediction of the prison term associated with the case.", "This model was constructed using a large-scale dataset of real-world Chinese court cases.", "The primary question we attempt to address in this paper is on what basis a given paper satisfies basic ethical requirements for publication, in addition to examining the related question of who should make this judgement.", "Note that our intention is in no way to victimise the authors of the paper in question, but rather to use it as a test case to objectively ground an ethical assessment.", "The authors did highlight potential ethical concerns of its application, but missed the point that there are data ethics issue in the first place.", "Note also that, given the topic of the paper, we will focus somewhat on NLP applications in the legal domain, but the majority of the find-ings/recommendations generalise and will be of equal relevance to other domains.", "The first dimension to consider is data ethics: the data source and procedure used to construct a dataset have an immediate impact on the generalis-abilty/interpretation of results based on that dataset, as well as the ability for real-world harm to happen (intentionally or otherwise) through its use.", "A number of proposals have recently been made regarding documentation procedures when releasing datasets to assist here, in particular data statements (Bender and Friedman, 2018) and datasheets (Gebru et al., 2018).", "Amalgamating the two, relevant questions to the specific case are the following, each of which we discuss briefly.", "1 Which texts were included and what were the goals in selecting texts?", "The dataset was constructed from published records of the Supreme People's Court of China, following work by Xiao 1 Note that many other important questions are covered in the respective frameworks, and our presentation here is biased towards the specific paper of interest.", "et 
"The reason for constructing this particular dataset is to improve the accuracy of prison term prediction by decomposing it into a set of charge-based prison term predictions.", "Why was the dataset created?", "To enhance the structure and granularity of earlier datasets, and achieve empirical gains in predictive accuracy.", "Were the people represented in the dataset informed about the data collection?", "There is no mention of interaction with either the defendants or court officials about the use of the data.", "The documents are in the public domain.", "Could this dataset expose people to harm or legal action?", "Yes, the defendants are identifiable and the dataset directly pertains to legal action.", "Does it unfairly advantage or disadvantage a particular social group?", "The dataset does not include explicit metadata regarding the demographics of the defendants, and the data has first names removed, but not surnames or other named entities.", "It is easy to imagine instances where the surname and location references could make the individual identifiable or could expose demographic information, especially for ethnic minorities or areas of lower population density.", "Were the people represented in the dataset provided with privacy guarantees?", "No, no steps were taken other than removing their first names.", "Does the dataset contain information that might be considered sensitive or confidential?", "Yes, given that the labels represent prison time served by real-world individuals, and having personally identifying information entombed in a dataset that potentially has longevity (cf. the notoriety of Pierre Vinken from the Penn Treebank) could potentially have direct or indirect consequences for those individuals and their families or group.", "Does the dataset contain information that might be considered inappropriate or offensive?", "Many of the cases are criminal in nature, so there are potentially personal and confronting details in the court cases, including information about the victims.", "How was the data annotated, and what are the demographic characteristics of the annotators and annotation guideline developers?", "The annotation of the data is via court officials in terms of their legal findings, rather than via third-party annotations.", "No details are provided of the presiding court officials and their demographics, despite there being ample evidence of demographic bias in legal decision-making in other countries (Schanzenbach, 2005; Rachlinski et al., 2008; Yourstone et al., 2008).", "Will the dataset be updated?", "We highlight this particular question because cases can be overturned or appealed and new evidence can come to light.", "In this particular case, the Supreme People's Court in China has no legal avenue for appeal, but it is still presumably possible for a case to be reopened on the basis of fresh evidence and a different finding made, or overturned completely if a miscarriage of justice is found to have occurred.", "On the one hand, this doesn't immediately affect the labels in the dataset, as the sentencing is based on the facts that were available at the time, but it could lead to situations where a legal case which was ultimately annulled is inappropriately preserved in the dataset in its original form, implying guilt of the individuals which was later disproven.", "Of these, which are relevant to whether the paper is ethically sound, or could have made the paper 
less ethically questionable?", "Carrying out the research with the involvement of relevant legal authorities would certainly have helped, in terms of incorporating domain interpretation of the data and getting direct input as to the ultimate use of any model trained on the data (noting that the paper does return to suggest that the model be used in the Review Phase to help other judges post-check judgements of presiding judges).", "The lack of any mention of ethics approval is certainly troubling given the sensitivity of the data/task.", "The paper does briefly mention the possibility of demographic bias, without making any attempt to quantify or ameliorate any such bias.", "Privacy is an interesting question here, as we return to discuss under data misuse in Section 2.2, in addition to discussing the legality of using court documents for NLP research.", "Having said this, we acknowledge that similar datasets have been constructed and used by others (especially Xiao et al. (2018)), including in major NLP conferences (e.g., Zhong et al. (2018), Hu et al. (2018)).", "However, this should never be taken as a waiver for data ethics considerations.", "Also notable here are court proceeding datasets such as that of Aletras et al. (2016), where the use case is the prediction of the violation of human rights (focusing on torture/degrading treatment, the right to a fair trial, and respect for privacy), which is more clearly aligned with social good (although there is more dataset documentation that could have been provided in that paper, along the lines described above).", "The conversation of what social good is, though, remains an open one (Green, 2019).", "In sum, there is a level of ethical naivety and insensitivity in the paper, with the lack of ethics approval, end-user engagement, and consideration of the privacy of the defendants all being of immediate concern, alongside longer-term concerns including whether NLP should be used to such ends at all.", "Dual use describes the situation where a system developed for one purpose can be used for another.", "An interesting case of dual use is OpenAI's GPT-2.", "In February 2019, OpenAI published a technical report describing the development of GPT-2, a very large language model that is trained on web data (Radford et al., 2019).", "From a science perspective, it demonstrates that large unsupervised language models can be applied to a range of tasks, suggesting that these models have acquired some general knowledge about language.", "But another important feature of GPT-2 is its generation capability: it can be used to generate news articles or stories.", "Due to dual-use concerns, e.g. 
fine-tuning GPT-2 to generate fake propaganda (footnote 2), OpenAI released only the small version of the pre-trained models.", "It was, however, not received well by the scientific community (footnote 3), with some attributing this decision to an attempt to create hype around their research (footnote 4).", "The backlash ultimately made OpenAI reconsider their approach and release the models in stages over 9 months (footnote 5).", "During these 9 months, OpenAI engaged with other organisations to study the social implications of their models (Solaiman et al., 2019), and found minimal evidence of misuse, lending confidence to the publication of the larger models.", "(Footnote 2: https://www.middlebury.edu/institute/academics/centers-initiatives/ctec/ctec-publications-0/industrialization-terrorist-propaganda)", "(Footnote 3: https://thegradient.pub/openai-please-open-source-your-language-model/)", "(Footnote 4: https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8)", "(Footnote 5: https://openai.com/blog/gpt-2-6-month-follow-up/#fn1)", "In November 2019, OpenAI released their final and largest model (footnote 6).", "OpenAI's effort to investigate the implications of GPT-2 during the staged release is commendable, but this effort is voluntary, and not every organisation or institution will have the resources to do the same.", "It raises questions about self-regulation, and whether certain types of research should be pursued.", "A data statement is unlikely to be helpful here, and increasingly we are seeing more of these cases, e.g. GROVER (for generating fake news articles; Zellers et al. (2019)) and CTRL (for controllable text generation; Keskar et al. (2019)).", "All of that said, for the case under consideration it is not primarily a question of dual use or misuse, but rather of its primary use: if the model were used to inform the Supreme Court, rather than automate decision-making, what weight should judges give the system?", "And what biases has the model learned which could lead to inequities in sentencing?", "It is arguable that decisions regarding human freedom, and even potentially life and death, require greater consideration than that afforded by an algorithm; that is, that algorithms should not be used for them at all.", "Although no other governments appear to be automating legal decision-making per se, many governments are embracing algorithms to analyse/inform judicial decisions.", "In countries such as the United States and Australia, there has been analysis of legal decisions to understand factors such as the race/ethnicity of the defendant or the time of the day when the judge makes a decision, and how this impacts on decision-making (Zatz and Hagan, 1985; Stevenson and Friedman, 1994; Snowball and Weatherburn, 2007; Kang et al., 2011).", "The French government has, however, under Article 33 of the Justice Reform Act, made it illegal to algorithmically analyse any decision made by a judge, with what some argue is the harshest possible penalty for misconduct involving technology: a five-year sentence (footnote 7).", "Two decades ago, Helen Nissenbaum sounded the alarm about automating accountability (Nissenbaum, 1996).", "She expressed concerns that can be summarised in four categories.", "First, computerised systems are built by many hands and so lines of responsibility are not clear.", "Secondly, bugs are inevitable.", "(Footnote 6: https://openai.com/blog/gpt-2-1-5b-release/)", "Third, humans like to blame the computer, which is problematic because of her fourth observation: that software developers do not like to be held responsible for the tools that they create.",
"Nissenbaum is not the only author who questions whether there should be limitations on certain uses of computer science (Leins, 2019).", "'We have consultations, which of the inventions and experiences which we have discovered shall be published, and which not; and take all an oath of secrecy for the concealing of those which we think fit to keep secret; though some of those we do reveal sometime to the State, and some not.'", "The work of Ron Fouchier, a Dutch virologist, is informative in considering publication practices in the NLP community.", "Fouchier discovered a way to make the bird flu H5N1 transmissible between ferrets, and therefore potentially very harmful to humans.", "Fouchier's research extended the potential scope of the virus beyond its usual avian transmission routes, and extended the reach of his research beyond his laboratory when he submitted his paper to a US journal.", "The Dutch government objected to this research being made public, and required Fouchier to apply for an export licence (later granted).", "The situation raised a lot of concerns and a lot of discussion at the time (Enserink, 2013), as well as a series of national policies in response (footnote 8).", "That said, Fouchier's work was not the first or last to be censored.", "Self-censorship was mentioned as early as the 17th century by British philosopher Bacon, often credited with illuminating the scientific method (Grajzl and Murrell, 2019).", "Most recently, similar questions, not about how research should be done, but whether it should be done at all, have arisen in the recent Chinese CRISPR-Cas9 case, where HIV immunity in twins was allegedly increased without prior ethical approval or oversight (footnote 9).", "As the capabilities of language models and computing as a whole increase, so do the potential implications for social disruption.", "(Footnote 8: https://www.jst.go.jp/crds/en/publications/CRDS-FY2012-SP-02.html)", "Algorithms are not likely to be transmitted virally, nor to be fatal, nor are they governed by export controls.", "Nonetheless, advances in computer science may present vulnerabilities of different kinds, risks of dual use, but also of expediting processes and embedding values that are not reflective of society more broadly.", "Questions associated with who decides what should be published are not only legal, as illustrated in Fouchier's work, but also fundamentally philosophical.", "How should values be considered and reflected within a community?", "What methodologies should be used to decide what is acceptable and what is not?", "Who assesses the risk of dual use, misuse or potential weaponisation?", "And who decides that potential scientific advances are so socially or morally repugnant that they cannot be permitted?", "How do we balance competing interests in light of complex systems (Foot, 1967)?", "Much like nuclear, chemical and biological scientists in times past, computer scientists are increasingly being questioned about the potential applications, and long-term impact, of their work, and should at the very least be attuned to the issues and trained to perform a basic ethical self-assessment.", "Given all of the above, what should have been the course of action for the paper in question?", "It is important to note that the only mentions of research integrity/ethics in the Call for Papers relate to author anonymisation, dual submissions, originality, and the veracity of the research, meaning that there was no relevant mechanism for reviewers or PC Chairs to draw on in 
ruling on the ethics of this or any other submission.", "A recent innovation in this direction has been the adoption of the ACM Code of Ethics by the Association for Computational Linguistics, and the explicit requirement in the EMNLP 2020 Call for Papers (footnote 10) for conformance with the code: 'Where a paper may raise ethical issues, we ask that you include in the paper an explicit discussion of these issues, which will be taken into account in the review process.", "We reserve the right to reject papers on ethical grounds, where the authors are judged to have operated counter to the code of ethics, or have inadequately addressed legitimate ethical concerns with their work.' (Footnote 10: https://2020.emnlp.org/call-for-papers) This is an important first step, in providing a structure for the Program Committee to assess a paper for ethical compliance, and potentially reject it in cases of significant concerns.", "Having said this, the ACM Code of Ethics is (deliberately) abstract in its terms, with relevant principles which would guide an assessment of the paper in question including: 1.2 Avoid harm; 1.4 Be fair and take action not to discriminate; 1.6 Respect privacy; 2.6 Perform work only in areas of competence; and 3.1 Ensure that the public good is the central concern during all professional computing work.", "In each of these cases, the introspection present in a clearly-articulated data statement would help ameliorate potential concerns.", "What could an ethics assessment for ACL look like?", "Would an ethics statement for ACL be enough to address all concerns?", "As argued above, it is not clear that ACL should attempt to position itself as ethical gatekeeper, or that it has the resources to do so.", "And even if ACL could do so, and wanted to do so, the efficacy of ethics in answering complex political and societal challenges needs to be questioned (Mittelstadt, 2019).", "There certainly seems to be an argument for a requirement that papers describing new datasets be accompanied by a data statement or datasheet of some form (e.g. as part of the supplementary material, to avoid concerns over this using up valuable space in the body of the paper).", "This still leaves the question of what to do with pre-existing datasets: should they all be given a free pass, or should there be a requirement for a data statement to be retrospectively completed?", "The GDPR provides some protection for the use of data, but its scope and geographic reach are limited.", "Further, the term 'anonymised' is often a misnomer, as even data that is classified by governments and other actors as anonymous can often easily be reidentified (Culnane and Leins, 2020).", "What about code and model releases?", "Should there be a requirement that code/model releases also be subject to scrutiny for possible misuse, e.g. via a central database/registry?", "As noted above, there are certainly cases where even if there are no potential issues with the dataset, the resulting model can potentially be used for harm (e.g. 
GPT-2).", "One could consider this as part of an extension of data statements, requiring that all code/model releases associated with ACL papers be accompanied by a structured risk assessment of some description, and, if risk is found to exist, that some management plan be put in place.", "Looking to other scientific disciplines that have faced similar issues in the past may provide some guidance for our future.", "Finally, while we have used one particular paper as a case study throughout this paper, our intent was in no way to name and shame the authors, but rather to use it as a test case to explore different ethical dimensions of research publications, and to attempt to foster much broader debate on this critical issue for NLP research.", "This research was supported in part by the Australian Research Council (DP200102519 and IC170100030).", "The authors would like to thank Mark Dras, Sarvnaz Karimi, and Karin Verspoor for patiently engaging in rambling discussions which led to this hopefully less rambling paper, and the anonymous reviewers for their suggestions and insights." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "Data augmentation is an effective way to improve the performance of many neural text generation models.", "However, current data augmentation methods need to define or choose proper data mapping functions that map the original samples into the augmented samples.", "In this work, we derive an objective to formulate the problem of data augmentation on text generation tasks without any use of augmented data constructed by specific mapping functions.", "Our proposed objective can be efficiently optimized and applied to popular loss functions on text generation tasks with a convergence rate guarantee.", "Experiments on five datasets of two text generation tasks show that our approach can approximate or even surpass popular data augmentation methods.", "End-to-end neural models are generally trained in a data-driven paradigm.", "Many researchers have proposed powerful network structures to fit training data well.", "It has also become ubiquitous to increase the training data amount to improve model performance.", "Data augmentation is an effective technique to create additional samples in both vision and text classification tasks (Perez and Wang, 2017; Shorten and Khoshgoftaar, 2019; Wei and Zou, 2019), which perturb samples without changing their labels.", "For text generation tasks, there can be more types of data perturbation to construct augmented samples, including corrupting the input text (Xie et al., 2017), the output text (Norouzi et al., 2016; Kurata et al., 2016), or both (Zhang et al., 2020).", "As such, classification tasks can be regarded as special cases of generation tasks in terms of incorporating data augmentation techniques, and this work mainly discusses text generation tasks.", "The focus of previous work on text data augmentation has been to design proper augmentation techniques to create augmented samples.", "Some augmentation methods have been proposed for general text tasks.", "For example, different general replacement operations have been explored to edit words in a text sample, ranging from simple look-up tables (Zhang et al., 2015) to pretrained masked language models (Kobayashi, 2018; Wu et al., 2019).", "Sennrich et al. 
(2016) propose to augment text sequences by back-translation.", "For some generation tasks such as dialogue generation, general augmentation methods may not yield stable improvements, and it requires carefully incorporating task properties to design useful augmented samples (Zhang et al., 2020).", "All these methods need to explicitly construct augmented samples, and the data mapping functions from the original samples to the augmented samples are mostly defined a priori.", "This motivates us to raise a question: whether we can skip the step of defining or choosing proper augmented data mapping functions and still accomplish effective data augmentation.", "To answer this question, we aim to formulate the problem of data augmentation for general text generation models without any use of augmented data mapping functions.", "We start from a conventional data augmentation objective, which is a weighted combination of loss functions associated with the original and augmented samples.", "We show that the loss parts of the augmented samples can be re-parameterized by variables not dependent on the augmented data mapping functions, if a simple Euclidean loss function between the sentence representations is applied.", "Based on this observation, we propose to directly define a distribution on the re-parameterized variables.", "Then we optimize the expectation of the augmented loss parts over this distribution to approximate the original augmented loss parts computed with various augmented data mapping functions.", "We make different assumptions on the variable distributions and find that our proposed objective can be computed and optimized efficiently by simple gradient weighting.", "If stochastic gradient descent (SGD) is used, our objective is guaranteed a convergence rate of $O(1/\sqrt{T})$.", "Our objective can be coupled with popular loss functions on text generation tasks, including the word mover's distance (Kusner et al., 2015) and the cross-entropy loss.", "Our approach, which utilizes the proposed objective and optimizes it by SGD, has two advantages.", "First, it provides a unified formulation of various data perturbation types in general text generation models, which sheds light on understanding the working mechanism of data augmentation.", "Second, the optimization of our approach is simple and efficient.", "Without introducing any new samples during training, we avoid additional computation on augmented samples, whose total size is often much larger than the original data size.", "Hence, our approach maintains high training efficiency.", "Extensive experiments are conducted to validate the effectiveness of our approach.", "We mainly use an LSTM-based network structure (Bahdanau et al., 2015; Luong et al., 2015b) and perform experiments on two text generation tasks: neural machine translation and single-turn conversational response generation.", "Results on five datasets demonstrate that the proposed approach can approximate or even surpass popular data augmentation methods such as masked language models (Devlin et al., 2019) and back-translation (Sennrich et al., 2016).", "Data augmentation has shown promising improvements on neural models for different text generation tasks such as language modeling (Xie et al., 2017), machine translation (Sennrich et al., 2016) and dialogue generation (Niu and Bansal, 2019; Cai et al., 2020).", "Existing text data augmentation methods can be mainly categorized into word-level augmentation and sentence-level augmentation.", "Word-level augmentation 
methods perturb words within the original sentence.", "Common operations include word insertion and deletion (Wei and Zou, 2019), synonym replacement (Zhang et al., 2015), and embedding mix-up (Guo et al., 2019).", "Masked language models can be used by masking some percentage of tokens at random and predicting the masked words based on their context (Wu et al., 2019; Cai et al., 2020).", "Sentence-level data augmentation is not limited to editing only a few words in the original sentence; it can generate a complete sentence.", "For example, back-translation was originally proposed to translate monolingual target-language data into the source language to augment training pairs in machine translation (Sennrich et al., 2016).", "It was later extended to paraphrase sentences in any text dataset, in which two translation models are applied: one from the source language to the target language and another from the target to the source.", "GAN-based and VAE-based models have also achieved impressive results in creating entire sentences to augment the training data (Hu et al., 2017; Cheng et al., 2019).", "For dialogue generation, retrieved sentences can be a good supplement to the original corpus (Zhang et al., 2020).", "Both word-level and sentence-level augmentation methods need to define their augmented data mapping functions (i.e. operations to edit words or models to generate sentences) a priori.", "Some works train policies to sample a set of word-level operations (Niu and Bansal, 2019), but the operation candidates are still pre-defined.", "A few works learn to construct augmented samples and optimize the network jointly (Hu et al., 2019; Cai et al., 2020).", "Different from previous work, our goal is not to propose or learn novel augmented data mapping functions.", "Instead, we investigate whether the effectiveness of data augmentation can be achieved without using any specific augmented data mapping function.", "Besides data augmentation, data weighting is another useful way to improve model learning.", "It assigns a weight to each sample to adapt its importance during training.", "The sample weights are often carefully defined (Freund and Schapire, 1997; Bengio et al., 2009) or learnt by another network (Jiang et al., 2018; Shu et al., 2019).", "Data augmentation is often combined with data weighting to weight the original and augmented samples.", "We are given original samples $D = \{(x, y)\}$ with $x$, $y$ both text sequences.", "Without loss of generality, a deep generation model learns a mapping function $f_{x,y}$, parameterized by a deep neural network, that outputs $y$ given $x$.", "As mentioned in the introduction, text generation tasks mainly have three types of augmented data: perturbed input text $\tilde{x}$ produced by one (or several) augmented data mapping functions $g_x$; perturbed output text $\tilde{y}$ produced by one (or several) augmented data mapping functions $g_y$; or perturbed paired text $(\tilde{x}, \tilde{y})$ produced by corresponding augmented data mapping functions.", "Proper augmented data mapping functions are generally supposed to generate perturbed sequences or sequence pairs that are close to the original ones.", "They are assumed to be given a priori in optimizing the generation model for now.", "Let $\ell(f_{x,y}(x), y)$ denote the loss function to be minimized for each sample.", "We first use augmented data in the input domain as an example to present the problem formulation and introduce our approach, then later discuss other types of augmented data.", "Data 
augmentation methods generally apply an augmented loss per sample with its augmented samples: $\ell_{aug} = \ell(f_{x,y}(x), y) + \sum_{\tilde{x}:\, g_x \in \mathcal{F}} w_{\tilde{x}}\, \ell(f_{x,y}(\tilde{x}), y)$ (1), where $w_{\tilde{x}}$ is the importance weight associated with each augmented sample, $g_x$ is the augmented data mapping function that constructs $\tilde{x}$, and $\mathcal{F}$ is the function space containing all feasible augmented data mapping functions.", "In this section, we aim to formulate the problem of data augmentation for general text generation models without any use of augmented data mapping functions.", "We introduce our approach by assuming that the loss function $\ell$ is the simple Euclidean distance,", "i.e. $\ell(u, v) = \|u - v\|_2$ (2),", "where $u$ and $v$ are the sentence representations of two sentences,", "i.e. the target sequence and the predicted sequence.", "Other conventional loss functions in text generation will be discussed in Section 5. We first rewrite each loss part of an augmented data point in (1) in a polar coordinate system in Sec 4.1.", "In this way, we can regard the total augmented loss part with multiple augmented data mapping functions as sampling different points in the polar coordinate system.", "This suggests that we can skip defining any augmented data mapping function and instead only design a joint distribution of the perturbation radius and perturbation angle in the polar coordinate system.", "In Sec 4.2, we show two probability distribution instantiations, and find that our approach can be optimized efficiently by simply re-weighting the gradients.", "In Sec 4.3, we discuss the extension of our approach to other augmented data mapping function types.", "By treating $f_{x,y}(x)$, $f_{x,y}(\tilde{x})$ and $y$ as three vertices in the Euclidean space, we can form a triangle (illustrated in Fig. 1a) with the three vertices and the losses between them as edges.", "For a given augmented data mapping function $g_x$ and a sample $(x, y)$, we can rewrite $\ell(f_{x,y}(\tilde{x}), y)$ using the polar coordinate system with $f_{x,y}(x)$ as the pole and $(f_{x,y}(x), y)$ as the polar axis: $\ell^2(f_{x,y}(\tilde{x}), y) = \ell^2(f_{x,y}(x), y) + \ell^2(f_{x,y}(x), f_{x,y}(\tilde{x})) - 2\,\ell(f_{x,y}(x), f_{x,y}(\tilde{x}))\,\ell(f_{x,y}(x), y)\cos\theta$ (3), where $\theta$ is the radian of $f_{x,y}(\tilde{x})$.", "We can observe that the rewritten augmented sample loss part depends on the original sample loss $\ell(f_{x,y}(x), y)$ as well as the radius $r$ and radian $\theta$ of $f_{x,y}(\tilde{x})$.", "Here $r$ is the data perturbation distance $\ell(f_{x,y}(x), f_{x,y}(\tilde{x}))$.", "Therefore, we can map each augmented data mapping function $g_x \in \mathcal{F}$ into $(r, \theta) \sim P$, where $P$ is a joint distribution of $(r, \theta)$ [Footnote 1].", "A weighted summation of the augmented loss parts from different augmented data mapping functions can be seen as an empirical estimate of the expectation of the rewritten loss obtained by sampling different $(r, \theta)$'s from their joint distribution $P$, though the corresponding ground-truth $P$ is not observed.", "This suggests how to avoid specifically designing or choosing several augmented data mapping functions and their weights as used in (1).", "We can directly design the distribution $P$ of $(r, \theta)$ and optimize the expectation of the rewritten loss (i.e. 
the right hand side in (3)) under this distribution.", "Hence, we propose to optimize the following objective to mimic the effect of data augmentation: $\ell_{our} = \ell(f_{x,y}(x), y) + \mathbb{E}_{(r,\theta)\sim P}[\psi(\ell(f_{x,y}(x), y); r, \theta)]$ (4). [Footnote 1: It is worth pointing out that even if the three vertices (i.e., $f_{x,y}(x)$, $y$, and $f_{x,y}(\tilde{x})$) lie in high-dimensional spaces, we can always use the distribution of $(r, \theta)$ to cover all possible triangles formed by them.", "Moreover, our derivation does not lose generality in high-dimensional spaces, since we do not make use of the vertices but only of the edges of the triangles.]", "Here $\psi(e; r, \theta)$ is a function of an edge $e$ in the loss function space given $(r, \theta)$:", "$\psi(e; r, \theta) = \sqrt{e^2 + r^2 - 2er\cos\theta}$ (5).", "[4.2 Optimization] We design specific distributions of $(r, \theta)$ used in the proposed objective (4) and describe their optimization.", "We assume the two variables are independent: $p(r, \theta) = p(r)\,p(\theta)$.", "In the following corollary, we first show the result obtained by assuming that both $r$ and $\theta$ follow uniform distributions.", "Recall that proper data mapping functions augment samples close to the original one.", "An ideal case is thus to perturb samples with their output representations uniformly surrounding that of the original sample.", "The uniform distribution with a small perturbation radius upper bound $R$ can simulate this ideal case.", "Corollary 1. We are given the perturbation distance upper bound $R$ and assume that $r \sim U(0, R)$, $\theta \sim U(0, \pi)$.", "$\mathbb{E}_{(r,\theta)\sim P}[\psi(\ell(f_{x,y}(x), y))]$ is upper bounded by $\frac{1}{2}\ell(f_{x,y}(x), y) + C_1\,\ell^2(f_{x,y}(x), y) + C_2(R)$, where $C_1$ is a constant and $C_2(R)$ is another constant dependent on $R$.", "Proof is in the Appendix.", "With the above result, we can optimize the objective in (4) by minimizing the derived upper bound.", "We calculate its gradient: $\nabla_\Theta \ell_{our} = \frac{3}{2}\nabla_\Theta \ell(\Theta) + 2C_1\,\ell(\Theta)\,\nabla_\Theta \ell(\Theta)$ (8), where $\Theta$ contains all neural model parameters (a runnable sketch of this loss-weighted update follows this record's labels).", "It can be observed that the major difference of the above gradient compared with the original one of the objective in (1) lies in the second part of (8), which weights the original gradient by the loss value.", "This means that the performance improvement brought by data augmentation under our formulation can be equivalently accomplished by specialized data weighting.", "Indeed, many data weighting methods (Lin et al., 2017) favor hard examples by reducing the gradient contribution from easy examples and increasing the importance of hard examples (examples with large loss values in our approach), which significantly boosts performance.", "This in turn shows that the simple uniform distributions assumed here should be reasonable and effective.", "Instead of a uniform distribution, we can assume a uniform distribution on $\theta$ but an exponential distribution on $r$, such that a small perturbation distance is preferred with higher probability.", "Corollary 2. 
We are given the expected value of the perturbation distance as $R$ and assume that $r \sim Exp(\frac{1}{R})$, $\theta \sim U(0, \pi)$.", "(9) $\mathbb{E}_{(r,\theta)\sim P}[\psi(\ell(f_{x,y}(x), y))]$ is upper bounded by $C_1(R)\,\ell(f_{x,y}(x), y) + \frac{C_1(R)}{2}\,\ell^2(f_{x,y}(x), y) + C_2(R)$, where $C_1(R)$ and $C_2(R)$ are constants dependent on $R$.", "Proof is in the Appendix.", "The above corollary shows that even if different distributions are assumed, we can still use gradient weighting to optimize the proposed objective, where $C_1(R)$ can be set as a hyper-parameter.", "If the loss is Lipschitz smooth, which is the case for the Euclidean distance, we can prove the convergence of our approach with the convergence rate $O(1/\sqrt{T})$, if SGD is used.", "The proof is provided in the Appendix, which is extended from results in Reddi et al. (2016).", "Theorem 1. Suppose $\ell_{our}$ is in the class of finite-sum Lipschitz smooth functions, has $\sigma$-bounded gradients, and the weight of the loss gradient is clipped to be bounded by $[w_1, w_2]$.", "Let the learning rate of SGD be $\eta_t = c/\sqrt{T}$ with $c = \sqrt{2(\ell_{our}(\Theta^0) - \ell_{our}(\Theta^*))/(L\sigma^2 w_1 w_2)}$, where $L$ is the Lipschitz constant and $\Theta^*$ is an optimal solution.", "Then the iterates of SGD of our approach with $\ell_{our}$ satisfy: $\min_{0 \le t \le T-1} \mathbb{E}[\|\nabla \ell_{our}(\Theta_t)\|^2] \le \sigma\sqrt{\frac{2(\ell_{our}(\Theta^0) - \ell_{our}(\Theta^*))\,L\,w_1}{T\,w_2}}$.", "We now discuss how our approach can be applied to other types of augmented data.", "For augmented data on the output domain, the objective in (1) becomes: $\ell_{aug} = \ell(f_{x,y}(x), y) + \sum_{\tilde{y}:\, g_y \in \mathcal{F}} w_{\tilde{y}}\, \ell(f_{x,y}(x), \tilde{y})$.", "(11)", "The augmented loss part can be rewritten using the polar coordinate system with $y$ as the pole and $(y, f_{x,y}(x))$ as the polar axis, illustrated in Fig. 1b: $\ell^2(f_{x,y}(x), \tilde{y}) = \ell^2(y, f_{x,y}(x)) + \ell^2(y, \tilde{y}) - 2\,\ell(y, f_{x,y}(x))\,\ell(y, \tilde{y})\cos\theta$.", "(12)", "For data perturbation on both the input and output space, we have: $\ell_{aug} = \ell(f_{x,y}(x), y) + \sum_{(\tilde{x},\tilde{y}):\, (g_x, g_y) \in \mathcal{F}} w_{\tilde{x},\tilde{y}}\, \ell(f_{x,y}(\tilde{x}), \tilde{y})$.", "(13)", "Illustrated in Fig. 
1c, we first make use of the triangle inequality: $\ell(f_{x,y}(\tilde{x}), \tilde{y}) \le \frac{1}{2}\big(\ell(f_{x,y}(\tilde{x}), y) + \ell(y, \tilde{y})\big) + \frac{1}{2}\big(\ell(f_{x,y}(\tilde{x}), f_{x,y}(x)) + \ell(f_{x,y}(x), \tilde{y})\big)$.", "Similarly, the augmented data mapping function $g_y$ can be re-parameterized into a function of the radius $r = \ell(y, \tilde{y})$ (still the perturbation distance) and the radian $\theta$ of $\tilde{y}$.", "The objective turns out to be the same as (4).", "(14)", "Using (3) and (12), the objective is rewritten as: $\ell_{our} = \ell(f_{x,y}(x), y) + \mathbb{E}_{(r,\theta)\sim P}[r + \psi(\ell(f_{x,y}(x), y))]$.", "(15)", "Note that $\mathbb{E}_{(r,\theta)\sim P}[r]$ is a scalar which does not depend on any learning parameter.", "Thus optimizing the above objective is equivalent to optimizing (4).", "From the above analysis, we can see that our proposed objective in (4) can be applied to handle all three kinds of augmented data mapping functions in text generation models.", "In theory, our approach can be applied to any Lipschitz smooth loss function for which equation (3) holds.", "In this section, we show another valid loss function for our approach, the word mover's distance (WMD) (Kusner et al., 2015; Zhao et al., 2019), which has previously been used in various text generation tasks.", "Next, we discuss the cross-entropy loss, for which the proposed objective is not an upper bound of the data augmentation objective.", "However, our approach still converges at the same rate, and experimental results in the next section validate the effectiveness of our approach with the cross-entropy loss.", "WMD, also named the optimal transport distance (Chen et al., 2018a), leverages optimal transport to find an optimal matching of similar words between two sequences, providing a way to measure their semantic similarity: $\ell_{WMD}(u, v) = \min_{T \ge 0} \sum_{i,j} T_{i,j}\, d_{i,j}$ subject to $\sum_j T_{i,j} = p_{u,i}$ and $\sum_i T_{i,j} = p_{v,j}$ (16),", "where $p_{u,i}$/$p_{v,j}$ is the probability distribution of the sentence,", "i.e.", "$\sum_i p_{u,i} = 1$ and $\sum_j p_{v,j} = 1$.", "$d_{i,j}$ is the cost for mis-predicting $u_i$ as $v_j$, where the squared Euclidean distance $d_{i,j} = \|u_i - v_j\|^2$ is used and $u_i$/$v_j$ is the word embedding vector.", "Note that the Euclidean distance in (2) is a special case of WMD obtained by replacing the 1-grams used in WMD with $n$-grams for $n$ larger than the sentence length.", "WMD is the squared $L_2$ Wasserstein distance.", "We take its square root,", "i.e.", "$\ell_{WD}$", "$= \sqrt{\ell_{WMD}}$, which admits an upper bound of the form of the right-hand side in (3).", "Also, $\ell_{WD}$ is Lipschitz smooth.", "Theorem 2. For the $L_2$ Wasserstein distance $W_2(\cdot, \cdot)$ on the Wasserstein space $W_2(\mathbb{R}^n)$ and any $x, y, z \in W_2(\mathbb{R}^n)$, we have $W_2(y, z)^2 \le W_2(x, y)^2 + W_2(z, x)^2 - 2\,W_2(x, y)\,W_2(z, x)\cos\theta$.", "Here $\theta$ is the angle between $\overline{xy}$ and $\overline{zx}$, where $\overline{xy}$ is the geodesic (shortest path) connecting $x, y$ in $W_2(\mathbb{R}^n)$, and $\overline{zx}$ is the geodesic connecting $z, x$ in $W_2(\mathbb{R}^n)$.", "Theorem 3. $u_\Theta$ and $v$ are given as fixed.", "Assume that $u_\Theta$ is Lipschitz continuous with respect to the parameters $\Theta$.", "Then $\ell_{WD}(u_\Theta, v)$ is Lipschitz continuous with respect to the parameters $\Theta$.", "Roughly speaking, according to Sturm et al. 
(2006, Proposition 2.10), the sectional curvature of the Wasserstein space $W_2(\mathbb{R}^n)$ is non-negative.", "Hence, every geodesic triangle in $W_2(\mathbb{R}^n)$ is fatter than the one with the same side lengths in $\mathbb{R}^2$.", "As a consequence, a cosine-law-like inequality is satisfied on $W_2(\mathbb{R}^n)$,", "i.e., Theorem 2 holds.", "A formal proof of the above two theorems is provided in the Appendix.", "Thus, all our derivations in Section", "4 hold.", "The exact computation of $\ell_{WD}$ is expensive during training.", "In our experiments, we resort to the inexact proximal point method for optimal transport to compute it (Chen et al., 2018a) (a simplified entropic-regularization sketch follows this record's labels).", "Although WMD is effective for various sequence generation tasks, the most conventional loss function adopted in existing generation models is the cross-entropy loss.", "It measures the word difference at each word $y_i$ of the output sequence $y$: $\ell_{CE}(y_i, p_i) = -y_i^T \log(p_i)$ (18) and $\ell_{CE}(y, p) = \sum_{i=1}^{|y|} \ell_{CE}(y_i, p_i)$ (19), where $y_i$ is the target one-hot vector with the correct dimension as 1 and 0 elsewhere, and $p_i$ is the predicted probability output by a softmax layer.", "We adopt maximum likelihood estimation as the training paradigm, assuming ground truth for preceding words when predicting $p_i$.", "The cross-entropy loss is also Lipschitz smooth, and thus we can guarantee its convergence from Theorem 1. Unfortunately, it does not satisfy the equation in (3), and thus minimizing our objective in (4) does not necessarily approximate the data augmentation objective in (1).", "In our experiments, we also try the cross-entropy loss, and results show that our objective is effective in improving model performance compared with the base model.", "This is not surprising, since our approach is optimized by gradient weighting and thus is at least a useful data weighting method.", "The proposed approach provides a new paradigm and understanding of data augmentation.", "To evaluate whether our approach can mimic the effect of data augmentation, we conduct experiments on two text generation tasks: neural machine translation and conversational response generation.", "We compare our approach with the two most popular data augmentation methods (one token-level and one sentence-level) that can be applied to various text generation tasks: Masked Language Model (MLM): We use a pretrained BERT (Devlin et al., 2019; Wolf et al., 2020) and randomly choose 15% of the words in each sentence.", "BERT takes in the masked sentence and predicts new words at the masked positions.", "We augment one sample from each original training sample.", "Thus the data size increases to twice the original.", "Note that we only augment the English side of translation datasets.", "Back-translation (BT): For neural machine translation, we employ a fixed target-to-source translation model trained on the original dataset.", "For conversational response generation, we perturb both the input and output text of the original sample pair using two pretrained translation models: an English-to-German model and its backward counterpart, both obtained using the WMT14 corpus with 4.5M sentence pairs.[2]", "We again augment one sample from each original training sample.", "We set the same weight $w$ for all augmented loss parts used in $\ell_{aug}$ as a hyper-parameter, and tune it on the development set of each dataset.", "Since the Euclidean distance is a special case of WMD as discussed in Sec 5.1, [Footnote 2: Datasets used in this work can be found 
at https://nlp.stanford.edu/projects/nmt/ and http://coai.cs.tsinghua.edu.cn/hml/dataset/#commonsense] [Table 1: BLEU scores on various translation datasets. Model | De→En | En→De | Vi→En | En→Vi | Fr→En | En→Fr | It→En | En→It; CE | 27.98 | 22.85 | 24.22 | 27.09 | 40.49 | 40.86 | 29.70 | 26.85; CE+MLM | 28.70 | 23.23 | 24.40 | 26.20 | 40.03 | 40.79 | 29.35 | 26.90; CE+BT | 29.35 | 24.09 | 25.00 | 27.41 | 40.87 | 42.64 | 30.44 | 27.94; CE+OURS | 29.16 | 23.26 | 24.74 | 27.12 | 40.46 | 40.94 | 29.79 | 27.11; WD | 28.53 | 22.95 | 24.03 | 26.69 | 39.71 | 40.48 | 29.74 | 27.08; WD+MLM | 28.80 | 22.98 | 24.33 | 26.88 | 39.57 | 40.61 | 29.98 | 26.59; WD+BT | 28.56 | 23.10 | 24.51 | 26.74 | 39.77 | 40.60 | 29.56 | 27.33; WD+OURS | 28.91 | 23.42 | 24.26 | 26.73 | 40.46 | 41.07 | 29.86 | 27.15]", "we show results of all methods with the use of the cross-entropy loss and WD.", "We mainly use the Fairseq (Ott et al., 2019) Seq2seq implementation as our model.", "Both encoder and decoder are one-layer LSTMs.", "The word embedding dimension is 256.", "Attention (Luong et al., 2015b) is used with a dropout rate of 0.1.", "All parameters are randomly initialized from the uniform distribution $[-0.1, +0.1]$.", "We use SGD to optimize our models, and the learning rate starts at 1.0.", "After 8 epochs, we halve the learning rate after each epoch.", "All experiments are run on a single NVIDIA V100 GPU.", "Code for our experiments will be available once our work is accepted.", "We use the translation benchmarks IWSLT14 En-De, En-Fr, En-It, and IWSLT15 En-Vi in our experiments.", "The IWSLT14 datasets are pre-processed with the script in Fairseq.[3]", "For the IWSLT14 datasets, we use tst2011 as the validation set and tst2012 as the test set.", "The IWSLT15 dataset is the same as that used in Luong et al. (2015a), and the validation and test sets are tst2012 and tst2013, respectively.", "Table 1 shows the BLEU scores on their test sets.", "For both the cross-entropy loss and the $L_2$ Wasserstein distance, all data augmentation methods (MLM, BT and OURS) perform better than the corresponding base models in most cases.", "The improvement margins differ across the various datasets.", "The reason may be that the datasets are of different scales and the alignment difficulty between different languages can also vary.", "The performance of MLM is not stable in our results, which is largely because masked tokens can [Footnote 3: https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-iwslt14.sh] [Figure 2: BLEU scores by models updated with the same number of samples.]", "be filled in with semantically different words, and thus the semantics of the sentence changes.", "Therefore, the augmented data are not truly aligned, and the translation model's learning can be distracted.", "Note that we also evaluate our method using the Transformer model and obtain similar findings.", "Experimental results of the Transformer model are presented in the appendix.", "Compared to BT and MLM, our approach, which mimics the effect of data augmentation without actually constructing augmented samples, shows encouraging results.", "Note that our proposed objective may not have a theoretical guarantee for the cross-entropy loss.", "Yet, it still manages to improve the base model except for Fr→En, and surpasses MLM on all datasets.", "With the use of the $L_2$ Wasserstein distance, our approach even outperforms BT and achieves the best performance on half of the test sets.", "This validates the benefits of not using any specific data 
augmentation mapping function in data augmentation, as in our proposed objective.", "We provide further analysis of the performance of our approach versus BT.", "In Fig. 2, we compare test BLEU scores obtained by models updated with the same number of samples.", "Since we construct one augmented sample from each original training sample, the total number of samples used in BT is twice that of our approach.", "We can see that our approach achieves comparable performance with BT, while requiring only half of the training data.", "This shows that our approach, without involving additional computation on extra samples, can effectively save computational expense.", "Fig. 3 shows the sensitivity of performance under different hyper-parameters.", "For our approach, we vary across different $C_1(R)$'s; for BT, we vary the sample weight $w$ of the augmented samples.", "We re-scale $C_1(R)$ by $10^4$ and $w$ by $10^1$, in order to visualize them within the same x-axis range.", "Both BT and our approach demonstrate robustness under different settings of their hyper-parameters.", "We use the English single-round Reddit conversation dataset (Zhou et al., 2018).", "Following previous work on data augmentation for dialogue systems (Cai et al., 2020; Zhang et al., 2020), we simulate a low-data regime in which data augmentation is expected to be more effective.", "Thus, we select data pairs with the length of both the query and response less than 20, and randomly split them into 200K pairs for training, 2K for validation and 5K for testing.", "Automatic evaluation for each method is performed on all test data.", "We report Perplexity, BLEU and BLEU-k (k=1,2) to measure response coherence, and Distinct-k (k=1,2) (Li et al., 2016) to measure response diversity.", "We also hire five annotators from a commercial annotation company for manual evaluation on 200 pairs randomly sampled from the test set.", "Results of all methods are shuffled for annotation fairness.", "Each annotator rates each response on a 5-point scale (1: not acceptable; 3: acceptable; 5: excellent; 2 and 4: used in unsure cases) from two perspectives: Fluency and Relevance.", "Results are summarized in Table 2. 
On automatic metrics, BT shows only marginal improvements on a few metrics, and cannot exhibit the strength it shows on translation tasks.", "MLM effectively increases response diversity (Dist-1&2).", "This is due to the nature of conversation data: a conversation pair often remains coherent even if the semantics of the query or response has been slightly changed.", "Thus, MLM can increase data diversity, which is appreciated in training response generation models.", "In terms of human evaluation, BT and MLM can barely improve the base model.", "As for our approach, it achieves the best or second-best results on most metrics for both loss functions, demonstrating more robust performance than BT and MLM.", "This is consistent with our statement in the introduction that we often need to carefully design proper augmented data mapping functions for a target generation task, which requires non-trivial work.", "As such, it is meaningful to avoid the use of specific data augmentation techniques and find a unified formulation of data augmentation for general generation tasks.", "From our results, the proposed objective demonstrates its power to achieve the effect of data augmentation across different generation tasks.", "We have proposed an objective that formulates data augmentation without any use of augmented data mapping functions.", "We show its optimization and provide the corresponding convergence rate.", "Both the $L_2$ Wasserstein distance and the cross-entropy loss are discussed with their use in our objective and their corresponding theoretical guarantees.", "Different from previous data augmentation works that need to add manipulated data into the training process, our gradient-based approach provides a potential way to obtain the performance improvements that may come from augmented data, without incurring the computational expense.", "Experiments on both neural machine translation and conversational response generation validate the effectiveness of our objective compared to existing popular data augmentation methods: masked language models and back-translation.", "We believe this work provides a new understanding of data augmentation.", "Our approach can also be useful for a wide range of tasks, including text classification tasks, which can be seen as special cases of text generation tasks, and cross-modality generation tasks such as image captioning, in which we could skip the step of using various image augmentation techniques.", "We would like to point out that some parts of our approach can be improved in the future, which may lead to better performance and generalization.", "First, the distributions we currently choose in the re-parameterized loss are relatively simple.", "Some points under the current continuous distributions may not correspond to valid text sequences in the original text space, due to the discreteness of natural languages.", "A possible direction is to leverage more informative distributions, such as prior distributions computed from several augmented samples.", "Second, our method is derived under the framework of SGD, and it is possible to extend it to the Adam framework (Kingma and Ba, 2014; Chen et al., 2018b; Reddi et al., 2019).", "We leave a more general version of our work to the future." ]
[ "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "objective", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "objective", "objective", "result", "result", "method", "abstain", "objective", "abstain", "abstain" ]
[ "Effective dialogue involves grounding, the process of establishing mutual knowledge that is essential for communication between people.", "Modern dialogue systems are not explicitly trained to build common ground, and therefore overlook this important aspect of communication.", "Improvisational theater (improv) intrinsically contains a high proportion of dialogue focused on building common ground, and makes use of the yes-and principle, a strong grounding speech act, to establish coherence and an actionable objective reality.", "We collect a corpus of more than 26,000 yes-and turns, transcribing them from improv dialogues and extracting them from larger, but more sparsely populated movie script dialogue corpora, via a bootstrapped classifier.", "We fine-tune chit-chat dialogue systems with our corpus to encourage more grounded, relevant conversation and confirm these findings with human evaluations.", "For humans, dialogue is fundamentally a collaborative, cooperative process by which partners coordinate via turns or acts to jointly construct a common world state (Bohm and Nichol, 2004).", "Without coordination, partners may establish different or conflicting world states, leading to solipsism in the best case and conflict in the worst.", "Clark and Schaefer (1989), describe five dimensions of grounding , by which partners cooperate to establish common ground , or a shared world state.", "The dimension of initiation of next relevant contribution is the most effective of these in expressing understanding of an ongoing dialogue, and yet is the least observed in dialogue systems.", "Improvisational theater (improv) is a form of theater in which most or all of what is performed is unscripted, created spontaneously by the actors in real time.", "Because the performance is not scripted and there is typically little to no scenery or other es-Figure 1: Explicit (top) and implicit (bottom) examples of yes-and s in the SPOLIN corpus.", "The text highlighted in light blue reflects acceptance of the context established in the prompt (yes) and the text highlighted in orange initiates a new relevant contribution to the dialogue (and).", "tablished environment, 1 there is no objective reality that can naturally ground the scene.", "Hence, actors must mainly rely on dialogue in order to build a coherent scene and progressively establish a common world view.", "This necessitates accelerated use of the initiation of next relevant contribution, which in improv is known as the yes-and principle.", "The yes-and principle is a rule-of-thumb that suggests that a participant should accept the reality of what the other participant has said (yes) and expand or refine that reality with additional information (and).", "Since actors consciously abide by this principle during improv performances, there is a high proportion of these turns embedded in improv dialogue, which helps ensure scenes are coherent and interesting.", "1 except for, on occasion, external stimulus such as a suggestion from the audience Open-domain neural dialogue systems, by contrast, specifically lack coherence and interestingness.", "They commonly repeat previous utterances (Li et al., 2016c) or generate non-committal, generic statements such as I don't know that are logically coherent as a response but preempt further conversation (Sordoni et al., 2015; Serban et al., 2015; Li et al., 2016a).", "Either of these developments leads to a conversational black hole and discourages participation in further dialogue turns.", "This is a critical 
shortcoming for open-domain dialogue agents, which, unlike task-oriented dialogue systems, are not guided by specific objectives other than entertainment (Huang et al., 2020).", "It would behoove such systems to adopt the strategies improvisers include by habit in their dialogues and, consequently, incorporating improv acts should be a key focus for the dialogue community.", "Yet, to the best of our knowledge, this has not been previously done.", "There has been work in applying improv to build believable agents that interact with humans (Bruce et al., 2000; Winston and Magerko, 2017) or generate improvised stories (Martin et al., 2016), but development of improv-capable systems in the neural era is largely absent, stymied, we suspect, by the lack of substantial corpora.", "This is unsurprising; while improv speech acts such as yes-and are crucial in all dialogues, they are only highly concentrated in improv dialogues.", "And improv dialogues are quite difficult to collect; research collections (Busso and Narayanan, 2008) have been far too small to be useful in the modern ML era.", "The art form has historically been mostly ephemeral, performed live in regional venues on shoestring budgets and rarely recorded.[2]", "Transcripts are all but absent and mainstream media products are rare.[3]", "However, the liberalization of high quality audio podcasts since 2014 has enabled the availability of a long tail of niche products, improv included (McHugh, 2016).", "[Footnote 2: The art form has long roots, extending to the Italian Commedia dell'arte tradition from the 16th century and farces from the Roman era, but we constrain our focus to the post-20th-century form developed and championed by e.g. Keith Johnstone (Johnstone, 2017), Del Close (Halpern et al., 1994), and our corpus' namesake, Viola Spolin (Spolin et al., 1986).", "Spolin was the originator of Theater Games, acting exercises that encourage the development of specific theatrical skills.", "As our corpus is similarly designed to elicit specific skills, we backronym it in recognition of her influence.]", "[Footnote 3: One exception, the long-running TV show Whose Line Is It Anyway, has, despite a large number of episodes, surprisingly little continuous improvised dialogue, due to the rapid-fire nature of the program.]", "Therefore we set our objective as collecting yes-and-type dialogue pairs (yes-ands) to enable their modeling by corpus-driven dialogue systems.", "We mine podcasts and existing movie script corpora for dialogue that abides by the yes-and principle and extract dialogue pairs from these sources to build the Selected Pairs Of Learnable ImprovisatioN (SPOLIN) corpus.", "SPOLIN is a collection of more than 26,000 English dialogue turn pairs, each consisting of a prompt and subsequent response, which abide by the yes-and principle, though in diverse manners.", "Examples of yes-and type dialogue pairs collected for SPOLIN are in Figure", "1. 
The corpus is substantial enough to be usable for fine-tuning existing dialogue models to encourage more yes-and behavior, and beyond that may prove a valuable knowledge base for empirical sociolinguistic studies on this dialogue act.", "Our contributions are summarized as follows: We carefully curate Selected Pairs Of Learnable ImprovisatioN (SPOLIN), the first large-scale corpus of yes-and dialogue acts, sourced from improv and movie dialogues.", "We iteratively build a high-precision yes-and classifier, which we use to mine additional yes-ands from dialogue corpora with high volume but low yes-and density.", "We fine-tune existing open-domain conversational models with our corpus and confirm via human evaluations that this approach improves creative grounding.", "We release our models and data for public use, including a 64,000 turn pair extension of the core SPOLIN, at https://justin-cho.", "Our data collection has five stages:", "1. Manually extract yes-ands from a rich corpus of improv to obtain an initial set of yes-ands.", "2. Construct a yes-and classifier from the corpus of collected yes-and data and negative examples.", "3. Use the classifier from step 2 to automatically extract yes-and candidates from a much larger but sparser dialogue corpus.", "4. If necessary, manually validate candidates before adding them to the yes-and corpus.", "5. Repeat from step 2 as needed.", "An overview of this process is shown in Figure", "2. [2.1 Core yes-and Collection from Spontaneanation] We select the Spontaneanation podcast [Footnote 4: https://www.earwolf.com/show/spontaneanation-with-paul-f-tompkins/] as a source of concentrated yes-ands for its relatively noise-free recording quality and high volume of broad-domain improv dialogue.", "Each episode of this podcast includes an approximately 30-minute improv session performed by professional improvisers.", "Over its 201 episodes, we identified a total of 43K lines of useful spoken dialogue.", "Given the confluence of a lack of objective reality and uninterrupted multiturn dialogue, the improvisers mostly abide by the yes-and principle, and therefore Spontaneanation is a rich resource for natural, high-quality yes-ands.", "As it exists only in audio form, and automatic transcription services are too noisy for high-quality annotation use, we ask Amazon Mechanical Turk workers (Turkers) to listen to the improv sessions, view Amazon Transcribe preliminary transcriptions, and re-transcribe all of the yes-ands that they hear using our transcription interface, shown in Figure", "3. 
The interface is based on oTranscribe, an open-source transcription service.", "Although the quality of transcriptions is poor, we find that including them assists the Turkers in identifying speaker turns and also in understanding parts that are sometimes incomprehensible without helping context.", "One of the main challenges for the data collection process is to recruit competent Turkers who are able to develop a good understanding of the yes-and principle.", "We actively recruit potential annotators to our task by inviting denizens of the sub-Reddit TurkerNation, rather than simply inviting workers through Amazon's native task posting interface based on HIT approval rate and total number of HITs approved.", "Our approach enables more human-level engagement, making it easier to determine Turkers' English fluency and experience with improv.", "[Table 1: classifier training data per bootstrapping iteration. Iteration | 1 | 2 | 3 | 4; Spontaneanation + | 10,459 | 10,459 | 10,459 | 10,459; Spontaneanation - | - | - | 3,225 | 5,587; Cornell + | - | 3,327 | 8,464 | 12,220; Cornell - | 10,459 | 13,786 | 15,698 | 17,092; Total Training Samples | 20,198 | 27,572 | 37,846 | 45,358; Dev Set Acc. | (values not recovered)]", "To ensure their competence, Turkers first read yes-and guidelines (in the appendix), then demonstrate their level of understanding through qualification Human Intelligence Tasks (HITs), which test whether the candidates can identify if a yes-and exists in a 30-second audio segment and transcribe it if there is one.", "Even after inviting Turkers for the actual HIT of transcribing yes-ands, we frequently monitor the quality of the data they collect and provide feedback for incorrectly identified yes-ands.", "Apart from base pay for each episode they work on, we provide incentives for extracting more yes-ands.", "The pay for our HITs averages well above California minimum wage.", "From all of the episodes, we extract 10,959 yes-ands, indicating that about 25% of the total number of dialogue turns in Spontaneanation are yes-ands.", "Although this is larger than any improv corpus, let alone yes-and corpus, known to date, we seek to increase our corpus volume beyond 10,959 turn pairs.", "The Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil and Lee, 2011, Cornell) contains 304,713 turns, nearly an order of magnitude more than Spontaneanation, and it is one of the closest in domain to improv among existing dialogue datasets.", "However, a sample annotation of 300 randomly selected turn pairs by Turkers reveals that only 11.1% of pairs are yes-ands.", "We thus use the already-collected yes-ands to probe Cornell for likely candidates, to speed the search process.", "Recent developments in language models pre-trained on massive text data enable the training of high-accuracy models for downstream tasks even with a small number of samples, by leveraging the contextualized embeddings that these models learn (Devlin et al., 2019; Radford et al., 2019).", "We thus fine-tune an initial BERT-based sequence classifier based on the implementation of Wolf et al. 
(2019a) with the yes-ands from the Spontaneanation episodes to determine whether a given dialogue pair is a yes-and, using a high threshold (initially, a 95% probability of being a yes-and) to bias for precision (a code sketch of this mining step follows this paper's sentence list).", "We ask Turkers to validate the turn pairs identified by the classifier and add the validated pairs to our yes-and corpus.", "This procedure can be iterated.", "For the first iteration, we train the classifier with a balanced number of non-yes-ands chosen by random sampling from Cornell, a reasonable assumption due to the relatively low concentration of yes-ands observed.", "The same Turkers that extracted yes-ands from Spontaneanation are invited to validate the yes-and candidates filtered out by the classifier using the interface shown in Figure", "4. In order to ensure consistent annotation standards among Turkers, they are given a small number of overlapping HITs against which we validated.", "For 90 samples of unfiltered yes-and candidates from Cornell, the two workers yield a reasonably high Cohen's κ value of 0.74.", "Turkers are paid at rates consistent with their rates on the extraction-from-Spontaneanation task.", "After the set of Cornell yes-and candidates is validated, the yes-ands and non-yes-ands are added to the training set to train a new classifier, and the same process is repeated.", "We hold out 500 dialogue pairs from each subcategory (e.g. Spontaneanation yes-ands) as the development set for monitoring the classifier's performance after each iteration.", "We incrementally lower the classification threshold for choosing a yes-and candidate as the classifier improves.", "We set this threshold on each iteration except the first by retrospective evaluation of the classifier on the actual yes-and candidates' labels from previous iterations.", "The threshold with the highest F1 score is chosen to filter new yes-and candidates to be validated.", "We balance each progressively larger corpus with negative sample turn pairs, which are either randomly selected from Cornell (round 1) or selected from candidates rejected during validation (later rounds). [Figure 4: Amazon Mechanical Turk interface for validating yes-and candidates determined by the yes-and classifier.] 
The negative sampling procedure, while somewhat ad-hoc, ultimately provides a mix of turn pairs from both corpora that is sufficient to allow extraction of yes-and s from new corpora at high precision rates, and is sufficient for our goals.", "Although the concept of a yes-and is easy to define and understand, there are borderline cases between a yes-and and a nonyes-and that make the validation phase more difficult than originally expected.", "One of the cases that confused Turkers in the earlier stages of data collection is the case of yes-but s.", "A yes-but is a yes-and with a response that is coherent with the provided reality, but does not appear to provide an affirmative acceptance of a suggestion given in the prompt.", "These are different from contradictions that do not align with the previously established reality.", "In addition, there are instances where the response is a yes-and , but is accepted by a speaker other than the one to whom the prompt is directed.", "Some yes-and responses initiates a repair of a problem encountered while accepting the prompt, due to a confusion or a possible inconsistency, by asking for clarification (Clark and Schaefer, 1989).", "While these responses may not strictly establish more detail, they provide information for ultimately establishing new information.", "We elide these edge cases under the umbrella category yes-and in SPOLIN as they further our top-level goal of providing relevant, actionable turn responses.", "Examples of some of these subtle differences are shown in Table", "2. 3 Dataset Analysis In order to provide a better understanding on the characteristics of our corpus, we annotate 200 yes-and s and 200 nonyes-and s in SPOLIN 's development set to categorize them into specific yes-and or nonyes-and types.", "We classify yes-and s into explicit yes-and s, implicit yes-and s, or yes-but s.", "Only 15% of all yes-and s are explicit yes-and s, containing phrases such as Yeah or Sure that reflects agreement.", "Even with such phrases, identifying explicit yes-and s is not a trivial task because it requires semantic understanding of the relevance of the context established in the prompt and that introduced in the response.", "In fact, there are nonyes-and s that contain phrases affirming agreement but have no contributions or have new contributions that lack relevance.", "The majority (78%) of yes-and s are implicit yes-and s, meaning that the agreement is implied, often in a subtle manner.", "The remaining 7% are yes-but s.", "Nonyes-and s are divided into contradictions and others .", "Most of the nonyes-and were other s, as only 5% of candidates extracted from Cornell are contradictions , which are dialogue pairs with Type Example % yes-and Explicit P: Does this map look homemade to you?", "Other s encompass any dialogue pairs with a response that lacks coherence to the prompt or adds no or minimal contributions.", "The distribution and examples of different types of yes-and s and nonyes-and s are shown in Table", "2. The main focus of our work is on yes-and s, but we provide nonyes-and s as part of SPOLIN for those interested in training their own classifiers.", "The negative samples are collected using the methods described in Section 2.2.", "The composition details of SPOLIN are shown in Table", "3. 
4 Experiments To evaluate the effect of SPOLIN on generating yes-and responses and thus improving generated dialogue quality, we train a common architecture with a variety of fine-tuning data configurations, both with and without SPOLIN .", "Specifically, for each data configuration we fine-tune a doublehead GPT-2 model (117M-parameter version based on the implementation by Wolf et al. (2019b)), which achieved state-of-the-art performance on Persona-chat for the ConvAI-2 dialogue competition (Zhang et al., 2018).", "We fine-tune the models using two learning objectives, which we weigh equally in calculating loss:", "1. Predicting the next word.", "2. Predicting the next correct candidate that best fits the dialogue given the dialogue history.", "The language modeling component uses pre-trained weights from OpenAI, while the candidate classification head is trained from scratch.", "For evaluation, we use the language modeling component of the fine-tuned model to generate single-turn responses for the yes-and prompts in the development set.", "We use nucleus sampling (Holtzman et al., 2020) for the decoding step to keep only the top tokens with a cumulative probability that together exceed 0.9, from which the next token is chosen with multinomial sampling.", "For our experiments, we use several established dialogue datasets as baselines, namely Persona-chat (Zhang et al., 2018), Cornell (Danescu-Niculescu-Mizil and Lee, 2011) (the unfiltered corpus out of which we extract yes-and s, as described in Section 2.2), and DailyDialog (Li et al., 2017b).", "Each of these is an English-language open-domain casual conversation corpus with 100k300k turns.", "For each of these datasets, we either simply fine-tune on that dataset, or fine-tune and then further Figure 5: Interface used by human evaluators to rank responses based on their quality as a yes-and , where a rank of 1 is most preferred.", "fine-tune with SPOLIN .", "In another configuration, we also fine-tune directly with SPOLIN on top of GPT-2.", "The original GPT-2 implementation prepends the personalities given in Persona-chat to the dialogue sequence input before tokenization.", "For fine-tuning to datasets apart from Persona-chat , we simply do not prepend any auxiliary information to the dialogue sequence input.", "Automatic metrics that rely on n-gram overlap, such as BLEU, ROUGE, and METEOR, are often used for generative models when there is little variability in the target output (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005).", "However, there can be a wide variety of responses that qualify as a good yes-and , a problem common to open-domain generation tasks.", "An adequate evaluation of our models requires assessing the main yes-and criteria: agreement with the context and the quality of the new relevant contribution, both of which are not feasible with the aforementioned metrics.", "Therefore, we ask human evaluators to compare the quality of the yes-and s generated by various models and the actual response to the prompt in SPOLIN that is used as the input.", "We ask human evaluators to rank a set of four responses given a prompt, comparing the responses of a model trained only with SPOLIN , a model trained with an existing dialogue corpus, a model trained with both, and the actual response pair from the development set, denoted as Gold.", "These four responses are randomly ordered for each question to prevent evaluators from developing a bias for responses that frequently have a good or poor response in a set order, 
as shown in Figure", "5. The evaluators are permitted to provide the same rank for different responses if they are equal in quality.", "This evaluation set contains 100 such prompts, and each is evaluated twice by different evaluators.", "The results of the average ranking and some of the examples generated by the models are shown in Table", "4. Results show that models trained only with SPOLIN or with SPOLIN and another dialogue dataset are preferred to the models trained only with another dialogue dataset, although in the case of DailyDialog , the average ranking improves only by at most 0.06 after fine-tuning with SPOLIN .", "However, even the responses generated by models trained with SPOLIN are not ranked as well as the actual responses in the development set, indicating our models are still inferior to professional human improviser quality.", "The approach to classifier-based mining we describe in Section 2.2 can naturally be applied to other dialogue corpora.", "We thus next consider mining the gigantic (441M sentence) OpenSubtitles (Lison and Tiedemann, 2016) collection.", "As OpenSubtitles contains undesirable material, such as subtitles for media with minimal dialogue, we instead mine from the (3.3M sentence) SubTle corpus (Ameixa et al., 2013), a preprocessed subset of OpenSubtitles that heuristically combines subtitle sequences into dialogue form.", "By iterating through half of this corpus, we collect more than 40,000 yes-and s from it alone, which, when added to SPOLIN , yields what we call SPOLIN -extended, which contains about 68,000 yes-and s, more than 2.5 times the size of the core SPOLIN .", "Heuristics for finding alternations mean that SubTle's utterances are shorter than those in Spontaneanation and Cornell , so once the proportion of utterances longer than the average length of in Spontaneanation and Cornell (18.5 words) is less than 40%, we stop further collection in the remainder of the dataset.", "SPOLIN extended is available in the same public repository as SPOLIN .", "Details of the iterative process as applied to SubTle are in the appendix.", "Many works have identified the same issues of repetitive or non-committal responses generated by neural conversational systems that are at least partially related to the lack of sufficiently high quality yes-and s we deal with in this work; approaches that mitigate these problems vary.", "The majority of recent works focus on diversifying the responses by modifying the training and decoding objectives (Li et al., 2016a,b, 2017a, 2016c; Xu et al., 2017; Shao et al., 2017).", "Other methods introduce latent variables to encourage diversity (Serban et al., 2017; Zhao et al., 2017).", "Some explore methods of re-weighing training instances that encourage diversity (Liu et al., 2018; Lison and Bibauw, 2017; Du and Black, 2019).", "Our approach is complementary to all the model-based approaches described here, as it simply deals with the production of a particularly useful corpus , that can be used to fine-tune on top of these methods.", "We provide a survey of publicly available text-based datasets frequently used for open-domain dialogue systems and discuss their limitations for our purpose of generating grounded responses (see Table 5 for an overview).", "DailyDialog is a collection of multi-turn dialogue with manually annotated emotion and intent labels (Li et al., 2017b).", "Danescu-Niculescu-Mizil and Lee (2011) created the Cornell Movie-Dialogs Corpus , a compilation of dialogue sequences paired with meta data about the 
movie and characters.", "Persona-chat provides dialogue sequences coupled with corresponding personas (Zhang et al., 2018).", "The Ubuntu Dialogue Corpus contains 1 million dialogue turns extracted from Ubuntu chat logs, which discuss Ubuntu-related technical support (Lowe et al., 2015).", "The Twitter Triple Corpus is a dataset of 4K dialogue triples extracted from Twitter (Sordoni et al., 2015).", "OpenSubtitles is a huge collection of subtitles that span various genres, but the absence of speaker turn annotations makes it difficult to modify into dialogue format (Lison and Tiedemann, 2016).", "Ameixa et al. (2013) use heuristics to reformat OpenSubtitles into dialogues with some limited success.", "Clark and Schaefer (1989) illustrate grounding in conversations with examples from the London-Lund Corpus (Greenbaum and Svartvik, 1990), a corpus of full conversations annotated with prosodic and paralinguistic features.", "A second version of the corpus was compiled with the same annotation standards as the first using more recent spoken and text data (Põldvere et al., 2017).", "These corpora were not collected with the criteria for yes-and s in mind.", "Even for datasets with dialogue taking place in a similar domain as improv, they naturally contain only a small proportion of yes-and s.", "However, the relatively large sizes of these datasets still make them useful for dialogue systems.", "They can be used effectively for grounded conversations if the yes-and s or other desirable dialogue acts can be filtered out or given higher weights in training to enforce their characteristics in the responses generated.", "Our data collection approach is similar to the method of Yarowsky (1995), which formalizes the bootstrapping mechanism of iteratively improving a classifier and labeling unlabeled data.", "The main difference between the Yarowsky algorithm and our approach is that, rather than using a fully automated process for increasing training data, we use a probability threshold to regulate recall, followed by human judgment to ensure high precision.", "Apart from Clark and Schaefer (1989) there have been other taxonomies of grounding.", "For example, Traum (1999) considers six categories; among these are acknowledge and continue , which, taken together, map nicely to yes-and .", "Magerko et al. 
(2009) and Fuller and Magerko (2010) note the importance of establishing common ground in improv.", "Inspired by yes-and s in improv, we carefully construct SPOLIN , a collection of dialogue pairs with responses that are not only coherent with dialogue context but also initiate the next relevant contribution.", "We extract high-quality yes-and s from Spontaneanation and build a classifier with them, which is then used to mine additional yes-and s from the Cornell Movie-Dialogs Corpus .", "We further use our mining technique to elicit a corpus of more than 68,000 yes-and turn pairs, easily the largest collection of this dialogue act known to exist.", "From human evaluations of dialogue models trained with various data configurations, we find that SPOLIN is useful: when including it, we are able to build models that can generate yes-and s more consistently than when we leave it out.", "Nevertheless, our models are still inferior at producing good yes-and s when compared to professional improvisers.", "We plan to continue our data-driven approach for grounded conversations by expanding our dataset through our iterative data collection process with other larger text-based open-domain dialogue corpora and extend our work to model and collect longer conversations exhibiting more complex improv-backed turns.", "Many thanks to Nanyun Peng and Xinyu Wang for key contributions in a preliminary study, to Paul F. Tompkins, Colin Anderson, and Earwolf for allowing us to include yes-and s extracted from Spontaneanation in SPOLIN , to Paul Elsberg, Risa Harms, P.T. McNiff, and Peter Schell for initial inspiration, and to Jordan Boyd-Graber for feedback on the final draft.", "This material is based on research sponsored by the AFRL and DARPA under agreement number FA8650-18-C-7878.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the AFRL, DARPA, or the U.S. Government." ]
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "objective", "method", "method", "result", "method", "objective", "other", "other", "other" ]
[ "Knowledge graphs have evolved rapidly in recent years and their usefulness has been demonstrated in many artificial intelligence tasks.", "However, knowledge graphs often have lots of missing facts.", "To solve this problem, many knowledge graph embedding models have been developed to populate knowledge graphs and these have shown outstanding performance.", "However, knowledge graph embedding models are so-called black boxes, and the user does not know how the information in a knowledge graph is processed and the models can be difficult to interpret.", "In this paper, we utilize graph patterns in a knowledge graph to overcome such problems.", "Our proposed model, the graph pattern entity ranking model (GRank), constructs an entity ranking system for each graph pattern and evaluates them using a ranking measure.", "By doing so, we can find graph patterns which are useful for predicting facts.", "Then, we perform link prediction tasks on standard datasets to evaluate our GRank method.", "We show that our approach outperforms other state-of-the-art approaches such as ComplEx and TorusE for standard metrics such as HITS@ n and MRR.", "Moreover, our model is easily interpretable because the output facts are described by graph patterns.", "Knowledge graphs can be used to describe real-world relations as facts in a form that a computer can easily process and has been used for many artificial intelligence tasks (Hakimov et al., 2012; Daiber et al., 2013; Bordes et al., 2014).", "In a knowledge graph, a fact is represented by a labeled and directed edge, called a triple ( h , r , t ), where h and t are entity nodes and r is a relation label of an edge from h to t .", "Knowledge graphs such as YAGO ( Suchanek et al., 2007), DBpedia (Auer et al., 2007), and Freebase (Bol-lacker et al., 2008) have developed rapidly in recent years and are used for many artificial intelligence tasks such as question answering, content tagging, fact-checking, and knowledge inference.", "Although some knowledge graphs already contain millions of entities and billions of facts, they might still be incomplete and some facts may be missing.", "Hence, we need to develop a system that can predict missing facts to complete knowledge graphs automatically.", "Many kinds of models for link prediction have been developed to estimate unknown facts.", "Knowledge graph embedding models, which are the most widely used approach in this field, map entities and relations in a knowledge graph onto a vector space and obtain the latent underlying features.", "However, these models are generally difficult to interpret, as we do not know how information is processed in the models and the predicted facts are output without explanation.", "In this paper, we construct statistical models based on graph pattern matching .", "These models are not only easy to interpret compared to knowledge graph embedding models but also outperform state-of-the-art models for link prediction.", "Defining graph pattern association rules (GPARs) for a knowledge graph.", "Introducing a graph pattern probability model (GPro) and discussing its flaws.", "Proposing a novel model, the graph pattern entity ranking model (GRank), which uses graph patterns to rank entities.", "Proposing distributed rankings to address the problem arising from having the same score for multiple entities.", "Evaluating the proposed models through link prediction tasks for standard datasets: It is shown that our model outperforms most state-of-the-art knowledge graph embedding models for 
the HITS@n and MRR metrics.", "The remainder of this paper is organized as follows.", "In Section 2, we discuss related work on link prediction.", "In Section 3, we define the terms and notation used in this paper.", "In Section 4, we define standard confidences for GPARs and discuss their problems.", "In Section 5, we propose the GRank model to deal with these problems.", "In Section 6, we present an experimental study in which we compare our models with baseline results for benchmark datasets.", "In Section 7, we conclude this paper.", "We categorize related work for link prediction into two groups: work on knowledge graph embedding models (which are latent feature models) and work on observed feature models.", "Recently, knowledge graph embedding models have yielded great results in link prediction tasks.", "Knowledge graph embedding models embed entities and relations on a continuous space and can be roughly classified into three types: translation-based models, bilinear models, and neural network-based models.", "The first translation-based model was the TransE (Bordes et al., 2013) model, which gained attention because of its effectiveness and simplicity.", "TransE employs the principle h + r = t , where h , r and t are the embeddings of h , r and t , respectively.", "While this principle efficiently captures first-order rules, the TransE approach still has some problems.", "The conflict between principle and regularization is one of these problems, and the TorusE (Ebisu and Ichise, 2018) model was recently proposed to solve this problem by embedding entities and relations on a torus manifold.", "RESCAL (Nickel et al., 2011) was the first bilinear model, where each relation is represented by a square matrix and the score of the triple ( h , r , t ) is calculated by a bilinear map which corresponds to the matrix of the relation r and whose arguments are h and t .", "Hence, RESCAL represents the most general form of a bilinear model.", "Extensions of RESCAL have been proposed by restricting bilinear functions; for example, DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) restrict the matrices representing the relations to diagonal matrices.", "Neural network-based models have layers and an activation function like a neural network.", "The Neural Tensor Network (NTN) (Socher et al., 2013) has a standard linear neural network structure and a bilinear tensor structure, and can be considered as a generalization of RESCAL, where the weight of the network is trained for each relation.", "Graph Convolutional Networks (GCNs) (Duvenaud et al., 2015; Defferrard et al., 2016; Kipf and Welling, 2017) exploit the convolution operator to capture local information for a graph; however, these models are designed for undirected graphs.", "Relational GCNs (Schlichtkrull et al., 2017) and ConvE (Dettmers et al., 2018) are generalizations of GCNs for knowledge graphs.", "Knowledge graph embedding is the standard approach for link prediction.", "However, it suffers from low interpretability, resulting in triples which are predicted without any clear reason.", "The main advantage of observed feature models over knowledge graph embedding models is their interpretability.", "Additionally, Toutanova et al. 
(2015) proposed a relatively simple logistic regression model, the Node+LinkFeat model, which utilizes only one-hop information in a knowledge graph and demonstrated that it performs far better for link prediction on standard datasets than most existing knowledge graph embedding models.", "However, it has also been shown that the Node+LinkFeat model cannot deal with a low-redundancy dataset because the model uses information which is too local.", "On the other hand, it has been shown that a logistic regression model, the PRA model (Lao and Cohen, 2010; Lao et al., 2011), which utilizes multi-hop information, does not have sufficient accuracy (Liu et al., 2016).", "This suggests logistic regression does not have enough power to deal with deep information.", "These studies have motivated research toward developing a more efficient model utilizing deeper information.", "We begin by discussing GPARs, which were proposed recently by Fan et al. (2015) and have shown their usefulness for social network graphs [Figure 1: Graph G ex of sports teams.]", "because graph patterns can capture deeper information lying in a knowledge graph and a GPAR explicitly describes the process of prediction.", "However, the definition of GPARs by Fan et al. cannot be applied to a knowledge graph because Fan et al. assume a different structure for a social network graph than a knowledge graph.", "In the following section, we define GPARs for a knowledge graph.", "In this section, we introduce the definitions and notation required to discuss GPAR-based models.", "We modify GPARs for application to a knowledge graph following the definitions of Fan et al. (2015).", "Knowledge Graph: A graph is defined as G = {( h , r , t )} ⊆ E × R × E , where E denotes a set of entities and R denotes a set of relations.", "An element ( h , r , t ) of G is called a triple and represents the directed relation r between h and t .", "An example graph G ex is shown in Figure 1, where p i represents a person, Teams A, B, and C represent sports teams, and countries are entities in E ex with labeled arrows between two entities representing directed relations in R ex .", "A graph pattern GP ( x , y ) is a set of triples over variables, where VGP denotes its set of variables, x and y are two designated variables, and R is the set of relations of G .", "We suppose VGP has no redundancy; in other words, ∀ z ∈ VGP, ∃( z i , r , z j ) ∈ GP ( x , y ) such that z = z i ∨ z = z j .", "Some examples of graph patterns on G ex are shown in Figure 2, where GP 1,( x , y ) = {( z , member_of , x ), ( z , nationality , y )}, GP 2,( x , y ) = {( z , manager_of , x ), ( z , nationality , y )}, and located_in ( x , y ) = {( x , located_in , y )}.", "Our focus in this paper is on finding useful graph patterns for link prediction.", "Graph Pattern Matching Function: A matching function of GP ( x , y ) on ( h , t ) ∈ E × E is an injective function m : VGP → E that satisfies the following conditions: m ( x ) = h , m ( y ) = t , and for all ( z i , r , z j ) ∈ GP ( x , y ), ( m ( z i ), r , m ( z j )) ∈ G .", "M ( GP ( x , y ), ( h , t )) denotes the set of all matching functions of GP ( x , y ) on ( h , t ).", "We say GP ( x , y ) matches ( h , t ) if there is at least one matching function of GP ( x , y ) on ( h , t ) (i.e. 
M ( GP ( x , y ), ( h , t )) ≠ ∅).", "For example, m : VGP 1,( x , y ) → E ex with m ( x ) = Team A, m ( z ) = p 1, and m ( y ) = U.K. is a matching function of GP 1,( x , y ) on (Team A, U.K.).", "GPAR: A graph pattern association rule (GPAR) AR is defined as GP ( x , y ) ⇒ r ( x , y ), where GP ( x , y ) and r ( x , y ) are graph patterns and r ( x , y ) = {( x , r , y )}.", "For example, a GPAR AR 1 = GP 1,( x , y ) ⇒ located_in ( x , y ) would indicate that if there is a matching function of GP 1,( x , y ) on ( h , t ), then it is likely that there is also a matching function of located_in ( x , y ) on ( h , t ), i.e. ( h , located_in , t ) is a fact.", "Our task is the link prediction of a knowledge graph, i.e. to predict the missing entity of a query , which is formally defined as follows:", "Query: A query is a triple which is missing an entity: ( h , r , ?) or (?, r , t ).", "We divide a knowledge graph G into queries and answers to use as training data for our model.", "Let Q r,head ( Q r,tail ) denote the set of training queries missing a head (tail) entity for a relation r obtained from G ; then Q r,head ( Q r,tail ) is defined as follows: Q r,head = {(?, r , t ) | ( h , r , t ) ∈ G }, Q r,tail = {( h , r , ?) | ( h , r , t ) ∈ G }. In this case, the answers of training queries are defined as follows: a (?, r , t ) = { h | ( h , r , t ) ∈ G }, a ( h , r , ?) = { t | ( h , r , t ) ∈ G }. A knowledge graph usually contains only positive triples.", "Hence, we adopt the partial completeness assumption (PCA) (Galarraga et al., 2013, 2015) to generate negative answers.", "Partial Completeness Assumption: if ( h , r , t ) is in G , then ∀ t′ ∈ E , (( h , r , t′ ) ∉ G → ( h , r , t′ ) is negative) (1) and ∀ h′ ∈ E , (( h′ , r , t ) ∉ G → ( h′ , r , t ) is negative) (2). The standard PCA definition consists only of Equation (1), but we add Equation (2) because we also need to allow negative answers for Q r,head .", "Under PCA, negative answers for each question are defined as follows: n (?, r , t ) = E \ a (?, r , t ), n ( h , r , ?) = E \ a ( h , r , ?). 4 Standard Confidence and Problems 4.1 AMIE with GPARs An association rule is essentially a binary classifier, i.e. the antecedent of an association rule matches or does not match, and an association rule is thus evaluated.", "Following this idea, we suggest the most straightforward way to define the confidence , which indicates the reliability of an association rule, is the conditional probability, which is the probability of the consequent given the antecedent for a GPAR.", "The conditional probability Pr tail ( r ( x , y ) | GP ( x , y )) of a GPAR GP ( x , y ) ⇒ r ( x , y ) to predict a tail is defined as follows: conf tail ( GP ( x , y ) ⇒ r ( x , y )) = Pr tail ( r ( x , y ) | GP ( x , y )) = Σ_{( h , r , ?) ∈ Q r,tail} |{ t ∈ a ( h , r , ?) | M ( GP ( x , y ), ( h , t )) ≠ ∅}| / Σ_{( h , r , ?) ∈ Q r,tail} |{ t ∈ E | M ( GP ( x , y ), ( h , t )) ≠ ∅}|. For each query, the candidate entities found by the graph pattern are counted for the denominator while only correct entities are counted for the numerator.", "This confidence is used to evaluate GPARs only to answer queries with a missing tail because Q r,tail and its answers are used to define it.", "Interestingly, GPARs with this confidence are equivalent to AMIE (Galarraga et al., 2013, 2015), which was proposed to find Horn clauses for a knowledge graph, although AMIE was proposed before the appearance of GPARs.", "However, AMIE originally has only one confidence value for a GPAR because AMIE is not designed for link prediction.", "Hence, we introduce the following alternative definition for the confidence value to answer a query missing a head entity.", "We define another confidence to deal with a query with a missing head entity as follows:", "conf head ( GP ( x , y ) ⇒ r ( x , y )) = Pr head ( r ( x , y ) | GP ( x , y )) = Σ_{(?, r , t ) ∈ Q r,head} |{ h ∈ a (?, r , t ) | M ( GP ( x , y ), ( h , t )) ≠ ∅}| / Σ_{(?, r , t ) ∈ Q r,head} |{ h ∈ E | M ( GP ( x , y ), ( h , t )) ≠ ∅}|", "Additionally, we restrict matching functions to injective functions as defined in Section 3.1, which is different from AMIE, because the restriction avoids redundant matching functions which map multiple variables to the same entity and gives a good bias for real-world knowledge.", "For example, a GPAR GP 3,( x , y ) ⇒ sibling_of ( x , y ), where GP 3,( x , y ) = {( z , parent_of , x ), ( y , child_of , z )}, is helpful to predict siblings.", "However, letting p represent a person, GP 3,( x , y ) matches ( p , p ) although p is not a sibling of itself.", "The above restriction omits such concerns.", "For another example, a GPAR {( z 1, manager_of , x ), ( z 1, manager_of , z 2), ( z 2, located_in , y )} ⇒ located_in ( x , y ) on the graph G ex in Figure 1 should not be considered helpful because m ( x ) = m ( z 2) holds for a matching function m of the antecedent pattern and as a result, the GPAR is almost tautological.", "We consider two confidence values for GPARs, conf tail and conf head , referred to as the graph pattern probability model (GPro).", "However, GPro cannot deal with queries where counting the number of matching functions is crucial.", "An example where the number of matching functions is important is shown in Figure", "1. In G ex , the country that Team C is located in is missing.", "One might guess that Team C is located in Italy because most of the Team C players have Italian nationality and the nationality of a player often matches the country that the team is located in.", "However, GPro underestimates the GPAR AR 1,( x , y ) = GP 1,( x , y ) ⇒ located_in ( x , y ), which is equivalent to one's guessing process: conf tail ( AR 1,( x , y )) = 2/5, while conf tail ( AR 2,( x , y )) = 1/2, where AR 2,( x , y ) = GP 2,( x , y ) ⇒ located_in ( x , y ).", "Hence, GPro judges that AR 2,( x , y ) is more useful than AR 1,( x , y ), and as a result, GPro predicts Team C is located in Germany rather than Italy.", "This problem is caused by considering a GPAR as a binary classifier, i.e. the matching number is not taken into account.", "For example, if we apply AR 1,( x , y ) = GP 1,( x , y ) ⇒ located_in ( x , y ) to a query (Team A, located_in , ?) in the traditional way (as a binary classifier), the output will contain two entities with equal weighting, the U.K. 
and France, because GP 1,( x , y ) matches (Team A, U.K.) and (Team A, France).", "Then, one of the output entities is correct and the other is incorrect.", "This is the reason why AR 1,( x , y ) is underestimated.", "To deal with this problem, in this paper, we consider a GPAR as an entity ranking system by counting the number of matching functions of the antecedent graph pattern rather than considering it as a binary classifier.", "As well as considering a GPAR as a binary classifier, we consider it as an entity ranking system.", "Entities are ranked according to a score, based on their number of matching functions.", "Moreover, we introduce the distributed rankings for entities, which are proposed to deal with situations where multiple entities have the same score.", "Then, we define the evaluation metrics for the distributed rankings to evaluate GPARs for link prediction.", "These approaches overcome the problems shown in Section 4.2.", "We consider a GPAR as a ranking system in this section to rank queries for which counting the number of matching functions of the antecedent is helpful, as shown in Section 4.2.", "First, we define a scoring function whose arguments are a graph pattern GP ( x , y ) and a pair of entities ( h , t ).", "The scoring function returns the number of matching functions of a pattern on a pair, which is formally defined as follows: score ( GP ( x , y ), ( h , t )) = | M ( GP ( x , y ), ( h , t ))|. Given a pattern GP ( x , y ) and a query ( h , r , ?), we can obtain the score ( GP ( x , y ), ( h , t )) for each candidate tail entity t .", "Then we obtain the rankings of the tail entities in descending order of the scores.", "The head entity rankings for a query (?, r , t ) are also obtained in this way.", "This ranking method gives us a new perspective when we apply GPARs to answer a query.", "For example, if we apply AR 1,( x , y ) = GP 1,( x , y ) ⇒ located_in ( x , y ) to a query (Team A, located_in , ?), the U.K. will be ranked first and France second.", "In this situation, we can say that AR 1,( x , y ) works because the correct entity ranks higher than the wrong entity.", "We can basically evaluate a GPAR as an entity ranking system by evaluating output rankings with an evaluation metric for a ranking system such as the mean average precision .", "However, often multiple entities have the same score and traditional metrics cannot deal with this situation.", "To deal with this problem, we propose a new concept, called distributed rankings , and the corresponding metrics in the following sections.", "We propose distributed rankings, where each entity can distribute over multiple ranks and each rank can have multiple entities, to deal with situations where multiple entities have the same score.", "Traditional rankings of entities are represented by a matrix Rank = ( rank i,j ) ∈ {0, 1}^{n×n}, where n is the number of entities, and for each column and row there is one 1 element.", "In this matrix, columns represent entities and rows represent ranks.", "For example, rank i,j = 1 means that the entity j has rank i .", "On the other hand, distributed rankings of entities are represented by a matrix dRank = ( drank i,j ) ∈ [0, 1]^{n×n}, where the summation of a column or a row is equal to", "1. 
Different from traditional rankings, the value of each element is continuous and multiple elements can be greater than 0 in a column or a row.", "For example, drank i,j = 0.", "5 means that half of the entity j has rank i .", "Note that a traditional ranking matrix is a distributed ranking matrix.", "Given a pattern GP ( x , y ) and a query ( h , r , ?), we obtain distributed rankings of entities, dRANK ( GP ( x , y ), ( h , r , ?)), according to their scores as follows.", "Let a be the number of entities whose scores are greater than the entity represented by j and let b be the number of entities whose scores are the same as the entity represented by j .", "Then, drank i,j , an element of dRANK ( GP ( x , y ), ( h , r , ?)), is determined to be 1/ b for a + 1 ≤ i ≤ a + b and 0 otherwise.", "Distributed rankings of head entities for a query (?, r , t ) are obtained in the same way, and we refer to them as dRANK ( GP ( x , y ), (?, r , t )).", "Unlike traditional rankings, distributed rankings are uniquely determined from the scores of entities.", "Traditional rankings can be evaluated by metrics such as the average precision or the cumulative gain.", "However, distributed rankings cannot be evaluated by these metrics.", "Hence, we require a different evaluation metric for distributed rankings.", "We use a GPAR to obtain distributed entity rankings as shown in Section 5.1.", "In this section, we define a metric to evaluate distributed rankings of entities by generalizing the average precision to evaluate a GPAR.", "For a pattern GP ( x , y ) and a training query ( h , r , ?), the distributed precision at k , dPre k , of dRANK ( GP ( x , y ), ( h , r , ?)) is defined as follows: dPre k ( GP ( x , y ), ( h , r , ?)) = (Σ_{ i = 1}^{ k } Σ_{ t j ∈ a ( h , r , ?)} drank i,j ) / k , where t j is an entity represented by j and drank i,j is an element of dRANK ( GP ( x , y ), ( h , r , ?)).", "The elements related to correct entities ranked higher than or equal to k are summed up as in the traditional precision at k .", "[Table 1: Statistics of benchmark datasets. WN18: 40,943 entities, 18 relations, 141,442 / 5,000 / 5,000 training / validation / test triples. WN18RR: 40,943 entities, 11 relations, 86,835 / 3,034 / 3,134. FB15k: 14,951 entities, 1,345 relations, 483,142 / 50,000 / 59,071. FB15k-237: 14,541 entities, 237 relations, 272,115 / 17,535 / 20,466.] The distributed average precision is defined as dAP ( GP ( x , y ), ( h , r , ?)) = (Σ_{ t j ∈ a ( h , r , ?)} Σ_{ k = 1}^{ n } dPre k ( GP ( x , y ), ( h , r , ?)) · drank k,j ) / | a ( h , r , ?)|,", "where t j is an entity represented by j , drank i,j is an element of dRANK ( GP ( x , y ), ( h , r , ?)), and n is the number of entities.", "The numerator of the average precision for traditional rankings is the summation of the precision at k for relevant entities.", "However, a relevant entity represented by j is distributed over multiple ranks in dRANK so that the precision at k multiplied by drank k,j is summed over k where a relevant entity j is distributed.", "dAP ( GP ( x , y ), (?, r , t )) for a training query with a missing head can be defined in the same way.", "The distributed mean average precision for a GPAR GP ( x , y ) ⇒ r ( x , y ) is defined as follows: dMAP head ( GP ( x , y ) ⇒ r ( x , y )) = Σ_{(?, r , t ) ∈ Q r,head} dAP ( GP ( x , y ), (?, r , t )) / | Q r,head | and dMAP tail ( GP ( x , y ) ⇒ r ( x , y )) = Σ_{( h , r , ?) ∈ Q r,tail} dAP ( GP ( x , y ), ( h , r , ?)) / | Q r,tail |. We also define dMAP for the filtered (Bordes et al., 2013) rankings, which are obtained from original rankings by eliminating entities whose corresponding triples (except the target triple) were included in the training dataset.", "Filtered dMAP (fdMAP) is the mean of the dAP of filtered rankings for each answer of queries.", "We refer to GPARs considered as entity ranking systems with these dMAPs or fdMAPs as the graph pattern entity ranking model (GRank).", "By using a graph pattern to rank entities, GRank is able to properly estimate GPARs where the number of matches is important as [Table 2: Mean Reciprocal Rank (MRR) and HITS@n scores obtained for the link prediction tasks on the WN18, FB15k, WN18RR, and FB15k-237 datasets; per dataset the columns are MRR, HITS@1, HITS@3, HITS@10. TransE: WN18 0.397/0.040/0.745/0.923, FB15k 0.414/0.247/0.534/0.688, WN18RR 0.182/0.027/0.295/0.444, FB15k-237 0.257/0.174/0.284/0.420. TorusE: WN18 0.947/0.943/0.950/0.954, FB15k 0.733/0.674/0.771/0.832. RESCAL: WN18 0.890/0.842/0.904/0.928, FB15k 0.354/0.235/0.409/0.587. DistMult: WN18 0.822/0.728/0.914/0.936, FB15k 0.654/0.546/0.733/0.824, WN18RR 0.43/0.39/0.44/0.49, FB15k-237 0.241/0.155/0.263/0.419. ComplEx: WN18 0.941/0.936/0.945/0.947, FB15k 0.692/0.599/0.759/0.840, WN18RR 0.44/0.41/0.46/0.51, FB15k-237 0.240/0.152/0.263/0.419. R-GCN: WN18 0.814/0.686/0.928/0.955, FB15k 0.651/0.541/0.736/0.825, FB15k-237 0.248/0.153/0.258/0.417. ConvE: WN18 0.942/0.935/0.947/0.955, FB15k 0.745/0.670/0.801/0.873, WN18RR 0.46/0.39/0.43/0.48, FB15k-237 0.316/0.239/0.350/0.491. PRA: 0.458, 0.422, 0.481 (WN18) and 0.336, 0.303, 0.392 (FB15k). Node+LinkFeat: 0.940, 0.943 (WN18), 0.822, 0.870 (FB15k), 0.272, 0.414 (FB15k-237). GPro: WN18 0.950/0.946/0.954/0.959, FB15k 0.793/0.759/0.810/0.858, WN18RR 0.467/0.430/0.485/0.543, FB15k-237 0.229/0.163/0.250/0.360. GRank (dMAP): WN18 0.950/0.946/0.953/0.957, FB15k 0.841/0.814/0.855/0.890, WN18RR 0.466/0.434/0.480/0.530, FB15k-237 0.312/0.233/0.340/0.473. GRank (fdMAP): WN18 0.950/0.946/0.954/0.958, FB15k 0.842/0.816/0.856/0.891, WN18RR 0.470/0.437/0.482/0.539, FB15k-237 0.322/0.239/0.352/0.489.]", "shown in Section 4.2, unlike GPro.", "For example, dMAP tail ( AR 1,( x , y )) = 1, which is the maximum value, while dMAP tail ( AR 2,( x , y )) = 1/2 in Figure", "1. Hence, GRank can answer the query (Team C, located_in , ?) 
by applying AR 1,( x , y ).", "Our proposed models, GPro (Section 4.2) and GRank (Section 5), are evaluated through link prediction tasks and compared with other state-of-the-art link prediction models.", "Experiments were conducted on four benchmark datasets: WN18, FB15k (Bordes et al., 2013), WN18RR (Dettmers et al., 2018), and FB15k-237 (Toutanova and Chen, 2015) (details of these datasets are provided in Table 1).", "These datasets have been widely used in previous studies for evaluating model performance in link prediction tasks.", "WN18 and FB15k were extracted from the real knowledge graphs WordNet (Miller, 1995) and Freebase (Bollacker et al., 2008), respectively.", "WordNet is a well-known human-curated lexical database, and hence, WN18 is an easy benchmark of link prediction because it is well constructed and there are few missing or wrong facts.", "Therefore, link prediction models should perform well on WN18.", "Freebase is a huge knowledge graph of general facts and there are many missing facts.", "It is known that WN18 and FB15k have redundancy in the form of reverse relations.", "For this reason, when WN18RR and FB15k-237 are extracted from WN18 and FB15k, the inverse relations of other relations are removed.", "We conducted the link prediction task following the same approach reported in (Bordes et al., 2013) to evaluate our models qualitatively and quantitatively.", "For each test triple ( h t , r t , t t ) in a dataset, two queries, ( h t , r t , ?) and (?, r t , t t ), were constructed in the same way as in Section 3.2.", "Then, we obtained the rankings of entities for each query from each model as outlined in the following paragraphs.", "The rankings were filtered by eliminating entities whose corresponding triples (except the target test triple) were included in the training, validation, or test dataset.", "The obtained rankings were scored by their mean reciprocal rank (MRR) and HITS@n , where MRR is the mean of the inverse of the ranks of corresponding entities and HITS@n is the proportion of test queries whose corresponding entities are ranked in the top n of the obtained rankings.", "Next, we describe how to obtain rankings from models.", "We restricted antecedent graph patterns of GPARs to connected and closed (Galarraga et al., 2013, 2015) patterns whose size | GP ( x , y )| was less than or equal to L to restrict the search space.", "A connected and closed pattern is a pattern connecting x and y without branches, as shown in Figure", "2. L was chosen for each model among {1, 2, 3} by MRR from the validation triples of each dataset.", "It took about four days to evaluate all candidate GPARs for GRank with dMAPs in FB15k using an Intel Xeon Gold 6154 (3.00 GHz, 18 cores).", "We now explain how we obtained the rankings for queries with missing heads.", "For each relation r , we chose the top 1,000 GPARs in descending order of the standard confidence, the dMAP, or the fdMAP to predict the heads.", "Let GP i ,( x , y ) ⇒ r ( x , y ) be the obtained GPAR, where i denotes the rank.", "We defined the ordering for two entities for query (? 
, r t , t t ) as follows: for entities e 1 and e 2, we define e 1 > e 2 if there exists i for which score ( GP i′ ,( x , y ), ( e 1, t t )) = score ( GP i′ ,( x , y ), ( e 2, t t )) for all i′ < i and score ( GP i ,( x , y ), ( e 1, t t )) > score ( GP i ,( x , y ), ( e 2, t t )); that is, entities are compared lexicographically by their scores under the ranked GPARs.", "We obtained the entity rankings with this ordering for each query.", "Rankings for queries with missing tails were obtained in the same way.", "The results of the link prediction tasks for our proposed models, GPro, GRank with dMAP, and GRank with fdMAP, are shown in Table 2, where the results reported in previous studies are included for comparison.", "In Table 2, the first seven models are knowledge graph embedding models and the following two models are observed feature models.", "Table 2 shows the effectiveness of the Node+LinkFeat model (Toutanova and Chen, 2015), although this model is very simple (high MRRs imply that the model also has high HITS@1s or HITS@3s).", "The Node+LinkFeat model performed well on WN18 and FB15k because these datasets often contain the reverse relations of other relations.", "In other words, it shows that knowledge graph embedding models failed to capture this redundancy.", "On the other hand, our proposed models, GPro and GRank, generally yield better results than the knowledge graph embedding models and results which are better than or comparable to Node+LinkFeat, which means that our models can also handle such redundancy.", "In particular, GRank with dMAP and fdMAP yielded the best results on FB15k.", "This indicates that taking the multiplicity of matchings and deeper information into account is important for knowledge graphs such as Freebase that contain miscellaneous relations and are not as well curated as WordNet.", "As a result, GRank performed well.", "Table 2 also shows GPro and GRank yield better results for the WN18RR dataset than the other models.", "For FB15k-237, the performance of Node+LinkFeat is comparable with most of the other more sophisticated knowledge graph models, and GPro does not yield good results because FB15k-237 has less redundancy.", "GRank also performs better than most other models for the FB15k-237 dataset for the same reason as for the FB15k dataset.", "However, our models do not utilize the information related to the co-occurrence of entities and relations in triples (node features (Toutanova and Chen, 2015)), while ConvE, Node+LinkFeat, and other models do.", "We also limited the size and the shapes of graph patterns because of the calculation time; we will address these and improve our models further in our future work.", "Quality of Obtained Paths The examples of antecedent patterns ranked high by GRank with dMAP tail for FB15k are shown in Figure", "3. 
The patterns shown for predicting the sibling relation are all correct as the antecedents of GPARs; however, the MAPs of GP 2,( x , y ) and GP 3,( x , y ) are low.", "The reason for this is that GP 2,( x , y ) works when an individual has more than two siblings.", "The MAP of GP 3,( x , y ) is low because an individual's parents are often missing in FB15k.", "However, they are still ranked higher than other patterns.", "The produces_film relation is the inverse relation of the executive_produced_by relation in FB15k.", "Such patterns are very helpful when performing link prediction tasks, and GRank is able to find them.", "However, the MAP is not as high because of missing facts.", "GRank is able to use majority rules such as GP 5,( x , y ) ⇒ film_produced_by ( x , y ) instead in such cases.", "This rule can be interpreted as stating that a particular film was likely to have been produced by a person who produced many films in the same production company.", "Output triples of GRank (and GPAR-based models) are described by antecedent patterns, unlike knowledge graph embedding models, as shown here.", "In this paper, we first defined GPARs for a knowledge graph and the standard confidence measures of GPARs for link prediction.", "Then, we pointed out the problems with the standard confidence measures and we introduced a new perspective using GPARs to rank entities to overcome these problems.", "We also proposed distributed rankings for situations where multiple entities have the same scores and defined metrics for them.", "This idea led us to propose the GRank model.", "GRank is easy to interpret because outputs are described by GPARs, unlike knowledge graph embedding models, and so efficient that it outperformed the state-of-the-art knowledge graph embedding models in link prediction tasks.", "In future work, we will extend GRank to use more complex patterns.", "We considered only antecedent graph patterns whose sizes were less than or equal to 3. If we allow antecedent graph patterns to have larger sizes, then we may find more useful GPARs.", "We also restricted graph patterns to contain only variables and not constants.", "Hence, we did not use all of the available information contained in the knowledge graph.", "We believe that using such complex graph patterns will improve GRank further.", "This work was partially supported by the New Energy and Industrial Technology Development Organization (NEDO).", "We would like to thank Patrik Schneider for helpful writing advice." ]
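To make the GRank machinery above concrete, the following sketch implements its two core computations: score(GP(x,y), (h,t)) = |M(GP(x,y), (h,t))|, the number of injective matching functions, and the distributed rank assignment in which b tied entities each spread a mass of 1/b over ranks a+1, ..., a+b. The brute-force enumeration over entity permutations is only for illustration on toy graphs such as G ex; it is an assumed stand-in, not the paper's search procedure.

```python
from itertools import permutations

def score(pattern, kg, h, t):
    # score(GP(x,y), (h, t)) = |M(GP(x,y), (h, t))|: count injective maps m
    # with m("x") = h and m("y") = t such that every pattern triple
    # (zi, r, zj) maps to a KG triple (m(zi), r, m(zj)).
    entities = {e for (a, _, b) in kg for e in (a, b)}
    free = sorted({v for (a, _, b) in pattern for v in (a, b)} - {"x", "y"})
    count = 0
    # Injectivity: free variables take distinct entities other than h and t.
    for combo in permutations(entities - {h, t}, len(free)):
        m = dict(zip(free, combo), x=h, y=t)
        if all((m[a], r, m[b]) in kg for (a, r, b) in pattern):
            count += 1
    return count

def distributed_ranks(scores):
    # Section 5.2: if a entities score strictly higher than entity e and
    # b entities (including e) tie with it, e places mass 1/b on each of
    # the ranks a+1, ..., a+b.  Returns {entity: [(rank, mass), ...]}.
    vals = list(scores.values())
    out = {}
    for e, s in scores.items():
        a = sum(1 for v in vals if v > s)
        b = sum(1 for v in vals if v == s)
        out[e] = [(r, 1.0 / b) for r in range(a + 1, a + b + 1)]
    return out

# Toy version of the G_ex example: most of Team C's players are Italian.
kg = {("p1", "member_of", "TeamC"), ("p2", "member_of", "TeamC"),
      ("p3", "member_of", "TeamC"), ("p1", "nationality", "Italy"),
      ("p2", "nationality", "Italy"), ("p3", "nationality", "France")}
gp1 = [("z", "member_of", "x"), ("z", "nationality", "y")]
scores = {c: score(gp1, kg, "TeamC", c) for c in ("Italy", "France", "Germany")}
print(scores)                     # {'Italy': 2, 'France': 1, 'Germany': 0}
print(distributed_ranks(scores))  # Italy -> rank 1, France -> 2, Germany -> 3
```

Counting matchings rather than testing mere existence is exactly what lets the toy query above prefer Italy, mirroring the Team C example in the text.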
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "method", "method", "objective", "result", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "other", "method", "method", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "method", "result", "other", "other", "other" ]
[ "Document-level neural machine translation (NMT) has proven to be of profound value for its effectiveness on capturing contextual information.", "Nevertheless, existing approaches 1) simply introduce the representations of context sentences without explicitly characterizing the inter-sentence reasoning process; and 2) feed ground-truth target contexts as extra inputs at the training time, thus facing the problem of exposure bias.", "We approach these problems with an inspiration from human behavior human translators ordinarily emerge a translation draft in their mind and progressively revise it according to the reasoning in discourse.", "To this end, we propose a novel Multi-Hop Transformer (MHT) which offers NMT abilities to explicitly model the human-like draft-editing and reasoning process.", "Specifically, our model serves the sentence-level translation as a draft and properly refines its representations by attending to multiple antecedent sentences iteratively.", "Experiments on four widely used document translation tasks demonstrate that our method can significantly improve document-level translation performance and can tackle discourse phenomena, such as coreference error and the problem of polysemy.", "Neural machine translation (NMT) employs an end-to-end framework (Sutskever et al., 2014) and has advanced promising results on various sentence-level translation tasks (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Wan et al., 2020).", "However, most of NMT models handle sentences independently, regardless of the linguistic context that may appear outside the current sentence (Tiedemann and Scherrer, 2017a).", "This makes NMT insufficient to fully resolve the typical context-dependent phenomena problematic, These authors contributed equally to this work.", "e.g. coreference (Guillou, 2016), lexical cohesion (Carpuat, 2009), as well as lexical disambiguation (Gonzales et al., 2017).", "Recent studies (Tu et al., 2018; Maruf and Haffari, 2018; Maruf et al., 2019; Tan et al., 2019; Kim et al., 2019; Zheng et al., 2020; Chen et al., 2020; Sun et al., 2020; Ma et al., 2020) have proven to be effective on tackling discourse phenomena via feeding NMT with contextual information, e.g. source-side (Wang et al., 2017; Voita et al., 2018; Zhang et al., 2018) or target-side context sentences (Baw-den et al., 2018; Miculicich et al., 2018).", "Despite their successes, these methods simply merge the representations of context sentences together, lacking a mechanism to explicitly characterize the inter-sentence reasoning upon the context.", "Another shortage in existing document-level NMT is the problem of exposure bias.", "Most of methods utilized the ground-truth target context for training but the generated translations for inference, leading to inconsistent inputs at training and testing time (Ranzato et al., 2015; Koehn and Knowles, 2017).", "Intuitively, human translators tend to acquire useful context information from the reasoning process among sentences, thus figuring out the correct meaning when they encounter ambiguity during translation.", "Sukhbaatar et al. (2015) and Shen et al. (2017) empirically verified that modeling multi-hop reasoning among sentences benefits to the language understanding task, e.g text comprehension.", "Voita et al. (2019) showed that document-level NMT model can profit from relative positions with respect to context sentences, which to some extent confirms the importance of the relationship among sentences.", "Meanwhile, Xia et al. 
(2017) demonstrated that sentence-level NMT could be improved by a two-pass draft-editing process, in which the second-pass decoder refines the target sentence generated by a first-pass standard decoder.", "Inspired by these observations, we improve document-level NMT using a novel framework, Multi-Hop Transformer, which imitates the draft-editing and reasoning process of human translators.", "Specifically, we implement an explicit reasoning process by exploiting source and target antecedent sentences with concurrently stacked attention layers, thus performing the progressive refinement on the representations of the current sentence and its translation.", "Besides, we leverage the draft to present context information on the target side during both training and testing, alleviating the problem of exposure bias.", "We conduct experiments on four widely used document translation tasks: English-German and Chinese-English TED, English-Russian Opensubtitles, as well as English-German Europarl-7 datasets.", "Experimental results demonstrate that our method significantly outperforms both context-agnostic and context-aware methods.", "The qualitative analysis confirms the effectiveness of the proposed multihop reasoning mechanism on resolving many linguistic phenomena, such as word sense disambiguation and coreference resolution.", "Our contributions are mainly in: We propose the Multi-Hop Transformer.", "To the best of our knowledge, this is the first investigation that introduces multi-hop reasoning into document-level NMT.", "The proposed model takes target context drafts into account at the training time, which helps to avoid the training-generation discrepancy.", "Our approach significantly improves document-level translation performance on four document-level translation tasks in terms of BLEU scores and solves some context-dependent phenomena, such as coreference error and polysemy.", "Transformer NMT is an end-to-end framework to build translation models.", "Vaswani et al. 
(2017) propose a new architecture called Transformer, which adopts self-attention networks for both encoding and decoding.", "Both its encoder and decoder consist of multiple layers, each of which includes a multi-head self-attention and a feed-forward sublayer.", "Additionally, each layer of the decoder applies a multi-head cross attention to capture information from the encoder.", "Transformer has shown superiority in a variety of NLP tasks.", "Therefore, we construct our models upon this advanced architecture.", "Document-level NMT In order to correctly translate the sentence with discourse phenomena, NMT models need to look beyond the current sentence and integrate contextual sentences as auxiliary inputs.", "Formally, let X = ( x^1 , x^2 , ..., x^I ) be a source-language document composed of I sentences, where x^i = ( x^i_1 , x^i_2 , ..., x^i_N ) denotes the i th sentence containing N words.", "Correspondingly, the target-language document also consists of I sentences, Y = ( y^1 , y^2 , ..., y^I ), where y^i = ( y^i_1 , y^i_2 , ..., y^i_M ) denotes the i th sentence involving M words.", "Document-level NMT incorporates contextual information from both source side and target side to autoregressively generate the best translation result that has the highest probability: P ( y^i | x^i ) = ∏_{m=1}^{M} P ( y^i_m | y^i_{<m} , x^i , X^{<i} , Y^{<i} ) (1), where y^i_{<m} is the sequence of preceding tokens before position m .", "Related Work Several studies have explored multi-input models to leverage the contextual information from source-side (Jean et al., 2017; Kuang and Xiong, 2018) or target-side sentences (Kuang et al., 2018; Miculicich et al., 2018).", "For the former, Zhang et al. (2018) propose a new encoder to represent document-level context from previous source-side sentences.", "Tiedemann and Scherrer (2017b) and Junczys-Dowmunt (2019) utilize the concatenation of previous source-side sentences as input, while Voita et al. (2018) make use of a gate mechanism to balance the weight between the current source sentence and its context.", "For the latter, Miculicich et al. (2018) propose a hierarchical attention (HAN) framework to capture the target contextual information in the decoder.", "Bawden et al. (2018), Maruf and Haffari (2018) and Maruf et al. 
(2019) take both source-side and target-side context into account.", "Motivation As seen, both of the existing methods simply introduce the context sentences without explicitly characterizing the inter-sentence reasoning.", "Intuitively, when humans have difficulty in translation, such as encountering ambiguity, they could acquire more information [Figure 1: Illustration of Multi-Hop Transformer.]", "from the contexts sentence by sentence and then perform reasoning to figure out the exact meaning.", "We posit that such a reasoning process is also beneficial to the machine translation task.", "Recent successes in text comprehension communities have to some extent supported our hypothesis (Hill et al., 2015; Kumar et al., 2016).", "For example, Sukhbaatar et al. (2015) propose a multi-hop end-to-end memory network, which can renew the query representation with multiple computational steps (which they term hops).", "Dhingra et al. (2016) extend an attention-sum reader to multi-turn reasoning with a gating mechanism.", "In addition, Shen et al. 
(2017) introduce multi-hop attention, which uses multiple turns to effectively exploit and reason over the relation among queries and documents.", "In this paper, we propose to bring the idea of multi-hop into document translation and aim at mimicking the multi-step comprehension and revising process of human translators.", "In contrast with those models for text comprehension, which scan the query and document for multiple passes, our model iteratively focuses on different context sentences, which captures the inter-sentence reasoning semantics of contextual sentences to incrementally refine the representation of the current sentence.", "With this in mind, we propose a novel method called Multi-Hop Transformer, which models the reasoning process among multiple contextual sentences on both the source side and the target side.", "The source-side contexts are directly acquired from the document.", "The target-side contexts, called target-side drafts in this paper, are generated by a sentence-level NMT model.", "These contexts are fed into the Multi-Hop Transformer with pre-trained encoders.", "The overall architecture of our proposed model is illustrated in Figure 1, which consists of three components:", "Sentence Encoder: This component contains two pre-trained encoders, one of which is called the source-side sentence encoder and the other the target-side sentence encoder.", "These encoders generate representations for source-side contexts and target-side drafts, respectively.", "Multi-Hop Encoder: We extend the original Transformer encoder with a novel multi-hop encoder to efficiently perform sentence-by-sentence reasoning on source-side contexts and generate the representation for the current sentence.", "Multi-Hop Decoder: Similarly, a multi-hop decoder is proposed to acquire information from the target-side drafts and model the translation probability distribution.", "We use a multi-layer and multi-head self-attention architecture (Vaswani et al., 2017) to obtain the representations for source-side contexts and target-side drafts.", "Similar to the encoder of Transformer, each sentence encoder contains a stack of six identical layers, each of which consists of two sub-layers.", "The first sub-layer is a multi-head attention (Q, K, V), which takes a query Q , a key K and a value V as inputs.", "The second sub-layer is a fully connected feed-forward network (FFN).", "Source-Side Sentence Encoder This encoder encodes the source-side contexts, as shown in Figure", "1. 
For the current sentence s = x^i to be translated, we use the previous sentences X^{<i} = ( x^{i-k} , x^{i-k+1} , ..., x^{i-1} ) in the same document as the source-side context, specially denoted as c^{i-k}_s , c^{i-k+1}_s , ..., c^{i-1}_s for clarity.", "k is the context window size.", "For the j th context, we obtain A^{(n)}_{c^{i-j}_s}, which denotes the n th hidden layer representation of c^{i-j}_s , as follows: A^{(n)}_{c^{i-j}_s} = MHA( H^{(n-1)}_{c^{i-j}_s} , H^{(n-1)}_{c^{i-j}_s} , H^{(n-1)}_{c^{i-j}_s} ) (2), where n = 1, 2, ..., 6.", "MHA represents the standard Multi-Head Attention function (Vaswani et al., 2017).", "j denotes the distance between the context sentence and the current sentence.", "Target-Side Sentence Encoder.", "Most existing works use ground-truth target-side contexts as the input of the decoder during training (Voita et al., 2019).", "However, the target contexts at training and testing are drawn from different distributions, leading to the inconsistency between training and testing.", "To alleviate this problem, we instead make use of target-side context drafts generated from a pre-trained sentence-level translation model.", "Similar to the source-side sentence encoder, this target-side context draft encoder is used to obtain the context representation A^{(n)}_{c^{i-j}_t} of the j th target-side draft c^{i-j}_t .", "Besides, we obtain a draft translation d of the current sentence from the pre-trained sentence-level translation model and use a target-side draft encoder to obtain the representation A^{(n)}_d .", "The multi-hop encoder contains a stack of 6 identical layers, each of which contains the following sub-layers:", "Self-Attention Layer.", "The first sub-layer makes use of multi-head self-attention to encode the information of the current source sentence s and obtains the representation A^{(n)}_s .", "Multi-Hop Attention Layer.", "The second sublayer uses a multi-hop attention to perform sentence-by-sentence reasoning on c_s in sentence order, as shown in Figure", "1. Each reasoning step, also called a hop , is implemented by a multi-head attention layer.", "The first hop takes representation A^{(n)}_s as the query and the representation A^{(n)}_{c^{i-k}_s} of the previous k th sentence as the key and value.", "B^{(n)}_{s^{i-k}} = MHA( A^{(n)}_s , A^{(n)}_{c^{i-k}_s} , A^{(n)}_{c^{i-k}_s} ) (3) The other hops are implemented: B^{(n)}_{s^{i-j}} = MHA( B^{(n)}_{s^{i-j-1}} , A^{(n)}_{c^{i-j}_s} , A^{(n)}_{c^{i-j}_s} ) (4), where j = k-1, k-2, ..., 1.", "j denotes the distance between the context sentence and the current sentence.", "Context Gating.", "The information of the current source sentence is crucial in translation, while the contextual information is auxiliary.", "In order to avoid excessive utilization of contextual information, a context gating mechanism (Tu et al., 2017; Yang et al., 2017, 2019) is introduced to dynamically control the weight between context sentences and the current sentence: λ = σ( W_a A^{(n)}_s + W_b B^{(n)}_{s^{i-1}} ) (5), where σ is the logistic sigmoid function and λ is the context gate.", "Similarly, the multi-hop decoder involves a stack of 6 identical layers.", "Each of them contains five sub-layers.", "Self-Attention Layer.", "The first sub-layer utilizes multi-head self-attention to encode the information of the current target sentence t and obtains the representation A^{(n)}_t .", "Draft-Attention Layer.", "Inspired by Xia et al. 
"Similarly, the multi-hop decoder involves a stack of 6 identical layers.", "Each of them contains five sub-layers.", "Self-Attention Layer.", "The first sub-layer utilizes multi-head self-attention to encode the information of the current target sentence $t$ and obtains the representation $A^{(n)}_t$.", "Draft-Attention Layer.", "Inspired by Xia et al. (2017), we introduce the complete draft $d$ translated from the current source sentence by a sentence-level NMT model.", "This draft representation $A^{(n)}_d$ is encoded by the target-side draft encoder in Section 3.1.", "The draft attention is achieved by multi-head attention: $F^{(n)}_t = \mathrm{MHA}(A^{(n)}_t, A^{(n)}_d, A^{(n)}_d)$.", "Multi-Hop Attention Layer.", "Similar to the encoder, a multi-hop reasoning process is performed on the target-side contexts.", "The target-side drafts are generated from the corresponding source sentences by a pre-trained sentence-level NMT model.", "The first hop takes the representation $F^{(n)}_t$ as the query and the representation $A^{(n)}_{c_t^{i-k}}$ of the $k$-th previous draft as the key and value.", "The other hops are implemented analogously to Equation 4, and a context gate, analogous to Equation 5, is used to regulate the weight of the target-side contextual information.", "Encoder-Decoder Attention Layer.", "Finally, we use an encoder-decoder attention layer to integrate the output of the multi-hop encoder $\mathrm{Enc}_s$ with the current target representation $G^{(n)}_t$.", "To evaluate the effectiveness of the proposed MHT, we conduct experiments on four widely used document translation tasks: the TED Talk task (Cettolo et al., 2012) with two language pairs, Opensubtitles (Maruf et al., 2018) and Europarl7 (Maruf et al., 2018).", "All datasets are tokenized and truecased with the Moses toolkit (Koehn et al., 2007), and split into sub-word units with a joint BPE model (Sennrich et al., 2016) with 30K merge operations.", "The datasets are described as follows: TED Talk (English-German): We use the dataset of the IWSLT 2017 MT English-German track for training, which contains transcripts of TED talks aligned at the sentence level.", "dev2010 is used for development and tst2016-2017 for evaluation.", "Statistically, there are 0.21M sentences in the training set, 9K sentences in the development set, and 2.3K sentences in the test set.", "TED Talk (Chinese-English): We use the corpus consisting of 0.2M sentence pairs extracted from the IWSLT 2014 and 2015 Chinese-English tracks for training.", "dev2010 involves 0.8K sentences for development and tst2010-2013 contains 5.5K sentences for test.", "Opensubtitles (English-Russian): We make use of the parallel corpus from Maruf et al. (2018).", "The training set includes 0.3M sentence pairs.", "There are 6K sentence pairs in the development set, and 9K in the test set.", "Europarl7 (English-German): The raw Europarl v7 corpus (Koehn, 2005) contains SPEAKER and LANGUAGE tags, where the latter indicates the language the speaker was actually using.", "We process the raw data and extract the parallel corpus in the same way as Maruf et al. (2018).", "Table 1: BLEU scores on TED Talk, Opensubtitles and Europarl7 tasks.
Method               | TED En-De | TED Zh-En | Opensubtitles En-Ru | Europarl7 En-De | Params | AVG
Transformer ⋆        |   24.55   |   18.36   |        19.46        |      30.18      |  50M   | 23.14
CA-Transformer       |   25.04   |   18.77   |        20.21        |      30.67      |  72M   | 23.67
(Maruf et al., 2018) |     -     |  19.13 ◇  |       26.49 ◇       |        -        |   -    |   -
CA-HAN               |   25.70   |   18.79   |        20.08        |      26.61      |  70M   | 22.79
(Maruf et al., 2019) |  24.62 ◇  |     -     |          -          |        -        | 54M ◇  |   -
CADec                |   26.08   |   19.01   |        19.46        |      30.36      |  91M   | 23.98
MHT (Ours)           |   26.22   |   19.52   |        20.46        |      31.25      |  80M   | 24.36",
(2018).", "0.1M sentence pairs are used for training, 3K sentence pairs for development, and 5K sentence pairs for evaluation.", "context-agnostic NMT model (Vaswani et al., 2017).", "CA-Transformer : A context-aware transformer model (CA-Transformer) with an additional context encoder to incorporate document contextual information into model (Zhang et al., 2018).", "CA-HAN : A context-aware hierarchical attention networks (CA-HAN) which integrate document contextual information from both source side and target side (Miculicich et al., 2018).", "CADec : A two-pass machine translation model (Context-Aware Decoder, CADec) which first produces a draft translation of the current sentence, then corrects it using context (Voita et al., 2019).", "Our model is implemented on the open-source toolkit Thumt (Zhang et al., 2017).", "Adam optimizer (Kingma and Ba, 2014) is applied with an initial learning rate 0.1.", "The size of hidden dimension and feed-forward layer are set to 512 and 2048 respectively.", "Encoder and decoder have 6 layers with 8 heads multi-head attention.", "Dropout is 0.1 and batch size is set to 4096.", "Beam size is 4 for inference.", "Translation quality is evaluated by the traditional metric BLEU (Papineni et al., 2002) on tokenized text.", "Context window size is set to 3, consistent with the experiments in Section 5.2.", "To initialize the source-side sentence encoder in Section 3.1, a sentence-level NMT model is trained from source language to target language using the corresponding datasets without additional corpus.", "The encoder of this trained model is used to initialize the source-side context encoder.", "Also, we utilize the trained model to translate the source-side sentences and obtain the target-side drafts.", "Similarly, we train a sentence-level model from target language to source language to initialize the target-side encoders in Section 3.1.", "In order to reduce the computational overhead, we share the parameters among the sentence encoders on the same side.", "The settings of these two sentence-level NMT models are consistent with our baseline Transformer model.", "Table 1 summarizes the BLEU scores of different systems on four tasks.", "As seen, our baseline and re-implemented existing methods outperform the reported results on the same data, which we believe makes the evaluation convincing.", "Clearly, our model MHT significantly improves translation quality in terms of BLEU on these tasks, and obtains the best average results that gain 0.38, 0.69 and 1.57 BLEU points over CADec, CA-Transformer and CA-HAN respectively.", "These results demonstrate the universality and effectiveness of the proposed approach.", "Moreover, without in-0 1 2 3 4 window size 25.4 25.6 25.8 26.0 26.2 BLEUTED (en-de) 0 1 2 3 4 window size 18.8 18.9 19.0 19.1 19.2 19.3 19.4 19.5 BLEUTED (zh-en) Figure 2: The performance of the MHT model on TED (En-De) and TED (Zh-En) translation task using different context window sizes.", "troducing large-scale pre-trained language models, our translation systems achieve new state-of-the-art translation qualities across three examined translation tasks, which are TED (En-De), Opensubtitles (En-Ru) and Europarl7 (En-De).", "Overall, our experiments indicate the following two points: 1) explicitly modeling underlying reasoning semantics by a multi-hop mechanism indeed benefits neural machine translation, and 2) the improvements of our model are not from enlarging the network.", "In this section, to gain further insight, we explore the 
"In this section, to gain further insight, we explore the effectiveness of several factors of our model, including 1) multi-hop attention; 2) context window size; 3) reasoning direction; 4) sides for introducing context; and 5) target contexts.", "Moreover, we show a qualitative analysis on discourse phenomena to better understand the advantage of our model.", "To further investigate the effect of multi-hop reasoning, we compare our multi-hop attention with two baseline context modeling methods, Concat and Hierarchical Attention.", "Table 2 shows the results of the three different context modeling modules on TED, which use the same inputs containing the original training data and drafts.", "Concat denotes the MHT model simply using the concatenation of the three context sentence representations to get the final context representation.", "Hierarchical Attention denotes the MHT model with a hierarchical attention to model context, which consists of a sentence-level attention and a token-level attention to capture information from the appropriate context sentences and tokens, as in Miculicich et al. (2018).", "As depicted in Table 2, we replace multi-hop attention with these two baseline modules for experiments.", "Hierarchical Attention slightly outperforms Concat, while multi-hop attention leads both of them by a much larger margin.", "The results demonstrate that multi-hop attention is capable of providing a more fine-grained representation of the reasoning state over context, and consequently capturing context semantic information more accurately than the two baseline methods.", "As shown in Figure 2, we conduct experiments with different context window sizes to explore their effect.", "When the window size is less than 4, the model obtains more information from contexts and achieves better performance as the window size gets larger.", "However, when the window size is increased to 4, we find that the performance does not improve further, but decreases slightly.", "This phenomenon shows that contexts far from the target sentence may be less relevant and cause noise (Kim et al., 2019).", "Therefore, we choose window size 3 for our model MHT.", "In Table 3, we conduct an ablation study to investigate the effect of the reasoning direction on the MHT model.", "L2R denotes the MHT model with the natural reasoning direction, which encodes context sentences from left to right by multi-hop layers, while R2L indicates the MHT model encoding context sentences in the opposite direction.", "Table 3: The performance of the MHT model on TED (En-De) and TED (Zh-En) using different reasoning directions.
Direction | TED (En-De) | TED (Zh-En)
L2R       |    26.22    |    19.52
R2L       |    25.80    |    19.18", "We observe that integrating reasoning processes via multi-hop attention in either direction can improve on the Transformer due to the incorporation of extra context information.", "Besides, the MHT model reasoning with the natural sentence order outperforms the MHT model with the opposite reasoning direction.", "This is within our expectation, since L2R reasoning is consistent with the reading and reasoning direction of human beings.", "As shown in Table 4, we conduct an ablation study to explore how the MHT model benefits from contexts on the source side and the target side.", "None indicates the MHT model without a multi-hop attention module on either side, using only the draft of the current sentence.", "Source, Target and Source & Target indicate the MHT models with the multi-hop attention module introducing context on only the source side, only the target side and both sides, respectively.",
"We find that integrating source-side context or target-side context into the model brings improvements over None, which ignores context on both sides.", "Besides, MHT with context on both sides achieves the best performance, indicating that the beneficial context information captured by multi-hop attention on the source side and on the target side is divergent and complementary.", "In training, the context draft sentences can be either the drafts from a pre-trained MT system or the context references, while only the generated drafts are accessible during inference.", "Table 5 shows the BLEU scores of the MHT models using generated drafts and context references during training.", "We can see that the MHT model using drafts as contexts outperforms the MHT model directly using target-side context references, possibly because using context references faces the problem of exposure bias, and the drafts generated from the pre-trained translation system can bridge the gap between training and testing data.", "We present the translated results from the baselines and our model in Table 6 to explore how multi-hop reasoning mitigates the impact of common discourse phenomena in the translation process.", "According to Case 1 in Table 6, the noun hum in the source sentence is translated to der Summen by Transformer and CA-Transformer, which fail to understand the correct coreference.", "In German, der is a masculine article.", "The correct article is the neuter article das, because the hum is from a machine.", "MHT can perform a reasoning process to leverage the context information effectively and figure out that the hum is from an engine according to Context 2.", "Case 2 indicates that MHT can understand the exact meaning of a polysemous word, benefiting from the reasoning process among the contexts.", "In this case, Transformer, CA-Transformer and CA-HAN all translate the noun show into zeigt, which means display.", "The translation is clearly wrong in this context.", "The correct meaning of show is TV shows, like Breaking Bad, according to Context 1.", "In contrast, our model can take previous contexts into consideration and reason out the exact meaning of the polysemous word.", "In this paper, we propose a novel document-level translation model called Multi-Hop Transformer, inspired by human reasoning behavior, to explicitly model the human-like draft-editing and reasoning process.", "Experimental results on four widely used tasks show that our model can achieve better performance than both context-agnostic and context-aware strong baselines.", "Furthermore, the qualitative analysis shows that the multi-hop reasoning mechanism is capable of resolving some discourse phenomena by capturing context semantics more accurately.", "This work was supported by the National Key R&D Program of China (2018YFB1403202).", "We thank the anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other", "other" ]
[ "The neural attention model has achieved great success in data-to-text generation tasks.", "Though usually excelling at producing fluent text, it suffers from the problem of information missing, repetition and hallucination.", "Due to the black-box nature of the neural attention architecture, avoiding these problems in a systematic way is non-trivial.", "To address this concern, we propose to explicitly segment target text into fragment units and align them with their data correspondences.", "The segmentation and correspondence are jointly learned as latent variables without any human annotations.", "We further impose a soft statistical constraint to regularize the segmental granularity.", "The resulting architecture maintains the same expressive power as neural attention models, while being able to generate fully interpretable outputs with several times less computational cost.", "On both E2E and WebNLG benchmarks, we show the proposed model consistently outperforms its neural attention counterparts.", "Data-to-text generation aims at automatically producing natural language descriptions of structured database (Reiter and Dale, 1997).", "Traditional statistical methods usually tackle this problem by breaking the generation process into a set of local decisions that are learned separately (Belz, 2008; Angeli et al., 2010; Kim and Mooney, 2010; Oya et al., 2014).", "Recently, neural attention models con-flate all steps into a single end-to-end system and largely simplify the training process (Mei et al., 2016; Lebret et al., 2016; Shen et al., 2017; Su et al., 2018, 2019; Chang et al., 2020).", "However, the black-box conflation also renders the generation uninterpretable and hard to control (Wiseman et al., 2018; Shen et al., 2019a).", "Verifying the generation correctness in a principled way is nontrivial.", "In practice, it often suffers from the problem of information missing, repetition and hallucina-tion (Dusek et al., 2018, 2020).", "In this work, we propose to explicitly exploit the segmental structure of text.", "Specifically, we assume the target text is formed from a sequence of segments.", "Every segment is the result of a two-stage decision: (1) Select a proper data record to be described and (2) Generate corresponding text by paying attention only to the selected data record .", "This decision is repeated until all desired records have been realized.", "Figure 1 illustrates this process.", "Compared with neural attention, the proposed model has the following advantages: (1) We can monitor the corresponding data record for every segment to be generated.", "This allows us to easily control the output structure and verify its correctness 1 .", "(2) Explicitly building the correspondence between segments and data records can potentially reduce the hallucination, as noted in (Wu et al., 2018; Deng et al., 2018) that hard alignment usually outperforms soft attention.", "(3) When decoding each segment, the model pays attention only to the 1 For example, we can perform a similar constrained decoding as in Balakrishnan et al. 
"This largely reduces the memory and computational costs (footnote 2).", "To train the model, we do not rely on any human annotations for the segmentation and correspondence, but rather marginalize over all possibilities to maximize the likelihood of the target text, which can be done efficiently in polynomial time by dynamic programming.", "This is essentially similar to traditional methods of inducing segmentation and alignment with semi-Markov models (Daume III and Marcu, 2005; Liang et al., 2009).", "However, those models make strong independence assumptions and thus perform poorly as generative models (Angeli et al., 2010).", "In contrast, the transition and generation in our model condition on all previously generated text.", "By integrating an autoregressive neural network structure, our model is able to capture unbounded dependencies while still permitting tractable inference.", "The training process is stable as it does not require any sampling-based approximations.", "We further add a soft statistical constraint to control the segmentation granularity via posterior regularization (Ganchev et al., 2010).", "On both the E2E and WebNLG benchmarks, our model is able to produce significantly higher-quality outputs while being several times computationally cheaper.", "Due to its fully interpretable segmental structure, it can be easily reconciled with heuristic rules or hand-engineered constraints to control the outputs.", "Data-to-text generation is traditionally dealt with using a pipeline structure containing content planning, sentence planning and linguistic realization (Reiter and Dale, 1997).", "Each target text is split into meaningful fragments and aligned with the corresponding data records, either by hand-engineered rules (Kukich, 1983; McKeown, 1992) or by statistical induction (Liang et al., 2009; Koncel-Kedziorski et al., 2014; Qin et al., 2018).", "The segmentation and alignment are used as supervision signals to train the content and sentence planner (Barzilay and Lapata, 2005; Angeli et al., 2010).", "The linguistic realization is usually implemented by template mining from the training corpus (Kondadadi et al., 2013; Oya et al., 2014).", "Our model adopts a similar pipeline generative process, but integrates all the sub-steps into a single end-to-end trainable neural architecture.", "It can be considered as a neural extension of the PCFG system in Konstas and Lapata (2013), with a more powerful transition probability considering inter-segment dependence and a state-of-the-art attention-based language model as the linguistic realizer.", "Footnote 2: Coarse-to-fine attention (Ling and Rush, 2017; Deng et al., 2017) was proposed with the same motivation, but it resorts to reinforcement learning, which is hard to train, and performance is sacrificed for efficiency.",
"Wiseman et al. (2018) tried a similar neural generative model to induce templates.", "However, their model only captures a loose data-text correspondence and adopts a weak Markov assumption for the segment transition probability.", "Therefore, it underperforms the neural attention baseline for generation.", "Our model is also related in spirit to recent attempts at separating content planning and surface realization in neural data-to-text models (Zhao et al., 2018; Puduppully et al., 2019; Moryossef et al., 2019; Ferreira et al., 2019).", "Nonetheless, all of them resort to manual annotations or hand-engineered rules applicable only to a narrow domain.", "Our model, instead, automatically learns the optimal content planning by exploring exponentially many segmentation/correspondence possibilities.", "There have been quite a few neural alignment models applied to tasks like machine translation (Wang et al., 2018; Deng et al., 2018), character transduction (Wu et al., 2018; Shankar and Sarawagi, 2019) and summarization (Yu et al., 2016; Shen et al., 2019b).", "Unlike word-to-word alignment, we focus on learning the alignment between data records and text segments.", "Some works also integrate neural language models to jointly learn the segmentation and correspondence, e.g., phrase-based machine translation (Huang et al., 2018), speech recognition (Wang et al., 2017) and vision-grounded word segmentation (Kawakami et al., 2019).", "Data-to-text naturally fits into this scenario since each data record is normally verbalized in one continuous text segment.", "Let $X, Y$ denote a source-target pair.", "$X$ is structured data containing a set of records, and $Y$ corresponds to $y_1, y_2, \ldots, y_m$, which is a text description of $X$.", "The goal of data-to-text generation is to learn a distribution $p(Y \mid X)$ to automatically generate proper text describing the content of the data.", "The neural attention architecture handles this task with an encode-attend-decode process (Bahdanau et al., 2015).",
"The input $X$ is processed into a sequence $x_1, x_2, \ldots, x_n$, normally by flattening the data records (Wiseman et al., 2017).", "The encoder encodes each $x_i$ into a vector $h_i$.", "At each time step, the decoder attends to the encoded vectors and outputs the probability of the next token by $p(y_t \mid y_{1:t-1}, A_t)$.", "$A_t$ is a weighted average of the source vectors: $A_t = \sum_i \alpha_{t,i} h_i$, $\alpha_{t,i} = \frac{e^{f(h_i, d_t)}}{\sum_j e^{f(h_j, d_t)}}$ (1), where $d_t$ is the hidden state of the decoder at time step $t$.", "$f$ is a score function to compute the similarity between $h_i$ and $d_t$ (Luong et al., 2015).", "Suppose the input data $X$ contains a set of records $r_1, r_2, \ldots, r_K$.", "Our assumption is that the target text $y_{1:m}$ can be segmented into a sequence of fragments.", "Each fragment corresponds to one data record.", "As the ground-truth segmentation and correspondence are not available, we need to enumerate over all possibilities to compute the likelihood of $y_{1:m}$.", "Denote by $S_y$ the set containing all valid segmentations of $y_{1:m}$.", "For any valid segmentation $s_{1:\tau} \in S_y$, $\pi(s_{1:\tau}) = y_{1:m}$, where $\pi$ means concatenation and $\tau$ is the number of segments.", "For example, let $m = 5$ and $\tau = 3$.", "One possible segmentation would be $s_{1:\tau} = \{\{y_1, y_2, \$\}, \{y_3, \$\}, \{y_4, y_5, \$\}\}$.", "$\$$ is the end-of-segment symbol and is removed when applying the $\pi$ operator.", "We further define $c(\cdot)$ to be the corresponding data record(s) of a segment.", "The likelihood of each text is then computed by enumerating over all possibilities of $s_{1:\tau}$ and $c(s_{1:\tau})$: $p(y_{1:m} \mid X) = \sum_{s_{1:\tau} \in S_y} p(s_{1:\tau} \mid X) = \sum_{s_{1:\tau} \in S_y} \prod_{o=1}^{\tau} \sum_{c(s_o)=r_1}^{r_K} p(s_o \mid \pi(s_{<o}), c(s_o))\; p(c(s_o) \mid \pi(s_{<o}), c(s_{<o}))$ (2).", "Every segment is generated by first selecting the data record based on the transition probability $p(c(s_o) \mid \pi(s_{<o}), c(s_{<o}))$, then generating tokens based on the word generation probability $p(s_o \mid \pi(s_{<o}), c(s_o))$.", "Figure 2 illustrates the generation process of our model.",
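As a concrete reading of the segmentation space $S_y$, the following small self-contained Python sketch (function name ours) enumerates every way of splitting a token sequence into contiguous non-empty segments; for $m = 5$ it yields $2^{m-1} = 16$ segmentations, including the example above. The exponential size of this space is exactly what motivates the dynamic program described later.

```python
from itertools import combinations

def enumerate_segmentations(tokens):
    """All ways to split tokens into contiguous non-empty segments (the set S_y)."""
    n = len(tokens)
    segmentations = []
    for r in range(n):  # choose r cut positions between adjacent tokens
        for cuts in combinations(range(1, n), r):
            bounds = [0, *cuts, n]
            segmentations.append(
                [tokens[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)])
    return segmentations

# enumerate_segmentations(["y1", "y2", "y3", "y4", "y5"]) contains, among others,
# [["y1", "y2"], ["y3"], ["y4", "y5"]], matching the example in the text.
```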
"Figure 2: Generation process of our approach.", "Generation Probability: We base the generation probability on the same decoder as in neural attention models.", "The only difference is that the model can only pay attention to its corresponding data record.", "The attention scores of other records are masked out when decoding $s_o$: $\alpha_{t,i} = \frac{e^{f(h_i, d_t)}\, \mathbb{1}(x_i \in c(s_o))}{\sum_j e^{f(h_j, d_t)}\, \mathbb{1}(x_j \in c(s_o))}$, where $\mathbb{1}$ is the indicator function.", "This forces the model to learn proper correspondences and enhances the connection between each segment and the data record it describes.", "Following common practice, we define the output probability with the pointer generator (See et al., 2017; Wiseman et al., 2017): $p_{gen} = \sigma(\mathrm{MLP}_g([d_t \oplus A_t]))$, $p_{vocab} = \mathrm{softmax}(W_1 d_t + W_2 A_t)$, $p(y_t \mid y_{<t}) = p_{gen}\, p_{vocab}(y_t) + (1 - p_{gen}) \sum_{i: y_t = x_i} \alpha_{t,i}$, where $d_t$ is the decoder's hidden state at time step $t$.", "$\oplus$ denotes vector concatenation.", "$A_t$ is the context vector.", "MLP indicates a multi-layer perceptron, and $\sigma$ normalizes the score to $(0, 1)$.", "$W_1$ and $W_2$ are trainable matrices.", "$p_{gen}$ is the probability that the word is generated from the fixed vocabulary distribution $p_{vocab}$ instead of being copied.", "The final decoding probability $p(y_t)$ is marginalized over $p_{vocab}$ and the copy distribution.", "The generation probability of $s_o$ factorizes over the words within it and the end-of-segment token: $p(s_o \mid \pi(s_{<o}), c(s_o)) = p(\$ \mid y_{1:t}) \prod_{y_t \in s_o} p(y_t \mid y_{<t})$.",
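A hedged PyTorch sketch of one pointer-generator decoding step as described above; `W1`, `W2` and `w_g` stand for the trainable projections (as `nn.Linear` modules), and all names and tensor layouts are our assumptions.

```python
import torch
import torch.nn.functional as F

def pointer_generator_step(d_t, a_t, attn, src_ids, vocab_size, W1, W2, w_g):
    """One decoding step of the copy mechanism (See et al., 2017).
    d_t: (batch, d) decoder state; a_t: (batch, d) context vector A_t;
    attn: (batch, src_len) attention weights (already masked to the selected record);
    src_ids: (batch, src_len) int64 vocabulary ids of the source tokens."""
    p_gen = torch.sigmoid(w_g(torch.cat([d_t, a_t], dim=-1)))   # (batch, 1)
    p_vocab = F.softmax(W1(d_t) + W2(a_t), dim=-1)              # (batch, vocab)
    p_copy = torch.zeros(d_t.size(0), vocab_size, device=d_t.device)
    p_copy.scatter_add_(1, src_ids, attn)     # accumulate attention mass per token id
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```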
"Transition Probability: We make a mild assumption that $c(s_o)$ is dependent only on $c(s_{o-1})$ and $\pi(s_{1:o-1})$ but irrelevant of $c(s_{<o-1})$, which is a common practice when modelling alignment (Och et al., 1999; Yu et al., 2016; Shankar and Sarawagi, 2019).", "The transition probability is defined as: $p(c(s_o) = r_i \mid c(s_{<o}), \pi(s_{<o})) \approx p(c(s_o) = r_i \mid c(s_{o-1}), \pi(s_{<o})) \propto f(r_i)^{\top} [M^{\top} A_{s_{o-1}} + N^{\top} d_{s_{o-1}}]$ (3).", "A softmax layer is finally applied to the above equation to normalize it into a proper probability distribution.", "$f(r_i)$ is a representation of $r_i$, defined as a max pooling over all the word embeddings contained in $r_i$.", "$A_{s_{o-1}}$ is the attention context vector when decoding the last token in $s_{o-1}$, defined as in Equation 1.", "It carries important information from $c(s_{o-1})$ to help predict $c(s_o)$.", "$d_{s_{o-1}}$ is the hidden state of the neural decoder after going through all history tokens $\pi(s_{1:o-1})$.", "$M, N$ are trainable matrices that project $A_{s_{o-1}}$ and $d_{s_{o-1}}$ into the same dimension as $f(r_i)$.", "We further add one constraint to prohibit self-transition, which can be easily done by zeroing out the transition probability in Equation 3 when $c(s_o) = c(s_{o-1})$.", "This forces the model to group together text describing the same data record.", "Since Equation 3 conditions on all previously generated text, it is able to capture more complex dependencies than semi-Markov models (Liang et al., 2009; Wiseman et al., 2018).", "Null Record: In our task, we find some frequent phrases, e.g., it is, and, tend to be wrongly aligned with random records, similar to the garbage collection issue in statistical alignment (Brown et al., 1993).", "This hurts the model interpretability.", "Therefore, we introduce an additional null record $r_0$ to attract these non-content phrases.", "The context vector when aligned to $r_0$ is a zero vector, so that the decoder will decode words based solely on the language model without relying on the input data.", "Training: Equation 2 contains exponentially many combinations to enumerate over.", "Here we show how to efficiently compute the likelihood with the forward algorithm of dynamic programming (Rabiner, 1989).", "We define the forward variable $\alpha(i, j) = p(y_{1:i}, c(y_i) = j \mid X)$, with the base case $\alpha(1, j) = p(y_1 \mid c(y_1) = j)$.", "The recursion goes as follows for $i = 1, 2, \ldots, m-1$: $\alpha(i+1, j) = \sum_{p=1}^{i} \sum_{q=r_0}^{r_K} \alpha(p, q)\; p(c(y_{p+1}) = j \mid c(y_p) = q, y_{1:p})\; p(y_{p+1:i+1} \mid c(y_{p+1:i+1}) = j, y_{1:p})\; p(\$ \mid c(y_{p+1:i+1}) = j, y_{1:i+1})$ (4).", "The final likelihood of the target text can be computed as $p(y_{1:m} \mid X) = \sum_{j=r_0}^{r_K} \alpha(m, j)$.", "As the forward algorithm is fully differentiable, we maximize the log-likelihood of the target text by backpropagating through the dynamic programming.", "The process is essentially equivalent to the generalized EM algorithm (Eisner, 2016).", "By means of modern automatic differentiation tools, we avoid the necessity of calculating the posterior distribution manually (Kim et al., 2018).", "To speed up training, we set a threshold $L$ on the maximum length of a segment, as in Liang et al. (2009); Wiseman et al. (2018).", "This changes the complexity of Equation 4 to a constant $O(LK)$ instead of scaling linearly with the length of the target text.",
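The forward recursion of Equation 4 can be sketched as follows in PyTorch, assuming the generation and transition scores have already been computed and cached in log space as described. The tensor layout and the virtual start state are our assumptions, and the maximum segment length $L$ is omitted for clarity.

```python
import torch

def marginal_log_likelihood(log_seg, log_trans):
    """Forward algorithm marginalizing over all segmentations/alignments (Eq. 4).
    A minimal sketch over precomputed, cached scores:
      log_seg[p, i, j]  : log prob of segment y_{p+1:i} (incl. the '$' token) under
                          record j, conditioned on y_{1:p}; shape (m+1, m+1, K)
      log_trans[p, q, j]: log prob of moving from record q to record j after
                          emitting y_{1:p}; row q = K is a virtual start state;
                          shape (m+1, K+1, K)
    Returns log p(y_{1:m} | X)."""
    m = log_seg.shape[0] - 1
    K = log_seg.shape[2]
    # alpha[i][j]: log prob of y_{1:i} whose last segment is aligned to record j
    alpha = [torch.full((K,), float("-inf"))] * (m + 1)
    for i in range(1, m + 1):
        terms = []
        for p in range(i):
            prev = (log_trans[0, K, :] if p == 0 else
                    torch.logsumexp(alpha[p].unsqueeze(1) + log_trans[p, :K, :], dim=0))
            terms.append(prev + log_seg[p, i, :])
        alpha[i] = torch.logsumexp(torch.stack(terms), dim=0)
    return torch.logsumexp(alpha[m], dim=0)
```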
"Moreover, as pointed out in Wang et al. (2017), the computation for the longest segment can be reused for shorter segments.", "We therefore first compute the generation and transition probabilities for the whole sequence in one pass.", "The intermediate results are then cached to efficiently proceed with the forward algorithm without any re-computation.", "One last issue is numerical precision; it is important to use log-space binary operations to avoid underflow (Kim et al., 2017).", "Segmentation Granularity: There are several valid segmentations for a given text.", "As shown in Table 1, when the segmentation (Example 1) is too fine-grained, controlling the output information becomes difficult because the content of one data record is realized in separate pieces (footnote 3).", "When it is too coarse, the alignment might become less accurate (as in Example 4, where pub is wrongly merged with the previous words and aligned together with them to the Food record).", "In practice, we expect the segmentation to retain accurate alignment yet avoid being too brokenly separated.", "To control the granularity as we want, we utilize posterior regularization (Ganchev et al., 2010) to constrain the expected number of segments for each text (footnote 4), which can be calculated by going through a similar forward pass as in Equation 4 (Eisner, 2002).", "Most computation is shared without significant extra burden.", "The final loss function is: $\mathcal{L} = -\log \mathbb{E}_{S_y}\, p(s_{1:\tau} \mid X) + \max(|\mathbb{E}_{S_y}\, \tau - \mu|,\; \epsilon)$ (5).", "$\log \mathbb{E}_{S_y}\, p(s_{1:\tau} \mid X)$ is the log-likelihood of the target text after marginalizing over all valid segmentations.", "$\mathbb{E}_{S_y}\, \tau$ is the expected number of segments, and $\mu, \epsilon$ are hyperparameters.", "We use the max-margin loss to encourage $\mathbb{E}_{S_y}\, \tau$ to stay close to $\mu$ under a tolerance range of $\epsilon$.",
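A minimal sketch of the resulting training objective, assuming the marginal log-likelihood and the expected segment count have been obtained from forward passes as above; the function name and the unweighted sum of the two terms are our assumptions.

```python
import torch

def segmentation_loss(marginal_ll, expected_segments, mu, eps=1.0):
    """Eq. 5: negative marginal log-likelihood plus a max-margin penalty keeping
    the expected number of segments E[tau] within a tolerance eps of target mu.
    Inside the tolerance band the penalty is the constant eps (zero gradient),
    which realizes the 'tolerance range' described in the text."""
    penalty = torch.clamp(torch.abs(expected_segments - mu), min=eps)
    return -marginal_ll + penalty
```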
"Decoding: The segment-by-segment generation process allows us to easily constrain the output structure.", "Undesirable patterns can be rejected before the whole text is generated.", "We adopt three simple constraints for the decoder (a minimal sketch of the corresponding validity check is given at the end of this subsection):", "1. Segments must not be empty.", "2. The same data record cannot be realized more than once (except for the null record).", "3. The generation will not finish until all data records have been realized.", "Constraints 2 and 3 directly address the information repetition and missing problems.", "As segments are incrementally generated, the constraints are checked for validity.", "Note that adding the constraints hardly incurs any cost; the decoding process is still finished in one pass.", "No post-processing or reranking is needed.", "Footnote 3: A finer-grained segmentation might be useful if the focus is on modeling the detailed discourse structure instead of the information accuracy (Reed et al., 2018; Balakrishnan et al., 2019), which we leave for future work.", "Footnote 4: We can also utilize some heuristic rules to help segmentation.", "For example, we can prevent breaking syntactic elements obtained from an external parser (Yang et al., 2019) or match entity names with handcrafted rules (Chen et al., 2018).", "The interpretability of the segmental structure allows easy combination with these rules.", "We focus on a general domain-agnostic method in this paper, though heuristic rules might bring further improvement in certain cases.", "Computational Complexity: Suppose the input data has $M$ records and each record contains $N$ tokens.", "The computational complexity of neural attention models is $O(MN)$ at each decoding step, where the whole input is retrieved.", "Our model, similar to chunkwise attention (Chiu and Raffel, 2018) or coarse-to-fine attention (Ling and Rush, 2017), reduces the cost to $O(M + N)$: we select the record in $O(M)$ at the beginning of each segment and attend only to the selected record in $O(N)$ when decoding every word.", "For larger input data, our model can be significantly cheaper than neural attention models.",
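Returning to the three decoding constraints above, here is a minimal Python sketch of the per-segment validity check; the function name and the id convention for the null record are ours.

```python
def segment_is_valid(segment_tokens, record_id, realized, num_records,
                     finishing, null_record=0):
    """Validity check applied as each segment is produced (constraints 1-3).
    realized: set of non-null record ids already verbalized; finishing: True when
    the decoder proposes to end the whole text after this segment."""
    if not segment_tokens:                                   # 1. segments must not be empty
        return False
    if record_id != null_record and record_id in realized:   # 2. no record realized twice
        return False
    if finishing:
        covered = realized | ({record_id} if record_id != null_record else set())
        if len(covered) < num_records:                       # 3. finish only when all covered
            return False
    return True
```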
"Dataset: We conduct experiments on the E2E (Novikova et al., 2017b) and WebNLG (Colin et al., 2016) datasets.", "E2E is a crowd-sourced dataset containing 50k instances in the restaurant domain.", "The inputs are dialogue acts consisting of three to eight slot-value pairs.", "WebNLG contains 25k instances describing entities belonging to fifteen distinct DBpedia categories.", "The inputs are up to seven RDF triples of the form (subject, relation, object).", "Implementation Details: We use a bi-directional LSTM encoder and a uni-directional LSTM decoder for all experiments.", "Input data records are concatenated into a sequence and fed into the encoder.", "We choose a hidden size of 512 for the encoder/decoder on E2E and 256 on WebNLG.", "The word embeddings have size 100 for both datasets and are initialized with the pre-trained GloVe embeddings (Pennington et al., 2014) (footnote 5: nlp.stanford.edu/data/glove.6B.zip).", "We use a dropout rate of 0.3 for both the encoder and decoder.", "Models are trained using the Adam optimizer (Kingma and Ba, 2014) with batch size 64.", "The learning rate is initialized to 0.01 and decays by an order of magnitude once the validation loss increases.", "All hyperparameters are chosen with grid search according to the validation loss.", "Models are implemented based on the open-source library PyTorch (Paszke et al., 2019).", "We set the hyperparameters in Eq. 5 as $\mu = K$ and $\epsilon = 1$ (recall that $K$ is the number of records in the input data).", "The intuition is that every text is expected to realize the content of all $K$ input records.", "It is natural to assume every text can be roughly segmented into $K$ fragments, each corresponding to one data record.", "A deviation of $K \pm 1$ is allowed for noisy data or text with complex structures.", "Metrics: We measure the quality of system outputs from three perspectives: (1) word-level overlap with human references, which is a commonly used metric for text generation.", "We report the scores of BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), Meteor (Banerjee and Lavie, 2005) and CIDEr (Vedantam et al., 2015).", "(2) Human evaluation.", "Word-level overlap scores usually correlate rather poorly with human judgements on fluency and information accuracy (Reiter and Belz, 2009; Novikova et al., 2017a).", "Therefore, we passed the input data and generated text to human annotators to judge whether the text is grammatically fluent (scale 1-5, as in Belz and Reiter (2006)), contains wrong facts inconsistent with the input data, repeats information, or misses information.", "We report the averaged score for fluency and absolute counts for the others.", "The human evaluation is conducted on a subset sampled from the test data.", "To ensure the subset covers inputs with all possible numbers of records ($K \in [3, 8]$ for E2E and $K \in [1, 7]$ for WebNLG), we sample 20 instances for every possible $K$.", "Finally, we obtain 120 test cases for E2E and 140 for WebNLG (footnote 6).", "(3) Diversity of outputs.", "Diversity is an important concern for many real-life applications.", "We measure it by the number of unique unigrams and trigrams over system outputs, as done in Dusek et al. (2020) (a small sketch of this measure follows at the end of this subsection).", "In this section, we first show the effects of the granularity regularization we proposed, then compare model performance on the two datasets and analyze the performance differences.", "Our model is compared against the neural attention-based pointer generator (PG), which does not explicitly learn the segmentation and correspondence.", "To show the effects of the constrained decoding described in the Decoding paragraph above, we run our model with only the first constraint to prevent empty segments (denoted by Ours in experiments), with the first two constraints to prevent repetition (denoted by Ours (+R)), and with all constraints to further reduce information missing (denoted by Ours (+RM)).", "Footnote 6: The original human evaluation subset of WebNLG is randomly sampled and most of its inputs contain less than 3 records, so we opt for a new sample for a thorough evaluation.",
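A small sketch of the diversity measure described under (3), counting unique n-grams over whitespace-tokenized outputs; the tokenization choice and function name are our assumptions.

```python
def distinct_n(texts, n):
    """Diversity as the number of unique n-grams over all system outputs
    (the Dist-1 / Dist-3 columns reported in the results tables)."""
    ngrams = set()
    for text in texts:
        toks = text.split()
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams)
```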
(2018).", "SLUG uses a heuristic slot aligner based on a set of handcrafted rules and combine a complex pipeline of data augmentation, selection, model ensemble and reranker, while our model has a simple end-to-end learning paradigm with no special delexical-izing, training or decoding tricks.", "Table 2 reports the evaluated results.", "Seq2seq-based models are more diverse than rule-based models at the cost of higher chances of making errors.", "As rule-based systems are by design always faithful to the in-Metrics Word Overlap Human Evaluation Diversity Models BLEU R-L Meteor CIDEr Fluent Wrong Repeat Miss Dist-1 Dist-3 SLUG 0.662 0.677 0.445 2.262 4.94 5 0 17 74 507 DANGNT 0.599 0.663 0.435 2.078 4.97 0 0 21 61 301 TUDA 0.566 0.661 0.453 1.821 4.98 0 0 10 57 143 N TEMP 0.598 0.650 0.388 1.950 4.84 19 3 35 119 795 PG 0.638 0.677 0.449 2.123 4.91 15 1 29 133 822 OURS 0.647 0.683 0.453 2.222 4.96 0 1 15 127 870 OURS (+R) 0.645 0.681 0.452 2.218 4.95 0 0 13 133 881 OURS (+RM) 0.651 0.682 0.455 2.241 4.95 0 0 3 135 911 Table 2: Automatic and human evaluation results on E2E dataset.", "put information, they made zero wrong facts in their outputs.", "Most models do not have the fact repetition issue because of the relatively simple patterns in the E2E dataset.", "therefore, adding the (+R) constraint only improves the performance mi-norly.", "The (+RM) constraint reduces the number of information missing to 3 without hurting the fluency.", "All the 3 missing cases are because of the wrong alignment between the period and one data record, which can be easily fixed by defining a simple rule.", "We put the error analysis in appendix A. N Temp performs worst among all seq2seq-based systems because of the restrictions we mentioned in", "2. As also noted by the author, it trades-off the generation quality for interpretability and controllability.", "In contrast, our model, despite relying on no heuristics or complex pipelines, made zero wrong facts with the lowest information missing rate, even surpassing rule-based models .", "It also maintains interpretable and controllable without sacrificing the generation quality.", "results from MELBOURNE , a seq2seq-based system achieving highest scores on automatic metrics in the WebNLG challenge and UPF-FORGE , a classic grammar-based system that wins in the human evaluation WebNLG contains significantly more distinct types of attributes than E2E, so the chance of making errors or repetitions increases greatly.", "Nevertheless, our model still performs on-par on automatic metrics with superior information adequacy and output diversity .", "The (+R) decoding constraint becomes important since the outputs in WebNLG are much longer than those in E2E, neural network models have problems tracking the history generation beyond certain range.", "Models might repeat facts that have been already generated long back before.", "The (+R) constraint effectively reduces the repetition cases from 19 to", "2. 
These 2 cases are intra-segment repetitions and failed to be detected since our model can only track inter-segment constraints (examples are in appendix A).", "The (+RM) constraint brings down the information missing cases to 5 with slightly more wrong and repeated facts compared with (+R).", "Forcing models Egg Harbor Township, New Jersey isPartOf New Jersey Atlantic City International Airport Location Identifier KACY ICAO Atlantic City International Airport location Egg Harbor Township, New Jersey Egg Harbor Township, New Jersey country United States Egg Harbor Township, New Jersey isPartOf Atlantic County, New Jersey PG Atlantic City International Airport is located in Egg Harbor Township , New Jersey , United States .", "Discussions In summary, our models generates most diverse outputs, achieves similar or better performances in word-overlap automatic metrics while significantly reduces the information hallucination, repetition and missing problems .", "An example of hallucination is shown in Table", "4. The standard PG model hallucinated the contents of low-priced, in the city center and delivers take-away.", "The visualized attention maps reveal that it failed to attend properly when decoding the word low.", "The decoding is driven mostly by language models instead of the contents of input data.", "In contrast, as we explicitly align each segment to one slot, the attention distribution of our model is concentrated on one single slot rather than averaged over the whole input , the chance of hallucinating is therefore largely reduced.", "Figure 4 shows some example generations from WebNLG.", "Without adding the decoding constraints, PG and our model both suffer from the problem of information repetition and missing.", "However, the interpretability of our model enables us to easily avoid these issues by constraining the segment transition behavior.", "For the attention-based PG model, there exists no simple way of applying these constraints.", "We can also explicitly control the output structure similar to Wiseman et al. (2018), examples are shown in appendix B. 7 Conclusion In this work, we exploit the segmental structure in data-to-text generation.", "The proposed model significantly alleviates the information hallucination, repetition and missing problems without sacrificing the fluency and diversity.", "It is end-to-end trainable, domain-independent and allows explicit control over the structure of generated text.", "As our model is interpretable in the correspondence between segments and input records, it can be easily combined with hand-engineered heuristics or user-specific requirements to further improve the performance.", "This research was funded in part by the DFG collaborative research center SFB 1102.", "Ernie Chang is supported by SFB 248 Foundations of Perspicuous Software Systems (E2); Xiaoyu Shen is supported by IMPRS-CS fellowship.", "We sincerely thank the anonymous reviewers for their insightful comments that helped us to improve this paper." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "other", "other" ]
[ "With the increasing popularity of posting multimodal messages online, many recent studies have been carried out utilizing both textual and visual information for multi-modal sarcasm detection.", "In this paper, we investigate multimodal sarcasm detection from a novel perspective by constructing a cross-modal graph for each instance to explicitly draw the ironic relations between textual and visual modalities.", "Specifically, we first detect the objects paired with descriptions of the image modality, enabling the learning of important visual information.", "Then, the descriptions of the objects are served as a bridge to determine the importance of the association between the objects of image modality and the contextual words of text modality, so as to build a cross-modal graph for each multi-modal instance.", "Furthermore, we devise a cross-modal graph convolutional network to make sense of the incongruity relations between modalities for multi-modal sarcasm detection.", "Extensive experimental results and in-depth analysis show that our model achieves state-of-the-art performance in multi-modal sarcasm detection 1 .", "Sarcasm is a peculiar form of sentiment expressions, allowing individuals to express contempt sentiment or intention that is converse to the authen-tic/apparent sentiment information (Gibbs, 1986; Dews and Winner, 1995; Gibbs, 2007).", "As such, accurately detecting satirical/ironic expression could The first two authors contribute equally to this work.", "potentially improve the performance of sentiment analysis and opinion mining (Pang and Lee, 2008; Kumar Jena et al., 2020; Pan et al., 2020).", "In today's fast growing social media platforms, it is common to post multi-modal messages.", "Therefore, in addition to developing sarcasm detection models for textual data (Riloff et al., 2013; Joshi et al., 2015), it is increasingly popular to explore sarcasm detection in multi-modal data such as text and images (Schifanella et al., 2016; Cai et al., 2019).", "Dealing with multimodal data requires an understanding of the information presented in different modalities.", "As the sarcastic example shown in Figure 1", "(a), text-only approaches may erroneously identify it as a positive sentiment expression due to the phrase wonderful weather .", "This post however contains a sarcastic expression with negative sentiment, because it is accompanied by an image with thunderstorm clouds .", "The key of effective multi-modal sarcasm detection is to accurately extract the incongruent sentiment cues from 1767 different modalities, allowing the detection of the true sentiment conveyed in the message.", "To perform multi-modal sarcasm detection on data composed of text and image, several related research efforts attempt to concatenate the textual and visual features to fuse sarcastic information (Schi-fanella et al., 2016), employ attention mechanism to implicitly fuse the features of different modalities based on external knowledge (Cai et al., 2019; Xu et al., 2020; Pan et al., 2020), or build interactive graphs to model the relations of different modalities (Liang et al., 2021a).", "Despite promising progress made by existing models, they still suffer from the following limitations: 1) Simply considering the whole image does not produce good results, mostly due to the intricate visual information presented in an image; not to mention that only particular visual patches are related to the text.", "As in the examples shown in Figure 1, the correct results can be easily obtained by only 
"As in the examples shown in Figure 1, the correct results can be easily obtained by only tracking the visual information in the bounding boxes.", "Therefore, discriminating the key visual objects from the irrelevant ones could lead to improved learning of visual information.", "2) Crucial visual information that relates to the sarcastic cues of the text modality may be scattered across an image (Figure 1 (b)).", "As such, it is essential to focus on drawing the intricate sentiment connections between the text and image modalities, allowing a good exploitation of the contradictory sentiment information between modalities for learning sarcastic clues.", "To this end, we propose a novel cross-modal graph convolutional network (CMGCN), constructing a cross-modal graph for each instance in which the important visual information and the related textual tokens are explicitly linked.", "This allows for the extraction of incongruous implications between the two modalities in sarcasm detection.", "Concretely, instead of trying to produce a caption of the whole image, we first detect the objects of the image to capture the important visual regions and the corresponding attribute-object pairs via the approach proposed by Anderson et al. (2018).", "Then, we explore a novel solution to assign weights to the edges of the cross-modal graph by computing the word similarities between the object descriptors of the attribute-object pairs and the textual words based on WordNet (Miller, 1992).", "Further, to introduce multi-modal sentiment relations into the cross-modal graphs, inspired by (Lou et al., 2021), we devise a modulating factor of sentiment relation for each edge by retrieving the affective weights of the attribute descriptors (usually adjectives with affective information) and the textual words from an external affective knowledge source (SenticNet (Cambria et al., 2020)).", "As such, the modulating factors can be adopted to refine the edge weights of word similarities, allowing the capture of sentiment incongruities between the cross-modal nodes in the graph.", "Further, in light of the cross-modal graphs, we deploy a GCN architecture to make sense of the incongruous relations across the modalities for multi-modal sarcasm detection.", "The main contributions of our work are summarized as follows: To the best of our knowledge, we are the first to explore the use of a graph model based on auxiliary object detection for modeling the contradictory sentiments between key textual and visual information in multi-modal sarcasm detection.", "Using the attribute-object pairs of the image objects as a bridge, a novel approach of constructing cross-modal graphs is developed to explicitly link the two modalities by edges with varying degrees of importance.", "A series of experiments on a publicly available multi-modal sarcasm detection benchmark dataset shows that our proposed method achieves state-of-the-art performance.", "Previous work on sarcasm detection has mainly been applied to textual utterances (Zhang et al., 2016; Tay et al., 2018; Babanejad et al., 2020).", "Different from text-based sarcasm detection, multi-modal sarcasm detection aims to identify the sarcastic expression among different modalities (Schifanella et al., 2016; Castro et al., 2019).", "Schifanella et al. (2016) first tackled the multi-modal sarcasm detection task with text and image modalities using manually designed features.", "Cai et al. (2019) created a new dataset and proposed a hierarchical fusion model for multi-modal sarcasm detection.",
"Xu et al. (2020) explored a decomposition and relation network to model both cross-modality contrast and semantic association in sarcasm detection.", "Pan et al. (2020) proposed inter-modality attention and co-attention to learn the contradiction of sarcasm.", "For the graph-based methods, Liang et al. (2021a) deployed a heterogeneous graph structure to learn the sarcastic features from both intra- and inter-modality perspectives.", "However, this method tried to grasp the visual information of the whole image, while ignoring the sentiment expression between different modalities.", "Therefore, different from (Liang et al., 2021a), we explore a novel cross-modal GCN model based on the important visual information and sentiment cues to leverage the inconsistent implications between different modalities and thus improve the performance of multi-modal sarcasm detection.", "Models based on graph neural networks (GNN), including the graph convolutional network (GCN) (Kipf and Welling, 2017) and the graph attention network (GAT) (Velickovic et al., 2018), have achieved promising performance in many recent research studies, such as visual representation learning (Wu et al., 2019; Xie et al., 2021), text representation learning (Yao et al., 2019; Lou et al., 2021; Liang et al., 2021b, 2022), and recommendation systems (Ying et al., 2018; Tan et al., 2020).", "Further, some research studies have explored graph models to deal with multi-modal tasks, such as multi-modal sentiment detection (Yang et al., 2021), multi-modal named entity recognition (Zhang et al., 2021), cross-modal video moment retrieval (Zeng et al., 2021), multi-modal neural machine translation (Yin et al., 2020), and multi-modal sarcasm detection (Liang et al., 2021a).", "In this section, we describe our proposed Cross-Modal Graph Convolutional Network (CMGCN) model for multi-modal sarcasm detection in detail.", "As demonstrated in Figure 2, the architecture of the proposed CMGCN contains four main components: 1) Text-modality representation, which employs the pre-trained uncased BERT-base model (Devlin et al., 2019) as the text encoder to capture the hidden representation of the text modality; 2) Image-modality representation, which deploys the pre-trained Vision Transformer (ViT) (Dosovitskiy et al., 2021) as the image encoder to capture the hidden representation of the image modality with respect to each bounding box (visual region); 3) Cross-modal graph, which constructs a cross-modal graph for each multi-modal example based on an external affective knowledge source and the hidden representations of the text and image modalities; 4) Multi-modal fusion, which fuses the representations from the image and text modalities to capture the sarcastic features by means of a GCN structure and an attention mechanism.", "For text processing, given a sequence of words $s = \{w_i\}_{i=1}^{n}$, where $n$ is the length of the text $s$, we first adopt the pre-trained uncased BERT-base model (Devlin et al., 2019) to map each word $w_i$ into a $d_T$-dimensional embedding: $X_T = [x_1, x_2, \ldots, x_n] = \mathrm{BERT}([\mathrm{CLS}]\; s\; [\mathrm{SEP}])$ (1), where $X_T$ is the embedding matrix of the input text.", "Here, the representations of the tokens [CLS] and [SEP] are not utilized in constructing the cross-modal graph.", "Subsequently, to unify the dimensions of the representations between different modalities and capture the sequential relations of the context, we utilize a bidirectional LSTM (Bi-LSTM) to learn the text-modality representation of the input text: $T = \{t_1, t_2, \ldots, t_n\} = \mathrm{Bi\text{-}LSTM}(X_T)$ (2), where $t_j \in \mathbb{R}^{2 d_h}$ denotes the hidden state vector at time step $j$ of the bidirectional LSTM, and $d_h$ denotes the dimensionality of the text-modality hidden state representation.",
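A hedged PyTorch/transformers sketch of the text-modality encoder in Equations 1-2. The checkpoint name matches the uncased BERT-base model cited above, but the class name and the way special tokens are removed are our assumptions.

```python
import torch.nn as nn
from transformers import BertModel

class TextEncoder(nn.Module):
    """BERT word embeddings followed by a Bi-LSTM (Eqs. 1-2); output dim 2*d_h."""
    def __init__(self, d_h: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(input_size=768, hidden_size=d_h,
                              batch_first=True, bidirectional=True)

    def forward(self, input_ids, attention_mask):
        x = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        x = x[:, 1:-1, :]      # drop [CLS]/[SEP]; assumes unpadded single sentences
        t, _ = self.bilstm(x)  # T: (batch, n, 2*d_h)
        return t
```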
"For image processing, given an image I, we first adopt a trained toolkit proposed by Anderson et al. (2018) to derive a series of bounding boxes (objects) paired with their attribute-object pairs.", "For each visual region of a bounding box I_i ∈ R^{L_h × L_w}, following (Xu et al., 2020), we first resize it to 224 × 224, i.e., L = L_h = L_w = 224.", "Subsequently, following (Dosovitskiy et al., 2021), we reshape the region I_i ∈ R^{L × L} into a sequence of patches I_i = {p_j ∈ R^{(L/p) × (L/p)}}_{j=1}^{r}, where r = p × p is the number of patches.", "Then, we flatten and map each patch to a d_I-dimensional vector with a trainable linear projection: z_j = p_j E.", "For each sequence of image patches, a [class] token embedding z_[class] ∈ R^{d_I} is prepended to the sequence of embedded patches, and position embeddings are added to the patch embeddings to retain positional information.", "The input of each visual region I_i is represented as: Z_i = [z_[class]; z_1; z_2; ...; z_r] + E_pos (3), where Z_i ∈ R^{(r+1) × d_I} is the input matrix of the image patches and E_pos ∈ R^{(r+1) × d_I} is the position embedding matrix.", "Then, we feed the input matrix Z_i into the ViT encoder to acquire the representation h_i of visual region I_i: H_i = ViT(Z_i), h_i = H_{i,[class]} (4).", "We use the representation of the [class] token embedding to represent the visual region.", "Finally, the representation of the image I is defined as: X_I = {h_1, h_2, ..., h_m} (5), where m is the number of visual regions.", "Subsequently, we employ a trainable linear projection to map each h_i to a 2d_h-dimensional vector v_i: V = {v_1, v_2, ..., v_m} = X_I W_V (6), where W_V ∈ R^{d_I × 2d_h} is a trainable parameter.", "In this section, we describe how to construct a cross-modal graph.", "To leverage the relations between multi-modal features, we employ a graph structure to link the textual words with the associated image objects.", "Here, the nodes of the cross-modal graph are the representations of the text and image modalities.", "Many GCN-based approaches have demonstrated that the weights of the edges are crucial in graph information aggregation (Liang et al., 2021b; Yang et al., 2021; Lou et al., 2021).", "As such, constructing a cross-modal graph boils down to setting the edge weights of the graph.", "To this end, we explore a novel approach of setting the weights based on both the word similarities and the affective clues between textual words and the attribute-object pairs of the image regions, as well as the dependency tree of the text modality.", "The adjacency matrix A ∈ R^{(n+m) × (n+m)} of the cross-modal graph is defined as: A_{i,j} = 1 if D_{i,j} and i < n, j < n; A_{i,j} = α_{i,j} if i < n, j ≥ n; A_{i,j} = 0 otherwise (7); α_{i,j} = Sim(w_i, o_j) · λ^{β_{i,j}} + 1 (8); β_{i,j} = −ℓ(w_i) · ℓ(a_j) / |ℓ(w_i) · ℓ(a_j)| (9), where D_{i,j} indicates that there is a relation between w_i and w_j in the dependency tree of the sentence.", "Sim(·) represents the computation of word similarity (we employ the NLTK toolkit, http://www.nltk.org/, to compute the similarity of a word pair based on WordNet).", "We set Sim(·) = 0 if the return value is None.", "β_{i,j} is a modulating factor that refers to the sentiment relation (sentiment incongruity) between an image region and a text token.", "ℓ(w_i) ∈ [−1, 1] represents the affective weight of word w_i retrieved from SenticNet (Cambria et al., 2020).", "We set ℓ(w_i) = 0 if w_i cannot be found in SenticNet.", "|·| represents the absolute value.", "a_j and o_j respectively denote the attribute and the object of bounding box j.", "Inspired by Kipf and Welling (2017), we construct the cross-modal graph as an undirected graph, A_{i,j} = A_{j,i}, and set a self-loop for each node, A_{i,i} = 1.", "The intention of the cross-modal graph construction (Equations 7 to 9) is that: 1) As in the examples shown in Figure 1, the sarcastic information of the text modality may be expressed by multiple words, such as 'wonderful weather'.", "Therefore, we incorporate the syntax-aware relations over the dependency tree of the sentence into the cross-modal graph to advance the learning of contextual dependencies (we employ the spaCy toolkit, https://spacy.io/, to derive the dependency tree of a sentence).", "2) We devise a coefficient λ^{β_{i,j}}, which is associated with the affective weights, to modulate the influence of contrary sentiment relations.", "Here, λ > 1 is a tuned hyper-parameter to regulate the bias of inconsistent sentiment relations.", "That is, if the polarities of ℓ(w_i) and ℓ(a_j) are opposite, the value of α_{i,j} is boosted; otherwise it is shrunk.", "Especially, the greater the affective weights, the higher the confidence that the value of α_{i,j} is boosted or shrunk.", "3) We add 1 to the cross-modal edges to pay more attention to cross-modal node aggregation.",
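The edge-weight computation of Equations 7 to 9 can be sketched as follows. This assumes the NLTK WordNet corpus is available (via nltk.download('wordnet')), mocks the SenticNet lookup as a plain dict since its APIs vary, and follows our reconstruction of the garbled equations above, so the exact form of β may differ slightly from the published one.

```python
# Sketch of the cross-modal edge weights (Equations 7-9). WordNet similarity
# follows NLTK's path_similarity; the SenticNet lookup is mocked as a dict.
from nltk.corpus import wordnet as wn

def word_sim(w, o):
    """Max path similarity over all synset pairs of the two words; 0 if none."""
    sims = [a.path_similarity(b)
            for a in wn.synsets(w) for b in wn.synsets(o)]
    sims = [s for s in sims if s is not None]
    return max(sims, default=0.0)

def affective_weight(w, senticnet):
    return senticnet.get(w, 0.0)  # l(w) in [-1, 1]; 0 if absent (as in text)

def edge_weight(w_i, pair, senticnet, lam=3.0):
    attr, obj = pair  # attribute-object pair of a detected region
    prod = affective_weight(w_i, senticnet) * affective_weight(attr, senticnet)
    beta = -prod / abs(prod) if prod != 0 else 0.0  # Equation 9 (sign form)
    return word_sim(w_i, obj) * lam ** beta + 1.0   # Equation 8

# Toy usage: a text token against a region labeled ("cloudy", "sky");
# opposite polarities boost the weight via lambda ** beta.
senticnet = {"wonderful": 0.85, "cloudy": -0.5}
print(edge_weight("wonderful", ("cloudy", "sky"), senticnet))
```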
"For each instance, we explore a graph architecture to extract the crucial sarcastic clues by aggregating the correlations of nodes in the cross-modal graph.", "Concretely, we feed the adjacency matrix A of the cross-modal graph and the corresponding node representations R of each multi-modal example into a multi-layer GCN architecture to derive the graph representation.", "For each graph convolutional operation, each node in the l-th GCN layer is updated according to the hidden representations of its neighborhood, as given by the adjacency matrix of the cross-modal graph: G^l = ReLU(Â G^{l−1} W^l + b^l) (10), where Â = D^{−1/2} A D^{−1/2} is the normalized symmetric adjacency matrix.", "D is the degree matrix of A, where D_{ii} = Σ_j A_{i,j}.", "G^{l−1} is the hidden graph representation evolved from the preceding GCN layer.", "W^l ∈ R^{2d_h × 2d_h} and b^l ∈ R^{2d_h} are the trainable parameters of the l-th GCN layer.", "The node inputs of the first GCN layer are the concatenation of the text-modality and image-modality representations: G^0 = R, where R = {r_1, r_2, ..., r_{n+m}} = {t_1, ..., t_n, v_1, ..., v_m}.", "Subsequently, inspired by (Zhang et al., 2019), we employ a retrieval-based attention mechanism to capture the graph-oriented attention information from the concatenation of the text and image representations R = {r_1, r_2, ..., r_{n+m}} by means of the graph representation g derived from the final GCN layer.", "The intention is to retrieve the crucially associated cross-modal features that are explicitly connected in the cross-modal graph.", "The attention weights are computed as: γ_t = exp(ε_t) / Σ_{i=1}^{n+m} exp(ε_i), ε_t = Σ_{i∈C} r_t^⊤ g_i (11), where C denotes the set of indices of nodes that have cross-modal edges in the graph and ⊤ represents matrix transposition.", "The final sarcastic representation is defined as: f = Σ_{t=1}^{n+m} γ_t r_t (12).",
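A simplified, hypothetical PyTorch sketch of the fusion module of Equations 10 to 12 (class and function names are ours; the released implementation may differ):

```python
# Sketch of the fusion module: symmetrically normalized GCN layers over the
# cross-modal graph (Equation 10), then retrieval-based attention (11-12).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalGCN(nn.Module):
    def __init__(self, d=1024, layers=2):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(d, d) for _ in range(layers)])

    def forward(self, A, R):
        # A: (n+m, n+m) adjacency with self-loops (so degrees are >= 1);
        # R: (n+m, 2*d_h) concatenated text/image node inputs (G^0 = R).
        d_inv = A.sum(-1).pow(-0.5)
        A_hat = d_inv.unsqueeze(1) * A * d_inv.unsqueeze(0)  # D^-1/2 A D^-1/2
        g = R
        for lin in self.layers:          # Equation 10: G^l = ReLU(A_hat G W + b)
            g = F.relu(A_hat @ lin(g))
        return g

def retrieval_attention(R, g, cross_idx):
    # Equation 11: eps_t = sum_{i in C} r_t^T g_i; gamma = softmax(eps).
    eps = R @ g[cross_idx].sum(0)
    gamma = F.softmax(eps, dim=0)
    return (gamma.unsqueeze(-1) * R).sum(0)  # Equation 12: f
```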
"Then, the final sarcastic representation is fed into a fully-connected layer with a softmax function to produce a probability distribution ŷ ∈ R^{d_p} over the sarcasm decision space: ŷ = softmax(W_o f + b_o) (13), where d_p is the dimensionality of the sarcasm labels.", "We minimize the cross-entropy loss via the standard gradient descent algorithm to train the model: L = −Σ_{i=1}^{N} y_i log ŷ_i + μ ‖Θ‖_2 (14), where N is the training data size.", "y_i and ŷ_i respectively represent the ground-truth and estimated label distributions of instance i.", "Θ denotes all trainable parameters of the model, and μ represents the coefficient of the L2-regularization.", "We conduct experiments on a publicly available multi-modal sarcasm detection benchmark dataset collected by Cai et al. (2019).", "This dataset contains English tweets expressing sarcasm as positive examples and those expressing non-sarcasm as negative examples.", "Each example in the dataset consists of a text and an associated image.", "The statistics of the dataset are shown in Table 1.", "For a fair comparison, the data preprocessing follows (Cai et al., 2019).", "We set the maximum number of visual regions to 10 for the object detection results.", "That is, we select the top 10 bounding boxes with the highest scores if there are more than 10 objects.", "We utilize the pre-trained uncased BERT-base (Devlin et al., 2019) module to embed each word of the text modality as a 768-dimensional embedding and employ the pre-trained ViT (Dosovitskiy et al., 2021; https://github.com/lukemelas/PyTorch-Pretrained-ViT) to embed each visual region patch as a 768-dimensional embedding, i.e., d_T = d_I = 768.", "The resolution of each visual region patch is set to L_p = 32; correspondingly, p = 7 and r = 49 (we also tried other division resolutions, and found that the fluctuation in performance is negligible over different resolutions of image patches).", "The number of GCN layers is set to 2, which is the optimal depth in the pilot experiments.", "The dimensionality of hidden representations is set to d_h = 512.", "The coefficient μ is set to 0.00001.", "Adam is utilized as the optimizer with a learning rate of 0.00002, and the mini-batch size is 32.", "A dropout rate of 0.1 is utilized to avoid overfitting.", "We use early stopping with a patience of 5.", "We set λ = 3 to compute the modulating factor of incongruous multi-modal sentiment relations, which is the optimal hyper-parameter in the pilot experiments.",
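Wiring these pieces together, a minimal training-step sketch using the hyper-parameters reported above (Adam, learning rate 2e-5, L2 coefficient 1e-5, batch size 32); realizing the L2 term via weight_decay and assuming a binary label space (d_p = 2) are our choices:

```python
# Sketch of the classifier head (Equation 13) and the training objective
# (cross-entropy + L2). Hypothetical; not the authors' released code.
import torch
import torch.nn as nn

d_p, two_dh = 2, 1024                       # binary sarcasm labels assumed
head = nn.Linear(two_dh, d_p)               # logits for softmax(W_o f + b_o)
criterion = nn.CrossEntropyLoss()           # applies log-softmax internally
optimizer = torch.optim.Adam(head.parameters(),
                             lr=2e-5, weight_decay=1e-5)  # mu: L2 coefficient

def train_step(f, labels):
    """f: (batch, 2*d_h) sarcastic representations; labels: (batch,) ints."""
    logits = head(f)
    loss = criterion(logits, labels)        # cross-entropy term of the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```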
"Following (Cai et al., 2019), we use Accuracy, Precision, Recall, and F1-score to measure model performance.", "Since the label distribution of the dataset is imbalanced, following (Pan et al., 2020), we also report macro-average results.", "The experimental results of our models are averaged over 10 runs with different random seeds to ensure that the final reported results are statistically stable.", "1) Image-modality methods: these models use only visual information for sarcasm detection, including Image (Cai et al., 2019), which employs ResNet (He et al., 2016) to train a sarcasm classifier, and ViT (Dosovitskiy et al., 2021), which utilizes the '[class]' token representation of the pre-trained ViT to detect sarcasm.", "2) Text-modality methods: these models use only textual information, including TextCNN (Kim, 2014), a deep learning model based on CNN for text classification; Bi-LSTM, a bidirectional LSTM network for text classification; SIARN (Tay et al., 2018), adopting inner-attention for textual sarcasm detection; SMSD (Xiong et al., 2019), exploring a self-matching network to capture textual incongruity information; and BERT (Devlin et al., 2019), the vanilla pre-trained uncased BERT-base taking '[CLS] text [SEP]' as input.", "3) Multi-modal methods: these models take both text- and image-modality information, including HFM (Cai et al., 2019), a hierarchical multimodal feature fusion model for multi-modal sarcasm detection; D&R Net (Xu et al., 2020), a decomposition and relation network modeling both cross-modality contrast and semantic association; Res-BERT (Pan et al., 2020), concatenating image features and BERT-based text features for sarcasm prediction; Att-BERT (Pan et al., 2020), exploring inter-modality attention and co-attention to model the incongruity in multi-modal sarcasm detection; and InCrossMGs (Liang et al., 2021a), a graph-based model leveraging sarcastic relations from both intra- and inter-modal perspectives.", "We also explore several variants of CMGCN to analyze the impact of different components in the ablation study: 1) w/o G denotes without the cross-modal graph, which only concatenates the representations of the '[class]' and '[CLS]' tokens from ViT and BERT for sarcasm detection; 2) w/o O denotes without object detection: the whole image is input into the image encoder, and the edge weights are set to 1 in the cross-modal graphs; 3) w/o S denotes without using external knowledge: all edge weights are set to 1 in the cross-modal graph.", "Further, 4) w/o S_w represents without using affective knowledge; 5) w/o D denotes without using the syntax-aware information of the text modality in graph construction.", "Further, to investigate the effectiveness of our CMGCN when used with different pre-trained models, we also set up the following variants: 1) -GloVe+ResNet: we replace BERT with GloVe (Pennington et al., 2014) to initialize each word as a 300-dimensional embedding, and ViT with ResNet-152 (He et al., 2016) to embed each image patch as a 2048-dimensional vector; 2) -GloVe+ViT: we use GloVe as the text encoder and ViT as the image encoder; 3) -BERT+ResNet: we use BERT as the text encoder and ResNet-152 as the image encoder.", "We report the comparison results regarding Text-modality, Image-modality, and Text+Image modalities in Table 2.",
"From the results, we can draw the following conclusions.", "1) Our proposed CMGCN outperforms existing baselines across all metrics.", "This verifies the effectiveness of our proposed model in multi-modal sarcasm detection.", "2) We conduct significance tests of our CMGCN over the baseline models; the results show that our CMGCN significantly outperforms the baseline models in terms of most of the evaluation metrics (with p-value < 0.05).", "3) Our CMGCN model performs consistently better than the previous graph-based method (InCrossMGs), which demonstrates that recognizing significant visual regions and modeling sentiment relations can lead to improved performance.", "4) The methods based on the text modality achieve consistently better performance than the methods based on the image modality, which shows that the expression of sarcastic/non-sarcastic information primarily resides in the text modality.", "5) Methods based on both image and text modalities perform better than the unimodal baselines overall.", "This implies that leveraging the information of both image and text modalities is more effective for multi-modal sarcasm detection.", "6) The results of the macro metrics are better than the other commonly used metrics overall, which indicates that models perform better on the negative class due to the imbalanced class distribution.", "To analyze the impact of different components of our proposed CMGCN, we conduct an ablation study and report the results in Table 3.", "Note that removal of the cross-modal graph (w/o G) sharply degrades the performance, which verifies the significance of the cross-modal graph in multi-modal feature fusion for learning sarcastic expressions in multimodal sarcasm detection.", "Removal of object detection (w/o O) leads to considerable performance degradation, which demonstrates that adopting object detection to track important visual information is effective for constructing crucial relations between visual and textual information in the cross-modal graphs.", "From the results of w/o S and w/o S_w, we conclude that exploiting the attribute-object pair as a bridge to set edge weights based on word similarity is effective when constructing cross-modal graphs.", "Further, leveraging affective clues to capture multi-modal sentiment incongruity between the text and image modalities is effective for sarcasm detection, and thus leads to improved performance.", "In addition, removal of the syntax-aware information of the text modality leads to slight performance degradation, which indicates that incorporating syntactic information in the graph enables better learning of the dependency relations of textual words and thus improves the performance of sarcasm detection.", "To investigate the generalizability and effectiveness of our proposed cross-modal graph when used with different pre-trained methods, we conduct experiments with five variants of our proposed CMGCN using different text and image encoders.", "The experimental results are shown in Figure 3 (a).", "Note that the proposed cross-modal graph can directly work with various pre-trained models and performs consistently better than the variant without the cross-modal graph (w/o G).", "This demonstrates the generalizability and effectiveness of our proposed cross-modal graph in multi-modal sarcasm detection.", "Further, from the results, we can also conclude that superior performance is obtained when using more powerful pre-trained methods, such as ViT and BERT.",
"In this section, we analyze the impact of the number of GCN layers on the performance of our proposed CMGCN.", "We vary the number of layers from 1 to 6 and report the results in Figure 3 (b).", "Note that the 2-layer GCN architecture performs better than the others overall, and thus the number of GCN layers is set to 2 in our model.", "The model with one layer performs worse, which indicates that a shallow graph network structure is not able to learn sarcastic features well.", "When the number of layers is greater than 2, the performance tends to decline.", "This shows that further increasing the number of layers beyond 2 degrades the model performance, possibly due to the sharp increase in parameters.", "To qualitatively investigate how the proposed CMGCN works in multi-modal sarcasm detection, we present a visualization of the cross-modal graph construction and the attention values of a multi-modal sarcasm example.", "The results are shown in Figure 4.", "We first show a sarcasm example and its corresponding object detection results in Figure 4 (a).", "Note that the correct label of this example can be easily inferred if the relations between the crucial sarcastic clues of the text (marked in light red) and the corresponding visual regions are captured by the model.", "To demonstrate how the proposed CMGCN identifies the important sarcastic clues, we show the adjacency matrix of the cross-modal graph of this example in Figure 4 (b).", "Note that highly correlated sarcastic clues in different modalities are connected by edges with large weights in the graph.", "This verifies the effectiveness of the proposed cross-modal graph in learning multi-modal sarcastic information.", "Further, based on the cross-modal graph, we show the attention visualization of this example in Figure 4 (c).", "The crucial textual tokens and the related image regions are highly attended by our proposed CMGCN, which helps identify the incongruity among the learned important features for learning sarcastic expressions and thus leads to improved performance of multi-modal sarcasm detection.", "This paper has proposed a novel cross-modal graph architecture for multi-modal sarcasm detection, in which the crucial visual regions can be explicitly connected to the highly correlated textual tokens for learning the incongruous sentiment of sarcastic expressions.", "Specifically, unlike previous research efforts that simply consider the visual information of the whole image, we attempt to recognize the important visual regions via object detection results, and further devise a novel cross-modal graph to explicitly establish the connections between scattered visual regions and the associated textual tokens.", "More concretely, owing to the object detection results, the attribute-object pair descriptors of the objects serve as a bridge to track the highly related sarcastic cues between the image and text modalities and their connection weights; the cross-modal graphs are then derived based on external knowledge bases.", "Afterwards, a GCN architecture with a retrieval-based attention mechanism is employed to capture the key incongruous sentiment expressions across different modalities for multi-modal sarcasm detection.", "To the best of our knowledge, this is the first study to utilize a cross-modal graph to extract intricate multi-modal sarcastic relations via object detection and sentiment cues from external knowledge bases.", "Extensive experiments on a public benchmark dataset show that our proposed approach significantly outperforms state-of-the-art baseline methods.", "As described in
Section 3.3, the weights of the edges in the cross-modal graph are computed based on both the word similarities and the affective clues between textual words and the attribute-object pairs of the image regions, as well as the dependency tree of the text modality.", "The approach can be easily generalized to other sentiment-related multi-modal learning scenarios.", "Nevertheless, the cross-modal graph solution might not generalize well to other multi-modal tasks or data genres if there is a lack of affective knowledge or difficulty in deriving dependency trees in low-resource settings.", "Therefore, future research can consider exploiting alternative approaches to automatically learn the weights of the edges in the cross-modal graph without relying on external knowledge sources.", "This work was partially supported by the National Natural Science Foundation of China (61876053, 62006062, 62176076, 62006060), the UK Engineering and Physical Sciences Research Council (grant nos. EP/V048597/1, EP/T017112/1), the Natural Science Foundation of Guangdong Province of China (No. 2019A1515011705), Shenzhen Foundational Research Funding (JCYJ20200109113441941, JCYJ20210324115614039), the Shenzhen Science and Technology Innovation Program (Grant No. KQTD20190929172835662), and the Joint Lab of HITSZ and China Merchants Securities.", "Yulan He is supported by a Turing AI Fellowship funded by UK Research and Innovation (UKRI) (grant no. EP/V020579/1)." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Peer reviewing is a central component in the scientific publishing process.", "We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), 1 providing an opportunity to study this important artifact.", "The dataset consists of 14.7K paper drafts and the corresponding accept / reject decisions in top-tier venues including ACL, NIPS and ICLR.", "The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers.", "We describe the data collection process and report interesting observed phenomena in the peer reviews.", "We also propose two novel NLP tasks based on this dataset and provide simple baseline models.", "In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.", "In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as originality' and impact'.", "Prestigious scientific venues use peer reviewing to decide which papers to include in their journals or proceedings.", "While this process seems essential to scientific publication, it is often a subject of debate.", "Recognizing the important consequences of peer reviewing, several researchers studied various aspects of the process, including consistency, bias, author response and general review quality (e.g., Greaves et al., 2006; Ragone et al., 2011; De Silva and Vance, 2017).", "For example, the organizers of 1 https://github.com/allenai/PeerRead the NIPS 2014 conference assigned 10% of conference submissions to two di erent sets of reviewers to measure the consistency of the peer reviewing process, and observed that the two committees disagreed on the accept / reject decision for more than a quarter of the papers (Langford and Guzdial, 2015).", "Despite these e orts, quantitative studies of peer reviews had been limited, for the most part, to the few individuals who had access to peer reviews of a given venue (e.g., journal editors and program chairs).", "The goal of this paper is to lower the barrier to studying peer reviews for the scientific community by introducing the first public dataset of peer reviews for research purposes: PeerRead.", "We use three strategies to construct the dataset:", "(i) We collaborate with conference chairs and conference management systems to allow authors and reviewers to opt-in their paper drafts and peer reviews, respectively.", "(ii) We crawl publicly available peer reviews and annotate textual reviews with numerical scores for aspects such as clarity' and im-pact'.", "(iii) We crawl arXiv submissions which coincide with important conference submission dates and check whether a similar paper appears in proceedings of these conferences at a later date.", "In total, the dataset consists of 14.7K paper drafts and the corresponding accept / reject decisions, including a subset of 3K papers for which we have 10.7K textual reviews written by experts.", "We plan to make periodic releases of PeerRead, adding more sections for new venues every year.", "We provide more details on data collection in 2.", "The PeerRead dataset can be used in a variety of ways.", "A quantitative analysis of the peer reviews can provide insights to help better understand (and potentially improve) various nuances of the review process.", "For example, in 3, we analyze correlations between the overall recommendation score and individual aspect 
scores (e.g., clarity, impact and originality) and quantify how reviews recommending an oral presentation differ from those recommending a poster.", "Other examples might include aligning review scores with authors to reveal gender or nationality biases.", "From a pedagogical perspective, the PeerRead dataset also provides inexperienced authors and first-time reviewers with diverse examples of peer reviews.", "As an NLP resource, peer reviews raise interesting challenges, both from the realm of sentiment analysis (predicting various properties of the reviewed paper, e.g., clarity and novelty) and from that of text generation (given a paper, automatically generating its review).", "Such NLP tasks, when solved with sufficiently high quality, might help reviewers, area chairs and program chairs in the reviewing process, e.g., by lowering the number of reviewers needed for some paper submissions.", "In Section 4, we introduce two new NLP tasks based on this dataset:", "(i) predicting whether a given paper would be accepted to some venue, and", "(ii) predicting the numerical score of certain aspects of a paper.", "Our results show that we can predict the accept / reject decisions with 6-21% error reduction compared to the majority reject-all baseline, in four different sections of PeerRead.", "Since the baseline models we use are fairly simple, there is plenty of room to develop stronger models to make better predictions.", "Here we describe the collection and compilation of PeerRead, our scientific peer-review dataset.", "For an overview of the dataset, see Table 1.", "2.1 Review Collection: Reviews in PeerRead belong to one of two categories.", "Opted-in reviews: We coordinated with the Softconf conference management system and the conference chairs of the CoNLL 2016 (The 20th SIGNLL Conference on Computational Natural Language Learning; http://www.conll.org/2016) and ACL 2017 (The 55th Annual Meeting of the Association for Computational Linguistics; http://acl2017.org/) conferences to allow authors and reviewers to opt in their drafts and reviews, respectively, to be included in this dataset.", "A submission is included only if", "(i) the corresponding author opts in the paper draft, and", "(ii) at least one of the reviewers opts in their anonymous reviews.", "This resulted in 39 reviews for 22 CoNLL 2016 submissions, and 275 reviews for 137 ACL 2017 submissions.", "Reviews include both text and aspect scores (e.g., clarity) on a scale of 1-5.", "Peer reviews on the web: In 2013, the NIPS conference
(The Conference on Neural Information Processing Systems; https://nips.cc/) began attaching all accepted papers with their anonymous textual review comments, as well as a confidence level on a scale of 1-3.", "We collected all accepted papers and their reviews for NIPS 2013-2017, a total of 9,152 reviews for 2,420 papers.", "Another source of reviews is the OpenReview platform (http://openreview.net): a conference management system which promotes open access and open peer reviewing.", "Reviews include text, as well as numerical recommendations between 1-10 and confidence levels between 1-5.", "We collected all submissions to the ICLR 2017 conference (The 5th International Conference on Learning Representations; https://iclr.cc/archive/www/2017.html), a total of 1,304 official, anonymous reviews for 427 papers (177 accepted and 255 rejected).", "The platform also allows any person to review a paper by adding a comment, but we only use the official reviews of reviewers assigned to review that paper.", "2.2 arXiv Submissions: arXiv (https://arxiv.org/) is a popular platform for pre-publishing research in various scientific fields including physics, computer science and biology.", "While arXiv does not contain reviews, we automatically label a subset of arXiv submissions in the years 2007-2017 (inclusive) as accepted or probably-rejected, using the heuristics described next (for consistency, we only include the first arXiv version of each paper, accepted or rejected, in the dataset).", "Accepted papers: In order to assign 'accepted' labels, we use the dataset provided by Sutton and Gong (2017), who matched arXiv submissions to their bibliographic entries in the DBLP directory (http://dblp.uni-trier.de/) by comparing titles and author names using Jaccard's distance.", "To improve our coverage, we also add an arXiv submission if its title matches an accepted paper in one of our target venues with a relative Levenshtein distance (Levenshtein, 1966) of < 0.1.", "This results in a total of 2,891 accepted papers.", "Probably-rejected papers: We use the following criteria to assign a 'probably-rejected' label to an arXiv submission: The paper wasn't accepted to any of the target venues (note that some of the 'probably-rejected' papers may be published at workshops or other venues).", "The paper was submitted to one of the arXiv categories cs.cl, cs.lg or cs.ai (see https://arxiv.org/archive/cs for a description of the computer science categories in arXiv).", "The paper wasn't cross-listed in any non-cs categories.", "The submission date was within one month of the submission deadlines of our target venues (before or after); if a paper has multiple versions, we consider the submission date of the first version.", "The submission date coincides with at least one of the arXiv papers accepted for one of the target venues.", "This process results in 8,887 'probably-rejected' papers.", "Data quality: We did a simple sanity check in order to estimate the number of papers that we labeled as 'probably-rejected', but were in fact accepted to one of the target venues.", "Some authors add comments to their arXiv submissions to indicate the publication venue.", "We identified arXiv papers with a comment which matches the term 'accept' along with any of our target venues (e.g., 'nips'), but not the term 'workshop'.", "We found 364 papers which matched these criteria, 352 of which were labeled as 'accepted'.", "Manual inspection of the remaining 12 papers showed that one of the papers was indeed a false negative (i.e., labeled as 'probably-rejected' but accepted to one of the target venues) due to a significant change in the paper's title.", "We organize v1.0 of the PeerRead dataset in five sections: CoNLL 2016, ACL 2017, ICLR 2017, NIPS 2013-2017 and arXiv 2007-2017.", "Since the data collection varies across sections, different sections may have different license agreements.", "The papers in each section are further split into standard training, development and test sets with 0.9:0.05:0.05 ratios.", "In addition to the PDF file of each paper, we also extract its textual content using the Science Parse library.", "We represent each of the splits as a json-encoded text file with a list of paper objects, each of which consists of paper details, the accept / reject / probably-reject decision, and a list of reviews.",
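As an illustration of the title-matching heuristic used for the 'accepted' labels above, a small sketch assuming the python-Levenshtein package; normalizing the edit distance by the longer title's length is our guess at what "relative" means here, and this is not taken from the PeerRead codebase:

```python
# Sketch of relative-Levenshtein title matching for 'accepted' labels.
# Assumes the python-Levenshtein package; threshold 0.1 follows the text.
import Levenshtein

def normalize(title):
    return " ".join(title.lower().split())

def is_accepted(arxiv_title, accepted_titles, threshold=0.1):
    """True if the arXiv title is within relative edit distance < 0.1
    of any accepted-paper title from the target venues."""
    t = normalize(arxiv_title)
    for ref in accepted_titles:
        r = normalize(ref)
        rel = Levenshtein.distance(t, r) / max(len(t), len(r))
        if rel < threshold:
            return True
    return False

accepted = ["A Dataset of Peer Reviews"]
print(is_accepted("A Dataset of Peer Reviews.", accepted))  # True
```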
"In many publication venues, reviewers assign numeric aspect scores (e.g., clarity, originality, substance) as part of the peer review.", "Aspect scores could be viewed as a structured summary of the strengths and weaknesses of a paper.", "While aspect scores assigned by reviewers are included in the opted-in sections of PeerRead, they are missing from the remaining reviews.", "In order to increase the utility of the dataset, we annotated 1.3K reviews with aspect scores, based on the corresponding review text.", "Annotations were done by two of the authors.", "In this subsection, we describe the annotation process in detail.", "Feasibility study: As a first step, we verified the feasibility of the annotation task by annotating nine reviews for which aspect scores are available.", "The annotators were able to infer about half of the aspect scores from the corresponding review text (the other half was not discussed in the review text).", "This is expected since reviewer comments often focus on the key strengths or weaknesses of the paper and are not meant to be a comprehensive assessment of each aspect.", "On average, the absolute difference between our annotated scores and the gold scores originally provided by reviewers is 0.51 (on a 1-5 scale, considering only those cases where the aspect was discussed in the review text).", "Data preprocessing: We used the official reviews in the ICLR 2017 section of the dataset for this annotation task.", "We excluded unofficial comments contributed by arbitrary members of the community, comments made by the authors in response to other comments, as well as meta-reviews which state the final decision on a paper submission.", "The remaining 1,304 official reviews are all written by anonymous reviewers assigned by the program committee to review a particular submission.", "We randomly reordered the reviews before annotation so that the annotator judgments based on one review are less affected by other reviews of the same paper.", "Annotation guidelines: We annotated seven aspects for each review: appropriateness, clarity, originality, soundness / correctness, meaningful comparison, substance, and impact.", "For each aspect, we provided our annotators with the instructions given to ACL 2016 reviewers for this aspect (the instructions are provided in Appendix B).", "Our annotators' task was to read the detailed review text (346 words on average) and select a score between 1-5 (inclusive, integers only) for each aspect; importantly, our annotators only considered the review text, and did not have access to the papers.", "When review comments do not address a specific aspect, we do not select any score for that aspect, and instead use a special 'not discussed' value.", "Data quality: In order to assess annotation consistency, the same annotators re-annotated a random sample consisting of 30 reviews.", "On average, 77% of the annotations were consistent (i.e., the re-annotation was exactly the same as the original annotation, or was off by 1 point) and 2% were inconsistent (i.e., the re-annotation was off by 2 points or more).", "In the remaining 21%, the aspect was marked as 'not discussed' in one annotation but not in the other.", "We note that different aspects are discussed in the textual reviews at different rates.", "For example, about 49% of the reviews discussed the 'originality' aspect, while only 5% discussed 'appropriateness'.", "In this section, we showcase the potential of using PeerRead for data-driven analysis of peer reviews.", "Overall recommendation vs. aspect scores: A critical part of each review is the overall recommendation score, a numeric value which best
17 Importantly, our annotators only considered the review text, and did not have access to the papers.", "characterizes a reviewer's judgment of whether the draft should be accepted for publication in this venue.", "While aspect scores (e.g., clarity, novelty, impact) help explain a reviewer's assessment of the submission, it is not necessarily clear which aspects reviewers appreciate the most about a submission when considering their overall recommendation.", "To address this question, we measure pair-wise correlations between the overall recommendation and various aspect scores in the ACL 2017 section of PeerRead and report the results in Table 2. Aspect Substance 0.59 Clarity 0.42 Appropriateness 0.30 Impact 0.16 Meaningful comparison 0.15 Originality 0.08 Soundness / Correctness 0.01 Table 2: Pearson's correlation coe cient between the overall recommendation and various aspect scores in the ACL 2017 section of PeerRead.", "The aspects which correlate most strongly with the final recommendation are substance (which concerns the amount of work rather than its quality) and clarity.", "In contrast, soundness / correctness and originality are least correlated with the final recommendation.", "These observations raise interesting questions about what we collectively care about the most as a research community when evaluating paper submissions.", "Oral vs. poster.", "In most NLP conferences, accepted submissions may be selected for an oral presentation or a poster presentation.", "The presentation format decision of accepted papers is based on recommendation by the reviewers.", "In the o cial blog of ACL 2017, 18 the program chairs recommend that reviewers and area chairs make this decision based on the expected size of interested audience and whether the ideas can be grasped without back-and-forth discussion.", "However, it remains unclear what criteria are used by reviewers to make this decision.", "To address this question, we compute the mean aspect score in reviews which recommend an oral vs. poster presentation in the ACL 2017 section of 18 https://acl2017.wordpress.com/2017/03/23/ conversing-or-presenting-poster-or-oral/ 1650 PeerRead, and report the results in Table 3. Notably, the average overall recommendation' score in reviews recommending an oral presentation is 0.9 higher than in reviews recommending a poster presentation, suggesting that reviewers tend to recommend oral presentation for submissions which are holistically stronger.", "ACL 2017 vs. ICLR 2017.", "Table 4 reports the sample mean and standard deviation of various measurements based on reviews in the ACL 2017 and the ICLR 2017 sections of PeerRead.", "Most of the mean scores are similar in both sections, with a few notable exceptions.", "The comments in ACL 2017 reviews tend to be about 50% longer than those in the ICLR 2017 reviews.", "Since review length is often thought of as a measure of its quality, this raises interesting questions about the quality of reviews in ICLR vs. 
ACL conferences.", "We note, however, that ACL 2017 reviews were explicitly opted-in while the ICLR 2017 reviews include all o cial reviews, which is likely to result in a positive bias in review quality of the ACL reviews included in this study.", "Another interesting observation is that the mean appropriateness score is lower in ICLR 2017 compared to ACL 2017.", "While this might indicate that ICLR 2017 attracted more irrelevant submissions, this is probably an artifact of our annotation process: reviewers probably only address appropriateness explicitly in their review if the paper is inappropriate, which leads to a strong negative bias against this category in our ICLR dataset.", "NLP tasks.", "In this section, we introduce two novel tasks based on the PeerRead dataset.", "In the first task, given a paper draft, we predict whether the paper will be accepted to a set of target conferences.", "In the second task, given a textual review, we predict the aspect scores for the paper such as novelty, substance and meaningful comparison.", "19 Both these tasks are not only challenging from an NLP perspective, but also have potential applications.", "For example, models for predicting the accept / reject decisions of a paper draft might be used in recommendation systems for arXiv submissions.", "Also, a model trained to predict the aspect scores given review comments using thousands of training examples might result in better-calibrated scores.", "Paper acceptance classification is a binary classification task: given a paper draft, predict whether the paper will be accepted or rejected for a predefined set of venues.", "Models.", "We train a binary classifier to estimate the probability of accept vs. reject given a paper, i.e., P ( accept=True | paper ).", "We experiment with di erent types of classifiers: logistic regression, SVM with linear or RBF kernels, Random Forest, Nearest Neighbors, Decision Tree, Multilayer Perceptron, AdaBoost, and Naive Bayes.", "We use hand-engineered features, instead of neural models, because they are easier to interpret.", "19 We also experiment with conditioning on the paper itself to make this prediction.", "We use 22 coarse features, e.g., length of the title and whether jargon terms such as deep' and neural' appear in the abstract, as well as sparse and dense lexical features.", "The full feature set is detailed in Appendix A. 
"Experimental setup: We experiment with the ICLR 2017 and arXiv sections of the PeerRead dataset.", "We train separate models for each arXiv category: cs.cl, cs.lg, and cs.ai.", "We use Python's sklearn implementation of all models (Pedregosa et al., 2011; http://scikit-learn.org/stable/).", "We consider various regularization parameters for SVM and logistic regression (see Appendix A.1 for a detailed description of all hyperparameters).", "We use the standard test split and tune our hyperparameters using 5-fold cross validation on the training set.", "Results: Table 5 shows our test accuracies for the paper acceptance task.", "Our best model outperforms the majority classifier in all cases, with up to 22% error reduction.", "Since our models lack the sophistication to assess the quality of the work discussed in a given paper, this might indicate that some of the features we define are correlated with strong papers, or bias reviewers' judgments.", "We run an ablation study for this task on the ICLR and arXiv sections.", "We train only one model for all three arXiv categories to simplify our analysis.", "Table 6 shows the absolute degradation in test accuracy of the best performing model when we remove one of the features (coefficient values of each feature are provided in Appendix A): on ICLR, the best model reaches 65.3%, and the largest drops come from appendix (5.4), num_theorems (3.8), num_equations (3.8), avg_len_ref (3.8), abstract contains 'state-of-the-art' (3.5), and #recent_refs (2.5); on arXiv, the best model reaches 79.1%, with avg_len_ref (1.4), num_uniq_words (1.1), num_theorems (1.0), abstract contains 'neural' (1.0), num_refmentions (1.0), and title_length (1.0).", "The table shows that some features have a large contribution to the classification decision: adding an appendix, a large number of theorems or equations, the average length of the text preceding a citation, the number of papers cited by this paper that were published in the five years before its submission, whether the abstract contains the phrase 'state of the art' (for ICLR) or 'neural' (for arXiv), and the length of the title.",
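A minimal sketch of this classification pipeline with scikit-learn; the feature extractor below is a simplified stand-in for the 22 hand-engineered features of Appendix A, and the field names on the paper dict are hypothetical:

```python
# Sketch of the paper-acceptance classifier with scikit-learn (as cited).
# extract_features is a simplified stand-in for the paper's 22 features.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def extract_features(paper):
    abstract = paper["abstract"].lower()
    return [
        len(paper["title"]),                  # title_length
        int("neural" in abstract),            # jargon indicator
        int("state of the art" in abstract),  # jargon indicator
        paper.get("num_theorems", 0),
        paper.get("num_refs", 0),
    ]

def train(papers, labels):
    X = [extract_features(p) for p in papers]
    # Tune the regularization strength with 5-fold cross validation.
    grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    grid.fit(X, labels)
    return grid.best_estimator_
```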
representations, without tuning, and keep the 35K most frequent words and replace the rest with an UNK vector.", "The CNN model uses 128 filters and 5 kernels.", "We use an RMSProp optimizer (Tieleman and Hinton, 2012) with 0.001 learning rate, 0.9 decay rate, 5.0 gradient clipping, and a batch size of 32.", "Since scientific papers tend to be long, we only take the first 1000 and 200 tokens of each paper and review, respectively, and concatenate the two prefixes when the model conditions on both the paper and review text.", "24 Results.", "Figure 1 shows the test set root mean square error (RMSE) on the aspect prediction task (lower is better).", "For each section (ACL 2017 and ICLR 2017), and for each aspect, we report the results of four systems: Mean' (baseline), Paper', Review' and Paper;Review' (i.e., which information the model conditions on).", "For each variant, the model which performs best on the development set is selected.", "We note that aspects with higher RMSE scores for the Mean' baseline indicate higher variance among the review scores for this aspect, so we focus our discussion on these aspects.", "In the ACL 2017 section, the two aspects with the highest variance are originality' and clarity'.", "In the ICLR 2017 section, the two aspects with the highest variance are appropriateness' and meaningful com-parison'.", "Surprisingly, the Paper;Review' model outperforms the Mean' baseline in all four aspects, and the Review' model outperforms the Mean' 24 We note that the goal of this paper is to demonstrate potential uses of PeerRead, rather than develop the best model to address this task, which explains the simplicity of the models we use.", "baseline in three out of four.", "On average, all models slightly improve over the Mean' baseline.", "Several e orts have recently been made to collect peer reviews.", "Publons 25 consolidates peer reviews data to build public reviewer profiles for participating reviewers.", "Crossref maintains the database of DOIs for its 4000 + publisher members.", "They recently launched a service to add peer reviews as part of metadata for the scientific articles.", "26 Surprisingly, however, most of the reviews are not made publicly available.", "In contrast, we collected and organized PeerRead such that it is easy for other researchers to use it for research purposes, replicate experiments and make a fair comparison to previous results.", "There have been several e orts to analyze the peer review process (e.g., Bonaccorsi et al., 2018; Rennie, 2016).", "Editors of the British Journal of Psychiatry found di erences in courtesy between signed and unsigned reviews (Walsh et al., 2000).", "Ragone et al. (2011) and Birukou et al. (2011) analyzed ten CS conferences and found low correlation between review scores and the impact of papers in terms of future number of citations.", "Fang et al. (2016) presented similar observations for NIH grant application reviews and their productivity.", "Langford and Guzdial (2015) pointed to inconsistencies in the peer review process.", "Several recent venues had single vs. double blind review experiments, which pointed to single-blind reviews leading to increased biases towards male authors (Roberts and Verhoef, 2016) and famous institutions (Tomkins et al., 2017).", "Further, Le Goues et al. 
(2017) showed that reviewers are unable to 25 publons.com/dashboard/records/review/ 26 https://www.crossref.org/blog/ peer-reviews-are-open-for-registering-at-crossref/ 1653 successfully guess the identity of the author in a double-blind review.", "Recently, there have been several initiatives by program chairs in major NLP conferences to study various aspects of the review process, mostly author response and general review quality.", "27 In this work, we provide a large scale dataset that would enable the wider scientific community to further study the properties of peer review, and potentially come up with enhancements to current peer review model.", "Finally, the peer review process is meant to judge the quality of research work being disseminated to the larger research community.", "With the ever-growing rates of articles being submitted to top-tier conferences in Computer Science and pre-print repositories (Sutton and Gong, 2017), there is a need to expedite the peer review process.", "Bal-achandran (2013) proposed a method for automatic analysis of conference submissions to recommend relevant reviewers.", "Also related to our acceptance predicting task are (Tsur and Rappoport, 2009) and Ashok et al. (2013), both of which focuses on predicting book reviews.", "Various automatic tools like Grammerly 28 can assist reviewers in discovering grammar and spelling errors.", "Tools like Citeomatic 29 (Bhagavatula et al., 2018) are especially useful in finding relevant articles not cited in the manuscript.", "We believe that the NLP tasks presented in this paper, predicting the acceptance of a paper and the aspect scores of a review, can potentially serve as useful tools for writing a paper, reviewing it, and deciding about its acceptance.", "We introduced PeerRead, the first publicly available peer review dataset for research purposes, containing 14.7K papers and 10.7K reviews.", "We analyzed the dataset, showing interesting trends such as a high correlation between overall recommendation and recommending an oral presentation.", "We defined two novel tasks based on PeerRead:", "(i) predicting the acceptance of a paper based on textual features and", "(ii) predicting the score of each aspect in a review based on the paper and review contents.", "Our experiments show that certain properties of a 27 See https://nlpers.blogspot.com/2015/06/ some-naacl-2013-statistics-on-author.html and https://acl2017.wordpress.com/2017/03/27/ author-response-does-it-help/ 28 https://www.grammarly.com/ 29 http://allenai.org/semantic-scholar/ citeomatic/ paper, such as having an appendix, are correlated with higher acceptance rate.", "Our primary goal is to motivate other researchers to explore these tasks and develop better models that outperform the ones used in this work.", "More importantly, we hope that other researchers will identify novel opportunities which we have not explored to analyze the peer reviews in this dataset.", "As a concrete example, it would be interesting to study if the accept / reject decisions reflect author demographic biases (e.g., nationality).", "This work would not have been possible without the e orts of Rich Gerber and Paolo Gai (develop-ers of the softconf.com conference management system), Stefan Riezler, Yoav Goldberg (chairs of CoNLL 2016), Min-Yen Kan, Regina Barzilay (chairs of ACL 2017) for allowing authors and reviewers to opt-in for this dataset during the o cial review process.", "We thank the openreview.net , arxiv.org and semanticscholar.org teams for their commitment 
to promoting transparency and openness in scientific communication.", "We also thank Peter Clark, Chris Dyer, Oren Etzioni, Matt Gardner, Nicholas FitzGerald, Dan Jurafsky, Hao Peng, Minjoon Seo, Noah A. Smith, Swabha Swayamdipta, Sam Thomson, Trang Tran, Vicki Zayats and Luke Zettlemoyer for their helpful comments." ]
[ "abstain", "objective", "abstain", "abstain", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "abstain", "other", "method", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "other", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "method", "objective", "result", "objective", "abstain", "abstain", "other", "objective", "objective", "abstain", "other", "other", "other" ]
[ "Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses.", "In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns.", "We propose a framework by extending GPT-2 models to tackle these challenges by formulating video-grounded dialogue tasks as a sequence-to-sequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network.", "Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context.", "We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.", "Recent work in large-scale pre-training transformer-based neural networks (Liu et al., 2019; Devlin et al., 2019; Radford et al., 2019) has boosted the performance in various NLP tasks.", "The transformer-based architecture of these models allows them to capture various dependencies when trained on very large datasets.", "The pre-trained models are adapted into downstream tasks to generate text that is more natural, fluent, and richer than This work was mostly done when Hung Le was an intern at Salesforce Research Asia, Singapore.", "models not initialized with pre-trained weights.", "Similar to pre-trained CNN-based neural networks developed in computer vision research (He et al., 2016; Huang et al., 2017) which can learn high-resolution features in images, pre-trained language models (LMs) are capable of capturing fine-grain textual dependencies in text data of rich semantics.", "While the benefits of pre-trained language models present in many downstream NLP tasks such as machine translation and question answering (QA) (Devlin et al., 2019; Lan et al., 2020), they are particularly suitable to adapt to dialogue response generation tasks for two major reasons: (1) Dialogue response generation usually involves more complex dynamics between input and output text sequences.", "The input typically involves dialogue history, including conversational exchanges between users and dialogue agents.", "A dialogue agent needs to capture relevant dependencies along each dialogue turns to generate a sensible response.", "(2) Compared to other NLP tasks, it is very challenging to collect and create large-scale dialogue datasets.", "Adopting pre-training approaches could ameliorate the limited dialogue datasets by leveraging rich linguistic dependencies learned from other available text data.", "We are motivated by these observations to adapt pre-trained language models into a dialogue task and improve the quality of generated responses.", "Along the line of research that combines both vision and language (Antol et al., 2015; Hori et al., 2019), transformer-based neural networks can also be applied to capture various dependencies across different types of input modalities (text and image) with appropriate objective loss functions (Alberti et al., 2019; Su et al., 2020; Chen et al., 2019).", "The 
"The multi-head attention mechanism of these models can detect long-range dependencies between each token in the input text and each image patch or spatial object in the input image.", "We extend this framework to a video-dialogue task and fully leverage the power of pre-trained models to obtain linguistic and visual representations in dialogues and videos.", "(Figure 1: the model input concatenates caption, dialogue history, and response tokens, encoded at the word, turn, modality, and position levels, with video features encoded at the spatial, temporal, modality, and position levels.)", "Specifically, we tackle the Audio-Visual Scene-aware Dialogues (AVSD) task (Hori et al., 2019) which aims to generate dialogue responses grounded on both visual and audio features of the video.", "The dialogue agent needs to create responses that not only match the dialogue flow but also address user questions about a given video over multiple dialogue turns.", "First, we detail how to formulate input components of a video-grounded dialogue as a downstream task of pre-trained language models.", "We follow the general sequence-to-sequence framework, whereby the input components are combined into a structured sequence of multiple modalities and the output is a system response.", "We then apply pre-trained models (Radford et al., 2019) and leverage their deep attention networks to capture text and video dependencies with fine granularity.", "Specifically, we propose to capture dependencies between each token in text data and each spatial feature along the temporal dimension of the input video.", "Lastly, we present a multi-task learning framework that includes additional learning objectives in addition to the dialogue response generation objective.", "Our promising results on the AVSD benchmark demonstrate the efficacy of our proposed framework.", "We briefly describe related work in two major lines of research: dialogues and vision-text modeling.", "Whang et al. (2019) applies pre-trained language models for response selection tasks in open-domain dialogues.", "The output of the language model (e.g. the [CLS] token in BERT) is used as a contextual representation of each pair of dialogue context and candidate response.", "Budzianowski and Vulic (2019) assume access to ground-truth dialogue states and generate responses in task-oriented dialogues by combining input components into a single sequence.", "As dialogue states and database states are used as raw text input, the models can be fine-tuned from a deep pre-trained language model such as GPT.", "Chao and Lane (2019) and Lai et al. (2020) use pre-trained LMs to track dialogue states in task-oriented dialogues by utilizing the output representations to predict slot values.", "In this work, we aim to address video-grounded dialogue tasks and generate natural responses in an end-to-end manner.", "The transformer-based neural architecture of pretrained language models has been used to learn cross-modal representations for vision-text NLP tasks.", "Li et al. (2019) uses a BERT-based architecture to improve linguistic and visual representations for image captioning tasks.",
"Lu et al. (2019) follows a similar approach to tackle visual QA but segregates the visual and text input components rather than combining both into a single sequence.", "Alberti et al. (2019) leverages a pre-trained BERT model to improve cross-modal representations in either an early fusion or a late fusion approach.", "We are motivated to extend this line of research to a video-based setting.", "Video is considered much more complicated than images due to the additional temporal variation across video frames.", "A related work to ours is VideoBERT (Sun et al., 2019) which utilizes BERT models for video captioning.", "Instead of using visual features to represent video frames, VideoBERT transforms frame-level features into visual tokens and uses them as raw text input to a BERT-based architecture.", "Our model architecture can be seen in Figure 1.", "We are inspired by Transformer-based LM approaches that leverage different levels of features in text, such as the word, character, and position levels.", "We apply this principle and technique to overcome the challenge in AVSD, which involves multi-turn dialogue input combined with video input with spatial-temporal variations.", "We propose to decompose videos into patches but maintain a structured temporal sequence.", "This sequence is then directly combined with text inputs of dialogue which are also arranged in a temporally ordered manner.", "This kind of feature reformulation is simple yet powerful as it allows explicit dependency learning across all pairs of text tokens and video patches.", "Therefore, it can facilitate stronger signals to answer human queries at greater granularity.", "We train a GPT model based on the GPT-2 (Radford et al., 2019) architecture.", "The GPT-2 model is based on the transformer network (Vaswani et al., 2017), stacking 12 to 24 layers of masked multi-head attention trained on very large text data.", "Following the success of GPT-2 in generation-based tasks, we adapt the power of GPT-2 pre-trained models to generate video-grounded dialogue responses and call our framework VGD-GPT2.", "First, we arrange the input components as a long sequence of video frames or video segments and dialogue turns.", "Video Representations.", "Each video frame or video segment is further structured as a sequence of spatial regions, which can be extracted using a pre-trained video model.", "For an input video $V$, we denote the output of a pre-trained 2D CNN or 3D CNN video model as $Z^{pre}_V \in \mathbb{R}^{F \times P \times d_{emb}}$, where $d_{emb}$ is the feature dimension of the pre-trained video model, $F$ is the resulting number of sampled video frames or video segments, and $P$ is the number of spatial regions in each video frame.", "We reshape $Z^{pre}_V$ as a sequence of image patches and pass it through a linear transformation with ReLU activation to match the feature dimension $d$ of the pre-trained language model: $Z^{spatial}_V = \mathrm{ReLU}(Z^{pre}_V W_V) \in \mathbb{R}^{FP \times d}$ (1), where $W_V \in \mathbb{R}^{d_{emb} \times d}$.", "We denote this as the spatial-level features of the input video.", "As can be seen from Figure 1, we inject different types of input attributes into $X_V$ by adding three additional encoding layers: (1) Modality-level encoding that informs the type of information.", "We use a modality token vis to uniformly represent the visual information type.", "(2) Temporal-level encoding that informs the model of the frame-level (or segment-level) position of input features.", "(3) Position-level encoding that incorporates the spatial-level ordering.", "This is equivalent to the positional encoding of tokens in sentences seen in BERT-based language models.",
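A minimal sketch of this video input pipeline (PyTorch assumed): pre-trained CNN features of shape (F, P, d_emb) are projected to the LM dimension d with a ReLU as in Eq. (1), then combined with the modality-, temporal-, and position-level encodings by element-wise summation as described above. All module names and dimensions here are illustrative, not the paper's actual code.

```python
import torch
import torch.nn as nn

class VideoInputEncoder(nn.Module):
    def __init__(self, d_emb: int, d: int, max_frames: int, max_patches: int):
        super().__init__()
        self.proj = nn.Linear(d_emb, d)            # W_V in Eq. (1)
        self.modality = nn.Embedding(1, d)         # a single "vis" modality token
        self.temporal = nn.Embedding(max_frames, d)
        self.position = nn.Embedding(max_patches, d)

    def forward(self, z_pre: torch.Tensor) -> torch.Tensor:
        # z_pre: (F, P, d_emb) features from a pre-trained 2D/3D CNN
        n_frames, n_patches, _ = z_pre.shape
        z_spatial = torch.relu(self.proj(z_pre)).view(n_frames * n_patches, -1)
        frame_ids = torch.arange(n_frames).repeat_interleave(n_patches)  # temporal level
        patch_ids = torch.arange(n_patches).repeat(n_frames)             # position level
        mod_ids = torch.zeros(n_frames * n_patches, dtype=torch.long)    # "vis" everywhere
        # Element-wise sum of the spatial features and all three encoding layers
        return (z_spatial + self.modality(mod_ids)
                + self.temporal(frame_ids) + self.position(patch_ids))

z_v = VideoInputEncoder(d_emb=2048, d=768, max_frames=64, max_patches=16)(
    torch.randn(8, 4, 2048))   # -> (32, 768), ready to concatenate with text features
```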
"All the three layers are trainable parameters to enable models to learn the dynamics of input features.", "All encoding layers are modeled to have the same feature dimension d of the pre-trained model.", "We combine all encoding layers through element-wise summation, resulting in a rich video representation: ZV = Z spatialV + Z modV + Z temporalV + Z posV (2) Text Representations .", "Similarly, we break down dialogue history H as sequence of dialogue turns H = ( H 1 , H 2 , ..., H t ) where t is the current dialogue turn.", "Each dialogue turn is represented as a pair of user utterance U and system response S concatenated sequentially H = (( U 1 , S 1 ) , ( U 2 , S 2 ) , ..., U t )) ( S t is the target response that need to be generated by the models).", "Each utterance is then represented as a sequence of tokens x so the dialogue history can be represented as XH = ( x 1 , x 2 , ..., x LH ) and Y = S t = ( y 1 , y 2 , ..., y LY ) where LH and LY are the total number of tokens in the dialogue history and target response respectively.", "Following the AVSD setting (Hori et al., 2019), we utilize the text input of video caption C .", "The video caption typically provides a linguistic summary of the video in one or two sentences.", "The caption can be represented as a sequence of tokens XC = ( x 1 , x 2 , ..., x LC ) .", "We combine all text input sequences to form a single sequence XT = ( XC , XH , Y 1 ) as input to the models.", "Y 1 is the target response sequence shifted left by one position to enable auto-regressive prediction of output tokens.", "We denote embedded features as Z tokenT as the token-level encoding layer of the text input.", "Similar to video features, we add additional layers to inject different attributes of XT (See Figure 1): (1) Modality-level encoding that differentiates segments in XT .", "We use 3 different modality tokens: cap, sys, and usr to specify whether the token in the corresponding position is part of input caption, system responses, or user utterances.", "(2) Turn-level encoding that encodes the turn number of the token in the corresponding position.", "(3) Position-level encoding that is used to inject signals of the token ordering.", "Similar to video representation, the encoded input is combined through element-wise summation: ZT = Z tokenT + Z modT + Z turnT + Z posT (3) We concatenated both ZV and ZT to create a single input sequence ZV T of length ( F P + LC + LH + LY ) and embedding dimension d .", "ZV T is used as input to a pre-trained GPT-2 for fine-tuning.", "Following a similar strategy adopted by Wolf et al.", "(2019), we fine-tune the models in a multi-task", "setting with the following objectives: (1) Response Generation : this is a typical objective function that maximizes the likelihood of output target response conditioned on the source sequence.", "(2) Masked Multi-modal Modeling : we explore two loss functions: masked language modeling (MLM) and masked visual modeling (MVM).", "We mask both tokens and spatial regions in video frames in training instances and require the model to regenerate them with the remaining inputs.", "MLM is learned similarly as response generation by passing through a linear layer with softmax.", "MVM is learned by minimizing the L1 loss in feature space between the output representation of the masked visual region and the original input representation.", "Both are passed through a linear transformation to the same dimensional space.", "This is similar to the perceptual loss proposed by (Johnson et al., 2016; 
"We use the open-source implementation of the GPT-2 architecture and obtain pre-trained model checkpoints.", "We experiment with two pre-trained GPT-2 models: small (S) and medium (M) (Radford et al., 2019).", "We use the Adam optimizer with a learning rate of 5e-5 based on grid search.", "We adopt a learning rate decay schedule similar to the one used by Vaswani et al. (2017).", "We set the weight on the response generation loss to be 1.5 times higher than the other losses.", "We experiment with the video-grounded dialogue task in the large-scale AVSD benchmark from DSTC7 (Hori et al., 2019).", "The AVSD benchmark contains dialogues grounded on the Charades videos (Sigurdsson et al., 2016).", "Each dialogue consists of up to 10 dialogue turns, each turn including a user utterance and system response (see Table 1 for more details of the dataset).", "To extract visual features, we used the 3D CNN-based ResNeXt-101 (Xie et al., 2017) pre-trained on Kinetics (Hara et al., 2018) to obtain the spatio-temporal video features.", "We fixed the batch size to 16 and the maximum sequence length compatible with the corresponding GPT-2 models.", "We sampled video features every 16 frames without overlap.", "We trained up to 50 epochs on 4 GPUs.", "We report the objective scores, including BLEU, METEOR, ROUGE-L, and CIDEr.", "We compare system-generated responses with 6 reference ground-truth responses.",
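A sketch of the optimization setup described in this section, assuming the "schedule similar to Vaswani et al. (2017)" means the inverse-square-root (Noam) schedule with warmup — the warmup value is our assumption — combined with the 1.5x weighting of the generation loss.

```python
import torch

model = torch.nn.Linear(768, 768)   # stand-in for the fine-tuned GPT-2
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

def noam_factor(step, warmup=4000):
    # Inverse-square-root decay with linear warmup, normalised so the
    # multiplier peaks at 1.0 (i.e., lr = 5e-5) when step == warmup.
    step = max(step, 1)
    return (warmup ** 0.5) * min(step ** -0.5, step * warmup ** -1.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_factor)

def total_loss(gen, mlm, mvm, mvt):
    # Response generation weighted 1.5x higher than the auxiliary objectives
    return 1.5 * gen + mlm + mvm + mvt
```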
"We compare the proposed VGD-GPT2 model with the following baseline models: (1) Baseline (Hori et al., 2019) proposes a novel sequence-to-sequence approach with question-guided LSTM on both video visual and audio temporal features.", "Dialogue history is encoded by a hierarchical LSTM and the final representation is concatenated with question and video representations as input to decode dialogue responses.", "(2) AVSD Winner (Sanabria et al., 2019) extends the previous work with more refined visual features and transfer learning from a video summarization task.", "(3) MTN (Le et al., 2019) adopts a transformer-based approach with question-guided attention on visual features formulated as an auto-encoding module.", "Table 2 shows the details of our results.", "Our VGD-GPT2 model outperforms the existing approaches across all the automated metrics.", "The results show that fine-tuning a language model with video-grounded dialogues can help to generate quality responses and improve model performance.", "By initializing our models with a language model pre-trained on massive text data, we obtain richer feature representations that capture more complex dependencies between inputs.", "Compared with the baseline with Transformer-based neural networks (Le et al., 2019), our model treats both visual and text features with equal importance, at different levels along different dimensions.", "Specifically, we align the token level with the spatial level and the turn level with the temporal level between visual and text features.", "By contrast, MTN only considers the temporal variation of the visual features and mainly focuses on text-based attention.", "Our early fusion strategy with a multi-level alignment of multi-modal inputs allows higher-resolution relations between all feature representations in later layers of the neural network.", "Besides, Table 2 also shows that fine-tuning a pretrained model with both spatial-temporal information and multi-task objectives can benefit the main task of response generation.", "To obtain spatial-only and temporal-only features, we follow a similar approach to (Jang et al., 2017) and use average pooling to pool the visual features along the temporal or spatial dimensions.", "Considering CIDEr as the evaluation measure, learning dependencies in both spatial and temporal dimensions improves the performance by 0.01 absolute score over spatial-only features and 0.008 absolute score over temporal-only features.", "Our proposed auxiliary objectives also help to improve model performance by adapting the pretrained model to the current data domain, video-based dialogues.", "MLM and MVM are used to improve learning of local dependencies at the token and spatial levels, while MVT is used to support learning global dependencies between the text and visual modalities.", "We observe that adding the MVM objective increases the CIDEr score the most, by 0.043 absolute score, as compared to adding the MVT (0.023 absolute score) or MLM (0.004 absolute score) objective.", "We also found moderate performance improvements in BLEU3, BLEU4, and ROUGE-L when increasing GPT-2 from small to medium size.", "We note that the increased number of model parameters in GPT-2 may require a longer fine-tuning procedure or a larger dialogue training dataset to fully optimize the models in the dialogue domain.", "In this work, we leverage pre-trained language models for a video-grounded dialogue task.", "We propose a sequence-to-sequence framework and a multi-task fine-tuning approach to adapt the pre-trained models to the video dialogue domain.", "Although we use GPT-2 models, our framework can be extended with other language models and similarly adopted to improve other multi-modal dialogues.", "Our early fusion strategy effectively unifies different levels of features in both dialogues and video without complicating the network architecture." ]
[ "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "objective", "abstain" ]
[ "Abstract", "The Neural Machine Translation (NMT) model is essentially a joint language model conditioned on both the source sentence and partial translation.", "Therefore, the NMT model naturally involves the mechanism of the Language Model (LM) that predicts the next token only based on partial translation.", "Despite its success, NMT still suffers from the hallucination problem, generating fluent but inadequate translations.", "The main reason is that NMT pays excessive attention to the partial translation while neglecting the source sentence to some extent, namely overconfidence of the LM.", "Accordingly, we define the Margin between the NMT and the LM , calculated by subtracting the predicted probability of the LM from that of the NMT model for each token.", "The Margin is negatively correlated to the overconfidence degree of the LM.", "Based on the property, we propose a Margin -based Token-level Objective (MTO) and a Margin -based Sentence-level Objective (MSO) to maximize the Margin for preventing the LM from being overconfident.", "Experiments on WMT14 English-to-German, WMT19 Chinese-to-English, and WMT14 English-to-French translation tasks demonstrate the effectiveness of our approach, with 1.36, 1.50, and 0.63 BLEU improvements, respectively, compared to the Transformer baseline.", "The human evaluation further verifies that our approaches improve translation adequacy as well as fluency.", "1 1 Introduction Neural Machine Translation (NMT) has achieved great success in recent years (Sutskever et al., 2014; Equal contribution. This work was done when Mengqi Miao was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China. Corresponding author. 1 Code is available at https://github.com/Mlair 77/nmt adequacy Cho et al., 2014; Bahdanau et al., 2014; Luong et al., 2015; Vaswani et al., 2017; Meng and Zhang, 2019; Zhang et al., 2019a; Yan et al., 2020b), which generates accurate and fluent translation through modeling the next word conditioned on both the source sentence and partial translation.", "However, NMT faces the hallucination problem, i.e., translations are fluent but inadequate to the source sentences.", "One important reason is that the NMT model pays excessive attention to the partial translation to ensure fluency while failing to translate some segments of the source sentence (Weng et al., 2020b), which is actually the overconfidence of the Language Model (LM).", "In the rest of this paper, the LM mentioned refers to the LM mechanism involved in NMT.", "Many recent studies attempt to deal with the inadequacy problem of NMT from two main aspects.", "One is to improve the architecture of NMT, such as adding a coverage vector to track the attention history (Tu et al., 2016), enhancing the cross-attention module (Meng et al., 2016, 2018; Weng et al., 2020b), and dividing the source sentence into past and future parts (Zheng et al., 2019).", "The other aims to propose a heuristic adequacy metric or objective based on the output of NMT.", "Tu et al. (2017) and Kong et al. 
"Although some studies (Tu et al., 2017; Kong et al., 2019; Weng et al., 2020b) point out that the lack of adequacy is due to the overconfidence of the LM, unfortunately, they do not propose effective solutions to the overconfidence problem.", "From the perspective of preventing the overconfidence of the LM, we first define an indicator of the overconfidence degree of the LM, called the Margin between the NMT and the LM, computed by subtracting the predicted probability of the LM from that of the NMT model for each token.", "A small Margin implies that the NMT might concentrate on the partial translation and degrade into the LM, i.e., the LM is overconfident.", "Accordingly, we propose a Margin-based Token-level Objective (MTO) to maximize the Margin.", "Furthermore, we observe that if target sentences in the training data contain many words with negative Margin, they almost always fail to correspond to the source sentences.", "These data are harmful to model performance.", "Therefore, based on the MTO, we further propose a Margin-based Sentence-level Objective (MSO) by adding a dynamic weight function to alleviate the negative effect of these dirty data.", "We validate the effectiveness and superiority of our approaches on the Transformer (Vaswani et al., 2017), and conduct experiments on large-scale WMT14 English-to-German, WMT19 Chinese-to-English, and WMT14 English-to-French translation tasks.", "Our contributions are: We explore the connection between inadequate translation and the overconfidence of the LM in NMT, and thus propose an indicator of the overconfidence degree, i.e., the Margin between the NMT and the LM.", "Furthermore, to prevent the LM from being overconfident, we propose two effective optimization objectives to maximize the Margin, i.e., the Margin-based Token-level Objective (MTO) and the Margin-based Sentence-level Objective (MSO).", "Experiments on WMT14 English-to-German, WMT19 Chinese-to-English, and WMT14 English-to-French show that our approaches bring significant improvements of +1.36, +1.50, and +0.63 BLEU points, respectively.", "Additionally, the human evaluation verifies that our approaches can improve both translation adequacy and fluency.", "The NMT model translates a source sentence $x$ into a target sentence $y$ by predicting target tokens autoregressively: $P(y|x) = \prod_{t=1}^{T} p(y_t | y_{<t}, x)$ (1), where $y_{<t} = \{y_1, y_2, ..., y_{t-1}\}$ is the partial translation before $y_t$.", "From Eq. 1, the source sentence $x$ and the partial translation $y_{<t}$ are considered at the same time, suggesting that the NMT model is essentially a joint language model and that the LM is intrinsically involved in NMT.", "Based on the encoder-decoder architecture, the encoder of NMT maps the input sentence $x$ to hidden states.", "At time step $t$, the decoder of NMT employs the output of the encoder and $y_{<t}$ to predict $y_t$.", "The training objective of NMT is to minimize the negative log-likelihood, also known as the cross entropy loss function: $\mathcal{L}^{NMT}_{ce} = -\sum_{t=1}^{T} \log p(y_t | y_{<t}, x)$ (2).", "The LM measures the probability of a target sentence similarly to NMT but without knowledge of the source sentence $x$: $P(y) = \prod_{t=1}^{T} p(y_t | y_{<t})$ (3).", "The LM can be regarded as the part of the NMT decoder that is responsible for fluency, taking only $y_{<t}$ as input.", "The training objective of the LM is almost the same as NMT except for the source sentence $x$: $\mathcal{L}^{LM}_{ce} = -\sum_{t=1}^{T} \log p(y_t | y_{<t})$ (4).",
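As a toy illustration of the quantities in Eqs. (1)-(4): under teacher forcing, both the NMT model and the LM assign a probability to each reference token, the former conditioned on the source $x$ and the latter not. The random logits below are stand-ins for real model outputs.

```python
import torch
import torch.nn.functional as F

def token_probs(logits, y):
    # logits: (T, V) scores at each target position under teacher forcing;
    # y: (T,) reference token ids. Returns p(y_t | ...) for every position t.
    return F.softmax(logits, dim=-1).gather(1, y.unsqueeze(1)).squeeze(1)

T, V = 6, 100
nmt_logits = torch.randn(T, V)   # stand-in for p(y_t | y_<t, x) logits
lm_logits = torch.randn(T, V)    # stand-in for p(y_t | y_<t) logits
y = torch.randint(0, V, (T,))
p_nmt, p_lm = token_probs(nmt_logits, y), token_probs(lm_logits, y)
ce_nmt = -torch.log(p_nmt).sum()   # Eq. (2), NMT cross-entropy
ce_lm = -torch.log(p_lm).sum()     # Eq. (4), LM cross-entropy
```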
"The NMT model predicts the next word $y_t$ according to the source sentence $x$ and meanwhile ensures that $y_t$ is fluent with the partial translation $y_{<t}$.", "However, when NMT pays excessive attention to translation fluency, some source segments may be neglected, leading to the inadequacy problem.", "This is exactly what we aim to address in this paper.", "In this section, we firstly define the Margin between the NMT and the LM (Section 3.1), which reflects the overconfidence degree of the LM.", "Then we put forward the token-level (Section 3.2) and sentence-level (Section 3.3) optimization objectives to maximize the Margin.", "Finally, we elaborate our two-stage training strategy (Section 3.4).", "When the NMT model excessively focuses on the partial translation, i.e., the LM is overconfident, the NMT model degrades into the LM, resulting in hallucinated translations.", "To prevent the overconfidence problem, we expect the NMT model to outperform the LM as much as possible in predicting golden tokens.", "Consequently, we define the Margin between the NMT and the LM at the $t$-th time step as the difference of their predicted probabilities: $\Delta(t) = p_{NMT}(y_t | y_{<t}, x) - p_{LM}(y_t | y_{<t})$ (5), where $p_{NMT}$ denotes the predicted probability of the NMT model, i.e., $p(y_t | y_{<t}, x)$, and $p_{LM}$ denotes that of the LM, i.e., $p(y_t | y_{<t})$.", "The Margin $\Delta(t)$ is negatively correlated to the overconfidence degree of the LM, and different values of the Margin indicate different cases: If $\Delta(t)$ is big, the NMT model is apparently better than the LM, and $y_t$ is strongly related to the source sentence $x$.", "Hence the LM is not overconfident.", "If $\Delta(t)$ is medium, the LM may be slightly overconfident and the NMT model has the potential to be enhanced.", "If $\Delta(t)$ is small, the NMT model might degrade to the LM and not correctly translate the source sentence, i.e., the LM is overconfident.", "Note that sometimes the model needs to focus more on the partial translation, such as when the word to be predicted is a determiner in the target language.", "In this case, although a small $\Delta(t)$ does not indicate that the LM is overconfident, enlarging $\Delta(t)$ can still enhance the NMT model.", "Based on the Margin, we firstly define the Margin loss $\mathcal{L}_M$ and then fuse it into the cross entropy loss function to obtain the Margin-based Token-level Objective (MTO).", "Formally, we define the Margin loss $\mathcal{L}_M$ to maximize the Margin as follows: $\mathcal{L}_M = \sum_{t=1}^{T} (1 - p_{NMT}(t)) \cdot M(\Delta(t))$ (6), where we abbreviate $p_{NMT}(y_t | y_{<t}, x)$ as $p_{NMT}(t)$.", "$M(\Delta(t))$ is a function of $\Delta(t)$, namely the Margin function, which is monotonically decreasing (e.g., $1 - \Delta(t)$).", "Moreover, when some words have the same $\Delta(t)$ but different $p_{NMT}(t)$, their meanings are quite different: (1) If $p_{NMT}(t)$ is big, the NMT model learns the token well and does not need to focus on the Margin too much; (2) If $p_{NMT}(t)$ is small, the NMT model urgently needs to be optimized on the token, thus the weight of $M(\Delta(t))$ should be enlarged.", "(In addition, if $p_{NMT}(y_t | y_{<t}, x)$ is large, less attention will be paid to this data because $y_t$ has been learned well, as described in detail in Section 3.2.)", "Therefore, as the weight of $M(\Delta(t))$, $1 - p_{NMT}(t)$ enables the model to treat tokens wisely.", "Variations of $M(\Delta)$.", "We abbreviate the Margin function $M(\Delta(t))$ as $M(\Delta)$ hereafter.", "A simple and intuitive definition is the Linear function, $M(\Delta) = 1 - \Delta$, which has the same gradient for different $\Delta$.", "However, as illustrated in Section 3.1, different values of $\Delta$ have completely
different meanings and need to be treated differently.", "Therefore, we propose three non-linear Margin functions $M(\Delta)$ as follows: Cube: $M(\Delta) = (1 - \Delta^3)/2$.", "Quintic (fifth power): $M(\Delta) = (1 - \Delta^5)/2$.", "Log: $M(\Delta) = -\frac{1}{\lambda}\log\frac{1}{1+e^{-\lambda\Delta}} + 0.5$, where $\lambda$ is a hyperparameter for Log.", "As shown in Figure 1, the four variations have quite different slopes.", "Specifically, the three nonlinear functions are more stable around $\Delta = 0$ (e.g., in $[-0.5, 0.5]$) than Linear, especially Quintic.", "We will report the performance of the four $M(\Delta)$ variations concretely and analyze why the three non-linear $M(\Delta)$ perform better than Linear in Section 5.4.", "Finally, based on $\mathcal{L}_M$, we propose the Margin-based Token-level Objective (MTO): $\mathcal{L}_T = \mathcal{L}^{NMT}_{ce} + \lambda_M \mathcal{L}_M$ (7), where $\mathcal{L}^{NMT}_{ce}$ is the cross-entropy loss of the NMT model defined in Eq. 2 and $\lambda_M$ is the hyperparameter for the Margin loss $\mathcal{L}_M$.", "(In order to keep the range of $M(\Delta)$ roughly within [0,1], we set the Linear function to $(1 - \Delta)/2$.)",
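A minimal sketch of the token-level objective in Eqs. (5)-(7), assuming per-token probabilities from the NMT model and the auxiliary LM are available as tensors; the Quintic variation is shown because the experiments report it performs best, and $\lambda_M = 8$ matches the value later quoted for Zh-En.

```python
import torch

def m_quintic(delta):
    # The Quintic variation, (1 - delta^5) / 2
    return (1 - delta ** 5) / 2

def mto_loss(p_nmt, p_lm, lambda_m=8.0):
    delta = p_nmt - p_lm                                   # Margin, Eq. (5)
    ce = -torch.log(p_nmt).sum()                           # Eq. (2)
    margin_loss = ((1 - p_nmt) * m_quintic(delta)).sum()   # Eq. (6)
    return ce + lambda_m * margin_loss                     # MTO, Eq. (7)

p_nmt = torch.rand(6).clamp(min=1e-6)   # stand-ins for per-token probabilities
p_lm = torch.rand(6)
print(mto_loss(p_nmt, p_lm))
```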
"Furthermore, through analyzing the Margin distribution of target sentences, we observe that target sentences in the training data which have many tokens with negative Margin are almost always hallucinations of the source sentences (i.e., dirty data), and thus harm the model performance.", "Therefore, based on MTO, we further propose the Margin-based Sentence-level Objective (MSO) to address this issue.", "Compared with the LM, the NMT model predicts the next word with more prior knowledge (i.e., the source sentence).", "Therefore, it is intuitive that when predicting $y_t$, the NMT model should predict more accurately than the LM, as follows: $p_{NMT}(y_t | y_{<t}, x) > p_{LM}(y_t | y_{<t})$ (8).", "Actually, the above inequality is equivalent to $\Delta(t) > 0$.", "The larger $\Delta(t)$ is, the more the NMT model exceeds the LM.", "However, analyzing the Margin distribution reveals many tokens with negative Margin.", "We conjecture the reason is that the target sentence does not correspond to the source sentence in the training corpus, i.e., the target sentence is a hallucination.", "Actually, we also observe that if a large proportion of tokens in a target sentence have negative Margin (e.g., 50%), the sentence probably does not correspond to the source sentence, such as the case in Figure 2. These dirty data will harm the performance of the NMT model.", "Formally, we define the proportion of tokens with negative Margin in a target sentence as $R(x, y) = \#\{y_t \in y : \Delta(t) < 0\} \, / \, \#\{y_t : y_t \in y\}$ (9), where $\#\{y_t \in y : \Delta(t) < 0\}$ denotes the number of tokens with negative $\Delta(t)$ in $y$, and $\#\{y_t : y_t \in y\}$ is the length of the target sentence $y$.", "When $R(x, y)$ is larger than a threshold $k$ (e.g., $k$ = 50%), the target sentence may be desperately inadequate, or even completely unrelated to the source sentence, as shown in Figure 2.", "In order to eliminate the impact of these seriously inadequate sentences, we ignore their loss during training via the Margin-based Sentence-level Objective (MSO): $\mathcal{L}_S = \mathbb{I}_{R(x,y)<k} \cdot \mathcal{L}_T$ (10), where $\mathbb{I}_{R(x,y)<k}$ is a dynamic weight function at the sentence level.", "The indicator function $\mathbb{I}_{R(x,y)<k}$ equals 1 if $R(x, y) < k$, and 0 otherwise, where $k$ is a hyperparameter.", "$\mathcal{L}_T$ is the MTO defined in Eq. 7.", "$\mathbb{I}_{R(x,y)<k}$ is dynamic during the training stage.", "During training, as the model gets better, its ability to distinguish hallucinations improves, and thus $\mathbb{I}_{R(x,y)<k}$ becomes more accurate.", "We will analyze the changes of $\mathbb{I}_{R(x,y)<k}$ in Section 5.4.", "Jointly Pretraining.", "The language model mechanism in NMT cannot be directly evaluated, so we train an auxiliary LM to represent it.", "We pretrain them together using a fused loss function: $\mathcal{L}_{pre} = \mathcal{L}^{NMT}_{ce} + \lambda_{LM} \mathcal{L}^{LM}_{ce}$ (11), where $\mathcal{L}^{NMT}_{ce}$ and $\mathcal{L}^{LM}_{ce}$ are the cross entropy loss functions of the NMT model and the LM defined in Eq. 2 and Eq. 4, respectively.", "$\lambda_{LM}$ is a hyperparameter.", "Specifically, we jointly train them by sharing their decoders' embedding layers and their pre-softmax linear transformation layers (Vaswani et al., 2017).", "There are two reasons for joint training: (1) making the auxiliary LM as consistent as possible with the language model mechanism in NMT; (2) avoiding abundant extra parameters.", "Finetuning.", "We finetune the NMT model by minimizing the MTO ($\mathcal{L}_T$ in Eq. 7) and the MSO ($\mathcal{L}_S$ in Eq. 10).", "Note that the LM is not involved at the inference stage.",
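A sketch of the sentence-level filtering of Eqs. (9)-(10) and the joint pretraining loss of Eq. (11); the threshold $k$ = 0.3 and $\lambda_{LM}$ = 0.01 follow values quoted in the experiments, and the tensors are illustrative stand-ins for real per-token probabilities.

```python
import torch

def mso_loss(p_nmt, p_lm, mto, k=0.3):
    delta = p_nmt - p_lm
    r = (delta < 0).float().mean()     # R(x, y), Eq. (9)
    indicator = (r < k).float()        # I_{R(x,y)<k}, recomputed as training evolves
    return indicator * mto             # Eq. (10): suspected hallucinations contribute 0

def pretrain_loss(ce_nmt, ce_lm, lambda_lm=0.01):
    # Eq. (11): joint pretraining of the NMT model and the auxiliary LM
    return ce_nmt + lambda_lm * ce_lm

# e.g. mso = mso_loss(p_nmt, p_lm, mto_loss(p_nmt, p_lm)) during finetuning
```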
to 512 and the encoder/decoder layers to 6. All three tasks are trained with 8 NVIDIA V100 GPUs, and the batch size for each GPU is 4096 tokens.", "The beam size is 5 and the length penalty is 0.6.", "Adam optimizer (Kingma and Ba, 2014) is used in all the models.", "The LM architecture is the decoder of the Transformer excluding the cross-attention layers, sharing the embedding layer and the pre-softmax 5 https://github.com/moses-smt/mosesde coder/blob/master/scripts/generic/multi-bleu.perl 6 https://github.com/moses-smt/mosesde coder/blob/mast-er/scripts/generic/mteval-v13a.pl linear transformation with the NMT model.", "For En De, Zh En, and En Fr, the number of training steps is 150K for jointly pretraining stage and 150K for finetuning 7 .", "During pretraining, we set LM to 0.01 for all three tasks 8 .", "Experimental results shown in Appendix A indicate that the LM has converged after pretraining for all the three tasks.", "During finetuning, the Margin function M () in Section 3.2 is set to Quintic , and we will analyze the four M () in Section 5.4.", "M in Eq.", "7 is set to 5, 8, and 8 on En De, En Fr and Zh En, respectively.", "For MSO, the threshold k in Eq.", "10 is set to 30% for En De and Zh En, 40% for En Fr.", "The two hyperparameters (i.e., M and k ) are searched on validation sets, and the selection details are shown in Appendix B. The baseline model (i.e., vanilla Transformer) is trained for 300k steps for En De, En Fr and Zh En.", "Moreover, we use a joint training model as our secondary baseline, namely NMT+LM, by jointly training the NMT model and the LM throughout the training stage with 300K steps.", "The training steps of all the models are consistent, thus the experiment results are strictly comparable.", "We first evaluate the main performance of our approaches (Section 5.1 and 5.2).", "Then, the human evaluation further confirms the improvements of translation adequacy and fluency (Section 5.3).", "Finally, we analyze the positive impact of our models on the distribution of Margin and explore how each fragment of our method works (Section 5.4).", "The results on WMT14 English-to-German (En De) are summarized in Table 1. 
We list the results from (Vaswani et al., 2017) and several related competitive NMT systems by various methods, such as Minimum Risk Training (MRT) objective (Shen et al., 2016), Simple Fusion of NMT and LM (Stahlberg et al., 2018), optimizing adequacy metrics (Kong et al., 2019; Feng et al., 2019) and improving the Transformer architecture (Yang et al., 2018; Zheng et al., 2019; Yang et al., 2019; Weng et al., 2020b; Yan et al., 2020a).", "We re-7 The LM does not need to be state-of-the-art.", "The previous study of (Baziotis et al., 2020) has shown that a more powerful LM does not lead to further improvements to NMT.", "8 The experimental results show that the model is insensitive to LM .", "Therefore we make LM consistent for all the three tasks.", "Similarly, we re-implement the Simple Fusion (Stahlberg et al., 2018) model.", "9 Finally, the results of the joint training model NMT+LM, and models with our MTO and MSO objectives are reported.", "Compared with the baseline, NMT+LM yields +0.75 BLEU improvement.", "Based on NMT+LM, our MTO achieves further improvement with +0.50 BLEU scores, indicating that preventing the LM from being overconfident could significantly enhance model performance.", "Moreover, MSO performs better than MTO by +0.11 BLEU scores, which implies that the dirty data in the training dataset indeed harm the model performance, and the dynamic weight function IR ( x , y ) <k in Eq.", "10 could reduce the negative impact.", "In conclusion, our approaches improve up to +1.36 BLEU scores on En De compared with the Transformer baseline and substantially outperforms the existing NMT systems.", "The results demonstrate the effectiveness and superiority of our approaches.", "The results on WMT14 English-to-French (En Fr) and WMT19 Chinese-to-English (Zh En) are shown in Table 2. We also list the results of (Vaswani et al., 2017) and our reimplemented Transformer as the baselines.", "On En Fr, our reimplemented result is higher than the result of (Vaswani et al., 2017), since we update 300K steps while Vaswani et al. (2017) only update 100K steps.", "Many studies obtain similar results to ours (e.g., 41.1 BLEU scores from (Ott et al., 2019)).", "Compared with the baseline, NMT+LM yields +0.07 and +0.15 BLEU improvements on En Fr and Zh En, respectively.", "The improvement of NMT+LM on En De in Table 1 (i.e., +0.75) is greater than these two datasets.", "We conjecture the reason is that the amount of training data of En De is much smaller than that of En Fr and Zh En, thus NMT+LM is more likely to improve the model performance on En De.", "Compared with NMT+LM, our MTO achieves further improvements with +0.42 and +1.04 BLEU scores on En Fr and Zh En, respectively, which demonstrates the performance improvement is mainly due to our Margin -based objective rather than joint training.", "Moreover, based on MTO, our MSO further yields +0.14 and +0.31 BLEU improvements.", "In summary, our approaches improve up to +0.63 and +1.50 BLEU scores on En Fr and Zh En compared with the baselines, respectively, which demonstrates the effectiveness and generalizability of our approaches.", "We conduct the human evaluation for translations in terms of adequacy and fluency.", "Firstly, we ran-Model Adequacy Fluency Ave.", "domly sample 100 sentences from the test set of WMT19 Zh En.", "Then we invite three annotators to evaluate the translation adequacy and fluency.", "Five scales have been set up, i.e., 1, 2, 3, 4, 5. 
"For adequacy, 1 means totally irrelevant to the source sentence, and 5 means semantically equal to the source sentence.", "For fluency, 1 represents not fluent and incomprehensible; 5 represents very native.", "Finally, we take the average of the scores from the three annotators as the final score.", "The results of the baseline and our approaches are shown in Table 3.", "Compared with the NMT baseline, NMT+LM, MTO and MSO improve adequacy by 0.08, 0.22, and 0.37 scores, respectively.", "Most improvements come from our Margin-based methods MTO and MSO, and MSO performs the best.", "For fluency, NMT+LM achieves a 0.2 improvement compared with NMT.", "Based on NMT+LM, MTO and MSO yield further improvements of 0.01 and 0.05 scores, respectively.", "Human evaluation indicates that our MTO and MSO approaches remarkably improve translation adequacy and slightly enhance translation fluency.", "Margin between the NMT and the LM.", "Firstly, we analyze the distribution of the Margin between the NMT and the LM (i.e., $\Delta$ in Eq. 5).", "As shown in Figure 3, for the joint training model NMT+LM, although most of the Margins are positive, there are still many tokens with negative Margin and a large number of Margins around 0.", "This indicates that the LM is probably overconfident for many tokens, and addressing the overconfidence problem is meaningful for NMT.", "By comparison, the Margin distribution of MSO is dramatically different from NMT+LM: the tokens with Margin around 0 are significantly reduced, and the tokens with Margin in $[0.75, 1.0]$ are increased apparently.", "More precisely, we list the percentage of tokens with negative Margin and the average Margin for each model in Table 4. Compared with NMT+LM, MTO and MSO reduce the percentage of negative Margin by 2.28 and 1.56 points, respectively.", "We notice MSO performs slightly worse than MTO here, because MSO neglects the hallucinations during training.", "As there are many tokens with negative Margin in hallucinations, the ability of MSO to reduce the proportion of $\Delta < 0$ is weakened.", "We further analyze the effects of MTO and MSO on the average Margin.", "Both MTO and MSO improve the average Margin by 33% (from 0.33 to 0.44).", "In conclusion, MTO and MSO both indeed increase the Margin between the NMT and the LM.", "Variations of $M(\Delta)$.", "We compare the performance of the four Margin functions $M(\Delta)$ defined in Section 3.2.", "We list the BLEU scores of the Transformer baseline, NMT+LM, and our MTO approach with the four $M(\Delta)$ in Table 5. All four variations bring improvements over NMT and NMT+LM.", "The results of Log with different $\lambda$ are similar to Linear, while far lower than Cube and Quintic.", "And Quintic performs the best among all four variations.", "We speculate the reason is that Quintic is the most stable of the four functions around $\Delta = 0$.", "(Table 5, case-sensitive BLEU scores (%) on the Zh En test set of MTO with several variations of $M(\Delta)$: NMT (Transformer) 25.75 (ref); +LM 25.90 (+0.15); +Linear 26.13 (+0.38); +Cube 26.45 (+0.60); +Quintic 26.94 (+1.19); +Log ($\lambda$=5) 26.12 (+0.37); +Log ($\lambda$=10) 26.07 (+0.32); +Log ($\lambda$=20) 26.24 (+0.49).)", "Effects of the Weight of $M(\Delta)$.", "In MTO, we propose the weight $1 - p_{NMT}(t)$ for the Margin function $M(\Delta)$ in Eq. 6. To validate its importance, we remove the weight, and the Margin loss degrades to $\mathcal{L}_M = \sum_{t=1}^{T} M(\Delta(t))$.", "The results are listed in Table 6.",
"Compared with NMT+LM, MTO without the weight performs worse, with 0.25 and 0.05 BLEU decreases on the validation set and test set, respectively.", "Compared with MTO with the weight, it decreases by 0.73 and 1.09 BLEU on the validation set and test set, respectively.", "This demonstrates that the weight $1 - p_{NMT}(t)$ is indispensable for our approach.", "Changes of $\mathbb{I}_{R(x,y)<k}$ During Training.", "In MSO, we propose a dynamic weight function $\mathbb{I}_{R(x,y)<k}$ in Eq. 10.", "Figure 4 shows the changes of $\mathbb{I}_{R(x,y)<k}$ in MSO and the BLEU scores of MSO and MTO during finetuning.", "As the training continues, our model gets more competent, and the proportion of sentences judged to be dirty data by our model increases rapidly at first and then flattens out, which is consistent with the trend of the BLEU of MSO.", "(Figure 4: changes of the proportion of $\mathbb{I}_{R(x,y)<30\%} = 0$ on Zh En during finetuning for MSO, and BLEU scores (%) on the validation set of Zh En for MTO and MSO.)", "Moreover, by adding the dynamic weight function, MSO outperforms MTO at most steps.", "Case Study.", "To better illustrate the translation quality of our approach, we show several translation examples in Appendix C. Our approach grasps more segments of the source sentences, which are mistranslated or neglected by the Transformer.", "Translation Adequacy of NMT.", "NMT has suffered from the hallucination and inadequacy problem for a long time (Tu et al., 2016; Muller et al., 2020; Wang and Sennrich, 2020; Lee et al., 2019).", "Many studies improve the architecture of NMT to alleviate the inadequacy issue, including tracking translation adequacy by coverage vectors (Tu et al., 2016; Mi et al., 2016), modeling a global representation of the source side (Weng et al., 2020a), dividing the source sentence into past and future parts (Zheng et al., 2019), and multi-task learning to improve the encoder and the cross-attention modules in the decoder (Meng et al., 2016, 2018; Weng et al., 2020b).", "They inductively increase the translation adequacy, while our approaches directly maximize the Margin between the NMT and the LM to prevent the LM from being overconfident.", "Other studies enhance translation adequacy with adequacy metrics or additional optimization objectives.", "Tu et al. (2017) minimize the difference between the original source sentence and the source sentence reconstructed from NMT output.", "Kong et al. (2019) propose a coverage ratio of the source sentence by the model translation.", "Feng et al. (2019) evaluate the fluency and adequacy of translations with an evaluation module.",
"However, the metrics or objectives in the above approaches may not wholly represent adequacy.", "On the contrary, our approaches are derived from the criteria of the NMT model and the LM, and are thus credible.", "Language Model Augmented NMT.", "Language models are commonly used to provide more information to improve NMT.", "For low-resource tasks, an LM trained on extra monolingual data can rerank translations by fusion (Gulcehre et al., 2015; Sriram et al., 2017; Stahlberg et al., 2018), enhance NMT's representations (Clinchant et al., 2019; Zhu et al., 2020), and provide prior knowledge for NMT (Baziotis et al., 2020).", "For data augmentation, LMs are used to replace words in sentences (Kobayashi, 2018; Wu et al., 2018; Gao et al., 2019).", "Differently, we mainly focus on the Margin between the NMT and the LM, and no additional data is required.", "Stahlberg et al. (2018) propose the Simple Fusion approach to model the difference between NMT and LM.", "Differently, it is trained to optimize the residual probability, positively correlated to $p_{NMT}/p_{LM}$, which is hard to optimize, and the LM is still required at inference, slowing down the inference speed considerably.", "Data Selection in NMT.", "Data selection and data filtering methods have been widely used in NMT.", "To balance data domains or enhance the quality of data generated by back-translation (Sennrich et al., 2016b), many approaches have been proposed, such as utilizing language models (Moore and Lewis, 2010; van der Wees et al., 2017; Zhang et al., 2020), translation models (Junczys-Dowmunt, 2018; Wang et al., 2019a), and curriculum learning (Zhang et al., 2019b; Wang et al., 2019b).", "Different from the above methods, our MSO dynamically combines language models with translation models for data selection during training, making full use of the models.", "We alleviate the problem of inadequate translation from the perspective of preventing the LM from being overconfident.", "Specifically, we firstly propose an indicator of the overconfidence degree of the LM in NMT, i.e., the Margin between the NMT and the LM.", "Then we propose Margin-based Token-level and Sentence-level objectives to maximize the Margin.", "Experimental results on three large-scale translation tasks demonstrate the effectiveness and superiority of our approaches.", "The human evaluation further verifies that our methods can improve translation adequacy and fluency.", "The research work described in this paper has been supported by the National Nature Science Foundation of China (No. 12026606).", "The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "abstain", "objective", "objective", "objective", "result", "other", "other" ]
[ "We present a novel method to extract parallel sentences from two monolingual corpora, using neural machine translation.", "Our method relies on translating sentences in one corpus, but constraining the decoding by a prefix tree built on the other corpus.", "We argue that a neural machine translation system by itself can be a sentence similarity scorer and it efficiently approximates pairwise comparison with a modified beam search.", "When benchmarked on the BUCC shared task, our method achieves results comparable to other submissions.", "Having large and high-quality parallel corpora is critical for neural machine translation (NMT).", "One way to create such a resource is to mine the web (Resnik and Smith, 2003).", "Once texts are crawled from the web, they form large collections of data in different languages.", "To find parallel sentences, a natural way is to score sentence similarity between all possible sentence pairs and extract the top-scoring ones.", "This poses two major challenges:", "1. Accurately determining the semantic similarity of a sentence pair in two languages.", "2. Efficiently scoring sentence similarity for all possible pairs across two languages.", "Scoring each source sentence against each target sentence results in unaffordable quadratic time complexity.", "A typical workflow reduces the search complexity in a coarse-to-fine manner by aligning documents then aligning sentences within documents (Uszkoreit et al., 2010).", "However, translated websites may not have matching document structures.", "Comparable Corpora (BUCC) shared task show that direct sentence alignment can be done by sentence-level lexical comparison, neural comparison or a combination of the two (Zweigenbaum et al., 2017, 2018).", "A state-of-the-art method maps all sentences to multilingual sentence embeddings and compares them using vector similarity (Artetxe and Schwenk, 2019).", "Such sentence embeddings are produced by neural encoders, but the rise of the attention mechanism demonstrates that sentence embeddings alone are insufficient to obtain full translation quality (Bahdanau et al., 2015).", "To exploit quality gains from the attention mechanism, we propose to use a full NMT system with attention to score potentially parallel sentences.", "The way we avoid pairwise scoring is inspired by constrained decoding in NMT, where the choice of output tokens is constrained to a pre-defined list (Hokamp and Liu, 2017).", "Our method works as follows: We designate one language as source and one language as target, and build a trie over all target sentences.", "Then we translate each source sentence to the target language, but constrain left-to-right beam search to follow the trie.", "In other words, every translation hypothesis is a prefix of some sentence in the target language.", "Rather than freely choosing which token to extend by, a hypothesis is limited to extensions that exist in the target language corpus.", "In effect, we are using beam search to limit target language candidates for each source sentence.", "Our work makes two contributions to parallel sentence mining.", "First, instead of comparing translated text or neural similarity, we use an NMT model to directly score and retrieve sentences on-the-fly during decoding.", "Second, we approximate pairwise comparison with beam search, so only the top-scoring hypotheses need to be considered at each decoding step.", "NMT systems can assign a conditional translation probability to an arbitrary sentence pair.", "Filtering based on this 
"Filtering based on this score (Junczys-Dowmunt, 2018) won the WMT 2018 shared task on parallel corpus filtering (Koehn et al., 2018).", "Intuitively, we could score every pair of source and target sentences using a translation system in quadratic time, then return pairs that score highly for further filtering.", "We approximate this with beam search.", "We build a prefix tree (trie) containing all sentences in the target language corpus (Figure 1).", "Then we translate each sentence in the source language corpus using the trie as a constraint on output in the target language.", "NMT naturally generates translations one token at a time from left to right, so it can follow the trie of target language sentences as it translates.", "Formally, translation typically uses beam search to approximately maximise the probability of a target language sentence given a source language sentence.", "We modify beam search to restrict partial translations to be a prefix of at least one sentence in the target language.", "The trie is merely an efficient data structure with which to evaluate this prefix constraint; partial translations are augmented to remember their position in the trie.", "We consider two places to apply our constraint.", "In post-expansion pruning, beam search creates hypotheses for the next word, prunes hypotheses to fit in the beam size, and then requires that they be prefixes of target language sentences.", "In practice, most sentences do not have translations in the corpus, and search terminates early if all hypotheses are pruned.", "In pre-expansion pruning, a hypothesis in the beam generates a probability distribution over all tokens, but only the tokens corresponding to children of the trie node can be expanded by the hypothesis.", "The search process is guaranteed to find at least one target sentence for each source sentence.", "Downstream filtering removes false positives.", "Algorithm 1 presents both variants of our modified beam search algorithm.", "Besides canonical beam search, v1 indicates post-expansion pruning while v2 indicates pre-expansion pruning.", "Figure 2 visualises trie-constrained beam search with post-expansion pruning.", "The modified beam search algorithm allows us to efficiently approximate the comparison between a source sentence and $M$ target sentences.", "We let $B$ denote the beam size and $L$ denote the maximum output length.", "Given each source sentence, our NMT decoder only expands the top $B$ hypotheses intersecting with the trie, at most $L$ times, regardless of $M$.", "With $N$ source sentences, our proposed method reduces the comparison complexity from $O(MN)$ to $O(BLN)$, where $BL \ll M$.", "Pre-expansion pruning leaves each source sentence with an output, which needs to be filtered out if it is not parallel.", "We propose to use two methods.", "When NMT generates an output, a sentence-level cross-entropy score is computed too.", "One way to perform filtering is to only keep sentences with a better per-word cross-entropy than a certain threshold.", "Another way is to use Bicleaner, an off-the-shelf tool which scores sentence similarity at the sentence pair level (Sanchez-Cartagena et al., 2018).", "Filtering is optional for post-expansion pruning.", "The trie used in our NMT decoding should be fast to query and small enough to fit in memory.", "We use an array of nodes as the basic data structure.", "Each node contains a key corresponding to a vocabulary item, as well as a pointer to another array containing all possible continuations in the next level.", "Binary search is used to find the correct continuations to the next level.",
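A compact sketch of the whole approach under toy assumptions: the trie here is dict-based for clarity rather than the array-with-binary-search layout described above, the scoring function is a placeholder returning log-probabilities, and the loop implements pre-expansion pruning (only trie children of a hypothesis are ever expanded).

```python
EOS = "</s>"

def build_trie(sentences):
    root = {}
    for sent in sentences:
        node = root
        for tok in sent + [EOS]:
            node = node.setdefault(tok, {})
    return root

def constrained_beam_search(source, step, trie, beam_size=3, max_len=20):
    beams = [(0.0, [], trie)]                  # (log-prob, tokens, trie node)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, toks, node in beams:
            dist = step(source, toks)          # log p(next token | source, prefix)
            for tok, child in node.items():    # pre-expansion: only trie children
                if tok in dist:
                    hyp = (score + dist[tok], toks + [tok], child)
                    (finished if tok == EOS else candidates).append(hyp)
        beams = sorted(candidates, key=lambda h: h[0], reverse=True)[:beam_size]
        if not beams:                          # trie exhausted: stop early
            break
    return max(finished, key=lambda h: h[0]) if finished else None

trie = build_trie([["ein", "haus"], ["ein", "hund"]])
toy_step = lambda src, prefix: {"ein": -0.1, "haus": -0.3, "hund": -0.9, EOS: -0.2}
print(constrained_beam_search(["a", "house"], toy_step, trie))
```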
"With byte pair encoding (BPE) (Sennrich et al., 2016), we can always keep the maximum vocabulary size below 65535, which allows us to use 2-byte integers as keys, minimising memory usage.", "To integrate the trie into the decoder, we maintain external pointers to possible children nodes in the trie for each active hypothesis.", "When the hypotheses are expanded at each time step, the pointers are advanced to the next trie depth level.", "This ensures that cross-referencing the trie has a negligible effect on decoding speed.", "We evaluate our method on the BUCC shared task, which requires participants to extract parallel sentences from large monolingual data of English and other languages (Zweigenbaum et al., 2017, 2018).", "Monolingual and parallel sentences come from Wikipedia and News Commentary respectively.", "Data are divided into sample, train and test sets at a ratio of 1:10:10.", "The gold alignments for the test set are not public.", "The adopted evaluation metrics are precision, recall and F1 score.", "When inspecting the BUCC shared task data, we discovered overlapping parallel sentences in the sample, train and test sets.", "For example, more than 60% of the German-English gold pairs in the test set appear in the train set too.", "We initially apply our methods to English (En) paired with German (De), French (Fr) and Russian (Ru) on the BUCC sample data.", "We train separate translation models for each language into English.", "All models are Transformer-Base (Vaswani et al., 2017), trained using Marian (Junczys-Dowmunt et al., 2018) with BPE applied.", "We use parallel data from the WMT news translation task (Bojar et al., 2015), excluding News Commentary to prevent our systems from memorising the gold parallel sentences given the overlap issue.", "We choose beam size 90 by performing a grid search on the De-En pair and keep it unchanged.", "Regarding the filtering for pre-expansion pruning, per-word conditional cross-entropy thresholds are tuned separately for each pair, because languages inherently have different (cross-)entropies.", "For Bicleaner, we stick to its default settings, except that we disable the language model filter.", "All our models translate into English, but our method is actually language-agnostic.", "Hence, we train a separate En→De model, which allows us to compare our method in inverse translation directions.", "Table 1 reports the performance of our systems on the sample data.", "Our method exhibits a much higher precision than recall.", "We hypothesise that if the systems in inverse directions retrieve different sentence pairs, then taking a union will sacrifice some precision for recall, and consequently achieve a higher F1.", "Thus, we present in the same table the results of taking the union of outputs from the En→De and De→En systems, labelled as (3)∪(4).", "Likewise, we also take the union of the results from cross-entropy and Bicleaner filtering and report scores in the same table.", "It turns out that pre-expansion works better than post-expansion.", "In order to directly compare with previous work, we tune the filtering thresholds on the train data for the De-En pair, and apply the pre-expansion variant to the test data.", "Our results, evaluated by the BUCC organisers, are reported in Table 2 together with other submissions.", "The shared task organisers confirmed the overlap issue after we pointed it out.", "They re-evaluated previous submissions without overlapping parallel sentences.", "On average, recall drops by 2%, with the largest drop being 4%.",
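The per-word cross-entropy filter mentioned above can be sketched as follows. The data layout is only illustrative, and the threshold is an assumed per-language-pair value tuned on held-out data, as the text describes.

```python
def perword_xent(logprob_sum, num_tokens):
    """Per-word cross-entropy: the negative log-probability of the generated
    sentence under the NMT model, normalised by its length."""
    return -logprob_sum / max(num_tokens, 1)

def filter_pairs(candidates, threshold):
    """Keep (source, target) pairs whose per-word cross-entropy beats a
    threshold tuned per language pair; the rest are judged non-parallel.
    `candidates` holds (src, tgt_tokens, logprob_sum) triples produced by
    the constrained decoder."""
    return [(src, " ".join(tgt))
            for src, tgt, lp in candidates
            if perword_xent(lp, len(tgt)) < threshold]
```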
"We fine-tune our De→En and En→De systems on News Commentary, excluding the sentence pairs which appear in the BUCC train or test sets.", "As BUCC submissions are asked not to use News Commentary, this is only used to contrast with our own results on the train set.", "Experiments on the sample data in Table 1 show that pre-expansion pruning outperforms post-expansion by about 10 F1 points.", "This can be explained by the fact that the decoder has a better chance to generate the correct target sentence if the available vocabulary is constrained.", "For both variants, the high precision reflects the effectiveness of using NMT as a sentence similarity scorer.", "Regarding filtering methods, we notice that Bicleaner achieves a more balanced precision and recall, while filtering by per-word cross-entropy leads to very high precision but lower recall.", "Generally, the latter does better in terms of F1.", "Taking a union of the output from the two filtering methods results in an even more balanced precision and recall, without damaging F1.", "This implies that the two filtering techniques keep different sentence pairs.", "Moreover, our models are trained using a vanilla Transformer-Base architecture on WMT data.", "Without data- or model-level techniques (e.g., in-domain fine-tuning), they are nowhere close to state-of-the-art NMT systems (Barrault et al., 2019).", "Contrasting Table 1 and Table 2 reveals a discrepancy between our method's F1 scores on the sample and train sets.", "We suspect that when there are more possible target sentences, our model will have more choices, leading to a lower performance.", "The same behaviour is also observed in other BUCC 2018 submissions which report their scores on the sample data (Azpeitia et al., 2018; Leong et al., 2018).", "Overall, our method does not outperform the state of the art, which leverages neural embeddings.", "We identify several weaknesses: beam search can only find local optima, and a genuine parallel sentence cannot be recovered once it is pruned.", "Thus the method is vulnerable when parallel sentences have different word ordering.", "For example, Por el momento, estoy bebiendo un café (English: At the moment, I am drinking a coffee) can hardly match I am drinking a coffee at the moment, because an NMT system will have a very low probability of generating a reordered translation, unless an undesirably large beam size is used.", "Moreover, compared to methods that consider textual overlap, NMT is sensitive to domain mismatch and rare words (Koehn and Knowles, 2017).", "When a system is confused by rare words in the source, we observe that the overly zealous language model in the decoder generates a fluent sentence in the trie rather than a translation.", "This problem is alleviated when our systems are fine-tuned on in-domain data, as shown by the gain in F1 in Table 2.", "Finally, we discuss the limitations of evaluating our method on the BUCC task.", "First, our method based on NMT can be liable to favour machine-translated texts, whereas the BUCC data is unlikely to contain those.", "Next, we notice that some parallel sentences in the BUCC data are not included in the gold alignments.", "For instance, in the De-En train set, de-000081259 and de-000081260 are the same German sentence, and so are en-000036940 and en-000036941 on the English side.", "Gold alignments only include (de-000081259, en-000036940) and (de-000081260, en-000036941), but not the other two combinations.", "Lastly, it still remains unknown if a system optimised for F1 will produce the sentences that can truly improve NMT performance.",
"A typical parallel corpus mining workflow first aligns parallel documents to limit the search space for sentence alignment.", "Early methods rely on webpage structure (Resnik and Smith, 2003; Shi et al., 2006).", "Later, Uszkoreit et al. (2010) translate all documents into a single language, and shortlist candidate document pairs based on TF-IDF-weighted n-grams.", "Recently, Guo et al. (2019) suggest a neural method to compare document embeddings obtained from sentence embeddings.", "With the assumption that matched documents are parallel (no cross-alignment), sentence alignment can be done by comparing sentence length in words (Brown et al., 1991) or characters (Gale and Church, 1993), which is then improved by adding lexical features (Varga et al., 2005).", "After translating texts into the same language, BLEU can also be used to determine parallel texts, by anchoring the most reliable alignments first (Sennrich and Volk, 2011).", "Most recently, Thompson and Koehn (2019) propose to compare bilingual sentence embeddings with dynamic programming in linear runtime.", "There are also research efforts on parallel sentence extraction that do not rely on document alignment.", "Munteanu and Marcu (2002) acquire parallel phrases from comparable corpora using bilingual tries and seed dictionaries.", "Azpeitia et al. (2018) compute the Jaccard similarity of lexical translation overlap.", "Leong et al. (2018) use an autoencoder and a maximum entropy classifier.", "Bouamor and Sajjad (2018) consider cosine similarity between averaged multilingual word embeddings.", "Guo et al. (2018) design a dual encoder model to learn multilingual sentence embeddings directly with added negative examples.", "Wieting et al. (2019) obtain sentence embeddings from sub-word embeddings and train a simpler model to distinguish positive and negative examples.", "Artetxe and Schwenk (2019) refine Guo et al. (2018)'s work and achieve the state of the art by looking at the margins of cosine similarities between pairs of nearest neighbours.",
"In our work, using NMT as a similarity scorer relies on constrained decoding (Hokamp and Liu, 2017), which has been applied to image captioning (Anderson et al., 2017) and keyword generation (Lian et al., 2019).", "We bring a new insight into using NMT as a similarity scorer for sentences in different languages.", "By constraining decoding to a target-side trie, beam search can approximate a pairwise comparison between source and target sentences.", "Thus, overall we present an interesting way of finding parallel sentences through trie-constrained decoding.", "Our method achieves an F1 score comparable to existing systems with a vanilla architecture and data.", "Maximising machine translation scores is biased towards finding machine-translated text produced by a similar model.", "More research is needed on this problem given the prevalent usage of NMT.", "We hypothesise that part of the success of dual conditional cross-entropy filtering (Junczys-Dowmunt, 2018) is checking that scores in both directions are approximately equal, whereas a machine translation would be characterised by a high score in one direction.", "Finally, scalability is a key issue in large-scale mining of parallel corpora, where both quantity and quality are of concern.", "The scalability of direct sentence alignment without a document aligner has not been thoroughly investigated, either in our work or in other related work.", "This work has received funding from the European Union under grant agreement INEA/CEF/ICT/A2017/1565602 through the Connecting Europe Facility.", "This paper reflects the authors' views; INEA is not responsible for any use that may be made of the information contained in this paper." ]
[ "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "Sentences with gapping, such as Paul likes coffee and Mary tea , lack an overt predicate to indicate the relation between two or more arguments.", "Surface syntax representations of such sentences are often produced poorly by parsers, and even if correct, not well suited to downstream natural language understanding tasks such as relation extraction that are typically designed to extract information from sentences with canonical clause structure.", "In this paper, we present two methods for parsing to a Universal Dependencies graph representation that explicitly encodes the elided material with additional nodes and edges.", "We find that both methods can reconstruct elided material from dependency trees with high accuracy when the parser correctly predicts the existence of a gap.", "We further demonstrate that one of our methods can be applied to other languages based on a case study on Swedish.", "Sentences with gapping (Ross, 1970) such as Paul likes coffee and Mary tea are characterized by having one or more conjuncts that contain multiple arguments or modifiers of an elided predicate.", "In this example, the predicate likes is elided for the relation Mary likes tea .", "While these sentences appear relatively infrequently in most written texts, they are often used to convey a lot of factual information that is highly relevant for language understanding (NLU) tasks such as open information extraction and semantic parsing.", "For example, consider the following sentence from the WSJ portion of the Penn Treebank (Marcus et al., 1993).", "To extract the information about unemployment rates in the various countries, an NLU system has to identify that the percentages indicate unemployment rates and the locational modifiers indicate the corresponding country.", "Given only this sentence, or this sentence and a strict surface syntax representation that does not indicate elided predicates, this is a challenging task.", "However, given a dependency graph that reconstructs the elided predicate for each conjunct, the problem becomes much easier and methods developed to extract information from dependency trees of clauses with canonical structures are much more likely to extract the correct information from a gapped clause.", "While gapping constructions receive a lot of attention in the theoretical syntax literature (e.g., Ross 1970; Jackendoff 1971; Steedman 1990; Coppock 2001; Osborne 2006; Johnson 2014; Toosarvandani 2016; Kubota and Levine 2016), they have been almost entirely neglected by the NLP community so far.", "The Penn Treebank explicitly annotates gapping constructions, by co-indexing arguments in the clause with a predicate and the clause with the gap, but these co-indices are not included in the standard parsing metrics 1156 and almost all parsers ignore them.", "1 Despite the sophisticated analysis of gapping within CCG (Steedman, 1990), sentences with gapping were deemed too difficult to represent within the CCG-Bank (Hockenmaier and Steedman, 2007).", "Similarly the treebanks for the Semantic Dependencies Shared Task (Oepen et al., 2015) exclude all sentences from the Wall Street Journal that contain gapping.", "Finally, while the tectogrammatical layer of the Prague Dependency Treebank (Bejcek et al., 2013) as well as the enhanced Universal Dependencies (UD) representation (Nivre et al., 2016) provide an analysis with reconstructed nodes for gapping constructions, there exist no methods to automatically parse to these representations.", "Here, we provide the first careful 
"As illustrated in Figure 1, we first parse to a dependency tree and then reconstruct the elided material.", "The methods differ in how much information is encoded in the dependency tree.", "The first method adapts an existing procedure for parsing sentences with elided function words (Seeker et al., 2012), which uses composite labels that can be deterministically turned into dependency graphs in most cases.", "The second method is a novel procedure that relies on the parser only to identify a gap, and then employs an unsupervised method to reconstruct the elided predicates and reattach the arguments to the reconstructed predicate.", "We find that both methods can reconstruct elided predicates with very high accuracy from gold standard dependency trees.", "When applied to the output of a parser, which often fails to identify gapping, our methods achieve a sentence-level accuracy of 32% and 34%, significantly outperforming the recently proposed constituent parser by Kummerfeld and Klein (2017).", "(To the best of our knowledge, the parser by Kummerfeld and Klein (2017) is the only parser that tries to output the co-indexing of constituents in clauses with gapping, but they lack an explicit evaluation of their co-indexing prediction accuracy.)", "Gapping constructions in English come in many forms that can be broadly classified as follows.", "(2) Single predicate gaps: John bought books, and Mary flowers.", "(3) Contiguous predicate-argument gaps (including ACCs): Eve gave flowers to Al and Sue to Paul. / Eve gave a CD to Al and roses to Sue.", "(4) Non-contiguous predicate-argument gaps: Arizona elected Goldwater Senator, and Pennsylvania Schweiker. (Jackendoff, 1971)", "(5) Verb cluster gaps: I want to try to begin to write a novel and ... Mary a play. / ... Mary to write a play. / ... Mary to begin to write a play. / ... Mary to try to begin to write a play. (Ross, 1970)", "The defining characteristic of gapping constructions is that there is a clause that lacks a predicate (the gap) but still contains two or more arguments or modifiers of the elided predicate (the remnants or orphans).", "In most cases, the remnants have a corresponding argument or modifier (the correspondent) in the clause with the overt predicate.", "These types of gapping also make up the majority of attested constructions in other languages.", "However, Wyngaerd (2007) notes that Dutch permits gaps in relative clauses, and Farudi (2013) notes that Farsi permits gaps in finite embedded clauses even if the overt predicate is not embedded.", "2.2 Target representation: We work within the UD framework, which aims to provide cross-linguistically consistent dependency annotations that are useful for NLP tasks.", "UD defines two types of representation: the basic UD representation, which is a strict surface syntax dependency tree, and the enhanced UD representation (Schuster and Manning, 2016), which may be a graph instead of a tree and may contain additional nodes.", "The analysis of gapping in the enhanced representation makes use of copy nodes for elided predicates and additional edges for elided arguments, both of which we try to automatically reconstruct in this paper.", "In the simple case in which only one predicate was elided, there is exactly one copy node.", "(See Johnson (2014) or Schuster et al. (2017) for a more comprehensive overview of cross-linguistically attested gapping constructions.)",
"The motivation behind this analysis is that the semantically empty markers to are not needed for interpreting the sentence, and minimizing the number of copy nodes leads to less complex graphs.", "Finally, if a core argument was elided along with the predicate, we introduce additional dependencies between the copy nodes and the shared arguments, such as, for example, the open clausal complement (xcomp) dependency between the copy node and Senator in the following example.", "The rationale for not copying all arguments is again to keep the graph simple, while still encoding all relations between content words.", "Arguments can be arbitrarily complex and it seems misguided to copy entire subtrees of arguments which, e.g., could contain multiple adverbial clauses.", "Note that linking to existing nodes would not work in the case of verb clusters because they do not satisfy the subtree constraint.", "Our first method adapts one of the procedures by Seeker et al. (2012), which represents gaps in dependency trees by attaching dependents of an elided predicate with composite relations.", "These relations represent the dependency path that would have existed if nothing had been elided.", "(To enhance the readability of our examples, we place the copy node in the sentence where the elided predicate would have been pronounced. However, as linear order typically does not matter for extracting information with dependency patterns, our procedures only try to recover the structure of canonical sentences but not their linear order.)", "For example, in the following sentence, the verb bought, which would have been attached to the head of the first conjunct with a conj relation, was elided from the second conjunct, and hence all nodes that would have depended on the elided verb are attached to the first conjunct using a composite relation consisting of conj and the type of argument.", "The major advantage of this approach is that the dependency tree contains information about the types of arguments, and so it should be straightforward to turn dependency trees of this form into enhanced UD graphs.", "For most dependency trees, one can obtain the enhanced UD graph by splitting the composite relations into their atomic parts and inserting copy nodes at the splitting points.", "(Note that this representation does not indicate conjunct boundaries, and for sentences with multiple gapped conjuncts, it is thus unclear how many copy nodes are required.)", "At the same time, this approach comes with the drawback of drastically increasing the label space.", "For sentences with more complex gaps as in (5), one has to use composite relations that consist of more than two atomic relations, and theoretically, the number of composite relations is unbounded: in ... and Mary a play, Mary attaches as conj>nsubj, play as conj>xcomp>xcomp>xcomp>obj (with a as its det), and and as conj>cc.",
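As a rough illustration of how such composite labels can be expanded into an enhanced graph, consider the sketch below. It handles the common two-part case only and is not the paper's exact algorithm; the edge and node representations are invented for the example.

```python
def split_composite(graph, dep, head, label, copy_registry):
    """Expand a composite relation like 'conj>nsubj' into atomic edges,
    inserting a copy of the overt predicate at the splitting point.
    `graph` is a list of (head, label, dep) edges; `copy_registry` maps
    an overt head to its copy node so all orphans share one copy."""
    parts = label.split(">")
    if len(parts) == 1:                        # nothing elided
        graph.append((head, label, dep))
        return
    if head not in copy_registry:              # one copy node per gapped conjunct
        copy = f"{head}'"                      # copy node id, e.g. bought'
        copy_registry[head] = copy
        graph.append((head, parts[0], copy))   # e.g. bought -conj-> bought'
    copy = copy_registry[head]
    # attach the orphan to the copy; longer paths (conj>xcomp>...>obj) would
    # insert one copy per splitting point, which this sketch collapses
    graph.append((copy, ">".join(parts[1:]), dep))

# usage for "John bought books and Mary flowers"
graph, registry = [], {}
split_composite(graph, "Mary", "bought", "conj>nsubj", registry)
split_composite(graph, "flowers", "bought", "conj>obj", registry)
# graph: [('bought', 'conj', "bought'"), ("bought'", 'nsubj', 'Mary'),
#         ("bought'", 'obj', 'flowers')]
```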
"3.2 Orphan procedure: Our second method also uses a two-step approach to resolve gaps, but compared to the previous method, it puts less work on the parser.", "We first parse sentences to the basic UD v2 representation, which analyzes gapping constructions as follows.", "One remnant is promoted to be the head of the clause and all other remnants are attached to the promoted phrase.", "For example, in this sentence, the subject of the second clause, Mary, is the head of the clause, and the other remnant, flowers, is attached to Mary with the special orphan relation: John bought books and Mary flowers, with edges nsubj(bought, John), obj(bought, books), conj(bought, Mary), cc(Mary, and), and orphan(Mary, flowers).", "This analysis can also be used for more complex gaps, as in the example with a gap that consists of a chain of non-finite embedded verbs in (5): ... and Mary a play, with edges conj(want, Mary), cc(Mary, and), orphan(Mary, play), and det(play, a).", "When parsing to this representation, the parser only has to identify that there is a gap but does not have to recover the elided material or determine the type of remnants.", "As a second step, we use an unsupervised procedure to determine which nodes to copy and how and where to attach the remnants.", "In developing this procedure, we made use of the fact that in the vast majority of cases, all arguments and modifiers that are expressed in the gapped conjunct are also expressed in the full conjunct.", "The problem of determining which nodes to copy and which relations to use can thus be reduced to the problem of aligning arguments in the gapped conjunct to arguments in the full conjunct.", "We apply the following procedure to all sentences that contain at least one orphan relation.", "1. Create a list F of arguments of the head of the full conjunct by considering all core argument dependents of the conjunct's head as well as clausal and nominal non-core dependents, and adverbial modifiers.", "2. Create a list G of arguments in the gapped conjunct that contains the head of the gapped conjunct and all its orphan dependents.", "3. Find the highest-scoring monotonic alignment of arguments in G to arguments in F.", "4. Copy the head of the full conjunct and attach the copy node c to the head of the full conjunct with the original relation of the head of the gapped conjunct (usually conj).", "5. For each argument g ∈ G that has been aligned to f ∈ F, attach g to c with the same relation as the parent relation of f; e.g., if f is attached to the head of the full conjunct with an nsubj relation, also attach g to c with an nsubj relation.", "Attach arguments g′ ∈ G that were not aligned to any token in F to c using the general dep relation.", "6. For each copy node c, add dependencies to all core arguments of the original node which do not have a corresponding remnant in the gapped clause.",
"For example, if the full conjunct contains a subject, an object, and an oblique modifier but the clause with the gap contains only a subject and an oblique modifier, add an object dependency between the copy node and the object in the full conjunct.", "A crucial step is the third step, determining the highest-scoring alignment.", "This can be done straightforwardly with the sequence alignment algorithm by Needleman and Wunsch (1970) if one defines a similarity function sim(g, f) that returns a similarity score between the arguments g and f.", "We defined sim based on the intuitions that often, parallel arguments are of the same syntactic category, that they are introduced by the same function words (e.g., the same preposition), and that they are closely related in meaning.", "The first intuition can be captured by penalizing mismatching POS tags, and the other two by computing the distance between argument embeddings.", "We compute these embeddings by averaging over the 100-dimensional pretrained GloVe (Pennington et al., 2014) embeddings for each token in the argument.", "Given the POS tags t_g and t_f and the argument embeddings v_g and v_f, sim is defined as follows: sim(g, f) = −‖v_g − v_f‖₂ − 𝟙[t_g ≠ t_f] · pos_mismatch_penalty.", "We set pos_mismatch_penalty, a parameter that penalizes mismatching POS tags, to 2.", "(We optimized this parameter on the training set by trying integer values from 1 to 15.)", "This procedure can be used for almost all sentences with gapping constructions.", "However, if parts of an argument were elided along with the main predicate, it can become necessary to copy multiple nodes.", "We therefore consider the alignment not only between complete arguments in the full clause and the gapped clause but also between partial arguments in the full clause and the complete arguments in the gapped clause.", "For example, for the sentence Mary wants to write a play and Sue a book, the complete arguments of the full clause are { Mary, to write a play } and the arguments of the gapped clause are { Sue, a book }.", "In this case, we also consider the partial arguments { Mary, a play }, and if the arguments of the gapped conjunct align better to the partial arguments, we use this alignment.", "(As suggested by one of the reviewers, we also ran a post-hoc experiment with a simpler similarity score function without the embedding distance term, which only takes into account whether the POS tags match. We found that quantitatively, the embeddings do not lead to significantly better scores on the test set according to our metrics, but qualitatively, they lead to better results for the examples with verb cluster gaps.)", "However, now that the token write is part of the dependency path between want and play, we also have to make a copy of write to reconstruct the UD graph of the gapped clause.", "Both methods rely on a dependency parser followed by a post-processing step.", "We evaluated the individual steps and the end-to-end performance.", "We used the UD English Web Treebank v2.1 (henceforth EWT; Silveira et al., 2014; Nivre et al., 2017) for training and evaluating parsers.", "As the treebank is relatively small and therefore only contains very few sentences with gapping, we also extracted gapping constructions from the WSJ and Brown portions of the PTB (Marcus et al., 1993) and the GENIA corpus (Ohta et al., 2002).",
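The alignment step can be sketched with a standard Needleman-Wunsch implementation over the similarity function above. Arguments are represented here as (POS tag, embedding) pairs, and the gap penalty value is an assumption for illustration, since the text does not report one.

```python
import numpy as np

POS_MISMATCH_PENALTY = 2.0   # the tuned value reported in the text
GAP_PENALTY = -3.0           # assumed; the paper does not state this value

def sim(g, f):
    """Similarity between a remnant g and a candidate correspondent f:
    negative embedding distance, minus a penalty if the POS tags differ."""
    (tg, vg), (tf, vf) = g, f
    score = -np.linalg.norm(vg - vf)
    if tg != tf:
        score -= POS_MISMATCH_PENALTY
    return score

def align(G, F):
    """Needleman-Wunsch: highest-scoring monotonic alignment of the gapped
    conjunct's arguments G to the full conjunct's arguments F.
    Returns (i, j) index pairs for aligned arguments."""
    n, m = len(G), len(F)
    dp = np.zeros((n + 1, m + 1))
    dp[1:, 0] = GAP_PENALTY * np.arange(1, n + 1)
    dp[0, 1:] = GAP_PENALTY * np.arange(1, m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = max(dp[i-1, j-1] + sim(G[i-1], F[j-1]),
                           dp[i-1, j] + GAP_PENALTY,
                           dp[i, j-1] + GAP_PENALTY)
    # trace back to recover the aligned pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if np.isclose(dp[i, j], dp[i-1, j-1] + sim(G[i-1], F[j-1])):
            pairs.append((i-1, j-1)); i -= 1; j -= 1
        elif np.isclose(dp[i, j], dp[i-1, j] + GAP_PENALTY):
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))
```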
"Further, we copied sentences from the Wikipedia page on gapping (https://en.wikipedia.org/wiki/Gapping, accessed on Aug 24, 2017) and from published papers on gapping.", "The sentences in the EWT already contain annotations with the orphan relation and copy nodes for the enhanced representation, and we manually added both of these annotations for the remaining examples.", "The composite relations can be automatically obtained from the enhanced representation by removing the copy nodes and concatenating the dependency labels, which we did to build the training and test corpus for the composite relation procedure.", "Table 1 shows properties of the data splits of the original treebank, the additional sentences with gapping, and their combination; Table 2 shows the number of sentences in our corpus for each of the gap types.", "Parser: We used the parser by Dozat and Manning (2017) for parsing to the two different intermediate dependency representations.", "This parser is a graph-based parser (McDonald et al., 2005) that uses a biLSTM to compute token representations and then uses a multi-layer perceptron with biaffine attention to compute arc and label scores.", "Setup: We trained the parser on the COMBINED training corpus with gold tokenization, and predicted fine-grained and universal part-of-speech tags, for which we used the tagger by Dozat et al. (2017).", "We trained the tagger on the COMBINED training corpus.", "As pre-trained embeddings, we used the word2vec (Mikolov et al., 2013) embeddings that were provided for the CoNLL 2017 Shared Task (Zeman et al., 2017), and we used the same hyperparameters as Dozat et al. (2017).", "Evaluation: We evaluated the parseability of the two dependency representations using labeled and unlabeled attachment scores (LAS and UAS).", "Further, to specifically evaluate how well parsers are able to parse gapping constructions according to the two annotation schemes, we also computed the LAS and UAS just for the head tokens of remnants (LAS_g and UAS_g).", "For all our metrics, we excluded punctuation tokens.", "To determine statistical significance of pairwise comparisons, we performed two-tailed approximate randomization tests (Noreen, 1989; Yeh, 2000) with an adapted version of the sigf package (Padó, 2006).", "[Table 3: Labeled (LAS) and unlabeled (UAS) attachment scores of parsers trained and evaluated on the UD representation (ORPHAN) and the composite relations representation (COMPOSITE) on the development and test sets of the EWT and the GAPPING treebank. Dev, EWT: ORPHAN 90.57 UAS / 87.32 LAS; COMPOSITE 90.46 / 87.37. Dev, GAPPING: ORPHAN 89.34 / 85.69**; COMPOSITE 88.86 / 84.21. Test, EWT: ORPHAN 90.42 / 87.06; COMPOSITE 90.54 / 87.33. Test, GAPPING: ORPHAN 87.44 / 83.97**; COMPOSITE 86.51 / 81.69.]", "Results: Table 3 shows the overall parsing results on the development and test sets of the two treebanks.", "There was no significant difference between the parser that was trained on the UD representation (ORPHAN) and the parser trained on the composite representation (COMPOSITE) when tested on the EWT data sets, which is not surprising considering that there is just one sentence with gapping each in the development and the test split.", "When evaluated on the GAPPING datasets, the ORPHAN parser performs significantly better (p < 0.01) in terms of labeled attachment score, which suggests that the parser trained on the COMPOSITE representation is indeed struggling with the greatly increased label space.",
"This is further confirmed by the attachment scores of the head tokens of remnants (Table 4).", "The labeled attachment score of remnants is significantly higher for the ORPHAN parser than for the COMPOSITE parser.", "Further, the unlabeled attachment score on the test set is also higher for the ORPHAN parser, which suggests that the COMPOSITE parser is sometimes struggling with finding the right attachment for the multiple long-distance composite dependencies.", "Our second set of experiments concerns the recovery of the elided material and the reattachment of the orphans.", "We conducted two experiments: an oracle experiment that used gold standard dependency trees and an end-to-end experiment that used the output of the parser as input.", "For all experiments, we used the COMBINED treebank.", "Evaluation: Here, we evaluated dependency graphs and therefore used the labeled and unlabeled precision and recall metrics.", "However, as our two procedures only change the attachment of orphans, we only computed these metrics for copy nodes and their dependents.", "Further, we excluded punctuation and coordinating conjunctions as their attachment is usually trivial and including them would inflate scores.", "Lastly, we computed the sentence-level accuracy for all sentences with gapping.", "For this metric, we considered a sentence to be correct if all copy nodes and their dependents of a sentence were attached to the correct head with the correct label.", "Oracle results: The top part of Table 5 shows the results for the oracle experiment.", "Both methods are able to reconstruct the elided material and the canonical clause structure from gold dependency trees with high accuracy.", "This was expected for the COMPOSITE procedure, which can make use of the composite relations in the dependency trees, but less so for the ORPHAN procedure, which has to recover the structure and the types of relations.", "The two methods work equally well in terms of all metrics except for the sentence-level accuracy, which is significantly higher for the COMPOSITE procedure.", "This difference is caused by a difference in the types of mistakes.", "All errors of the COMPOSITE procedure are of a structural nature and stem from copying the wrong number of nodes, while the dependency labels are always correct because they are part of the dependency tree.", "The majority of errors of the ORPHAN procedure stem from incorrect dependency labels, and these mistakes are scattered across more examples, which leads to the lower sentence-level accuracy.", "End-to-end results: The middle part of Table 5 shows the results for the end-to-end experiment.", "[Table 5 reports unlabeled and labeled precision and recall (UP, UR, LP, LR) and sentence-level accuracy (SAcc.) on the development and test sets.]", "The performance of both methods is considerably lower than in the oracle experiment, which is primarily driven by the much lower recall.", "Both methods assume that the parser detects the existence of a gap, and if the parser fails to do so, neither method attempts to reconstruct the elided material.", "In general, precision tends to be a bit higher for the ORPHAN procedure whereas recall tends to be a bit higher for the COMPOSITE method, but overall and in terms of sentence-level accuracy both methods seem to perform equally well.", "Error analysis: For both methods, the primary issue is low recall, which is a result of parsing errors.",
"When the parser correctly predicts the orphan relation, the main sources of error for the ORPHAN procedure are missing correspondents for remnants (e.g., [for good] has no correspondent in They had left the company, many for good) or that the types of argument of the remnant and its correspondent differ (e.g., in She was convicted of selling unregistered securities in Florida and of unlawful phone calls in Ohio, [of selling unregistered securities] is an adverbial clause whereas [of unlawful phone calls] is an oblique modifier).", "Apart from the cases where the COMPOSITE procedure leads to an incorrect structure, the remaining errors are all caused by the parser predicting the wrong composite relation.", "Kummerfeld and Klein (henceforth K&K; 2017) recently proposed a one-endpoint-crossing graph parser that is able to directly parse to PTB-style trees with traces.", "They also briefly discuss gapping constructions, and their parser tries to output the co-indexing that is used for gapping constructions in the PTB.", "The EWT and all the sentences that we took from the WSJ, Brown, and GENIA treebanks already come with constituency tree annotations, and we manually annotated the remaining sentences according to the PTB guidelines (Bies et al., 1995).", "This allowed us to train the K&K parser with exactly the same set of sentences that we used in our previous experiments.", "As this parser outputs constituency trees, we could not compute dependency graph metrics for this method.", "For the sentence-level accuracy, we considered an example to be correct if a) each argument in the gapped conjunct was the child of a single constituent node, which in turn was the sibling of the full clause/verb phrase, and b) the co-indexing of each argument in the gapped conjunct was correct.", "For example, the following bracketing would be considered correct despite the incorrect internal structure of the first conjunct: [S [S [NP-1 Al] likes [NP-2 coffee]] and [S [NP=1 Sue] [NP=2 tea]]].", "The last row of Table 5 shows the results of the K&K parser.", "The parser failed to output the correct constituency structure or co-indexing for every single example in the development and test sets.", "The parser struggled in particular with outputting the correct co-indices: for 32.5% of the test sentences with gapping, the bracketing of the gapped clause was correct but one or more of the co-indices were missing from the output.", "Overall, these results suggest that our dependency-based approach is much more reliable at identifying gapping constructions than the parser by K&K, which, in its defense, was optimized to output traces for other phenomena.", "Our method is also faster and took only seconds to parse the test set, while the K&K parser took several hours.", "One of the appeals of the ORPHAN procedure is that it can be easily applied to other languages even if there exist no annotated enhanced dependency graphs.", "(There is no theoretical reason that would prevent one from using the COMPOSITE procedure for other languages, but given that UD treebanks are annotated with orphan relations, using the COMPOSITE procedure would require additional manual annotations in practice.)", "On the one hand, this is because our method does not make use of lexical information, and on the other hand, this is because we developed our method on top of the UD annotation scheme, which has already been applied to many languages and for which many treebanks exist.", "Currently, all treebanks but the English one lack copy nodes for gapping constructions and many of them incorrectly use the orphan relation (Droganova and Zeman, 2017), and therefore we could not evaluate our method on a large variety of languages.",
"In order to demonstrate that our method can be applied to other languages, we therefore did a case study on the Swedish UD treebank.", "The Swedish UD treebank is an automatic conversion from a section of the Talbanken (Einarsson, 1976) with extensive manual corrections.", "While the treebank is overall of high quality, we noticed conversion errors that led to incorrect uses of the orphan relation in 11 of the 29 sentences with orphan relations, which we excluded from our evaluation.", "We applied our gapping resolution procedure without any modifications to the remaining 18 sentences.", "We used the Swedish word2vec embeddings that were prepared for the CoNLL 2017 Shared Task.", "Our method correctly predicts the insertion of 29 copy nodes and is able to predict the correct structure of the enhanced representation in all cases, including complex ones with elided verb clusters such as the example in Figure 2.", "[Figure 2: A Swedish sentence with multiple elided predicates and their copy nodes: tänks Ullnaområdet öka med 9000, tänks′ Märsta industriområde öka′ med 7000, tänks″ Jordbro öka″ med 4000, ... (gloss: is-thought Ullna-area increase with 9000, is-thought Märsta industrial-area increase with 7000, is-thought Jordbro increase with 4000, ...).]", "It also predicts the correct dependency label for 108/110 relations, leading to a labeled precision and labeled recall of 98.18%, which are both higher than the English numbers despite the fact that we optimized our procedure for English.", "The main reason for the higher performance seems to be that many of the Swedish examples come from informational texts from public organizations, which are more likely to be written to be clear and unambiguous.", "Further, the Swedish data does not contain challenging examples from the linguistic literature.", "As Swedish is a Germanic language like English and thus shares many structural properties, we cannot conclude that our method is applicable to any language based on just this experiment.", "However, given that our method does not rely on language-specific structural patterns, we expect it to work well for a wide range of languages.", "Gapping constructions have been little studied in NLP, but several approaches (e.g., Dukes and Habash 2011; Simkó and Vincze 2017) parse to dependency trees with empty nodes.", "Seeker et al. (2012) compared three ways of parsing with empty heads: adding a transition that inserts empty nodes, using composite relation labels for nodes that depend on an elided node, and pre-inserting empties before parsing.", "These papers all focus on recovering nodes for elided function words such as auxiliaries; none of them attempt to recover and resolve the content word elisions of gapping.", "Ficler and Goldberg (2016) modified PTB annotations of argument-cluster coordinations (ACCs), i.e., gapping constructions with two post-verbal orphan phrases, which make up a subset of the gapping constructions in the PTB.", "While the modified annotation style leads to higher parsing accuracy of ACCs, it is specific to ACCs and does not generalize to other gapping constructions.", "Moreover, they did not reconstruct gapped ACC clauses.", "Traditional grammar-based chart parsers (Kay, 1980; Klein and Manning, 2001) did handle empty nodes and so could in principle provide a parse of gapping sentences, though additional mechanisms would be needed for reconstruction.", "In practice, though, dealing with gapping in a grammar-based framework is not straightforward and can lead to a combinatorial explosion that slows down parsing in general, as has been noted for the English Resource Grammar (Flickinger, 2017, p.c.) and for an HPSG implementation for Norwegian (Haugereid, 2017).",
"The grammar-based parser built with augmented transition networks (Woods, 1970) provided an extension in the form of the SYSCONJ operation (Woods, 1973) to parse some gapping constructions, but this approach, too, lacked explicit reconstruction mechanisms and provided only limited coverage.", "There also exists a long line of work on post-processing surface-syntax constituency trees to recover traces in the PTB (Johnson, 2002; Levy and Manning, 2004; Campbell, 2004; Gabbard et al., 2006), pre-processing sentences such that they contain tokens for traces before parsing (Dienes and Dubey, 2003b), or directly parsing sentences to either PTB-style trees with empty elements or pre-processed trees that can be deterministically converted to PTB-style trees (Collins, 1997; Dienes and Dubey, 2003a; Schmid, 2006; Cai et al., 2011; Hayashi and Nagata, 2016; Kato and Matsubara, 2016; Kummerfeld and Klein, 2017).", "However, all of these works are primarily concerned with recovering traces for phenomena such as wh-movement or control and raising constructions, and, with the exception of Kummerfeld and Klein (2017), none of these works attempt to output the co-indexing that is used for analyzing gapping constructions.", "And again, none of these works try to reconstruct elided material.", "Lastly, several methods have been proposed for resolving other forms of ellipsis, including VP ellipsis (Hardt, 1997; Nielsen, 2004; Lappin, 2005; McShane and Babkin, 2016) and sluicing (Anand and Hardt, 2016), but none of these methods consider gapping constructions.", "We presented two methods to recover elided predicates in sentences with gapping.", "Our experiments suggest that both methods work equally well in a realistic end-to-end setting.", "While in general recall is still low, the oracle experiments suggest that both methods can recover elided predicates from correct dependency trees, which suggests that as parsers become more and more accurate, the gap recovery accuracy should also increase.", "We also demonstrated that our method can be used to automatically add the enhanced UD representation to UD treebanks in languages other than English.", "Apart from being useful in a parsing pipeline, we therefore also expect our method to be useful for building enhanced UD treebanks.", "All data, pre-trained models, system outputs as well as a package for running the enhancement procedure are available from https://github.com/sebschu/naacl-gapping .", "We thank the anonymous reviewers for their thoughtful feedback.", "Also thanks to Vera Gribanova and Boris Harizanov for continuous feedback throughout this project, and to Matthew Lamm for help with annotating the data.", "This work was supported in part by gifts from Google, Inc. and IPSoft, Inc.", "The first author is also supported by a Goodan Family Graduate Fellowship." ]
[ "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "other", "method", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other" ]
[ "Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples.", "Despite the success of the conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks).", "A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it.", "To study this, we introduce NATURALINSTRUCTIONS , a dataset of 61 distinct tasks, their human-authored instructions, and 193 k task instances (input-output pairs).", "The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema.", "Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones.", "We adopt generative pre-trained language models to encode task-specific instructions along with input and generate task output.", "Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19% better for models utilizing instructions).", "These models, however, are far behind an estimated performance upperbound, indicating significant room for more progress in this direction.", "1 1 Introduction We have witnessed great progress in solving many NLP datasets through fine-tuning pre-trained language models (LMs) (Peters et al., 2018; Brown et al., 2020).", "More recent studies show tremendous promise in generalization within the set of observed tasks through multi-task training and unified encoding (Khashabi et al., 2020; Aghajanyan et al., Work done while interning at Allen Institute for AI. 1 Dataset is available at https://instructions. apps.allenai.org grammarcheck tagging essentialphrases questiontyping answering questions Input: She chose to make a salad for lunch on Sunday. Question: how long did it take for her to make a salad? Crowdsourcing Instruction: List all the words that are essential for answering it correctly. [...] Crowdsourcing Instruction: Label the type of the temporal phenomena in the question. Example are [...] Output: 30mins Output: making salad Output: no ? supervision with seen tasks Output: Event duration ? evaluation on unseen tasks Crowdsourcing Instruction: Label \"yes\" if the sentence contains any grammatical issues. Otherwise, [...] Crowdsourcing Instruction: Answer the provided question based on a given [...] Figure 1: We construct the NATURALINSTRUCTIONS dataset from crowdsourcing instructions and instances of different NLP datasets. We study if models can learn from seen tasks and generalize to unseen tasks given their natural crowdsourcing instructions. 
"However, cross-task generalization, i.e., generalization to unseen tasks, has generally remained under-explored.", "For example, can we supervise a model with instances of grammar checking or question answering tasks, yet expect it to solve a different task like question typing (Fig. 1)?", "Evidently, humans are capable of such generalizations; an average human can follow natural language instructions to solve a variety of problems, as evidenced by the success of crowdsourcing platforms (also argued in Efrat and Levy (2020)).", "In this paper, we study if models can generalize to unseen tasks given their crowdsourcing instructions (Fig. 1).", "We build NATURALINSTRUCTIONS, a dataset consisting of natural crowdsourcing instructions for various tasks and their instances.", "Training on seen tasks T_seen in our dataset, we build a model that learns to follow natural instructions that define a task and performs tasks (i.e., mapping input to output).", "Testing on unseen tasks T_unseen, we evaluate if the model can perform unseen tasks solely from their instructions and without any task-specific labeled data (Table 2a; right).", "[Table 2a: instance-level generalization trains on (X^train, Y^train) and evaluates x ↦ y where (x, y) ∈ (X^test, Y^test); task-level generalization trains on {(I_t, X_t^train, Y_t^train)} for t ∈ T_seen and evaluates (x, I_t) ↦ y where (x, y) ∈ (X_t^test, Y_t^test) and t ∈ T_unseen.]", "(a) A comparison of task- vs. instance-level generalization: I_t, X_t and Y_t indicate natural language instructions, input, and output sets respectively for task t.", "In the conventional setup, training and evaluation are done on the instances of the same task.", "However, in task-level generalization, a model is expected to generalize to unseen tasks, where T_unseen ∩ T_seen = ∅.", "(b) BART evaluation on unseen tasks (the y-axis is performance on T_unseen) when supervised with seen tasks (the x-axis is |T_seen|).", "A model using instructions (I_t) consistently improves with more observed tasks.", "In contrast, models with no access to the instructions show no sign of improved generalization.", "Details in §6.3.", "In contrast to the instance-level generalization (Table 2a; left), our model uses instructions as additional input, and evaluations are done on tasks that were not observed in the training stage.", "We compile NATURALINSTRUCTIONS from task instructions written by researchers for crowdsourcing existing NLP datasets.", "Such crowdsourcing instructions often elaborate a variety of details about how a task should (and should not) be done.", "To provide a systematic study of various elements of crowdsourcing instructions, we map them to a unified schema to cover the most important elements of task descriptions such as definition, constraints, positive and negative examples.", "We collect tasks in NATURALINSTRUCTIONS as minimal stand-alone steps provided to crowdworkers to complete a downstream NLP task.", "For example, tasks collected from QASC (Khot et al., 2020) include sub-tasks about generating topic words or combining facts, as well as answering multi-hop questions.", "Therefore our dataset not only contains typical downstream tasks in NLP, but also the intermediate subtasks that are not well-represented in the common benchmarks.", "The unified schema and the collection of minimal subtasks enable training LMs that can generalize across different tasks by learning from instructions.", "In total, our dataset consists of 61 distinct NLP tasks and 193k instances.", "Our experimental results indicate that LMs learn to leverage natural language instructions as they show improved generalization to new tasks.",
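A minimal sketch of how instructions and inputs can be fed jointly to a generative encoder-decoder model such as BART follows. The concatenation format and field names are illustrative assumptions, not the paper's exact encoding.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def encode_task(instruction, x):
    """Map (I_t, x) to a single source string; which schema fields to
    include (definition, prompt, examples, ...) is a design choice."""
    return (f"Definition: {instruction['definition']} "
            f"Prompt: {instruction['prompt']} "
            f"Input: {x} Output:")

def predict(instruction, x):
    """Generate the task output y from the instruction-augmented input."""
    src = encode_task(instruction, x)
    batch = tokenizer(src, return_tensors="pt", truncation=True)
    out = model.generate(**batch, max_length=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Because the instruction is part of the input rather than baked into the weights, the same fine-tuned model can be pointed at a task it has never seen by swapping in that task's instruction.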
"For example, a BART model (Lewis et al., 2019) achieves a 19% gain in terms of cross-task generalization compared to a model not using instructions (§6).", "Importantly, LMs can generalize better to unseen tasks if they observe more tasks in training (Fig. 2b).", "This upward trajectory suggests the potential for stronger cross-task generalizable models upon scaling up the diversity of tasks represented in a meta-dataset of task instructions.", "Despite the benefits of instructions, we observe a sizable gap between models' generalization and their estimated upper bounds (§6.4), encouraging the community to work on this challenging problem.", "Contributions: In summary, the contributions of this work are as follows:", "(a) we introduce NATURALINSTRUCTIONS, a dataset of human-authored instructions curated from existing well-known datasets mapped to a unified schema, providing training and evaluation data for learning from instructions;", "(b) we build models that can encode instructions and show: (b.1) the benefit of cross-task generalization by leveraging instructions; (b.2) the importance of different elements of instructions in the performance; (b.3) noteworthy headroom for improvement on our benchmark, which hopefully will motivate further work in this direction.", "Learning from instructions.", "There is recent literature on the extent to which models follow language instructions (Hase and Bansal, 2021; Ye and Ren, 2021; Gupta et al., 2021; Zhong et al., 2021).", "For example, Efrat and Levy (2020) examine if language models can follow crowdsourcing instructions with no further training.", "On the contrary, our work pursues a fundamentally different goal: creating a dataset of crowdsourcing instructions and task instances, and formulating cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones.", "Weller et al. (2020) construct a crowdsourced dataset with short question-like task descriptions.",
"Compared to this work, our instructions are longer, more complex and natural, since they were used to collect datasets through crowdsourcing.", "PromptSource and FLAN (Wei et al., 2022; Sanh et al., 2022) are two concurrent works that pursue a similar goal as ours.", "A key difference between our work and these works is the data collection strategy.", "Our work uses natural instructions created by NLP researchers before the dataset instances were created by crowdworkers, and hence it contains the complete definition of each task (definition, things to avoid, negative examples, etc.).", "On the other hand, instructions in the concurrent work are collected retroactively based on the already-available task instances.", "Our natural instructions enable evaluating models on how they learn tasks given different elements of task descriptions.", "(See §A.5 for further comparisons.)", "Nevertheless, we believe that all these approaches to constructing instructions and task categories are complementary and the community will benefit from considering both towards solving the challenging problem of cross-task generalization.", "Prompt engineering.", "Constructing effective discrete prompts for language models to perform NLP tasks is an active area of research (Schick and Schütze, 2021; Reynolds and McDonell, 2021; Liu et al., 2021).", "Such prompts are often extremely short and may not include a complete definition of complex tasks.", "In contrast, our instructions encode detailed instructions as they were used to collect the datasets.", "Moreover, the goals are different: most prompt-engineering approaches seek prompts with higher performance on a particular task, typically through assumptions about their target task which make them non-trivial to generalize to any other task.", "However, our introduced meta-dataset enables the measurement of generalization to unseen tasks.", "Beyond standard multi-task learning.", "Multitask learning is a long-standing goal for AI (Caruana, 1997) and has led to successful models that can support a wider range of tasks (McCann et al., 2018; Raffel et al., 2020; Khashabi et al., 2020; Mishra et al., 2020; Aghajanyan et al., 2021; Ye et al., 2021).", "Most of the conventional setups in the multi-tasking literature evaluate on instances that belong to the tasks that are seen, i.e., their labeled instances were observed during training (1st column of Table 2a).", "We augment this setup by introducing natural language instructions, which enable our models to bridge to tasks that were not seen during training.", "Here we formally define the problem setup for generalization across tasks.", "Each task t consists of input/output instances (X_t, Y_t) and is described in terms of its natural language instructions I_t.", "Task-specific models: Standard supervised learning algorithms use task-specific labeled instances to learn a mapping from input x to output y, M(x) = y for (x, y) ∈ (X_t^train, Y_t^train), and the model is evaluated on the test instances of the same (or similar) task (X_t^test, Y_t^test).", "We refer to this as instance-level generalization (Table 2a; left).", "Cross-task models: In this setup, the goal is to learn a model M that at inference obtains the output y given the input x and the task instruction I_t: M(I_t, x) = y, for (x, y) ∈ (X_t, Y_t).", "In contrast to the task-specific models, no task-specific training data is used to learn the mapping M.",
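The cross-task evaluation protocol this setup implies can be sketched as follows; the data structures and the random split are illustrative assumptions (the actual benchmark uses fixed task splits).

```python
import random

def cross_task_split(tasks, num_seen, seed=0):
    """Partition tasks into T_seen and T_unseen so that evaluation measures
    generalization to tasks never observed in training.
    `tasks` maps a task name to (instructions, instances)."""
    names = sorted(tasks)
    random.Random(seed).shuffle(names)
    seen = {n: tasks[n] for n in names[:num_seen]}
    unseen = {n: tasks[n] for n in names[num_seen:]}
    return seen, unseen

def evaluate(model_fn, unseen_tasks, metric_fn):
    """Score M(I_t, x) against gold y on every unseen task; no instance of
    these tasks is ever used for training."""
    scores = {}
    for name, (instructions, instances) in unseen_tasks.items():
        preds = [model_fn(instructions, x) for x, _ in instances]
        golds = [y for _, y in instances]
        scores[name] = metric_fn(preds, golds)
    return scores
```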
We collect NATURALINSTRUCTIONS (§4) to study this question: can a model be trained to follow instructions via training tasks $T_{seen}$ and be generalized to follow instructions for a task $t' \in T_{unseen}$? We refer to this as task-level generalization (Table 2a; right). 4 NATURALINSTRUCTIONS. NATURALINSTRUCTIONS consists of instructions that describe a task (e.g., question answering) and instances of that task (e.g., answers extracted for a given question). Fig.3 shows an example instruction for the task of generating questions that require an understanding of 'event duration', accompanied with positive and negative examples that contextualize the task. Here we introduce a schema for representing instructions (§4.1) and then describe how existing datasets (their crowdsourcing templates) are mapped into our schema (§4.2). 4.1 Instruction Schema Instructions used in crowdsourcing various datasets are written by distinct authors for different purposes, and they are different in a variety of ways (see Appendix A.2 for their differences). We introduce a unified schema (Fig.4) to consistently represent these diverse forms of instructions. Our instruction schema is the result of our pilot study conducted on a subset of datasets. [Figure 3: An example from our dataset: the instructions for the MC-TACO question generation task, comprising a title, a definition, an emphasis & caution note, things to avoid, a positive and a negative example (each with input, output, and reason), a prompt, and example task instances. Note that it follows the schema provided in Fig.4; see Fig.11 for more examples.] Below we describe the ingredients of this schema: TITLE provides a high-level description of a task and its associated skill (such as question generation, answer generation). PROMPT is a single-sentence command that often appears before the input instance and connects it to the instructions. DEFINITION provides the core detailed instructions for a task. THINGS TO AVOID contains instructions regarding undesirable annotations that must be avoided. These help to define the scope of a task and the space of acceptable responses. 
EMPHASIS AND CAUTION are short but important statements highlighted in the crowdsourcing templates which were intended to be emphasized or warned against. POSITIVE EXAMPLES contain inputs/outputs similar to the input given to a worker/system and its expected output, helping crowdworkers better understand a task (Ali, 1981). NEGATIVE EXAMPLES contain inputs/outputs to emphasize THINGS TO AVOID by providing examples that must not be produced. REASON provides explanations behind why an example is positive or negative. SUGGESTION contains suggestions on how a negative example could be modified to turn it into a positive example. [Figure 4: The schema used for representing instructions in NATURALINSTRUCTIONS (§4.1), shown in plate notation.] The next section describes the process of mapping the raw instructions (designed for crowdworkers) to our instruction schema. 4.2 Constructing NATURALINSTRUCTIONS 4.2.1 Collecting Data Collecting raw instructions and instances. We use existing, widely adopted NLP benchmarks that are collected via crowdsourcing platforms and hence come with crowdsourcing templates. In the first step, we identified several datasets and engaged with their authors to get their crowdsourcing templates and raw data. This yields the following datasets: CosmosQA (Huang et al., 2019), DROP (Dua et al., 2019), Essential-Terms (Khashabi et al., 2017), MCTACO (Zhou et al., 2019), MultiRC (Khashabi et al., 2018), QASC (Khot et al., 2020), Quoref (Dasigi et al., 2019), ROPES (Lin et al., 2019) and Winogrande (Sakaguchi et al., 2020). (We only focus on textual instructions and avoid datasets that involve visual or auditory steps, mostly focusing on QA datasets that were available to the authors.) Splitting crowdsourcing instructions into minimal tasks. Almost all the crowdworking instructions include sequences of steps to guide crowdworkers in creating task instances. For example, QASC and MCTACO include 7 and 19 steps in the data creation process, respectively. We divide crowdsourcing instructions into their underlying steps and generate multiple subtasks that are minimal and standalone (we eliminate tasks that involve model-in-the-loop). [Table 1: Examples of the datasets and the tasks formed from them: Quoref (Dasigi et al., 2019) yields question generation and answer generation; QASC (Khot et al., 2020) yields topic word generation, fact generation, combining facts, question generation, answer generation, and incorrect answer generation. The extracted tasks are independent annotation assignments in the crowdsourcing templates of the datasets. The complete list is in Table 10 in the Appendix.] [Table 2: Task categories and their statistics (category: # of tasks, # of instances): question generation: 13, 38k; answer generation: 16, 53k; classification: 12, 36k; incorrect answer generation: 8, 18k; minimal modification: 10, 39k; verification: 2, 9k; total: 61 tasks, 193k instances.] Table 1 shows subtasks extracted for Quoref and QASC. For example, the main task in Quoref is to answer a question given a context paragraph, but the crowdsourcing template consists of two sub-tasks of question generation and answer generation with their separate instructions. 
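Stepping back to the schema of §4.1, it translates naturally into a small data model. A minimal sketch in Python follows; the field names mirror the text, while the class names and types are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Example:
    input: str
    output: str
    reason: Optional[str] = None       # why this example is positive/negative
    suggestion: Optional[str] = None   # how to fix a negative example

@dataclass
class Instruction:
    title: str                          # high-level description of task/skill
    prompt: str                         # single-sentence command before the input
    definition: str                     # core detailed instructions
    things_to_avoid: str                # scope of acceptable responses
    emphasis_and_caution: str           # highlighted emphases/warnings
    positive_examples: List[Example] = field(default_factory=list)
    negative_examples: List[Example] = field(default_factory=list)

@dataclass
class Task:
    instruction: Instruction
    instances: List[Example]            # (input, expected output) pairs
```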
This process results in a more consistent definition of tasks, enabling a successful mapping of instructions into our schema, in contrast to the work of Efrat and Levy (2020) that uses crowdsourcing instructions as-is. In total, there are 61 tasks, which are categorized into 6 semantic categories (Table 2). We assigned these broad categories to the tasks to understand their collective behavior in the experiments. It is noteworthy that, despite the apparent resemblance of the tasks included in the same category, any two tasks are distinct. For example, while question generation is part of Quoref, CosmosQA, and QASC, each has its own separate variant of the question generation task (see Fig.10 in the Appendix). 4.2.2 Mapping Raw Instructions to Schema We manually fill in the fields of our instruction schema with the content from the crowdsourcing instructions. For instance, parts of the raw instructions that are highlighted for emphasis are incorporated as part of our emphasis/caution field. The modifications suggested in this step were applied by one author and were verified by another author (on average, the process of data curation for each task takes around 5 to 34 hours; details in the Appendix, Table 9). Improving description quality and consistency. We edit raw instructions to ensure their quality. In particular, we fix writing issues (typos, ambiguities, etc.) and redact repetitions. While repetition often helps in augmenting human understanding, short and concise instructions are often more effective for computers due to their limited attention span (Beltagy et al., 2020). Augmenting examples and reasons. There is a large variance in the number of examples provided in the raw instructions. Instructions often include more positive examples, or some instructions do not include any negative examples (e.g., QASC). Whenever possible, we add negative examples such that each task has at least two negative examples. Furthermore, not all raw instructions contain REASONS or SUGGESTIONS for each of their examples. For example, positive examples are usually not accompanied by explanations, and most datasets do not include suggestions. We add them wherever such information is missing in the instructions. Collecting input/output instances for subtasks. Most of our tasks are the intermediate steps in the crowdsourcing process. Therefore, to extract input/output instances for each task, we need to parse the raw annotations of crowdworkers for every step. Since each dataset stores its annotations in a slightly different format, extracting and unifying such intermediate annotations can be non-trivial. Verification. An annotator verified the quality of the resulting data in consultation with dataset authors. The annotator iterated on the authors' feedback (3 iterations on average) until they were satisfied. Quality assessment. We ask independent human annotators to answer 240 random instances (20 instances from 12 random tasks, used later for our evaluation, §5.1). The subsequent evaluation of the human-generated responses results in more than 96% accuracy, which indicates that humans can effortlessly understand and execute our instructions. 4.2.3 NATURALINSTRUCTIONS Statistics In summary, NATURALINSTRUCTIONS consists of subtasks, each with a set of instructions and input/output instances (Fig.3 and 4). The complete list of instructions is included in the appendix. In total, the dataset includes 61 tasks and 193k instances. 
Table 2 shows data statistics for each task category. On average, instructions contain 4.9 positive examples and 2.2 negative examples. The longest element of instructions is usually DEFINITION with 65.5 tokens and the shortest is TITLE with 8.3 tokens (more statistics in Table 3). [Table 3: Statistics of NATURALINSTRUCTIONS (statistic: value): title length: 8.3 tokens; prompt length: 12.6 tokens; definition length: 65.5 tokens; things to avoid length: 24.1 tokens; emphasis/caution length: 45.0 tokens; reason length: 24.9 tokens; suggestion length: 19.6 tokens; num of positive examples: 4.9; num of negative examples: 2.2.] 5 Problem Setup and Models Here we define different cross-task generalization settings (§5.1) and the models (§5.2).", "Random split.", "This setup follows the common practice in benchmarking NLP models with random data splits.", "Here, two tasks from each task category (Table 2) in NATURALINSTRUCTIONS are randomly selected for evaluation, and the rest of the tasks are used for training.", "This leads to 12 tasks in $T_{unseen}$ and 49 tasks in $T_{seen}$.", "Leave-one-out generalization.", "To better understand the nature of cross-task generalization, we study more restrictive settings of dividing training and evaluation tasks.", "leave-one-category: evaluates how well a model generalizes to a task category if it is trained on the others (no task of that category is in $T_{seen}$).", "leave-one-dataset: evaluates how well a model can generalize to all tasks in a particular dataset if it is trained on all other tasks (no task of that dataset is in $T_{seen}$).", "This split prevents any leakage across tasks that belong to the same source datasets.", "leave-one-task: evaluates how well a model can learn a single task by training on all other tasks.", "We build models using pre-trained LMs: BART (Lewis et al., 2019), an encoder-decoder architecture, for fine-tuning, and GPT3 (Brown et al., 2020) for few-shot experiments.", "Encoding instructions and instances.", "For every problem setup, we map a given instruction $I_t$ and an input instance $x$ into a textual format, obtaining $enc(I_t, x)$, from which an output $y$ is decoded.", "This encoding function is then fed to an encoder-decoder model to predict $y$: $M: enc(I_t, x) \rightarrow y$.", "Encoding instances follows a standard NLP paradigm of mapping an input instance to text.", "Each instruction $I_t$ consists of multiple elements as described in our instruction schema (§4.1).", "Here, we map each element of the instruction to a textual format and append it before the input instance.", "Fig.5 shows how we encode the full instruction.", "To study the impact of each instruction element for cross-task generalization, we compare these encodings: (1) PROMPT, (2) POS.", "EXAMPLES, (3) PROMPT + DEFINITION, (4) PROMPT + THINGS TO AVOID, (5) PROMPT + EMPHASIS, (6) PROMPT + POS.", "EXAMPLES, (7) PROMPT + DEFINITION + POS.", "EXAMPLES, and (8) FULL INSTRUCTION.", "Each of these (e.g., PROMPT and POS. 
EXAMPLES) correspond to prompting setups in the recent literature (Le Scao and Rush, 2021; Lu et al., 2021).", "BART.", "We use BART (base) (Lewis et al., 2019), which allows us to fine-tune its model parameters.", "This is an encoder-decoder architecture with 140M parameters.", "For each setup, the input is encoded as described above. [Table 4: Cross-task generalization of BART under various splits (§5.1); models are trained on $T_{seen}$ tasks and evaluated on $T_{unseen}$. Columns give the evaluation set $T_{unseen}$ under random split of tasks / leave-one-category (QG) / leave-one-dataset (QASC) / leave-one-task (QASC QG). BART (fine-tuned), NO INSTRUCTIONS: 13 / 6 / 37 / 20; BART (fine-tuned), FULL INSTRUCTIONS: 32 / 17 / 51 / 56; GPT3 (not fine-tuned), FULL INSTRUCTIONS: 24 / 33 / 22 / 33.]", "GPT3.", "As a comparison, we evaluate GPT3 (Brown et al., 2020), which is a 175B-parameter autoregressive LM (about 1.2k times larger than BART) and has shown promising results in mimicking demonstrations provided in its prompt.", "We cannot fine-tune the parameters of this massive model and use it as-is under its default setting on the evaluation tasks in $T_{unseen}$ (§5.1) using the encoding introduced earlier.", "Evaluation metrics.", "We treat all of our tasks as text generation problems and evaluate them with automated evaluation metrics for text generation.", "In particular, we use ROUGE-L (Lin, 2004) to automatically evaluate the generated outputs.", "(Our experiments show that other metrics, e.g., BLEURT (Sellam et al., 2020), are also correlated with ROUGE-L, which has also been used in generative QA tasks.)", "Implementation details.", "For BART, our models are trained for 3 epochs with a learning rate of 5e-5 for a given training split and input encoding.", "For GPT3, we use the davinci-instruct engine and produce outputs with greedy decoding, generating up to a maximum of 16 tokens (the default value).", "We use the default stop condition, which is 2 newline tokens.", "(The relevant code is available at: https://github.com/allenai/natural-instructions-v1.)", "6.1 Generalization Under Various Task Splits Table 4 reports the results of the BART model trained and evaluated with various task splits (§5.1).", "For comparison, we evaluate GPT3, which uses no fine-tuning, unlike BART, which is fine-tuned with the $T_{seen}$ tasks.", "The first column corresponds to the random split of tasks, while the remaining columns report cross-task generalization results of the BART model under leave-one-x splits (§5.1).", "For x = category, the tasks in the question-generation category are held out during training. For x = dataset, the tasks that were extracted from the QASC dataset were excluded from training. For x = task, we train a model on all tasks except the QASC question generation task, which is used for evaluation. Instructions benefit cross-task generalization. The results indicate that BART benefits from instructions in generalizing to new tasks, regardless of task splits. For example, under the random split, the model using FULL INSTRUCTIONS results in +19% gains over a model that is not using instructions. This is particularly interesting for the leave-one-category-out split since the trained model can generalize to the tasks of a particular semantic category without being exposed to it. In comparison to GPT3, the fine-tuned BART model that utilizes instructions achieves a stronger performance despite being over 1k times smaller than GPT3. For example, a BART model using FULL INSTRUCTIONS achieves 8% higher performance than GPT3 under the random split of tasks. 
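A minimal sketch of the encoding $enc(I_t, x)$ described in §5.2: instruction elements are rendered as text and prepended to the input instance. The element order, field names, and delimiters below are assumptions for illustration; only the general recipe (textualize each selected element, then append it before the instance) comes from the text.

```python
ENCODING_SETUPS = {
    "PROMPT": ["prompt"],
    "PROMPT+DEFINITION": ["prompt", "definition"],
    # positive/negative examples are omitted here for brevity
    "FULL_INSTRUCTION": ["title", "prompt", "definition",
                         "things_to_avoid", "emphasis_and_caution"],
}

def encode(instruction: dict, x: str, setup: str = "FULL_INSTRUCTION") -> str:
    """Render selected instruction elements as text, then append the instance."""
    parts = [f"{name}: {instruction[name]}"
             for name in ENCODING_SETUPS[setup] if instruction.get(name)]
    parts.append(f"input: {x}")
    return " ".join(parts)

encoded = encode(
    {"prompt": "Ask a question on 'event duration' based on the provided sentence.",
     "definition": "Write a question that asks how long an event typically lasts."},
    "Sentence: He spent two hours on his homework.",
    setup="PROMPT+DEFINITION",
)
```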
Note that the absolute values in leave-one-category are lower due to the difficulty of this setup compared to, for example, the random split setup. While all settings involve evaluating on tasks not seen during training, the leave-one-category setting enforces more dissimilarity among training and evaluation tasks. 6.2 Generalization Under Instruction Encoding and Task Categories Table 5 reports the results of the BART model per encoding of different instruction elements (§5.2) and for different task categories. The table shows that encoding more elements of the instructions generally achieves better results than just using PROMPT or POSITIVE EXAMPLES. It additionally shows that the benefit of the instruction elements seems to depend on the target task category. We observe that the question-generation (QG) tasks benefit the most from POSITIVE EXAMPLES, whereas in classification (CF), POSITIVE EXAMPLES are of little help. [Table 5: Cross-task generalization under random split (§5.1), by task category (QG / AG / CF / IAG / MM / VF / avg); all numbers are ROUGE-L in percentage, and the numbers in parentheses indicate absolute gains compared to the NO INSTRUCTIONS baseline. BART (fine-tuned): NO INSTRUCTION 26 / 6 / 0 / 21 / 33 / 7 / 13; PROMPT 27 / 22 / 7 / 22 / 34 / 9 / 20; PROMPT + DEFINITION 35 / 24 / 50 / 25 / 36 / 7 / 30 (+50); PROMPT + THINGS TO AVOID 33 / 24 / 4 / 24 / 58 / 9 / 25 (+25); PROMPT + EMPHASIS 38 / 23 / 16 / 26 / 49 / 3 / 26 (+30); PROMPT + POS. EXAMPLES 53 / 22 / 14 / 25 / 17 / 7 / 23 (+15); PROMPT + DEFINITION + POS. EXAMPLES 51 / 23 / 56 / 25 / 37 / 6 / 33 (+65); POS. EXAMPLES 55 / 6 / 18 / 25 / 8 / 6 / 20; FULL INSTRUCTION 46 / 25 / 52 / 25 / 35 / 7 / 32 (+60). GPT3 (not fine-tuned): FULL INSTRUCTION 33 / 18 / 8 / 12 / 60 / 11 / 24 (+11). Models show improved results when provided with instructions. Fine-tuned BART achieves better performance than GPT3, despite being over 1k times smaller. Category names: QG: Question Generation, AG: Answer Generation, CF: Classification, IAG: Incorrect Answer Generation, MM: Minimal Text Modification, VF: Verification.] We hypothesize this is because it is easier to mimic question generation based on a few examples, whereas it is difficult to define classes via a few examples, where DEFINITION can be more helpful. The models show little improvement in verification (VF). We hypothesize these tasks are inherently more difficult, partially because of their distinctness from the rest of the tasks in the dataset. We hope future work on this line will study a wider variety of tasks and will improve our understanding of such failure cases. 6.3 Generalization vs. Number of Seen Tasks Fig.2b compares the impact of the number of seen tasks on cross-task generalization. For supervision, we randomly sample a few tasks as $T_{seen}$ and evaluate on 6 tasks (one from each category). (Each point in the figure is averaged over 5 random subsamples.) The results show that with the NO INSTRUCTION encoding there is no tangible value in observing more tasks. In contrast, the generalization of the models that encode instructions improves with observing more tasks. This is an exciting observation since it suggests that scaling up our dataset to more tasks may lead to stronger instruction-following systems. 6.4 Analyses Upperbound: Task-specific Models. For each task, we obtain a task-specific model (§3) by training BART separately on each task's annotated training data. We evaluate these task-specific models to obtain a loose estimate of upper bounds for each task. On average, task-specific models score 
66%, which is considerably higher than our models' best generalization (32%; Table 4). This indicates that there is considerable room for improving generalization-based models that use instructions. Impact of Negative Examples. Crowdsourcing instructions often include negative examples to exemplify undesirable responses. We study how negative examples in instructions affect cross-task generalization. [Table 6: Effect of excluding negative examples from the FULL INSTRUCTION encoding (model / split: w/ neg. examples, w/o neg. examples): BART, random split: 32, 35; BART, leave-one-x with x = category (AG): 19, 21; x = dataset (Quoref): 37, 37; x = task (QASC QG): 56, 57; GPT3: 24, 44. Negative instructions are surprisingly difficult for the models to learn from.] Our case study (Table 6) indicates that the models work better without (w/o) negative examples, contrary to the previously observed benefits of other instructional elements (e.g., definition, positive examples). This is aligned with previous studies (Xuan et al., 2020; Lin et al., 2003) that discuss the challenges of learning from negative examples. Interestingly, GPT3's drop (44 vs 24) is more significant than BART's (35 vs 32), showing that BART can partly recover through the training step. Error Analysis. We randomly sample 30 erroneous predictions of our fine-tuned BART on 3 distinct tasks (Winogrande answer generation; QASC question generation; MC-TACO incorrect answer generation). We categorize the errors into common patterns (Table 8). [Table 8: Percentage of errors on the QASC QG task (error type: % for BART): generates a nonsensical/vague question: 47; generates an invalid question: 8; generates a yes/no question: 4; copies the given fact or a subset of it: 3; generates unanswerable questions: 3. The numbers do not sum to 100 since the error types are not mutually exclusive.] Unlike GPT3, which generally suffers from generating redundant content irrelevant to instructions, our BART model provides more control; however, it can fail to generate proper output. Here are several erroneous predictions from our model: Generates a nonsensical/vague question. BART input: [...] input: Fact: graduations are used for measuring the volume of substances. output: ? BART output: How many substances are there in volume? 
Copies the given fact or a subset of it. BART input: [...] input: Fact: If someone is not breathing, begin with ventilation. output: ? BART output: If someone is breathing, begin with ventilation. Generates a yes/no question. BART input: [...] input: Fact: Lack of saliva hinders the break down of food into components the body can absorb. output: ? BART output: Does saliva hinders the break down of food into components the body can absorb? Perceived Impact of Instruction Elements. We survey human annotators to find out the value of instruction elements to humans. Except for the negative examples, which were shown to be difficult for models, we observe similar trends between humans' perceived value of those elements (Table 7) and their contributions to the model performance (Table 5). [Table 7: Results of humans' perceived importance of instruction elements (category: helpful fields, with explanations). Question Generation (QG): 1. DEFINITION (provides a holistic picture of the task); 2. EMPHASIS & CAUTION (provides key information for solving the task); 3. POSITIVE EXAMPLES (gives an idea of what is expected in the output); 4. NEGATIVE EXAMPLES (good to know the common mistakes people make). Answer Generation (AG): 1. PROMPT (limits the exploration space to question spans); 2. DEFINITION (provides a general understanding of the task); 3. POSITIVE EXAMPLES (the reason field is very helpful). Classification (CF): 1. DEFINITION (the task is unclear without this field). Incorrect Answer Generation (IAG): 1. DEFINITION (helps understand the utility of such a task); 2. EMPHASIS & CAUTION (source of some useful shortcuts); 3. POSITIVE EXAMPLES (helps in understanding the type of questions asked). Minimal Text Modification (MM): 1. THINGS TO AVOID (provides critical information). Verification (VF): 1. DEFINITION (makes the task easy to understand); 2. THINGS TO AVOID (contains useful tips required for this task); 3. POSITIVE EXAMPLES (exemplifies task understanding); 4. NEGATIVE EXAMPLES (helps avoid potential mistakes).] For example, humans viewed DEFINITION and THINGS TO AVOID as necessary fields for the classification and minimal text modification categories, respectively, which is compatible with our empirical observations (e.g., PROMPT + DEFINITION has the highest score on the CF category in Table 5). 7 Conclusion In this paper, we studied the goal of building models that generalize to new tasks by encoding and understanding crowdsourcing instructions. We introduced NATURALINSTRUCTIONS, which is built from existing crowdsourced datasets and enables building such models and systematically evaluating them. To the best of our knowledge, this is the first work to show the benefit of instructions towards improved cross-task generalization. Additionally, we observe that our proposed task leaves large room for improvement, which we believe will bring more attention to building stronger models that can generalize to a wider range of tasks. Acknowledgements We thank OpenAI for providing access to the GPT3 API, authors who generously shared their dataset templates with us, Matt Peters and Nicholas Lourie for helpful input, the Beaker team for their support with experiments, and the anonymous reviewers for their helpful feedback. The support of DARPA SAIL-ON, the DARPA CHESS program, NSF IIS-2044660, ONR N00014-18-1-2826, and the Paul G. Allen Foundation is gratefully acknowledged. References Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of EMNLP, pages 5799–5811. Ali M Ali. 1981. The use of positive and negative examples during instruction. Journal of Instructional Development, 5(1):2–7. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In NeurIPS, volume 33, pages 1877–1901. Curran Associates, Inc. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Pradeep Dasigi, Nelson F Liu, Ana Marasović, Noah A Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of EMNLP-IJCNLP, pages 5927–5934. 
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of NAACL, pages 2368–2378. Avia Efrat and Omer Levy. 2020. The Turking Test: Can language models understand instructions? arXiv preprint arXiv:2010.11982. Tanmay Gupta, A. Kamath, Aniruddha Kembhavi, and Derek Hoiem. 2021. Towards general purpose vision systems. arXiv, abs/2104.00743. Peter Hase and Mohit Bansal. 2021. When can models learn from explanations? A formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of EMNLP-IJCNLP, pages 2391–2401. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of NAACL, pages 252–262. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2017. Learning what is essential in questions. In Proceedings of CoNLL, pages 80–89. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing format boundaries with a single QA system. In Proceedings of EMNLP: Findings, pages 1896–1907. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. In Proceedings of AAAI. Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? In Proceedings of NAACL-HLT, pages 2627–2636. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of ACL. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 58–62. Winston Lin, Roman Yangarber, and Ralph Grishman. 2003. Bootstrapped learning of semantic classes from positive and negative examples. In Proceedings of the ICML Workshop on The Continuum from Labeled to Unlabeled Data, volume 1, page 21. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, and Chitta Baral. 2020. Towards question format independent numerical reasoning: A set of prerequisite tasks. arXiv preprint arXiv:2005.08516. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. 
Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67. Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–7. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd schema challenge at scale. In Proceedings of AAAI. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In Proceedings of ICLR. Timo Schick and Hinrich Schütze. 2021. Few-shot text generation with natural language instructions. In Proceedings of EMNLP. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of ACL, pages 7881–7892. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In Proceedings of ICLR. Orion Weller, Nicholas Lourie, Matt Gardner, and Matthew Peters. 2020. Learning from task descriptions. In Proceedings of EMNLP, pages 1361–1375. Hong Xuan, Abby Stylianou, Xiaotong Liu, and Robert Pless. 2020. Hard negative examples are hard, but useful. In Proceedings of ECCV, pages 126–142. Springer. Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In Proceedings of EMNLP. Qinyuan Ye and Xiang Ren. 2021. Zero-shot learning by generating task-specific adapters. arXiv preprint arXiv:2101.00420. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of ICML, pages 12697–12706. Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. In Proceedings of EMNLP: Findings, pages 2856–2878. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. 'Going on a vacation' takes longer than 'going for a walk': A study of temporal commonsense understanding. In Proceedings of EMNLP-IJCNLP." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "result", "abstain", "abstain", "other", "abstain", "objective", "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "other", "other", "other", "objective", "other", "method", "other", "abstain", "method", "other", "method", "other", "objective", "other", "other", "other", "objective", "other", "abstain", "other", "other", "other", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other" ]
[ "Transformer is an attention-based neural network, which consists of two sublayers, namely, Self-Attention Network (SAN) and Feed-Forward Network (FFN).", "Existing research explores to enhance the two sublayers separately to improve the capability of Transformer for text representation.", "In this paper, we present a novel understanding of SAN and FFN as Mask Attention Networks (MANs) and show that they are two special cases of MANs with static mask matrices.", "However, their static mask matrices limit the capability for localness modeling in text representation learning.", "We therefore introduce a new layer named dynamic mask attention network (DMAN) with a learnable mask matrix which is able to model localness adaptively.", "To incorporate advantages of DMAN, SAN, and FFN, we propose a sequential layered structure to combine the three types of layers.", "Extensive experiments on various tasks, including neural machine translation and text summarization demonstrate that our model outperforms the original Transformer.", "Recently, Transformer (Vaswani et al., 2017) has been widely applied in various natural language processing tasks, such as neural machine translation (Vaswani et al., 2017) and text summarization (Zhang et al., 2019).", "To further improve the performance of the text representation, Transformer-based variants have attracted a lot of attention (Lu et al., 2019; Sukhbaatar et al., 2019a,b; Bugliarello and Okazaki, 2019; Ma et al., 2020).", "Work is done during internship at Microsoft Research", "Sukhbaatar et al. (2019a) proposes attention span to control the maximum context size used in SAN and scales Transformer to long-range ( 8192 tokens) language modeling.", "Recently, some works targeting on FFN have been proposed.", "Lu et al. (2019) gives a new understanding of Transformer from a multi-particle dynamic system point of view and designs a macaron architecture following Strang-Marchuk splitting scheme.", "Sukhbaatar et al. (2019b) regards the FFN as the persistent memory in SAN to augment SAN.", "These works focus on enhancing SAN or FFN, but neglect the inner relationship between SAN and FFN that hinders further improvement.", "In this work, we present a more systematic analysis for both SAN and FFN to reveal their connections.", "We introduce Mask Attention Networks (MANs), in which each network has a mask matrix that element-wise multiplies a key-query attention matrix.", "We show that SAN and FFN are two special cases in MANs with static mask matrices.", "The mask matrix of SAN is an all-ones matrix, while that of FFN is an identity matrix, which is shown as", "(a) and", "(c) in Figure 1. 
Since the mask matrix of SAN places no restriction on relationship modeling with other tokens, SAN is expert in long-range dependency modeling and captures global semantics.", "In contrast, the mask of FFN prevents it from perceiving the information of other tokens and forces it into self-evolution.", "We believe that these two specialties endowed by the two mask matrices underlie the success of Transformer in text representation.", "Although positive results of Transformer have been reported, recent works (Shaw et al., 2018; Yang et al., 2018; Guo et al., 2019) have shown through experiments that modeling localness would further improve the performance.", "We argue that the deficiency of Transformer in local structure modeling is caused by the attention computation with a static mask matrix.", "In the framework of MANs, we find a problem that irrelevant tokens with overlapping neighbors incorrectly attend to each other with relatively large attention scores.", "For example, in 'a black dog jump to catch the frisbee', though catch and black are neither relevant nor neighbors, because both of them are highly related to their common neighbor dog in attention, we demonstrate that the attention score from catch to black would be large, which also decreases the attention score from catch to frisbee.", "The issue in self-attention not only introduces noise to the semantic modeling, but also misleads query tokens to overlook these neighbor tokens.", "This reveals that self-attention is insufficient in localness modeling and inspires us to mask tokens that do not appear in the neighborhood.", "To strengthen Transformer in localness modeling while better keeping the advantages of SAN and FFN, we propose a Dynamic Mask Attention Network (DMAN) as shown in Figure", "1(b), which originates from MANs.", "Observations reveal that tokens have different ranges of neighbors; for example, that of dog, which is also connected with frisbee, is larger than those of black and catch.", "Instead of being static and determined in advance, the mask matrix of DMAN is dependent on the query context and relative distance.", "In DMAN, the tokens in a specific neighborhood are able to receive more attention beyond the normal self-attention mechanism.", "This dynamic mask endows DMAN with text representation at different scales, and we validate the superiority through experiments.", "In Transformer (Vaswani et al., 2017), SAN and FFN cooperate in a sequential layered structure SAN → FFN.", "Considering that SAN, FFN, and DMAN all belong to MANs and have different advantages in text representation, instead of directly replacing SAN as in previous works (Shaw et al., 2018; Yang et al., 2018; Guo et al., 2019), we propose to incorporate them with the architecture DMAN → SAN → FFN.", "The main contributions of this work are threefold: We introduce Mask Attention Networks and reformulate SAN and FFN to point out that they are two special cases with static masks in MANs.", "We analyze the advantages of SAN and FFN in text representation learning and demonstrate that they are insufficient for localness modeling.", "Inspired by the different specialties of SAN and FFN, we propose the Dynamic Mask Attention Network (DMAN) to model localness more effectively.", "We investigate the different collaboration methods of SAN, FFN, and DMAN, and propose a sequential layered structure DMAN → SAN → FFN.", "We conduct experiments on machine translation and abstractive summarization.", "Experimental results show that our method outperforms the original Transformer.", "We also perform an ablation study to verify 
the effectiveness of different modules of our proposed model.", "In §2.1, we review the Transformer architecture.", "We introduce Mask Attention Networks and reformulate SAN and FFN to point out they are two special cases in §2.2, and analyze their deficiency in localness modeling in §2.3.", "Then, in §2.4, we describe the Dynamic Mask Attention Network (DMAN) in detail.", "At last, in §2.5, we discuss the collaboration of DMAN, SAN and FFN.", "Transformer has two sublayers: Self-Attention Network (SAN) and Feed-Forward Network (FFN).", "The attention function maps a query and a set of key-value pairs to an output, as shown in Equation 1: $A(Q, K, V) = S(Q, K)V$, $S(Q, K) = \left[ \frac{\exp(Q_i K_j^T / \sqrt{d_k})}{\sum_k \exp(Q_i K_k^T / \sqrt{d_k})} \right]$ (1), where the queries $Q$, keys $K$ and values $V$ are all matrices.", "SAN produces representations by applying the attention function to each pair of tokens from the input sequence.", "It is beneficial to capture different contextual features with multiple individual attention functions.", "Given a text representation sequence $H^l \in \mathbb{R}^{T \times d}$ in the $l$-th layer, SAN computes $H^{l+1} = [A^1, \ldots, A^I] W_H$ with $A^i = A(H^l W_Q^i, H^l W_K^i, H^l W_V^i)$ (2).", "In FFN, the computation of each $h_t^l$ in $H^l$ is independent of the others.", "It consists of two affine transformations with a pointwise non-linear function: $H^{l+1} = \mathrm{ReLU}(H^l W_1) W_2$ (3), where $W_1$ and $W_2$ are matrices of dimension $d \times d_f$ and $d_f \times d$, respectively.", "Typically, $d_f$ is set to be 4 times larger than $d$.", "On the basis of the attention function in Equation 1, we define a new mask attention function: $A_M(Q, K, V) = S_M(Q, K)V$, $S_M(Q, K) = \left[ \frac{M_{i,j} \exp(Q_i K_j^T / \sqrt{d_k})}{\sum_k M_{i,k} \exp(Q_i K_k^T / \sqrt{d_k})} \right]$ (4)", "where $M \in \mathbb{R}^{T \times T}$, with $M_{i,j} \in [0, 1]$, is a mask matrix and can be static or dynamic.", "Intuitively, the value in each position of $M$ can be viewed as the color shade in Figure 1. With the knowledge of the mask attention function, we introduce Mask Attention Networks (MANs), in which each network can be written as Equation 5: $H^{l+1} = F([A_{M^1}^1, \ldots, A_{M^I}^I]) W_H$, $A_{M^i}^i = A_{M^i}(H^l W_Q^i, H^l W_K^i, H^l W_V^i)$ (5), where $F$ is the activation function and $M^i$ is the mask matrix for the $i$-th attention head.", "Next, we show that SAN and FFN both belong to the Mask Attention Networks.", "For SAN, let $M = [1] \in \mathbb{R}^{T \times T}$ be an all-ones matrix and $F = F_{id}$ be the identity function; its mask attention function is then formalized as: $S_{[1]}(Q, K) = \left[ \frac{1 \cdot \exp(Q_i K_j^T / \sqrt{d_k})}{\sum_k \exp(Q_i K_k^T / \sqrt{d_k})} \right] = S(Q, K)$, $A_{[1]}(Q, K, V) = S_{[1]}(Q, K)V = A(Q, K, V)$ (6). [Figure 2: Overview of our proposed model.]", 
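The special cases are easy to check numerically. Below is a minimal numpy sketch of the mask attention function in Equation 4 (single head; variable names are my own): an all-ones mask reproduces standard attention, matching the SAN case above, and an identity mask simply returns $V$, matching the FFN case that follows.

```python
import numpy as np

def mask_attention(Q, K, V, M):
    """A_M(Q, K, V) from Equation 4: element-wise mask M in [0, 1]^{T x T}."""
    d_k = Q.shape[-1]
    scores = np.exp(Q @ K.T / np.sqrt(d_k))        # unnormalized attention
    masked = M * scores                            # element-wise masking
    weights = masked / masked.sum(axis=-1, keepdims=True)
    return weights @ V

T, d = 5, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))

san_out = mask_attention(Q, K, V, np.ones((T, T)))  # all-ones mask: plain SAN
ffn_in = mask_attention(Q, K, V, np.eye(T))         # identity mask: returns V
assert np.allclose(ffn_in, V)
```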
The MAN degenerates into FFN.", "H l +1 = ReLU (cid:16)(cid:2) A 1 M (cid:3)(cid:17) WH = ReLU (cid:0) H l W 1 V (cid:1) WH (9) In summary, SAN and FFN are two special cases in MANs with different static mask matrices.", "The mask matrix of SAN is an all-ones matrix and that of FFN is an identity matrix, they are two extreme cases in MANs.", "We analyze that these two static MANs are deficient in localness modeling.", "Intuitively, through blocking other tokens in advance, FFN focuses on its own information and is unable to perceive the information except itself, let alone its neighbors.", "In SAN, each token is equally accessible to any other ones.", "As the example in Introduction shows, we find that tokens not in neighborhood are also likely to attend to each other with relatively large scores.", "Therefore, SAN might introduce noises to semantic modeling and overlook the relation of neighboring signals.", "We demonstrate the issue of self-attention.", "Generally assuming that (cid:2) a, b, c (cid:3) appear in sequence, and ( a, b ) , ( b, c ) are two neighbor pairs, but a, c are not neighbors.", "First, to explicitly define the relationship of tokens, we introduce U ( h ) as the set of tokens at the distance of from h with key and query linear transformation in SAN, in other words, u U ( h ) || hW Q uW K || 22 .", "For example, if ( a, b ) is a neighbor pair, there would exist some small 0 such that a U ( b ) and b U ( a ) .", "Second, we know that the larger the inner product is, the smaller the Euclidean distance is, and vice versa.", "With the awareness of the relationships between (cid:2) a, b, c (cid:3) , we have a, b U ( a ) , b, c U ( c ) and a, b, c U ( b ) for some small 0 .", "Third, we are able to estimate the semantic distance between a and c as the Equation 10 shows.", "|| aW Q cW K || 22 = || aW Q bW K + bW K bW Q + bW Q cW K || 22 3 || aW Q bW K || 22 + 3 || bW K bW Q || 22 +3 || bW Q cW K || 22 (cid:1) 9 (10) Thus, though a and c are not neighbors, no matter how irrelevant the semantics of a and c , c U 9 ( a ) that c would play an important role in modeling semantics of a .", "The upper phenomenon illustrates following normal attention function in Equation 1, some tokens not in neighborhood not are still likely to occupy an important position in attention weight that can not be ignored.", "With the knowledge of MANs, we propose to mask other tokens that not in neighborhood of the target token for better local semantic modeling.", "For example, we build a distance-dependent mask matrix SM.", "If each token only model the relationship with those tokens within b units of itself, we can set SM [ t, s ] = (cid:26) 0 , | t s | > b 1 , | t s | b (11) where t, s are the positions of query and key, and SM [ t, s ] is the value of the t -th row and s -th column of SM .", "By means of SM, we take those tokens within b units into account and ignore others.", "The static mask does assign more weights to a specific neighborhood, but lacks flexibility.", "Considering the neighborhood size varies with different query tokens, number of tokens that benefit for different query tokens' local semantic representation are different.", "Moreover, their mask matrices should match different attention heads and layers in MANs.", "We propose Dynamic Mask Attention Network (DMAN) that replaces the static mask matrix.", "Incorporating query tokens, relative distance, attention head and layer, we build a dynamic mask function which replaces the hard 0 / 1 mask gate in Equation 11 with a soft one 
"The static mask does assign more weights to a specific neighborhood, but it lacks flexibility.", "Since the neighborhood size varies with different query tokens, the number of tokens that benefit a given query token's local semantic representation differs.", "Moreover, the mask matrices should match different attention heads and layers in MANs.", "We propose the Dynamic Mask Attention Network (DMAN), which replaces the static mask matrix.", "Incorporating the query token, relative distance, attention head, and layer, we build a dynamic mask function which replaces the hard 0/1 mask gate in Equation 11 with a soft one through a sigmoid activation function in Equation 12: $M_i^l[t, s] = \sigma(h_t^l W^l + P_{t-s}^l + U_i^l)$ (12),", "where $s, t$ are the positions of the query and key, $i$ is the attention head, and $l$ is the layer.", "$P_{t-s}^l$ is a parameterized scalar for the positions $t$ and $s$, $U_i^l$ is for the $i$-th head, and $W^l \in \mathbb{R}^{d \times 1}$.", "$W^l$, $P_{t-s}^l$ and $U_i^l$ are trainable parameters.", "Up to here, we have three sub-networks of MANs, namely, SAN, FFN and DMAN.", "SAN does not mask any tokens and specializes in global semantic modeling.", "FFN masks all tokens except the token itself and focuses on self-processing.", "DMAN masks the tokens outside the neighborhood and is able to model local structure more effectively.", "Transformer, composed of SAN and FFN, achieves positive results in various NLP tasks; its stacking method inspires us to stack DMAN, SAN and FFN to incorporate their advantages.", "We insert DMAN in the manner of DMAN → SAN → FFN, which is shown in Figure 2. With this architecture, we first model the localness, then the globalness, and take a step for self-evolution in the end.", "In this section, we introduce our experiments.", "We first describe the experimental details in §3.1.", "Then we show the experimental results in §3.2.", "Finally, we conduct the ablation study and analysis in §4. 3.1 Experimental Setting 3.1.1 Machine Translation Machine translation is an important application of natural language processing (Vaswani et al., 2017).", "We evaluate our methods on two widely used public datasets: IWSLT14 German-to-English (De-En) and WMT14 English-to-German (En-De).", "The IWSLT14 De-En dataset consists of about 153K/7K/7K sentence pairs for training/validation/testing.", "The WMT14 En-De dataset consists of about 4.5M sentence pairs; the models were validated on newstest2013 and tested on newstest2014.", "Our data processing follows Lu et al. (2019).", "For IWSLT14, we use the small setting: the hidden size, embedding size, and number of attention heads are set to 512, 512, and 4, respectively.", 
"For the WMT14 dataset, following the Transformer setting of Vaswani et al. (2017), we use the base and big settings, which both consist of a 6-layer encoder and a 6-layer decoder; the hidden sizes are set to 512 and 1024, and the numbers of attention heads are 8 and 16.", "For each setting (small, base and big), we replace all layers in Transformer with our MAN layers.", "To make a relatively fair comparison, we set the dimensionality of the inner layer of the FFN in the MAN layers to two times the dimensionality of the hidden states.", "We train our proposed model with cross-entropy loss and a 0.1 label smoothing rate.", "An inverse-sqrt learning rate scheduler is employed; the peak learning rates are 1.5e-2, 1e-2 and 7e-3, with 8k warmup steps and 50k, 80k and 80k updates for the Transformer big, base and small models, respectively, and 4096, 12288 and 8192 max tokens per batch.", "The dropout rates are 0.3, 0.1 and 0.3 for the small, base and big models.", "The optimizer is Adam with betas (0.9, 0.98).", "The beam size and length penalty for the base and big models are 4 and 0.6, and for the small model, 5 and 1.0.", "The base and big models are trained on 8 V100 GPUs, and the small model is trained on 2 P40 GPUs.", "Automatic summarization aims to produce a concise and fluent summary conveying the key information in the input text.", "We focus on abstractive summarization, a generation task where the summary is not limited to reusing the phrases or sentences in the input text.", "We use CNN/Daily Mail (See et al., 2017) and Gigaword (Rush et al., 2015) for model evaluation.", "Following Song et al. (2019), we set the hidden size, embedding size and number of attention heads to 768, 768, and 12, respectively.", "Our model consists of a 6-layer encoder and a 6-layer decoder.", "For the convenience of comparison, the training follows the classic seq2seq model without copy, coverage or RL mechanisms.", "We remove duplicated trigrams in beam search (Paulus et al., 2018).", "Moreover, the dimensionality of the inner layer of the FFN in the MAN layers is set to two times the dimensionality of the hidden states.", "In training, an inverse-sqrt learning rate scheduler is employed.", "The peak learning rates are 1e-3 and 8e-4, and the max tokens per batch are 8192 and 12288 for CNN/Daily Mail and Gigaword, respectively.", "The number of warmup steps is 8k and the total number of updates is 50k.", "The optimizer is Adam with betas (0.9, 0.98).", "The dropout and clip-norm are both 0.1.", "During decoding, the beam sizes are both 5; the max length and length penalty are 50 and 2.0 for CNN/Daily Mail, and 30 and 1.0 for Gigaword.", "In machine translation, BLEU (Papineni et al., 2002) is employed as the evaluation measure.", "Following common practice, we use tokenized case-sensitive BLEU and case-insensitive BLEU for WMT14 En-De and IWSLT14 De-En, respectively.", "We take Transformer (Vaswani et al., 2017) as the baseline and compare with other concurrent methods.", "Convolutional Transformer (Yang et al., 2019b) restricts the attention scope to a window of neighboring elements in order to model locality for the self-attention model.", "Local Transformer (Yang et al., 2018) casts localness modeling as a learnable Gaussian bias, which indicates the center and scope of the local region to be paid more attention.", "The results for machine translation are shown in Table 1. 
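Both training recipes above rely on an inverse-sqrt learning-rate schedule with warmup. A minimal sketch, assuming the common linear-warmup variant and using the small-model values quoted above (peak 7e-3, 8k warmup steps); the exact warmup shape is an assumption, not stated in the text.

```python
def inverse_sqrt_lr(step: int, peak_lr: float = 7e-3, warmup: int = 8000) -> float:
    if step < warmup:
        return peak_lr * step / warmup       # linear warmup to the peak
    return peak_lr * (warmup / step) ** 0.5  # then decay proportional to 1/sqrt(step)

# lr equals the peak at the end of warmup; at 4x warmup it is half the peak
assert abs(inverse_sqrt_lr(8000) - 7e-3) < 1e-12
assert abs(inverse_sqrt_lr(32000) - 3.5e-3) < 1e-12
```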
Our model exceeds the baseline Transformer and the other models.", "For the IWSLT14 dataset, our small model outperforms the Transformer small model by 1.6 BLEU points.", "For the WMT14 dataset, our base model exceeds its Transformer counterpart by 1.8 BLEU points.", "Furthermore, the performance of our base model is even better than that of the Transformer big model reported in (Vaswani et al., 2017), but with much fewer parameters.", "Our big model outperforms the Transformer big model by 2.0 BLEU points.", "Compared with Convolutional Transformer and Local Transformer, our model also achieves improvements of 1.7 and 1.2 BLEU points, respectively.", "This validates the superiority of our model in systematically solving the localness modeling problem in Transformer.", "We use the F1 score of ROUGE (Lin and Hovy, 2003) as the evaluation metric (computed with https://github.com/pltrdy/files2rouge).", "In Table 2, we compare our model against the baseline Transformer (Vaswani et al., 2017) and several generation models on CNN/Daily Mail and Gigaword.", "LEAD3 (Nallapati et al., 2016) extracts the first three sentences of a document as its summary.", "PTGEN+Coverage (See et al., 2017) is a sequence-to-sequence model based on the pointer-generator network.", "As shown in Table 2, our model outperforms Transformer by 1.4 in ROUGE-1, 2.2 in ROUGE-2 and 1.2 in ROUGE-L on CNN/Daily Mail.", "On the Gigaword dataset, ours exceeds the baseline by 0.7 in ROUGE-1, 0.5 in ROUGE-2 and 0.7 in ROUGE-L.", "In summary, in machine translation and abstractive summarization our proposed model achieves better results than the original Transformer (Vaswani et al., 2017).", "In this section, we conduct further analysis of our model.", "We first investigate stacking methods for different sublayers in §4.1.", "Then we compare strategies of static mask and dynamic mask in §4.2.", "Finally, we analyse the behavior of SAN and DMAN in localness modeling through attention scores in §4.3.", "Here, we investigate different collaboration mechanisms of the elements in MANs.", "Under our design principles, there are three elements: FFN, SAN, and DMAN.", "For the convenience of comparison, we take FFN as the last component in the sequential layered structure.", "We try different collaboration methods and test them on IWSLT14 German-to-English (De-En).", "The results are shown in Table 3. We conclude that: 1. Our proposed C #5 achieves the best performance, which verifies the effectiveness of our proposed sequential layered structure.", "2. All of C #3, C #4 and C #5 outperform C #1 and C #2, and the least improvement in BLEU is 0.2.", "This shows that no matter what the collaboration method, models with the participation of DMAN perform better than models without DMAN, which validates the capability of DMAN.", "3. 
Both C #5 and C #4 are better than C #3 and C #2.", "This indicates that models without DMAN or SAN are not comparable to models with all three modules.", "This shows that DMAN and SAN have their own strengths, namely, localness modeling and globalness modeling, and are able to make up for each other's defects through collaboration.", "4. C #5 is better than C #4, which indicates that first modeling the localness and then the globalness is better than the inverse order.", "In this section, we compare the performance of the Static Mask Attention Network (SMAN) and the Dynamic Mask Attention Network (DMAN).", "Both of them follow the collaboration strategy of DMAN(SMAN) → SAN → FFN.", "In SMAN, we set a fixed mask boundary which has been determined in advance following Equation 11.", "Empirically, we propose two static mask strategies:", "(a) SMAN_1, where the boundary $b$ depends on the sentence length $L$, with $b = L/2$;", "(b) SMAN_2, where $b$ is set to 4, chosen from 2, 4, 6, 8 through validation.", "The results on IWSLT14 De-En are shown in Table 4. The performances of SMAN_1 and SMAN_2 are very close.", "They both outperform the Transformer but fall behind our proposed DMAN.", "This indicates that our proposed DMAN is superior to SMAN.", "SMAN fails to manage the various neighborhoods of different query tokens, but DMAN can model localness with more flexibility according to these factors.", "In this section, we analyse the behavior of DMAN and SAN in localness modeling through the attention scores in Equation 4. To quantify the role of neighbors in semantic modeling, we compute the sum of attention scores within some particular window size.", "Generally, if the attention score from $a$ to $c$ is bigger than that from $b$ to $c$, we consider that $a$ contributes more to the semantic modeling of $c$ than $b$; in other words, the model utilizes more information from $a$ than from $b$ to learn the semantic representation of $c$.", "Therefore, larger attention scores mean that the model utilizes more information from the corresponding tokens to learn the semantic representation of the query token.", "For each sentence $X_i = (x_{i,1}, \ldots, x_{i,T_i})$ in dataset $D$, we utilize $s_{i,\mathrm{DMAN}}^l$ and $s_{i,\mathrm{SAN}}^l \in \mathbb{R}^{T_i \times T_i}$ to denote the average attention scores $S_M(Q, K)$ in Equation 4 across different heads in the $l$-th layer for DMAN and SAN, respectively.", "We sum the attention scores of the tokens $x_{i,k}$ within the window size $w$ of the query $x_{i,j}$ in the $l$-th layer, and average the sum across $X_i$ and the dataset $D$ following Equation 13: $\mathrm{attn\_s}^{w,l,*} = \frac{1}{|D|} \sum_{X_i \in D} \frac{1}{T_i} \sum_{j=1}^{T_i} \sum_{|k-j| \le w} s_{i,*}^l[j, k]$ (13), where $* \in \{\mathrm{DMAN}, \mathrm{SAN}\}$ and $s_{i,*}^l[j, k]$ is the value of the $j$-th row and $k$-th column of $s_{i,*}^l$. 
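A minimal numpy sketch of the windowed sum in Equation 13 for a single layer (variable names and shapes assumed; whether the query position itself counts as its own neighbor is also an assumption here):

```python
import numpy as np

def windowed_attn_sum(score_mats, w):
    """attn_s for one layer: score_mats is a list of (T_i x T_i) matrices s^l_i."""
    per_sentence = []
    for s in score_mats:
        T = s.shape[0]
        j = np.arange(T)
        band = np.abs(j[:, None] - j[None, :]) <= w     # tokens within w of the query
        per_sentence.append((s * band).sum(axis=1).mean())  # mean over queries j
    return float(np.mean(per_sentence))                 # mean over the dataset D
```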
(2019a) focuses on improving self-attention through capturing the richness of context and proposes to contextualize the transformations of the query and key layers.", "Wu et al. (2019) introduces dynamic convolutions to predict separate convolution kernels solely based on the current time-step in order to determine the importance of context elements.", "In order to adjust attention weights beyond SAN, Shaw et al. (2018) extends the self-attention mechanism to efficiently consider representations of the relative positions or distances between sequence elements through adding a relative position embedding to the key vectors; Bugliarello and Okazaki (2019) transform the distance between two nodes in dependency trees with a pre-defined Gaussian weighting function and multiply it with the key-query inner product value; Dai et al. (2019) presents a relative position encoding scheme that adds an additional relative position representation to the key-query computation.", "Sukhbaatar et al. (2019a) proposes a parameterized linear function over self-attention to learn the optimal attention span, in order to significantly extend the maximum context size used in the Transformer.", "To merge FFN into SAN, Sukhbaatar et al. (2019b) proposes a new model that solely consists of attention layers and augments the self-attention layer with persistent memory vectors that play a similar role as the feed-forward layer.", "As for the collaboration of SAN and FFN, Lu et al. (2019) introduces the Macaron layer, which splits the FFN into two half-steps based on the Strang-Marchuk splitting scheme for ODEs.", "For localness modeling, Yang et al. (2018) casts localness modeling as a learnable Gaussian bias according to relative distance, added as an external energy term in the softmax function of a new self-attention network.", "Zhao et al. (2019) explores parallel multi-scale representation learning to capture both long-range and short-range language structures with a combination of convolution and self-attention.", "In our work, DMAN, SAN, and FFN are unified in Mask Attention Networks, where DMAN is a supplement to SAN and FFN that specializes in localness modeling.", "Moreover, we investigate different collaboration mechanisms.", "In this paper, we introduce Mask Attention Networks and reformulate SAN and FFN to point out that they are two special cases of MANs with static masks.", "We analyse the deficiency of SAN and FFN in localness modeling.", "The Dynamic Mask Attention Network is derived from MANs for better local structure modeling.", "Considering the different specialities of SAN, FFN, and DMAN, we investigate a sequential layered structure DMAN → SAN → FFN for their collaboration.", "Compared with the original Transformer, our proposed model achieves better performance in neural machine translation and abstractive summarization.", "For future work, we consider adding structure information or external knowledge, e.g., dependency trees, with the mask matrices in MANs.", "This work was supported by the China National Key R&D Program (No. 2018YFC0831105)." ]
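The windowed attention-score analysis summarized above (the attn_s measure around Equation 13) reduces to a few array operations. Below is a minimal Python sketch of that computation; the function names, the toy data, and the choice to exclude the query position itself are our assumptions, not details taken from the paper.

```python
import numpy as np

def windowed_attention_share(attn, w):
    """Sum the attention mass each query places on positions within
    +/- w of itself (query position excluded), averaged over queries;
    attn is a (T, T) head-averaged score matrix whose rows sum to 1,
    standing in for s^{l,*}_i."""
    T = attn.shape[0]
    total = 0.0
    for j in range(T):
        lo, hi = max(0, j - w), min(T, j + w + 1)
        total += attn[j, lo:hi].sum() - attn[j, j]
    return total / T

def attn_s(attn_matrices, w):
    """Average the windowed share over a collection of sentences,
    mirroring attn_s_{w,l,*} averaged across X_i and the dataset D."""
    return float(np.mean([windowed_attention_share(a, w) for a in attn_matrices]))

# Toy usage with two random softmax-normalized 'sentences'.
rng = np.random.default_rng(0)
mats = []
for T in (8, 12):
    logits = rng.normal(size=(T, T))
    mats.append(np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True))
print(attn_s(mats, w=2))
```

Comparing this quantity between the DMAN and SAN score matrices at a given layer reproduces the kind of localness comparison reported in Table 5.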
[ "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "method", "abstain", "objective", "objective", "method", "other" ]
[ "For a natural language understanding benchmark to be useful in research, it has to consist of examples that are diverse and difficult enough to discriminate among current and near-future state-of-the-art systems.", "However, we do not yet know how best to select text sources to collect a variety of challenging examples.", "In this study, we crowdsource multiple-choice reading comprehension questions for passages taken from seven qualitatively distinct sources, analyzing what attributes of passages contribute to the difficulty and question types of the collected examples.", "To our surprise, we find that passage source, length, and readability measures do not significantly affect question difficulty.", "Through our manual annotation of seven reasoning types, we observe several trends between passage sources and reasoning types, e.g., logical reasoning is more often required in questions written for technical passages.", "These results suggest that when creating a new benchmark dataset, selecting a diverse set of passages can help ensure a diverse range of question types, but that passage difficulty need not be a priority.", "State-of-the-art systems have shown performance comparable with humans on many recent natural language understanding (NLU) datasets (Devlin et al., 2019; Sun et al., 2021), suggesting that these benchmarks will no longer be able to measure future progress.", "To move beyond this, we will need to find better ways of building difficult datasets, ideally without sacrificing diversity or coverage (Bowman and Dahl, 2021).", "To obtain such human-written examples at scale, there are active lines of crowdsourcing research on protocols of worker handling and feedback (Nangia et al., 2021) and the design of the collection task (Ning et al., 2020; Rogers et al., 2020).", "However, we do not have clear MCTest : Tony walked home from school on his birthday.", "He was surprised to see a lot of cars in front of his house.", "When he opened the door and entered the house, he heard a lot of people yell, Surprise!", "It was a surprise party for his birthday.", "His parents called all his friends' parents and invited them to come to a party for Tony.", "[...] Q: Who were invited to the party and by who?", "(cid:3)", "Tony's parents invited only his friends (cid:3) Tony invited his friends and their parents (cid:3) Tony's parents invited his friends' parents (cid:88)(cid:3) Tony's parents invited his friends and their parents ReClor : Humanitarian considerations aside, sheer economics dictates that country X should institute, as country Y has done, a nationwide system of air and ground transportation for conveying seriously injured persons to specialized trauma centers.", "Timely access to the kind of medical care that only specialized centers can provide could save the lives of many people.", "[...] 
Q: What is the economic argument supporting the idea of a transportation system across the nation of Country X?", "Answer options:", "[ ] Building the transportation system creates a substantial increase of jobs for the locals [x] Increasing access to specialized medical centers can lower the chance of the workforce population dying [ ] Transportation ticket prices directly contribute to the government's revenue [ ] Country Y was successful with their attempts to potentially save lives so Country X should try it as well (Figure 1: Example questions for passages from simple narratives (MCTest) and technical arguments (ReClor).)", "Crowdsourced datasets in reading comprehension use passages taken from a variety of sources, such as news articles, exams, and blogs, about which questions are written (Lai et al., 2017; Trischler et al., 2017; Rogers et al., 2020).", "The first example in Figure 1 is from MCTest (Richardson et al., 2013), the passages of which are written in grade-school-level English.", "The second example is from ReClor (Yu et al., 2020), which consists of passages and questions written for graduate and law school admission examinations.", "We hypothesize that difficult passages, such as those in the second example, are more suitable for crowdsourcing challenging questions.", "Passages that are linguistically complex and have dense information could help facilitate the writing of questions that require understanding a wide range of linguistic and world knowledge, following intricate events, and comprehending logical arguments.", "In contrast, easy passages, as in children's stories, likely talk about common situations and simple facts, which might prevent workers from writing difficult questions.", "In this work, we crowdsource multiple-choice reading comprehension questions to analyze how question difficulty and type are affected by the choice of source passage.", "Using passages extracted from seven different sources, we ask crowdworkers to write questions about the given passages.", "We compute the difference between human and machine accuracy, using it as a measure of the question difficulty, to investigate whether there is a correlation between the question difficulty and linguistic aspects of the passage, such as their source, length, and readability.", "In addition to a standard setting where we directly accept crowdworkers' submissions, we use an adversarial setting in which they have to write questions that fool a strong reading comprehension model (Bartolo et al., 2020; Kiela et al., 2021).", "Previous work finds that questions that require numerical reasoning frequently appear in the adversarial data collection of the extractive QA task on Wikipedia articles (Kaushik et al., 2021), but our aim is to see whether we observe a similar trend in multiple-choice questions written for different passage sources, or whether the adversarial setting is useful for collecting especially diverse questions.", "To our surprise, we find that the difficulty of collected questions does not depend on the differences of passages in linguistic aspects such as passage source, passage length, Flesch-Kincaid grade level (Kincaid et al., 1975), syntactic and lexical surprisal, elapsed time for answering, and the average word frequency in a passage.", "Our main positive finding comes through our manual annotation of the types of reasoning that each question targets, where we observe that questions that require numerical reasoning and logical reasoning are relatively difficult.", "In 
addition, we find several trends between the passage sources and reasoning types.", "For example, logical reasoning is more often required in questions written for technical passages, whereas understanding of a given passage's gestalt and the author's attitude toward it are more frequently required for argumentative and subjective passages than expository passages.", "These results suggest that when creating a new benchmark dataset or choosing one for evaluating NLU systems, selecting a diverse set of passages can help ensure a diverse range of question types, but that passage difficulty need not be a priority.", "Our collected datasets could be useful for training reading comprehension models and for further analysis of requisite knowledge and comprehension types in answering challenging multiple-choice questions (see footnote 1).", "2 Related Work. Crowdsourcing NLU Datasets: Crowdsourcing has been widely used to collect human-written examples at scale (Rajpurkar et al., 2016; Trischler et al., 2017).", "Crowdworkers are usually asked to write questions about a given text, sometimes with constraints imposed to obtain questions that require specific reasoning skills such as multi-hop reasoning (Yang et al., 2018) or understanding of temporal order, coreference, or causality (Rogers et al., 2020).", "In this study, to analyze naturally written examples, we do not consider specific constraints on questions or answer options.", "Current benchmark datasets constructed by crowdsourcing may not be of sufficient quality to precisely evaluate human-level NLU.", "For example, Ribeiro et al. (2020) reveal that state-of-the-art models in traditional NLP benchmarks fail simple behavioral tests of linguistic capabilities (checklists).", "Chen and Durrett (2019) and Min et al. (2019) show that questions in multi-hop reasoning datasets such as HotpotQA by Yang et al. (2018) do not necessarily require multi-hop reasoning across multiple paragraphs.", "To investigate how to collect high-quality, challenging questions through crowdsourcing, Nangia et al. (2021) compare different sourcing protocols and find that training workers and providing feedback about their submissions improve the difficulty and quality of their reading comprehension questions.", "To encourage workers to write difficult examples, Bartolo et al. (2020) propose to collect questions using a model-in-the-loop setting.", "Although this adversarial approach enables us to collect challenging questions efficiently, Gardner et al. (2020) point out that the collected examples might be biased towards the quirks of the adversary models.", "(Footnote 1: Our datasets, annotation instructions and results, and crowdsourcing scripts are available at https://github.)", "Bowman and Dahl (2021) extend this argument, and point out that adversarial methods can systematically eliminate coverage of some phenomena.", "This is also supported by Kaushik et al. (2021), but their findings are limited to extractive QA for Wikipedia articles.", "Our motivation is to see if this argument is applicable to the multiple-choice format with a wide range of passage sources, for which we expect crowdworkers to write linguistically diverse questions and answer options.", "Sources of NLU Datasets: Reading comprehension datasets are often constructed with a limited number of passage sources.", "Rajpurkar et al. (2016) sample about five hundred articles from the top 10,000 articles in PageRank of Wikipedia.", "Similarly, Dua et al. 
(2019) curate passages from Wikipedia articles containing numeric values to collect questions for mathematical and symbolic reasoning.", "Khashabi et al. (2018) construct a dataset in which questions are written for various passage sources such as news articles, science textbooks, and narratives.", "However, we cannot use their questions for our analysis of the variation of naturally written questions because they are designed to require local multi-sentence reasoning (such as coreference resolution and paraphrasing) by filtering out questions answerable with only a single sentence.", "Similarly to our work, Sugawara et al. (2017) find that readability metrics and question difficulty do not correlate in reading comprehension datasets.", "Our study differs in the following two points, which could cause different findings: First, their observational study of existing datasets has fundamental confounding factors because the questions they examine are constructed using different sourcing methods (e.g., automatic generation, expert writing, and crowdsourcing), which could have an impact on the question difficulty.", "We aim to investigate uniformly crowdsourced examples across seven different sources to obtain insights for future data construction research using crowdsourcing.", "Second, they define question difficulty using human annotations alone, but this does not necessarily reflect the difficulty for current state-of-the-art models.", "In this study, we define the question difficulty as the human-machine performance gap using eight recent strong models, which enables a more fine-grained analysis of the collected questions for a better benchmark of current models.", "consisting of different in-domain and out-domain datasets.", "However, they combine datasets in different task formats and sourcing methods, which prevents us from comparing questions across passage sources alone.", "In contrast, our focus is to compare questions collected by crowdsourcing for the same task format to analyze the question difficulty for current state-of-the-art models.", "We adopt the multiple-choice format because, as discussed by Huang et al. 
(2019), it allows us to evaluate both human and machine performance easily.", "This study aims to analyze what kinds of passages make crowdsourced reading comprehension questions difficult.", "We use Amazon Mechanical Turk.", "To collect difficult and high-quality examples, we require crowdworkers to take a qualification test before accepting our question writing and validation tasks.", "The qualification test has two parts, which we run in separate tasks: question answering and writing.", "To take the qualification test, workers have to meet the following minimum qualifications: based in the United States, Canada, or the United Kingdom, have an approval rate of at least 98%, and have at least 1,000 approved tasks.", "The question answering task is used to identify workers who answer reading comprehension questions carefully.", "A single question answering task has five questions that are randomly sampled from the validation set of ReClor, in which most questions are taken from actual exams.", "Those who correctly answer at least four out of the five questions proceed to the next qualification phase.", "The question writing task is used to familiarize workers with the writing of multiple-choice reading comprehension questions and to select those who can carefully write examples.", "We ask workers to write two questions given two different passages randomly sampled from the validation set of RACE (Lai et al., 2017).", "This dataset consists of self-contained passages written for middle- and high-school exams in various subjects, which we expect the workers to be able to write questions for easily.", "Following Nangia et al. (2021), we then review the workers' submissions and grade them using a rubric with four criteria: the question (1) is answerable without ambiguity (yes or no); (2) requires reading the whole passage (five-point scale); (3) is creative and non-obvious (five-point scale); and (4) has distractor answers that could look correct to someone who has not read the passage carefully (more than one, one, or no).", "We rank workers using this rubric and allow approximately the top 50% of workers to proceed to the main writing task.", "We make sure that these workers write two unambiguous and answerable questions.", "In the main writing task, a worker is shown a single passage and asked to write a question about it along with four answer options.", "We provide instructions in which we describe that questions have to be challenging but still answerable and unambiguous for humans, and we include good and bad examples to illustrate what kinds of questions we aim to collect.", "For example, good examples require reading the whole passage and ask about characters' motivations or consequences of described events, while bad examples only ask about a simple fact or are answerable without reading the passage (Appendix P).", "Each worker who passes the qualification round is randomly assigned to either standard or adversarial data collection.", "In the standard collection, we accept workers' submissions without any filtering.", "In the adversarial collection, a written question is sent to a reading comprehension model immediately.", "If the model cannot answer that question correctly, we accept it.", "We allow workers to submit questions (i.e., get paid) after three attempts even if they keep failing to fool the model.", "We use UnifiedQA 3B v2 (Khashabi et al., 2020) for the adversary model, which is trained on a wide variety of question answering datasets such as MCTest, RACE, NarrativeQA 
(Kočiský et al., 2018), and SQuAD.", "While the source of training data that we use in our models will inevitably influence our findings, focusing on a model with very diverse pretraining and fine-tuning will minimize this effect.", "Passage Sources: We use passages from the following seven sources: (1) MCTest children's narratives, (2) Project Gutenberg narratives, (3) Slate online magazine articles from the 1990s sourced from the Open American National Corpus (Ide and Suderman, 2006), (4) middle- and high-school exams from RACE, (5) graduate-level exams from ReClor, and (6) science and (7) arts articles from Wikipedia.", "We use the passages from the training sets of MCTest, RACE, and ReClor.", "For Gutenberg, Slate, and Wikipedia, we split available books and articles into passages.", "Details are in Appendix A. In the writing task, a passage is randomly taken from a passage pool in which there are the same number of passages extracted from each source.", "We collect the votes of five workers for each of the collected questions.", "Those workers who passed the question answering task of the qualification round can accept the validation tasks.", "To incentivize workers, we use preexisting gold-labeled examples (from Nangia et al., 2021) as catch trials, representing about 10% of the tasks, and pay a bonus of $0.50 USD if a worker can answer those questions correctly at least 80% of the time.", "If a worker fails to answer them correctly at least 60% of the time, we disqualify the worker from future rounds of data collection.", "Worker Pay and Logistics: For the writing tasks, the base pay is $2.00 per question, which we estimate to be approximately $15.00 per hour based on measurements from our pilot runs.", "If a worker succeeds in fooling the model in adversarial data collection, they receive an additional bonus of $1.00.", "For validation, a single task consisting of five questions pays $2.00, which we estimate to be approximately $15.00 per hour as well.", "We collect a total of 4,340 questions, with 620 in each of the seven sources, further divided into 310 each for the standard and adversarial methods.", "Each passage is paired with only one question.", "We randomly sample two out of five validation votes to validate the collected examples and use the remaining three votes for measuring human performance.", "In the validation, we regard a question as valid if at least one of the two votes is the same as the writer's gold answer.", "If both votes are the same as the gold answer, the question is regarded as a high-agreement example.", "We find that 90.3% of the collected questions are valid (92.0% for standard collection and 88.7% for adversarial collection).", "In addition, 65.7% of the collected questions are classified as high-agreement (68.7% and 62.7% for standard and adversarial collection, respectively).", "We present the dataset and worker statistics in Appendices B and C. [Table 1 header: Source, Method, Human, UniQA, DeBERTa, M-Avg., for all valid examples and the high-agreement portion.]", "Table 1 displays human and model performance.", "We use the questions that are validated using two out of five human votes in the validation step above and take the majority vote of the remaining three votes to measure human performance on them.", "We observe 3.3% and 2.0% gaps between the standard and adversarial collection in the valid and high-agreement questions, respectively.", "To establish model performance that is not biased towards a single model, we compute the average accuracy (M-avg.
) of eight different models from the following two classes: RoBERTa large (four models with different random seeds; Liu et al., 2019) and DeBERTa large and xlarge (v2; He et al., 2021), either fine-tuned on MNLI (Williams et al., 2018) first or not.", "The RoBERTa and DeBERTa models are all fine-tuned on RACE.", "Among these models, DeBERTa xlarge (MNLI-fine-tuned) performs best on RACE, achieving 86.8% accuracy.", "Because UnifiedQA 3B (72.3% on RACE) is used in the adversarial data collection, it shows lower accuracy on the adversarial questions (not included in the average).", "The performance of these two models is shown for comparison in Table 1. Except where noted, we do not train the models on any collected questions.", "Supervised Performance: For each dataset, we evaluate the performance of DeBERTa large trained on the datasets other than the target dataset in a leave-one-out manner.", "Our motivation is to see whether the accuracy values significantly improve by training (i.e., whether the human-model gaps decrease).", "If there is a large gain, it would imply that the datasets have simple patterns among examples that the models can exploit.", "The results show no significant gains in the adversarial datasets, but the standard datasets show some small gains (Appendix D).", "As Kaushik and Lipton (2018) point out, reading comprehension datasets might have annotation artifacts that enable", "models to answer questions without passages or question sentences.", "To investigate such artifacts in our collected examples, we evaluate the performance of two DeBERTa models (xlarge, and large fine-tuned on MNLI), which are stronger than the others, with the ablation of questions (P+A), passages (Q+A), and both questions and passages (A only).", "We see large drops in the zero-shot performance of DeBERTa xlarge.", "In addition, we do not observe a significant performance improvement in the supervised performance by DeBERTa large (MNLI-fine-tuned).", "These results demonstrate that the collected questions and answer options do not have severe annotation artifacts for any passage source (Appendix E).", "Following Nangia et al. (2021), we compute the human-model performance gap (Δ) between the human and the average model accuracies to estimate the difficulty of questions for models.", "We observe a small variation in the gap for different passage sources in the high-agreement questions (Δ = 14.9 ± 3.
6).", "We find the highest human performance for MCTest questions in the high-agreement portion and the lowest for Gutenberg, whereas the model's highest performance is for Slate and the lowest for MCTest.", "Surprisingly, the questions sourced from MCTest, which consists of simple narrative passages, show the largest gap out of all sources for the high-agreement questions.", "Although ReClor consists of passages for graduate-level exams, it produces smaller gaps than RACE, which consists of passages for middle- and high-school English exams.", "Gutenberg passages are written for adults, but the examples written for those passages do not show larger gaps than those for MCTest passages.", "We find a trend in the human performance: the questions of easy-to-read sources (e.g., MCTest and RACE) show higher accuracy and those of difficult-to-read sources (e.g., Gutenberg and Slate) show lower accuracy, but this trend is not observed in either the machine performance or the human-machine performance gap.", "These observations are inconsistent with our initial expectations in the introduction.", "We analyze how the linguistic aspects of the collected examples correlate with the human-model performance gap computed in the experiments.", "To get a better estimate of human performance, we use the high-agreement examples (Nie et al., 2020).", "For ease of comparison, we split these examples into two subsets: easy (Δ ≤ 20%) and hard (Δ ≥ 40%).", "These subsets have 1,970 and 547 examples, respectively.", "Appendix F provides the frequency of easy and hard examples across the passage sources and collection methods.", "We compute the correlation between the human-model performance gap and readability measures across all valid examples (Pearson's r and p-value) and the independence between the distributions of the easy and hard subsets with respect to these measures (p-value in Welch's t-test).", "Figure 2 shows the density distributions of the easy and hard subsets, while Appendices G to L provide the plots of all valid examples.", "Passage Length: We first consider passage length (top left in Figure 2).", "Across all examples, we observe r = 0.01 (p = 0.47) (the full plot is in Appendix G).", "The t-test shows p = 0.51.", "We observe no relationship between the passage length and question difficulty.", "We also analyze question and option length in Appendix H. Flesch-Kincaid Grade Level: We use the Flesch-Kincaid grade level (Kincaid et al., 1975) as a basic metric of text readability (top center in Figure 2).", "This metric defines readability based on an approximate US grade level with no upper bound (higher is more difficult to read).", "It is computed for a passage using the average number of words that appear in a sentence and the average number of syllables in a word (Appendix I).", "The correlation between the grade and the human-model performance gap is r = −0.08 (p < 
0.001) and the t-test shows p <", "0.001.", "This result demonstrates that passage readability has a small negative effect on the question difficulty, perhaps pointing to an interfering effect whereby our pre-qualified human annotators are more likely to make mistakes on more complex passages.", "Syntactic and Lexical Surprisal: The Flesch-Kincaid grade level only considers sentence length and the number of syllables.", "To better estimate the passage difficulty in terms of the psycholinguistic modeling of human text processing, we use syntactic and lexical surprisal measures (Roark et al., 2009).", "These measures are computed using incremental parsing and have proved to be useful for predicting human reading time.", "We observe r = 0.000 (p = 0.99) for syntactic surprisal and r = 0.007 (p = 0.66) for lexical surprisal across all examples.", "We do not observe any statistically significant difference between the easy and hard subsets (syntactic p = 0.52 and lexical p = 0.57 in the t-test; see top right in Figure 2).", "Appendix J describes details of the calculation.", "Annotation Speed: Inspired by the psycholinguistic study of text complexity (Gibson, 1998; Lapata, 2006), we measure the average time crowdworkers spent answering questions in the validation tasks (see bottom left in Figure 2).", "This measures the elapsed time of both reading a given passage and thinking about its question, which is used as an approximation of reading time (as a proxy of text readability).", "The correlation coefficient (r = −0.06 with p < 0.001) and t-test (p = 0.88) show that there is only a small negative", "correlation with question difficulty.", "We also measure the elapsed time for writing questions as a reference (bottom center in Figure 2 and Appendix K), observing that there is no strong correlation (r = 0.02 with p = 0.
27).", "Word Frequencies: Following Chen and Meurers (2016), we analyze the effect of word frequencies on text readability.", "Using word frequencies per one million words in SUBTLEXus (Brysbaert and New, 2009), we calculate the average frequency of words appearing in a passage as a measure of passage difficulty in terms of vocabulary (a lower average frequency implies greater difficulty).", "We do not observe any statistically significant difference by the t-test, p = 0.14 (bottom right in Figure 2), or Pearson's r = 0.02 with p = 0.27 (Appendix L).", "We observe similar trends even when using the human performance as the difficulty measure (Appendix N).", "We analyze how passage sources and collection methods affect question types in this section.", "Question Words: We automatically extract the first wh-word that appears in each valid question; if no wh-word is extracted, we count the question as polar.", "Figure 3 plots the question words and their two subsequent words (except articles) in the easy and hard questions.", "From this we observe that the hard questions are generic, i.e., not specific to given passages (e.g., 'which of the following is correct?'), more often than the easy questions.", "This probably results from the difference between the standard and adversarial data collection.", "The workers in the adversarial collection tend to write generic questions, while those in the standard collection write questions that are more balanced (e.g., there are more easy why and how questions).", "We also notice [Figure 4 here: frequency of comprehension types in the easy and hard examples for each collection method]", "[Figure 5 here: frequency of comprehension types (factuality, factoid, non-factoid, gestalt/attitude, numeric, spatial/temporal, logical) across passage sources]", "that the hard subset has more 'how many' questions.", "This is likely due to the fact that it is easy for annotators to learn that numeric questions often fool the adversary model.", "These observations imply that adversarial data collection tends to concentrate the distribution of questions towards a few specific question types (e.g., generic and numeric).", "This is consistent with the observations in Kaushik et al. (2021).", "See Appendix M for details.", "Comprehension Types: Following Bartolo et al. (2020) and Williams et al. (2020), we analyze what kind of comprehension is required to answer the collected questions.", "We sample a total of 980 high-agreement questions, 70 from each passage source and collection method, and then manually annotate them with one or more labels of seven comprehension types.", "The definitions of these types, examples, and detailed results are presented in Appendix M. 
Figure 4 shows the frequency of comprehension types for different question difficulties (676 easy, 172 hard) and the collection methods.", "We find that 868 questions have one label, 110 have two labels, and two have three labels.", "We can see that numeric, spatial/temporal, and logical questions appear more often in the hard subset in both collection methods.", "Looking at the frequency across the passage sources in Figure 5, we find that there are some trends between the sources and comprehension types as follows: Technical documents, such as those used in graduate-school-level reading comprehension exams, tend to yield logical reasoning questions (e.g., ReClor and Slate).", "Child-level texts tend to yield numerical reasoning questions in the standard setting (e.g., MCTest and RACE).", "In the adversarial setting, passages containing many numerical values tend to yield such questions (e.g., MCTest and Wikipedia arts).", "To collect gestalt questions or those considering the author's attitude in a given passage, passages covering subjective or argumentative topics (e.g., Gutenberg, Slate, and ReClor) are suitable.", "In contrast, expository passages such as Wikipedia articles are not.", "Narratives and related texts (e.g., MCTest, Gutenberg, and part of RACE) involve events with characters, which tend to yield spatial/temporal reasoning questions.", "Although the definitions of our comprehension types are coarse and these trends do not ensure that specific kinds of passages always yield the target comprehension type, considering passage sources might be an effective strategy for collecting questions of an intended comprehension type.", "Adversarial data collection for this purpose might not be useful because it may encourage workers to focus on writing only a few specific types of questions (e.g., numeric).", "To make an NLU benchmark useful, it has to consist of examples that are linguistically diverse and", "difficult enough to discriminate among state-of-the-art models.", "We crowdsource multiple-choice reading comprehension questions for passages extracted from seven different sources and analyze the effects of passage source on question difficulty and diversity.", "Although we expect that the difficulty of a passage affects the difficulty of questions about that passage, the collected questions do not show any strong correlation between the human-machine performance gap and passage source, length, or readability measures.", "Our manual annotation of comprehension types reveals that questions requiring numerical or logical reasoning are relatively difficult.", "We also find several trends between passage sources and comprehension types.", "These results suggest that when creating a new benchmark dataset, we need to select passage sources carefully, so that the resulting dataset contains questions that require an understanding of the linguistic phenomena that we are interested in.", "This is especially important in the adversarial setting because it could concentrate the distribution of questions towards a few specific question types.", "We aim to accelerate scientific progress on robust general question answering, which could translate downstream to useful tools.", "We are not looking at possible sources of social bias, although this issue should be highly relevant to those considering sources to use as training data for applied systems (Li et al., 2020; Parrish et al., 2022).", "We are using Amazon Mechanical Turk despite its history of sometimes treating workers unfairly 
(Kummerfeld, 2021), especially in recourse for unfair rejections.", "We make sure that our own pay and rejection policies are comparable to in-person employment, but acknowledge that our study could encourage others to use Mechanical Turk, and that they might not be so careful.", "This work passed review or is exempt from the oversight of the internal review boards of the authors' institutes.", "We thank Saranya Venkatraman and Ethan Perez for their feedback on early drafts of this paper.", "For his early contributions to this project, we thank Harsh Trivedi.", "SS was supported by JST PRESTO Grant No.", "JPMJPR20C4.", "This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Samsung Research (under the project 'Improving Deep Learning using Latent Structure'), and Apple.", "This material is based upon work supported by the National Science Foundation under Grant Nos. 1922658 and 2046556.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." ]
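The statistical tests used throughout the readability analysis above (Pearson's r with p-values over all valid examples, and Welch's t-test between the easy and hard subsets) follow a standard recipe. Below is a minimal Python sketch with synthetic placeholder data; the two arrays merely stand in for, e.g., the Flesch-Kincaid grade per passage and the per-question human-model gap.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
measure = rng.normal(10.0, 2.0, size=n)  # placeholder readability measure per passage
gap = rng.normal(0.15, 0.10, size=n)     # placeholder human-model gap per question

# Pearson correlation across all examples.
r, p = stats.pearsonr(measure, gap)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Welch's t-test between easy and hard subsets (unequal variances allowed).
easy = measure[gap <= np.quantile(gap, 0.50)]
hard = measure[gap >= np.quantile(gap, 0.80)]
t, p_t = stats.ttest_ind(easy, hard, equal_var=False)
print(f"Welch t = {t:.3f}, p = {p_t:.3f}")
```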
[ "abstain", "method", "objective", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "method", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "objective", "abstain", "method", "abstain", "method", "objective", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "result", "result", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other" ]
[ "Transformer-based models have achieved state-of-the-art results in a wide range of natural language processing (NLP) tasks including document summarization.", "Typically these systems are trained by fine-tuning a large pretrained model to the target task.", "One issue with these transformer-based models is that they do not scale well in terms of memory and compute requirements as the input length grows.", "Thus, for long document summarization, it can be challenging to train or fine-tune these models.", "In this work, we exploit large pre-trained transformer-based models and address long-span dependencies in abstractive summarization using two methods: local self-attention; and explicit content selection.", "These approaches are compared on a range of network configurations.", "Experiments are carried out on standard long-span summarization tasks, including Spotify Podcast, arXiv, and PubMed datasets.", "We demonstrate that by combining these methods, we can achieve state-of-the-art results on all three tasks in the ROUGE scores.", "Moreover, without a large-scale GPU card, our approach can achieve comparable or better results than existing approaches.", "1 1 Introduction Transformer-based models (Vaswani et al., 2017) are ubiquitously state-of-art across many natural language processing (NLP) tasks, including summarization.", "To achieve the best results, the community has trained ever larger transformer models on larger amount of data, and/or more task-specific optimization objectives (Devlin et al., 2019; Raf-fel et al., 2020; Lewis et al., 2020; Brown et al., 2020).", "In long document summarization, the input 1 Our code is available at https://github.com/ potsawee/longsum0 .", "sequences could be more than an order of magnitude longer than the limits of these transformer models.", "Although the limits can be extended, training large transformer models on long sequences is expensive and may not be possible on a standard GPU card because of the self-attention mechanism that grows quadratically with sequence length.", "To tackle the quadratic characteristic, recent works have modified self-attention mechanism and proposed variants of the transformer such that the quadratic complexity is reduced (Tay et al., 2020b; Kitaev et al., 2020; Child et al., 2019; Beltagy et al., 2020; Ainslie et al., 2020; Zaheer et al., 2020).", "However, pre-trained weights of the modified models are not readily available.", "In contrast, standard models such as BERT (Devlin et al., 2019) or BART (Lewis et al., 2020) have been trained on various target tasks, including text summarization (Liu and Lapata, 2019b).", "This allows practitioners to achieve good performance with less training time.", "Thus, we are interested in exploiting pretrained models for long-span summarization tasks.", "We study a range of design configurations empirically and theoretically in regards to memory and compute requirements as well as their performance.", "We propose that long-span dependencies can be handled by two complementary methods.", "Firstly, inspired by modified self-attention transformers, we exploit standard transformer models by constraining attention mechanism to be local, allowing longer input spans during training.", "Secondly, because abstractive summarization systems perform content selection implicitly (Nallapati et al., 2016; Lebanoff et al., 2020), to reduce memory and compute requirements an alternative method is to perform content selection explicitly before the abstractive stage.", "We study content 
selection during two phases: training time and test time.", "At training time, we investigate methods to select data for training fixed-span abstractive models.", "At test time, we extend existing model-based selection methods, and we propose a multitask content selection method that ranks sentences through an extractive-labelling-based module (Cheng and Lapata, 2016) and an attention-based module (See et al., 2017).", "Ultimately, we explore the combined approach, consisting of a local self-attention transformer and content selection, for long-document summarization.", "We conduct our experiments using a number of design configurations on the Spotify open-domain Podcast summarization dataset (Clifton et al., 2020).", "This dataset is challenging not only because of its long-span nature, but also because transcribed spoken utterances typically have lower information density (Li et al., 2019; Manakul et al., 2020).", "Furthermore, we carry out experiments on the arXiv and PubMed datasets (Cohan et al., 2018) to further demonstrate and verify the effectiveness of our approach, as well as to make comparisons to existing approaches.", "We highlight the strengths and weaknesses of our approach across different resources and tasks.", "The main contributions of this paper are: On local self-attention, we show how to exploit a standard transformer model for long-span summarization, and we show good design considerations based on empirical results.", "On content selection, we demonstrate the best selection method at training time, and we propose a multitask content selection (MCS) method outperforming baselines at test time.", "Our work has set new state-of-the-art results on the Spotify Podcast, arXiv, and PubMed datasets in the ROUGE scores.", "Furthermore, with a small-scale GPU card, our approach achieves comparable or superior performance to previous state-of-the-art systems.", "Efficient Transformers.", "Pre-trained transformer models have shown success and become the starting point for various NLP problems, such as BERT (Devlin et al., 2019) in contextual representation, GPT2 in text generation (Radford et al., 2019), or BART in seq2seq tasks (Lewis et al., 2020).", "However, the memory and time requirements of transformer models grow quadratically with the sequence length, and for long-span tasks this quickly leads to the GPU running out of memory in training.", "To mitigate the quadratic nature, a wide range of modified architectures have recently been proposed (Tay et al., 2021).", "They reduce the quadratic complexity of the full self-attention mechanism by using fixed attention patterns (Parmar et al., 2018; Dai et al., 2019; Child et al., 2019; Qiu et al., 2020; Ainslie et al., 2020; Zaheer et al., 2020; Beltagy et al., 2020), learnable patterns (Kitaev et al., 2020; Tay et al., 2020a), low-rank matrix approximation (Wang et al., 2020), or kernel methods (Choromanski et al., 2021).", "Alternatively, it has been shown that some attention heads are redundant and can be pruned to reduce model size (Voita et al., 2019; Michel et al., 2019).", "Knowledge distillation reduces memory and compute by compressing a large model into a smaller one (Hinton et al., 2015; Sanh et al., 2019).", "In contrast, we focus on the dependencies of long input and target sequences in encoder-decoder architectures, and we apply publicly available transformer models with summarization weights to long-span summarization tasks.", "Long-span Summarization.", "Efficient transformer architectures have been applied to summarize long documents 
such as BigBird (Zaheer et al., 2020) and the Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020), which has recently been revised in parallel to this work.", "Hierarchical transformer architectures have been applied to multi-document summarization (Liu and Lapata, 2019a), and to extractive news and table-to-text summarization (Zhang et al., 2019; Narayan et al., 2020).", "A hierarchical attention RNN system has been applied to summarize long articles (Cohan et al., 2018).", "Alternatively, earlier methods show that good content selection helps abstractive news summarization systems (Chen and Bansal, 2018; Gehrmann et al., 2018; Hsu et al., 2018).", "Hybrid systems that select sentences and generate an abstractive summary have been proposed, such as an extractive system + TLM for scientific articles (Pilault et al., 2020), simple selection + BART for podcasts (Manakul and Gales, 2020; Song et al., 2020), and guided summarization by BERT-based keyword/sentence extraction + BART for news and scientific articles (He et al., 2020; Dou et al., 2021).", "Other work includes dividing the source and target into multiple smaller pairs to train abstractive summarizers (Gidiotis and Tsoumakas, 2020).", "Extractive methods with and without redundancy reduction techniques for long-span summarization have been studied (Xiao and Carenini, 2019, 2020).", "Spotify Podcast.", "The dataset consists of ASR transcripts with human descriptions as summaries (Clifton et al., 2020).", "We follow the data processing at TREC2020 (Jones et al., 2020) in removing bad transcript-summary pairs from a total of 105,360+1,027 episodes, resulting in train/valid/test splits of 60,415/2,189/1,027 episodes, the same as Manakul and Gales (2020).", "arXiv and PubMed.", "These popular long document summarization datasets consist of academic articles with abstracts as summaries (Cohan et al., 2018), with train/valid/test splits of 203,037/6,436/6,440 for arXiv and 119,924/6,633/6,658 for PubMed.", "BART and LoBART.", "We use the publicly released BART model (Lewis et al., 2020) fine-tuned on CNNDM (Hermann et al., 2015).", "Following the local window attention in the Sparse Transformer (Child et al., 2019) and Longformer (Beltagy et al., 2020), we modify the self-attention mechanism in the encoder to local self-attention (see Figure 2), and we refer to this local self-attention BART as LoBART.", "It has the same architecture as BART, e.g., the same number of parameters, except that we extend the positional embedding beyond 1,024 by copying BART's positional embedding with flipping to allow a smoother transition.", "See details in Appendix B.1.", "Hierarchical RNN.", "The content selection model is based on a hierarchical encoder-decoder architecture that has been shown effective on meeting and long document summarization (Cohan et al., 2018; Zhao et al., 2019; Li et al., 2019).", "The model consists of word-level and sentence-level GRUs (Cho et al., 2014).", "We add a linear layer on top of the sentence-level GRU to perform extractive labelling.", "The sentence-level attention mechanism and extractive labelling modules form our multitask content selection (MCS).", "More details are in Section 5.2.", "We provide the full details about our implementation, model parameters, hyperparameters, optimizer, and training configurations in Appendix B. 
4 Longer Span via Local Self-Attention. It has been known that the memory and compute complexity of transformers is quadratic in the sequence length.", "However, in encoder-decoder architectures, the exact dependencies on the input length N, target length M, and batch size B are less well understood.", "This is particularly important in long-span seq2seq tasks because a large memory or compute requirement could make training impractical.", "Thus, this work studies these dependencies, and shows the trade-off between the size of the input span and the size of the attention span in local self-attention.", "Firstly, through a regression analysis for an encoder-decoder architecture such as BART, the memory required in training can be decomposed into terms with coefficients $c^b_1, \ldots, c^b_6$.", "The term $c^b_1$ depends on only the model size and optimizer, and it is constant (a theoretical calculation is provided in Appendix A).", "The remaining terms are activation memory associated with the activation outputs cached for backpropagation, and they grow with N, M, and B.", "Table 2 shows system-independent regression results for the memory in training BART.", "It is apparent that as N grows the dominant term is $c^b_6 N^2$, which is associated with the encoder self-attention.", "Thus, this motivates us to modify self-attention only on the encoder side.", "For large N, the memory is now dominated by $c^l_6 NW$.", "The coefficient $c^l_6 \approx 1.72 c^b_6$, suggesting that W should be at most $0.58N$ to reduce memory.", "We provide more details about the exact theoretical calculation for model and optimizer memory, as well as time complexity, in Appendix A. The memory for training BART/LoBART shown in Figure 3 enables us to choose an operating point.", "Additionally, other complementary techniques for reducing memory in training include:", "(i) gradient checkpointing, where a subset of intermediate values in the computation graph is cached and the rest are re-computed during backpropagation (Chen et al., 2016), but this requires changes to optimization and leads to longer training time;", "(ii) half/mixed-precision training (Micikevicius et al., 2018), which would almost halve the y-axis in Figure 3, but this requires changes to the model precision and may result in lower performance;", "(iii) model parallelism with micro-batching (Huang et al., 2019), but this method requires multiple accelerators.", "We study the characteristics of the full self-attention in BART by defining the mean attention", "distance in a particular layer and head as follows: $\bar{D} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_{i,j} |i - j|$ (1), where $\alpha_{i,j}$ is the attention weight of position $i$ attending to position $j$ ($\sum_{j=1}^{N} \alpha_{i,j} = 1$).", "This measure corresponds to the average distance of self-attention.", "If the attention weight is uniform, $\bar{D}_U = \frac{N^2 - 1}{3N}$.", "For $N = 1024$, $\bar{D}_U \approx 341$.", "In Figure 4, our results show that most layers have a shorter mean distance than $\bar{D}_U$, supporting that the information is more localized.", "The mean distances of differently initialized BART models computed on the podcast data also show that the attention mechanism is learned during the pre-training stage, as there is little variation after the pre-training stage.", "As illustrated in Figure 4, the average attention distance $\bar{D}$ of the BART model is around 250-350 tokens.", "This suggests the window size W should be designed to be above 700, allowing the half local attention window W/2 to be greater than 250-350, to effectively match BART and to exploit transfer learning more efficiently.", "Subsequently, we train different configurations of BART/LoBART models up to our GPU memory limit 
of 32GiB.", "The results in Table 3 show that:", "(i) expanding the model to accommodate longer input spans improves over the baseline BART(1k), as opposed to Manakul and Gales (2020), who trained longer-span models by freezing bottom layers and did not show any improvement over their baseline;", "(ii) although LoBART(8k) with W=512 can process longer input spans than LoBART(4k) with W=1024, it performs worse, and we suggest that this is because LoBART(8k)'s window is too small, [Figure 4 here: the average mean attention distance over all heads for each layer (mean ± std), for random weights, bart-large (no fine-tuning), bart-cnn, and bart-podcast, compared against $\bar{D}_U$]", "e.g., < 700, to utilize transfer learning efficiently, and its effective receptive field is also smaller.", "Some input sequences still exceed LoBART's longer fixed-span limit.", "Further extending the input span would lead to a small local attention span, a diminishing improvement, or the GPU running out of memory.", "Alternatively, it has been shown that better content selection improves abstractive summarization for news (Chen and Bansal, 2018; Gehrmann et al., 2018; Hsu et al., 2018), multi-document (Liu and Lapata, 2019a; Liu et al., 2018), and scientific article (Pilault et al., 2020) inputs.", "Thus, we propose to tackle the excess length by content selection.", "Here, we distinguish between two phases of content selection: training time and test time.", "During training, ground-truth targets are available.", "We categorize selection methods in this phase into two types: ground-truth based (model-free), which is also referred to as oracle; and model-based.", "Ground-truth based methods cannot be used at test time, while model-based methods can be applied in both phases.", "Model-based methods do not rely on ground-truth targets, and they have the advantage of matching between the training and test phases.", "Existing oracle methods include using ROUGE-2 recall (Liu et al., 2018) or the average of ROUGE-1,2,L recall (Pilault et al., 2020).", "We discuss model-based methods in Section 5.2, where we propose the MCS method.", "Let the subscript $(i, j)$ denote the position of the $j$-th word in the $i$-th input sentence; the full input is $X = \{x_1, ..., x_i, ..., x_{N_1}\} = [\underbrace{x_{1,1}, x_{1,2}, \ldots, x_{1,J_1}}_{\mathrm{sent}_1}, ..., \underbrace{x_{i,1}, \ldots, x_{i,J_i}}_{\mathrm{sent}_i}, ..., \underbrace{x_{N_1,1}, \ldots, x_{N_1,J_{N_1}}}_{\mathrm{sent}_{N_1}}]$.", "Content selection re-ranks, truncates, and sorts X to get $X_{\mathrm{cs}}$ for training BART/LoBART as follows: $\tilde{X} = \{x_{r_1}, x_{r_2}, x_{r_3}, ..., x_{r_R}\}$ (2), $X_{\mathrm{cs}} = \mathrm{SortOrig}(\mathrm{Truncate}_N(\tilde{X}))$ (3), where $r_i$ is the index of the sentence of rank $i$, the $\mathrm{Truncate}_N$ operation filters $\tilde{X}$ such that the total number of words is less than N, and SortOrig retains the original sentence order.", "The following ranking methods are considered: Truncation (TRC): $r_k = k$.", "Model-based: Given the score $f_{\theta}$ of model $\theta$, $r_k = \{i \in [1, N_1] : f_{\theta}(i|X) \text{ is ranked } k\text{-th}\}$.", "Oracle (ORC): Given the ground-truth summary $y$ and a similarity measure $d$, $r_k = \{i \in [1, N_1] : d(x_i, y) \text{ is ranked } k\text{-th}\}$.", "In this work, we use ROUGE-2 recall as the similarity measure d.", "For the ORC method, we first retain only sentences with positive d, leading to $R \le N_1$.", "We found that the number of sentences with positive d is low, at 21.3% of the total number of sentences on average on 
"This corresponds to 56% of training instances being shorter than the BART input span of 1,024.", "This no-padding oracle method (ORC_no-pad) is highly aggressive, potentially preventing the downstream summarizer from learning complex abstraction.", "(Footnote 6: we refer to this percentage as %AgORC_no-pad, the percentage of inputs aggressively extracted by the oracle method.)", "Hence, we propose variants of oracle methods to extend the ORC_no-pad-selected input to the maximum input span N (a sketch of both variants follows at the end of this passage): ORC_pad-lead: pad with leading unselected sentences and keep the original sentence order.", "ORC_pad-rand: pad with random unselected sentences and keep the original sentence order.", "In Figure 5, since any oracle method is considered cheating at test time, the best performance is obtained by MCS (in blue), and the upper-bound performance is obtained by the optimal oracle method (in green).", "The results show that although ORC_no-pad yields the highest upper bound, the abstractive model in fact does not learn how to perform abstraction.", "For instance, with TRC or MCS at test time, ORC_no-pad yields the lowest performance level.", "The best way to fine-tune the abstractive model, as shown in Figure 5, is using ORC_pad-rand.", "Compared to ORC_pad-lead, ORC_pad-rand is better as it introduces more diversity to the abstractive model.", "Compared to the model-based method, ORC_pad-rand is also computationally less expensive.", "In addition, Table 5 shows that when there is no content selection at test time (i.e. TRC applied), LoBART(4k) and LoBART(8k) benefit from ORC_pad-rand, whereas BART(1k) does not.", "This is because in the 1k setting, content selection is more aggressive; as a result, the large mismatch between training and test leads to a poor result.", "Thus, we suggest that the best content selection during training is ORC_pad-rand, given that content selection will be used at test time or that the model's input span is long.", "Candidate architectures for the selection model include attention mechanisms that scale linearly with the sequence length, and hierarchical architectures, which have been shown effective for long seq2seq tasks (Cohan et al., 2018; Li et al., 2019).
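A small sketch of the two padding variants described above, assuming `selected` holds the ORC_no-pad sentence indices; everything here is illustrative rather than the authors' code.

```python
import random

# Extend an ORC(no-pad) selection up to the input-span word budget with
# unselected sentences, then restore the original order. mode="lead"
# pads with leading sentences (ORC pad-lead); mode="rand" pads with
# randomly chosen ones (ORC pad-rand).
def pad_selection(selected, sent_lens, max_words, mode="rand", seed=0):
    budget = max_words - sum(sent_lens[i] for i in selected)
    rest = [i for i in range(len(sent_lens)) if i not in set(selected)]
    if mode == "rand":
        random.Random(seed).shuffle(rest)
    padded = set(selected)
    for i in rest:
        if sent_lens[i] > budget:
            break
        padded.add(i)
        budget -= sent_lens[i]
    return sorted(padded)  # original sentence order
```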
"In this work, the hierarchical RNN model described in Section 3.2 has a memory requirement, given the target length of 144 during training, of 0.83 + B(3.96x10^-5 + 3.33x10^-5 N2)N1 GiB (footnote 7: obtained by least-squares regression with 20 samples), where N1 is the number of sentences, N2 is the maximum number of words in a sentence, and B is the batch size.", "By setting N1=1000 and N2=50, only 2% of podcast data exceeds this limit, while GPU memory usage is only 2.53GiB for B=1.", "Thus, this model can cover long sequences.", "Previous model-based methods treat content selection as extractive labelling and create labels heuristically (Pilault et al., 2020), or use the encoder-decoder attention mechanism (Manakul and Gales, 2020).", "To utilize both of these in one framework, we propose a Multitask Content Selection (MCS) method, where we train the hierarchical encoder-decoder with an attention mechanism and a classification layer on top of the encoder (described in Section 3.2).", "First, the model is trained on the seq2seq abstractive summarization objective: L_seq2seq = -Σ_{m=1}^{M} log P(y_m | y_<m, X) (4).", "Second, we create binary labels as follows: for sentence i, the label z_i is 1 if d(x_i, y) > 0; else z_i is 0, where d is the ROUGE-2 recall measure.", "The extractive labelling task objective is: L_label = -Σ_{i=1}^{N1} (z_i log ẑ_i + (1 - z_i) log(1 - ẑ_i)) (5), ẑ_i = sigmoid(W_cls^T h_i + b_cls) (6), where h_i is the sentence-level encoder output associated with sentence i, and W_cls, b_cls are the parameters of the classification layer.", "Thus, the MCS training loss is defined as follows: L_MCS = γ L_label + (1 - γ) L_seq2seq (7).", "At the inference stage, there are two modes:", "(i) standard abstractive summary generation, e.g. via beam search decoding;", "(ii) ranking input sentences via the labelling score and the seq2seq attention score.", "The latter is how we use MCS during inference.", "For sentence i, the scores are: score_{i,(label)} = ẑ_i, score_{i,(seq2seq)} = Σ_{m=1}^{M} s_{m,i} (8), where s_{m,i} is the sentence-level attention weight at decoder step m over input sentence i.", "Since the scores are on different scales, rather than using the scores defined in Eq. 8 directly, we simply rank the scores, and then normalize the score ranks into the range 0.0 to 1.0.", "Letting nscore denote the normalized ranking score, the MCS inference score is: f(i|X) = nscore_{i,(label)} + nscore_{i,(seq2seq)} (9).", "In our preliminary experiments, we varied the amount of selected sentences from the limit of BART/LoBART down to a few sentences, and we found that more aggressive selection at test time degrades the performance.", "Therefore, our MCS selects input sentences up to the limit of BART/LoBART.", "By setting γ=0.0, our method is comparable to the attention-based method in Manakul and Gales (2020).", "By setting γ=1.0, our method is similar to the extractive models in Hsu et al. (2018) and Pilault et al. (2020).
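The MCS objective and inference score above can be sketched as follows in PyTorch; the tensor shapes, the rank-based normalization details, and all names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

# Training loss of Eq. (7): a gamma-weighted mix of the extractive
# labelling loss (Eqs. 5-6) and the seq2seq loss (Eq. 4).
def mcs_loss(dec_logits, target_ids, sent_probs, sent_labels, gamma=0.5):
    l_seq2seq = F.cross_entropy(                                   # Eq. (4)
        dec_logits.view(-1, dec_logits.size(-1)), target_ids.view(-1))
    l_label = F.binary_cross_entropy(sent_probs, sent_labels.float())  # Eq. (5)
    return gamma * l_label + (1.0 - gamma) * l_seq2seq             # Eq. (7)

# Inference score of Eq. (9): rank-normalize both scores into [0, 1]
# before adding, since the raw scores live on different scales.
def mcs_score(sent_probs, attn):  # attn: (M decoder steps, N1 sentences)
    def nscore(x):
        ranks = x.argsort().argsort().float()
        return ranks / max(len(x) - 1, 1)
    return nscore(sent_probs) + nscore(attn.sum(dim=0))  # Eqs. (8)-(9)
```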
"In Table 4, we show that, when coupled with BART, MCS yields better summarization performance than both the Attn-only and Ext-only baselines.", "MCS also achieves a higher recall rate of sentences with d(x_i, y) > 0 than the two baselines.", "In Table 5, a performance gain is obtained in all settings by adding MCS.", "By comparing different configurations with MCS, it can be seen that the gain from MCS in the LoBART(8k) system is the lowest.", "This is because the average input length is 5,727, meaning that many Podcast inputs to LoBART(8k) do not benefit from content selection.", "CUED-filt, the best single-model system in Manakul and Gales (2020), uses attention-based content selection at both training and test time, combined with fine-tuned vanilla BART.", "Our approach outperforms CUED-filt through improved content selection at both training time and test time, as demonstrated by BART(1k)-ORC+MCS.", "Additionally, local self-attention allows training on longer sequences, and our LoBART(4k)-ORC+MCS system yields the best results.", "Lastly, even though LoBART(8k) requires more resources to train, it does not perform as well as LoBART(4k) due to its smaller attention window, and it also shows a lower improvement when adding MCS.", "To verify the effectiveness of our systems, we re-train BART(1k) and LoBART(4k) on the arXiv and PubMed datasets.", "Our training differs from Ext+TLM (Pilault et al., 2020), whose abstractive models are trained using inputs extracted from the top two sentences in ROUGE recall for each target sentence without padding, similar to ORC_no-pad.", "Although in the 1k setting ORC_no-pad yields a %AgORC_no-pad (defined in Section 5.1) of only 2.8% on arXiv (12% on PubMed), in the 4k setting this is 39% on arXiv (71% on PubMed).", "Based on the best configurations on podcast data, we train BART(1k) and LoBART(4k) using TRC or ORC_pad-rand content selection, and we train the hierarchical model on arXiv/PubMed for MCS.", "ArXiv.", "In Table 6, both BART(1k)+MCS and LoBART(4k)+MCS outperform all existing systems.", "To better understand the advantages of our approach, the following systems are compared: CTRLsum versus our BART(1k) baseline; LED and BigBird versus our LoBART(4k) system.", "Table 6: Results on arXiv and PubMed (ROUGE R1/R2/RL). Previous work: Abs Discourse-Aware (Cohan et al., 2018): arXiv 35.80/11.05/31.80, PubMed 38.93/15.37/35.21; Mix Ext+TLM (Pilault et al., 2020): arXiv 41.62/14.69/38.03, PubMed 42.13/16.27/39.21; Ext ExtSum-LG+Rd (Xiao and Carenini, 2020): arXiv 44.01/17.79/39.09, PubMed 45.30/20.42/40.95; Abs Pegasus (Zhang et al., 2020): arXiv 44.21/16.95/38.83, PubMed 45.97/20.15/41.34; Abs DANCER (Gidiotis and Tsoumakas, 2020): arXiv 45.01/17.60/40.56, PubMed 46.34/19.97/42.42; Abs BigBird(3k) (Zaheer et al., 2020): arXiv 46.63/19.02/41.77, PubMed 46.32/20.65/42.33; Abs LED(4k) (Beltagy et al., 2020): arXiv 44.40/17.94/39.76, PubMed -; Abs LED(16k) (Beltagy et al., 2020): arXiv 46.63/19.62/41.83, PubMed -; Mix CTRLsum (BART+BERT) (He et al., 2020): arXiv 46.91/18.02/42.14, PubMed -. This work: Abs BART(1k): arXiv 44.96/17.25/39.76, PubMed 45.06/18.27/40.84; Mix BART(1k)+MCS: arXiv 47.68/19.77/42.25, PubMed 46.49/19.45/42.04; Abs LoBART(4k): arXiv 46.59/18.72/41.24, PubMed 47.47/20.47/43.02; Mix LoBART(4k)+MCS: arXiv 48.79/20.55/43.31, PubMed 48.06/20.96/43.56.
"CTRLsum extends BART by conditioning it on extracted keywords v using a BERT-based model, i.e., it models p(y | X, v).", "Their BERT-based model uses a sliding window, allowing it to extract v from long sequences, but their BART is still limited to the first 1,024 tokens.", "As a result, it performs better than BART(1k), but worse than BART(1k)+MCS.", "LoBART(4k) has a similar architecture to LED(4k), without the global attention pattern for special tokens.", "Instead, our LoBART(4k) benefits from knowledge transferred from CNNDM and from the ORC_pad-rand training-time content selection, which yields a larger gain when MCS is applied; i.e., a system trained with truncated data has a smaller gain when MCS is applied.", "A transfer learning comparison and additional results on the impact of ORC_pad-rand are provided in Appendix C.", "Compared to BigBird, LoBART(4k) has a longer input span (4,096 vs. BigBird's 3,072 tokens).", "However, BigBird benefits from the more recent summarization-specific pre-training of Pegasus (Zhang et al., 2020), which is better than our transfer learning.", "BigBird also incorporates a global attention pattern similar to LED, as well as a random attention pattern.", "Hence, LoBART without MCS performs worse.", "Ultimately, we show that adding MCS to either BART(1k) or LoBART(4k) yields a significant improvement, resulting in state-of-the-art results in both settings.", "Moreover, although the gain from adding MCS is comparable to the gain observed in extending LED(4k) to LED(16k), the content selection method adds less training cost.", "PubMed.", "Similarly, LoBART(4k)+MCS achieves state-of-the-art results, as shown in Table 6.", "In contrast to the arXiv results, BART(1k)+MCS outperforms neither LoBART(4k) nor BigBird, and the gain from MCS is not as high in either the 1k or 4k setting.", "Local attention yields better performance on PubMed, while MCS yields better performance on arXiv.", "To understand this discrepancy, a fine-grained analysis is conducted.", "In Figure 6, we partition the test sets by input length, and we evaluate the performance improvement in each partition with respect to the BART(1k) baseline (footnote 9).", "The results illustrate that as the input length N increases, the improvement of systems with MCS increases and subsequently plateaus.", "The improvement of systems without MCS decreases once the input exceeds the length limit but then plateaus, suggesting that fixed-span systems without content selection perform worse once the maximum fixed span is reached.", "For instance, below 4,000 input words, LoBART(4k) without MCS performs better than BART(1k)+MCS on both datasets.", "Therefore, our MCS method is more effective on arXiv than on PubMed, because the average length of PubMed documents is less than half the average length of arXiv documents.", "We study two methods for long-span summarization tasks.", "First, on local self-attention transformers, we present the design considerations for local self-attention BART, and we investigate the feasibility and performance of different network configurations.", "Second, on content selection, we distinguish between training-time and test-time methods, and we provide good practice for both phases.", "At training time, we show that the oracle method with random sentences padded (ORC_pad-rand) yields the best results.", "At test time, we propose multitask content selection (MCS), which shows an improvement over baselines.", "We demonstrate that content selection is essential, in particular for longer documents such as the articles in the arXiv dataset.
"Our BART(1k)+MCS outperforms the current best systems on the Podcast and arXiv datasets, and this system does not require a large-scale accelerator for training.", "Ultimately, by combining the local self-attention technique with MCS, our LoBART(4k)+MCS system sets new state-of-the-art results in terms of ROUGE scores on all three long-span summarization tasks.", "Future work will focus on training our LoBART+MCS system in an end-to-end fashion.", "(Footnote 9: for arXiv/PubMed, each test set consists of over 6,000 instances, while the Podcast test set has only 1,027 instances.)", "The same analysis is conducted on Podcast, but the results are noisy due to the smaller size of its test set (see Appendix C).", "This paper reports on research supported by the ALTA institute, Cambridge Assessment English, University of Cambridge, and a Cambridge International & St John's College Scholarship.", "Thanks to Yiting Lu, Qingyun Dou, Xixin Wu, Raf Czlonka, and Kate Knill for interesting discussions and computing resource support.", "Thanks to the anonymous reviewers for their helpful comments." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "method", "objective", "abstain", "objective", "method", "abstain", "other", "result", "objective", "objective", "other", "result", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "method", "other", "other", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "objective", "method", "result", "objective", "other", "other", "objective", "method", "other", "abstain", "other", "other", "other" ]
[ "When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.", "If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation.", "Can machines help us do this work?", "Which type of context is more important for machines to solve the problem?", "To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts.", "To solve this task, we propose a neural description model that consists of two context encoders and a description decoder.", "In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts.", "Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.", "When we read news text with emerging entities, text in unfamiliar domains, or text in foreign languages, we often encounter expressions (words or phrases) whose senses we do not understand.", "In such cases, we may first try to figure out the meanings of those expressions by reading the surrounding words ( local context) carefully.", "Failing to do so, we may consult dictionaries, and in the case of polysemous words, choose an appropriate meaning based on the context.", "Learning novel word senses via dictionary definitions is known to be Figure 1: Lo cal & G lobal C ontexta ware D escription generator ( LOG-CaD ).", "more effective than contextual guessing (Fraser, 1998; Chen, 2012).", "However, very often, handcrafted dictionaries do not contain definitions of expressions that are rarely used or newly created.", "Ultimately, we may need to read through the entire document or even search the web to find other oc-curances of the expression ( global context) so that we can guess its meaning.", "Can machines help us do this work?", "Ni and Wang (2017) have proposed a task of generating a definition for a phrase given its local context.", "However, they follow the strict assumption that the target phrase is newly emerged and there is only a single local context available for the phrase, which makes the task of generating an accurate and coherent definition difficult (perhaps as difficult as a human comprehending the phrase itself).", "On the other hand, Noraset et al. (2017) attempted to generate a definition of a word from an embedding induced from massive text (which can be seen as global context).", "This is followed by Gadetsky et al. 
"This is followed by Gadetsky et al. (2018), who refer to a local context to disambiguate polysemous words by choosing relevant dimensions of their word embeddings.", "Although these research efforts revealed that both local and global contexts are useful in generating definitions, none of these studies exploited both contexts directly to describe unknown phrases.", "In this study, we tackle the task of describing (defining) a phrase given its local and global contexts.", "We present LOG-CaD, a neural description generator (Figure 1), to directly solve this task.", "Given an unknown phrase without sense definitions, our model obtains a phrase embedding as its global context by composing word embeddings, while also encoding the local context.", "The model then combines both pieces of information to generate a natural language description.", "Considering the various applications where we need definitions of expressions, we evaluated our method with four datasets: WordNet (Noraset et al., 2017) for general words, the Oxford dictionary (Gadetsky et al., 2018) for polysemous words, Urban Dictionary (Ni and Wang, 2017) for rare idioms or slang, and a newly-created Wikipedia dataset for entities.", "Our contributions are as follows: We propose a general task of defining unknown phrases given their contexts.", "We propose a method for generating natural language descriptions for unknown phrases with local and global contexts (Section 3).", "As a benchmark to evaluate the ability of the models to describe entities, we build a large-scale dataset from Wikipedia and Wikidata for the proposed task.", "We release our dataset and the code (footnote 1) to promote the reproducibility of the experiments (Section 4).", "The proposed method achieves state-of-the-art performance on our new dataset and the three existing datasets used in the related studies (Noraset et al., 2017; Ni and Wang, 2017; Gadetsky et al., 2018) (Section 5).", "This task is a generalization of three related tasks (Noraset et al., 2017; Ni and Wang, 2017; Gadetsky et al., 2018) and involves various situations where we need definitions of unknown phrases (Section 2).", "In this section, we define our task of describing a phrase in a specific context.", "Given an undefined phrase X_trg = {x_j, ..., x_k} with its context X = {x_1, ..., x_I} (1 <= j <= k <= I), our task is to output a description Y = {y_1, ..., y_T}.", "Here, X_trg can be a word or a short phrase and is included in X.", "Y is a definition-like, concrete and concise sentence that describes X_trg.", "For example, given the phrase sonic boom with its context the shock wave may be caused by sonic boom or by explosion, the task is to generate a description such as sound created by an object moving fast.", "If the given context were changed to this is the first official tour to support the band's latest studio effort, 2009's Sonic Boom, then the appropriate output would be album by Kiss.", "The process of description generation can be modeled with a conditional language model as p(Y | X, X_trg) = Π_{t=1}^{T} p(y_t | y_<t, X, X_trg) (1); a minimal decoding sketch under this factorization follows at the end of this passage.", "3 LOG-CaD: Local & Global Context-aware Description Generator In this section, we describe our idea of utilizing local and global contexts in the description generation task, and present the details of our model.", "When we find an unfamiliar phrase in a text and it is not defined in dictionaries, how can we humans come up with its meaning?", "As discussed in Section 1, we may first try to figure out the meaning of the phrase from the immediate context, and then read through the entire document or search the web to understand the implicit information behind the text.
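As referenced above, here is a minimal greedy-decoding sketch under the factorization in Eq. (1); `model` is a hypothetical callable returning p(y_t | y_<t, X, X_trg) as a probability vector, so this illustrates the task setup only, not the authors' system.

```python
import torch

# Generate a description Y one token at a time, conditioned on the
# context X and the target phrase X_trg (Eq. 1). All names and the
# model interface are assumptions for illustration.
def describe(model, X, X_trg, bos_id, eos_id, max_len=30):
    y = [bos_id]
    for _ in range(max_len):
        p_next = model(y, X, X_trg)   # p(y_t | y_<t, X, X_trg)
        y_t = int(torch.argmax(p_next))
        if y_t == eos_id:
            break
        y.append(y_t)
    return y[1:]                      # the generated description Y
```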
"In this paper, we refer to the explicit contextual information included in the given sentence containing the target phrase (i.e., the X in Eq. (1)) as the local context, and the implicit contextual information in massive text as the global context.", "While both local and global contexts are crucial for humans to understand unfamiliar phrases, are they also useful for machines to generate descriptions?", "To verify this idea, we propose to incorporate both local and global contexts to describe an unknown phrase.", "Figure 1 shows an illustration of our LOG-CaD model.", "Similarly to the standard encoder-decoder model with attention (Bahdanau et al., 2015; Luong and Manning, 2016), it has a context encoder and a description decoder.", "The challenge here is that the decoder needs to be conditioned not only on the local context, but also on the global context.", "To incorporate the different types of contexts, we propose to use a gate function similar to that of Noraset et al. (2017) to dynamically control how the global and local contexts influence the description.", "Local & global context encoders: We first describe how to model local and global contexts.", "Given a sentence X and a phrase X_trg, a bidirectional LSTM (Gers et al., 1999) encoder generates a sequence of continuous vectors H = {h_1, ..., h_I} as h_i = Bi-LSTM(h_{i-1}, h_{i+1}, x_i), (2) where x_i is the word embedding of word x_i.", "In addition to the local context, we also utilize the global context obtained from massive text.", "This is achieved by feeding a phrase embedding x_trg to initialize the decoder (Noraset et al., 2017) as y_0 = x_trg. (3)", "Here, the phrase embedding x_trg is calculated by simply summing up all the embeddings of the words that constitute the phrase X_trg.", "Note that we use a randomly-initialized vector if no pre-trained embedding is available for the words in X_trg.", "Description decoder: Using the local and global contexts, a description decoder computes the conditional probability of a description Y with Eq. (1), which can be approximated with another LSTM as s_t = LSTM(y_{t-1}, s'_{t-1}), (4) d_t = ATTENTION(H, s_t), (5) c_trg = CNN(X_trg), (6) s'_t = GATE(s_t, x_trg, c_trg, d_t), (7) p(y_t | y_<t, X, X_trg) = softmax(W_s' s'_t + b_s'), (8) where s_t is a hidden state of the decoder LSTM (s_0 = 0), and y_{t-1} is a jointly-trained word embedding of the previous output word y_{t-1}.", "In what follows, we explain each equation in detail.", "Attention on local context: Considering that the local context can be relatively long (e.g., around 20 words on average in our Wikipedia dataset introduced in Section 4), it is hard for the decoder to focus on the important words in the local context.", "In order to deal with this problem, the ATTENTION(·) function in Eq. (5) decides which words in the local context X to focus on at each time step.", "d_t is computed with an attention mechanism (Luong and Manning, 2016) as d_t = Σ_{i=1}^{I} α_i h_i, (9) α_i = softmax((U_h h_i)^T (U_s s_t)), (10) where U_h and U_s are matrices that map the encoder and decoder hidden states into a common space, respectively.", "Use of character information: In order to capture the surface information of X_trg, we construct character-level CNNs (Eq. (6)) following Noraset et al. (2017).
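The ATTENTION(·) function of Eqs. (9)-(10) above can be sketched as follows; the projection shapes are assumptions for illustration.

```python
import torch

# Project encoder states H and the decoder state s_t into a common
# space via U_h and U_s, score each source position, and take the
# softmax-weighted sum (Eqs. 9-10). Shapes are assumed, not specified
# by the paper: H (I, d_enc), s_t (d_dec,), U_h (d, d_enc), U_s (d, d_dec).
def attend(H, s_t, U_h, U_s):
    scores = (H @ U_h.T) @ (U_s @ s_t)          # one score per source word
    alpha = torch.softmax(scores, dim=0)        # Eq. (10)
    return (alpha.unsqueeze(1) * H).sum(dim=0)  # d_t in Eq. (9)
```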
"Note that the input to the CNNs is the sequence of words in X_trg, concatenated with a special separator character, as in sonic boom.", "Following Noraset et al. (2017), we set the CNN kernels to lengths 2-6 and sizes 10, 30, 40, 40, 40, respectively, with a stride of 1, to obtain a 160-dimensional vector c_trg.", "Gate function to control local & global contexts: In order to capture the interaction between the local and global contexts, we adopt a GATE(·) function (Eq. (7)) which is similar to that of Noraset et al. (2017).", "The GATE(·) function updates the LSTM output s_t to s'_t depending on the global context x_trg, the local context d_t, and the character-level information c_trg as f_t = [x_trg; d_t; c_trg], (11) z_t = σ(W_z[f_t; s_t] + b_z), (12) r_t = σ(W_r[f_t; s_t] + b_r), (13) s̃_t = tanh(W_s[(r_t ⊙ f_t); s_t] + b_s), (14) s'_t = (1 - z_t) ⊙ s_t + z_t ⊙ s̃_t, (15) where σ(·), ⊙, and ; denote the sigmoid function, element-wise multiplication, and vector concatenation, respectively (a code sketch of this gate follows at the end of this passage).", "W and b are weight matrices and bias terms, respectively.", "Here, the update gate z_t controls how much the original hidden state s_t is to be changed, and the reset gate r_t controls how much the information from f_t contributes to word generation at each time step.", "Our goal is to let machines describe unfamiliar words and phrases, such as polysemous words, rarely used idioms, or emerging entities.", "Among the three existing datasets, WordNet and the Oxford dictionary mainly target words rather than phrases, and thus are not perfect test beds for this goal.", "On the other hand, although the Urban Dictionary dataset contains descriptions of rarely-used phrases, the domain of its targeted words and phrases is limited to Internet slang.", "In order to confirm that our model can generate descriptions of entities as well as polysemous words and slang, we constructed a new dataset for context-aware phrase description generation from Wikipedia and Wikidata, which contain a wide variety of entity descriptions with contexts.", "The overview of the data extraction process is shown in Figure 2.", "Each entry in the dataset consists of (1) a phrase, (2) its description, and (3) a context (a sentence).", "For preprocessing, we applied the Stanford Tokenizer to the descriptions of Wikidata items and the articles in Wikipedia.", "Next, we removed phrases in parentheses from the Wikipedia articles, since they tend to be paraphrases in other languages and act as noise.", "(Footnotes: 2 https://dumps.wikimedia.org/enwiki/20170720/ 3 https://dumps.wikimedia.org/wikidatawiki/entities/20170802/ 4 https://nlp.stanford.edu/software/tokenizer.shtml)", "To obtain the contexts of each item in Wikidata, we searched all the first paragraphs of Wikipedia articles for sentences that contain a link referring to the item, and replaced the linked phrase with a special token [TRG] (a sketch of this step also follows below).", "Wikidata items with no description or no contexts are ignored.", "This utilization of links makes it possible to resolve the ambiguity of words and phrases in a sentence without human annotation, which is a major advantage of using Wikipedia.", "Note that we used only links whose anchor texts are identical to the title of the linked Wikipedia article, since users of Wikipedia sometimes link mentions to related articles.
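As flagged above, here is a sketch of the GATE(·) update of Eqs. (11)-(15); the weight shapes (W_z and W_s projecting to the state dimension, W_r to the feature dimension) are assumptions for illustration, not the authors' code.

```python
import torch

# GATE(.) of Eqs. (11)-(15). Assumed shapes: s_t (d_s,), f_t (d_f,);
# W_z, W_s: (d_s, d_f + d_s); W_r: (d_f, d_f + d_s), so that z_t gates
# the state and r_t gates the feature vector.
def gate(s_t, x_trg, d_t, c_trg, W_z, b_z, W_r, b_r, W_s, b_s):
    f_t = torch.cat([x_trg, d_t, c_trg])                         # Eq. (11)
    fs = torch.cat([f_t, s_t])
    z_t = torch.sigmoid(W_z @ fs + b_z)                          # update gate, Eq. (12)
    r_t = torch.sigmoid(W_r @ fs + b_r)                          # reset gate, Eq. (13)
    s_new = torch.tanh(W_s @ torch.cat([r_t * f_t, s_t]) + b_s)  # Eq. (14)
    return (1 - z_t) * s_t + z_t * s_new                         # s'_t, Eq. (15)
```

And a rough sketch of the link-based context extraction described above, assuming wikitext-style [[target|anchor]] links; the regex and function are simplifying assumptions.

```python
import re

WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

# Keep a sentence as context for `title` only if it links to that
# article with an anchor identical to the title, then replace the
# anchor with [TRG] and flatten any remaining links to their text.
def extract_context(sentence, title):
    for m in WIKILINK.finditer(sentence):
        target = m.group(1)
        anchor = m.group(2) or target
        if target == title and anchor == title:
            marked = sentence[:m.start()] + "[TRG]" + sentence[m.end():]
            return WIKILINK.sub(lambda x: x.group(2) or x.group(1), marked)
    return None  # no usable link: ignore this sentence
```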
"We evaluate our method by applying it to describe words in WordNet (Miller, 1995) and the Oxford Dictionary, and phrases in Urban Dictionary and Wikipedia/Wikidata.", "(Footnotes: 5 https://wordnet.princeton.edu/ 6 https://en.oxforddictionaries.com/ 7 https://www.urbandictionary.com/ 8 https://www.wikidata.org)", "For all of these datasets, a given word or phrase has an inventory of senses with corresponding definitions and usage examples.", "These definitions are regarded as ground-truth descriptions.", "Datasets: To evaluate our model on the word description task on WordNet, we followed Noraset et al. (2017) and extracted data from WordNet using the dict-definition toolkit (footnote 9: https://github.com/NorThanapon/dict-definition).", "Each entry in the data consists of three elements: (1) a word, (2) its definition, and (3) a usage example of the word.", "We split this dataset to obtain Train, Validation, and Test sets.", "If a word has multiple definitions/examples, we treat them as different entries.", "Note that the words are mutually exclusive across the three sets.", "The only difference between our dataset and theirs is that we extract the tuples only if the words have usage examples in WordNet.", "Since not all entries in WordNet have usage examples, our dataset is a small subset of Noraset et al. (2017).", "In addition to WordNet, we use the Oxford Dictionary following Gadetsky et al. (2018), the Urban Dictionary following Ni and Wang (2017), and our Wikipedia dataset described in the previous section.", "Table 1 and Table 2 show the properties and statistics of the four datasets, respectively.", "To simulate a situation in a real application where we might not have access to global context for the target phrases, we did not train domain-specific word embeddings on each dataset.", "Instead, for all four datasets, we use the same pre-trained CBOW vectors trained on the Google News corpus as global context, following previous work (Noraset et al., 2017; Gadetsky et al., 2018).", "(Footnote 10: GoogleNews-vectors-negative300.bin.gz at https://code.google.com/archive/p/word2vec/)", "If the expression to be described consists of multiple words, its phrase embedding is calculated by simply summing up the CBOW vectors of all the words in the phrase, such as sonic and boom (see Figure 1).", "If pre-trained CBOW embeddings are unavailable, we instead use a special [UNK] vector (which is randomly initialized with a uniform distribution) as the word embeddings.", "Note that our pre-trained embeddings cover only 26.79% of the words in the expressions to be described in our Wikipedia dataset, while they cover all words in the WordNet dataset (see Table 2).", "Even if no reliable word embeddings are available, all models can capture character information through character-level CNNs (see Figure 1).", "Models: We implemented four methods: (1) Global (Noraset et al., 2017), (2) Local (Ni and Wang, 2017) with CNN, (3) I-Attention (Gadetsky et al., 2018), and our proposed model, (4) LOG-CaD.", "The Global model is our reimplementation of the best model (S + G + CH) in Noraset et al. (2017).
"It can access the global context of a phrase to be described, but has no ability to read the local context.", "The Local model is our reimplementation of the best model (dual encoder) in Ni and Wang (2017).", "In order to make a fair comparison of the effectiveness of local and global contexts, we slightly modify the original implementation by Ni and Wang (2017); as the character-level encoder in the Local model, we adopt CNNs that are exactly the same as in the other two models, instead of the original LSTMs.", "The I-Attention is our reimplementation of the best model (S + I-Attention) in Gadetsky et al. (2018).", "Similar to our model, it uses both local and global contexts.", "Unlike our model, however, it does not use character information to predict descriptions.", "Also, it cannot directly use the local context to predict the words in descriptions.", "This is because the I-Attention model indirectly uses the local context only to disambiguate the phrase embedding x_trg as x'_trg = x_trg ⊙ m, (16) m = σ(W_m (Σ_{i=1}^{I} FFNN(h_i)) / I + b_m), (17); a code sketch of this soft mask follows at the end of this passage.", "Here, the FFNN(·) function is a feed-forward neural network that maps the encoded local contexts h_i to another space.", "The mapped local contexts are then averaged over the length of the sentence X to obtain a representation of the local context.", "This is followed by a linear layer and a sigmoid function to obtain the soft binary mask m, which can filter out the unrelated information included in the global context.", "Finally, the disambiguated phrase embedding x'_trg is used to update the decoder hidden state as s_t = LSTM([y_{t-1}; x'_trg], s_{t-1}).", "Automatic Evaluation: Table 4 shows the BLEU (Papineni et al., 2002) scores of the output descriptions.", "Table 4: BLEU scores on four datasets (WordNet / Oxford / Urban / Wikipedia). Global: 24.10 / 15.05 / 6.05 / 44.77; Local: 22.34 / 17.90 / 9.03 / 52.94; I-Attention: 23.77 / 17.25 / 10.40 / 44.71; LOG-CaD: 24.79 / 18.53 / 10.55 / 53.85.", "(Footnote 11: http://pytorch.org/)", "We can see that the LOG-CaD model consistently outperforms the three baselines on all four datasets.", "This result indicates that using both local and global contexts helps describe unknown words/phrases correctly.", "While the I-Attention model also uses local and global contexts, its performance was always lower than that of the LOG-CaD model.", "This result shows that using the local context to predict the description is more effective than using it to disambiguate the meanings in the global context.", "In particular, the low BLEU scores of the Global and I-Attention models on the Wikipedia dataset suggest that it is necessary to learn to ignore the noisy information in the global context if the coverage of pre-trained word embeddings is extremely low (see the third and fourth rows in Table 2).", "We suspect that the Urban Dictionary task is too difficult and the results are unreliable, considering its extremely low BLEU scores and the high ratio of unknown tokens in generated descriptions.
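As flagged above, the I-Attention soft mask of Eqs. (16)-(17) can be sketched as follows; shapes and names are assumptions made for illustration.

```python
import torch

# Average the FFNN-mapped local-context states, squash with a sigmoid,
# and filter the phrase embedding element-wise (Eqs. 16-17).
def disambiguate(x_trg, H, ffnn, W_m, b_m):
    pooled = ffnn(H).mean(dim=0)           # (1/I) * sum_i FFNN(h_i)
    m = torch.sigmoid(W_m @ pooled + b_m)  # soft binary mask, Eq. (17)
    return x_trg * m                       # x'_trg = x_trg (.) m, Eq. (16)
```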
"Manual Evaluation: To compare the proposed model and the strongest baseline in Table 4 (i.e., the Local model), we performed a human evaluation on our dataset.", "We randomly selected 100 samples from the test set of the Wikipedia dataset and asked three native English speakers to rate the output descriptions from 1 to 5 points as: 1) completely wrong or self-definition, 2) correct topic with wrong information, 3) correct but incomplete, 4) small details missing, 5) correct.", "The averaged scores are reported in Table 5.", "A pair-wise bootstrap resampling test (Koehn, 2004) on the annotated scores has shown that the superiority of LOG-CaD over the Local model is statistically significant (p < 0.01).", "Qualitative Analysis: Table 6 shows an example word from WordNet, while Table 7 and Table 8 show example entities from Wikipedia.", "Table 6: Descriptions for a word in WordNet. Input: waste. Context #1: if the effort brings no compensating gain it is a waste; Context #2: We waste the dirty water by channeling it into the sewer. Reference: #1 useless or profitless activity; #2 to get rid of. Global (context-independent): to give a liquid for a liquid. Local: #1 a state of being assigned to a particular purpose; #2 to make a break of a wooden instrument. I-Attention: #1 a person who makes something that can be be be done; #2 to remove or remove the contents of. LOG-CaD: #1 a source of something that is done or done; #2 to remove a liquid.", "When comparing the two datasets, the quality of the generated descriptions for the Wikipedia dataset is significantly better than that for the WordNet dataset.", "The main reason for this result is that the training data of the Wikipedia dataset is 64x larger than that of the WordNet dataset (see Table 1).", "For all examples in the three tables, the Global model can only generate a single description for each input word/phrase because it cannot access any local context.", "In the WordNet dataset, only the I-Attention and LOG-CaD models can successfully generate the concept of remove given context #2.", "This result suggests that considering both local and global contexts is essential for generating correct descriptions.", "In our Wikipedia dataset, both the Local and LOG-CaD models can describe a word/phrase considering its local context.", "For example, both the Local and LOG-CaD models could generate american in the description for daniel o'neill given united states in context #1, while they could generate british given belfast in context #2.", "A similar trend can also be observed in Table 8, where LOG-CaD could generate locational expressions such as philippines and british given the different contexts.", "On the other hand, the I-Attention model could not describe the two phrases in a way that takes the local contexts into account.", "We will present an analysis of this phenomenon in the next section.", "In this section, we present analyses of how the local and global contexts contribute to the description generation task.", "First, we discuss how the local context helps the models describe a phrase.", "Then, we analyze the impact of the global context in situations where the local context is unreliable.", "Local context helps us (1) disambiguate polysemous words and (2) infer the meanings of unknown expressions.", "Can machines also utilize the local context?", "In this section, we discuss these two roles of local context in description generation.", "Considering that pre-trained word embeddings are obtained from word-level co-occurrences in massive text, the more senses a word has, the more information is mixed up into its single vector.
"While Gadetsky et al. (2018) designed the I-Attention model to filter out unrelated meanings in the global context given the local context, they did not discuss the impact that the number of senses has on the performance of definition generation.", "(Figure 3(a): BLEU scores by the number of senses of the target phrase (1, 2, 3, 4+) for the Global, Local, I-Attention, and LOG-CaD models.)", "To understand the influence of the ambiguity of the phrases to be defined on generation performance, we performed an analysis on our Wikipedia dataset.", "Figure 3(a) shows that the description generation task becomes harder as the phrases to be described become more ambiguous.", "In particular, when a phrase has an extremely large number of senses (i.e., #senses >= 4), the Global model drops in performance significantly.", "This result indicates that the local context is necessary to disambiguate the meanings in the global context.", "As shown in Table 2, a large proportion of the phrases in our Wikipedia dataset include unknown words (i.e., only 26.79% of the words in the phrases have pre-trained embeddings).", "This fact indicates that the global context in this dataset is not fully reliable.", "Our next question is then: how does the lack of information from the global context affect the performance of phrase description?", "Figure 3(b) shows the impact of unknown words in the phrases to be described on the performance.", "As we can see from the result, the advantage of the LOG-CaD and Local models over the Global and I-Attention models becomes larger as the number of unknown words increases.", "This result suggests that we need to fully utilize local contexts, especially in practical applications where the phrases to be defined contain many unknown words.", "Here, Figure 3(b) also shows the counterintuitive phenomenon that BLEU scores increase as the ratio of unknown words in a phrase increases.", "This is mainly because unknown phrases tend to be person names such as writers, actors, or movie directors.", "Since these entities have less category ambiguity, they can be described with extremely short sentences that are easy for all four models to decode (e.g., finnish writer or american television producer).", "As discussed earlier, local contexts are important for describing unknown expressions, but what about global contexts?
"Assuming a situation where we cannot obtain much information from the local context (e.g., inferring the meaning of boswellia from the short local context Here is a boswellia), global contexts should be essential for understanding the meaning.", "To confirm this hypothesis, we analyzed the impact of the length of the local context on BLEU scores.", "Figure 3(c) shows that when the length of the local context is extremely short (l <= 10), the LOG-CaD model becomes much stronger than the Local model.", "This result indicates that not only the local context but also the global context helps models describe the meanings of phrases.", "In this study, we address the task of describing a given phrase with its context.", "In what follows, we explain existing tasks that are related to our work.", "Our task is closely related to word sense disambiguation (WSD) (Navigli, 2009), which identifies a pre-defined sense for the target word given its context.", "Although we could use WSD to solve our task by retrieving the definition sentence for the identified sense, it requires a substantial amount of training data to handle the different sets of meanings of each word, and it cannot handle words (or senses) that are not registered in the dictionary.", "Although some studies have attempted to detect novel senses of words for given contexts (Erk, 2006; Lau et al., 2014), they do not provide definition sentences.", "Our task avoids these difficulties in WSD by directly generating descriptions for phrases or words.", "It also allows us to flexibly tailor a fine-grained definition for the specific context.", "Paraphrasing (Androutsopoulos and Malakasiotis, 2010; Madnani and Dorr, 2010) (or text simplification (Siddharthan, 2014)) can be used to rephrase words with unknown senses.", "However, the targets of paraphrase acquisition are words/phrases with no specified context.", "Although a few studies (Connor and Roth, 2007; Max, 2009; Max et al., 2012) consider sub-sentential (context-sensitive) paraphrases, they do not aim to obtain a definition-like description as a paraphrase of a word.", "Recently, Noraset et al. (2017) introduced the task of generating a definition sentence for a word from its pre-trained embedding.", "Since their task does not take the local contexts of words as input, their method cannot generate an appropriate definition for a polysemous word in a specific context.
"To cope with this problem, Gadetsky et al. (2018) proposed a definition generation method that works with polysemous words in dictionaries.", "They presented a model that utilizes the local context to filter out unrelated meanings from a pre-trained word embedding in a specific context.", "While their method uses the local context for disambiguating the meanings that are mixed up in word embeddings, the information from local contexts cannot be utilized if the pre-trained embeddings are unavailable or unreliable.", "On the other hand, our method can fully utilize the local context through an attention mechanism, even if reliable word embeddings are unavailable.", "The work most closely related to this paper is Ni and Wang (2017).", "Focusing on non-standard English phrases, they proposed a model to generate explanations solely from the local context.", "They followed the strict assumption that the target phrase was newly emerged and that only a single local context was available, which makes the task of generating an accurate and coherent definition difficult.", "Our proposed task and model are more general and practical than those of Ni and Wang (2017): (1) we use Wikipedia, which includes expressions from various domains, and (2) our model takes advantage of global contexts if available.", "Our task of describing phrases with their contexts is a generalization of the three tasks (Noraset et al., 2017; Ni and Wang, 2017; Gadetsky et al., 2018), and the proposed method utilizes both the local and global contexts of the expression in question.", "This paper set up a task of generating a natural language description for an unknown phrase with a specific context, aiming to help us acquire unknown word senses when reading text.", "We approached this task using a variant of encoder-decoder models that captures the given local context with the encoder and global contexts with the decoder, initialized by the target phrase's embedding induced from massive text.", "We performed experiments on three existing datasets and one newly built from Wikipedia and Wikidata.", "The experimental results confirmed that local and global contexts complement one another and are both essential; global contexts are crucial when local contexts are short and vague, while the local context is important when the target phrase is polysemous, rare, or unseen.", "As future work, we plan to modify our model to use multiple contexts in a text to improve the quality of descriptions, considering the one-sense-per-discourse hypothesis (Gale et al., 1992).", "We will release the newly built Wikipedia dataset and the experimental code at https://github.com/shonosuke/ishiwatari-naacl2019 to facilitate the reproducibility of our results and their use in various application contexts.", "The authors are grateful to Thanapon Noraset for sharing the details of his implementation of the previous work.", "We also thank the anonymous reviewers for their careful reading of our paper and insightful comments, and the members of the Kitsuregawa-Toyoda-Nemoto-Yoshinaga-Goda laboratory at the University of Tokyo for proofreading the draft.", "This work was partially supported by a Grant-in-Aid for JSPS Fellows (Grant Number 17J06394) and Commissioned Research (201) of the National Institute of Information and Communications Technology of Japan." ]
[ "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "abstain", "abstain", "objective", "method", "method", "abstain", "result", "other", "other", "other", "other" ]
[ "Despite recent advances in natural language generation, it remains challenging to control attributes of generated text.", "We propose DEXPERTS : Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with expert LMs and/or anti-expert LMs in a product of experts.", "Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts.", "We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations.", "Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3.", "Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.", "Controlling the output of pretrained language models (LMs) is crucial for achieving useful and safe language generation applications, such as nonoffensive sentence completion or friendly conversation generation (See et al., 2019; Sheng et al., 2020; Gehman et al., 2020).", "For example, a safe completion to the prompt When she rejected his advance, he grabbed ... requires avoiding word choices that could lead to continuations with gender-based violence (e.g., her ; Figure 1).", "Without such steering, these language models risk generating mindless and offensive content (Sheng et al., 2019; Holtzman et al., 2020) which hinders their safe deployment (Brockman et al., 2020; Bender et al., 2021).", "Importantly, as the scale of pretrained LMs increases (e.g., 175B and 1.6T parameters; Brown et al., 2020; Fedus et al., Figure 1: Illustration of DEXPERTS , where a toxic LM acts as an anti-expert and a non-toxic LM acts as an expert. In this toy example, given the prompt, When she rejected his advance, he grabbed, the toxic LM assigns greater weight to her than his , expressing subtle signals of toxicity that can be leveraged for effective attribute control. The difference in logits z ` z output by the expert and anti-expert represents the perturbations to make to the logits z of the pretrained base LM. 
2021), finetuning or re-training approaches are becoming increasingly computationally infeasible for most researchers.", "We propose DEXPERTS (footnote 1: DEXPERTS stands for Decoding-time Experts; our code is available at https://github.com/alisawuffles/DExperts), a decoding-time method for controlled text generation based on a product of experts (Hinton, 2002).", "Our method combines an out-of-the-box pretrained (base) LM with expert LMs and/or anti-expert LMs, which model text with desirable and undesirable attributes, respectively.", "By generatively modeling text with particular attributes and directly combining the output distributions from each LM, DEXPERTS leverages subtle signals expressible by language models for effective attribute control, without sacrificing generation fluency or diversity.", "Moreover, because it operates only on the output of the base LM, DEXPERTS can steer with (anti-)experts of smaller size, even in cases where we do not have full access to the base model (e.g., GPT-3 through an API).", "We first apply DEXPERTS to the task of language detoxification (Section 3), by finetuning an expert and an anti-expert on public comments that are human-annotated for toxicity.", "Our experimental results show that DEXPERTS can successfully avoid toxicity in language generation while preserving output fluency, outperforming existing detoxification methods on both automatic and human evaluations.", "Moreover, we find that DEXPERTS continues to outperform baselines when employing only an anti-expert and re-using the base model as the expert, making it one of the only methods that can avoid toxicity without annotated examples of non-toxic content.", "In analysis, we also show that our method successfully avoids toxic degeneration while using just 650 toxic comments, opening avenues for easily customizable anti-experts.", "We then showcase the generalizability of DEXPERTS by tackling the task of controlling the sentiment of LMs' output (Section 4).", "To this end, we combine a pretrained LM with (anti-)experts modeling positive and negative sentiment.", "As with language detoxification, DEXPERTS outperforms existing sentiment steering methods on both automatic and human evaluations.", "Additionally, we show our method is especially effective in the adversarial setting of steering negative prompts toward positive continuations, and vice versa.", "Finally, we demonstrate a preliminary proof-of-concept using DEXPERTS for stylistic rewriting (Section 5).", "Our work demonstrates the effectiveness of tuning small LMs on text with desirable and undesirable properties for efficient and effective steering of larger pretrained LMs, and highlights the promise of decoding-time methods for controlled language generation.", "Given input text as a prompt, the task of controlled text generation is to generate a continuation that flows naturally from the prompt while having the desired attribute (e.g., positive sentiment) but not an undesired one (e.g., toxicity).", "Given a prompt x_<t, the language model computes the logits for the t-th token, denoted z_t ∈ R^|V|, where V is the vocabulary.", "A probability distribution over the vocabulary is obtained by normalizing and exponentiating z_t: P(X_t | x_<t) = softmax(z_t), (1) and the next token is generated by sampling x_t ~ P(X_t | x_<t).",
"2.1 DEXPERTS Formalization: DEXPERTS operates on a pretrained language model M by combining its predictions with an expert M⁺, which models text with a desirable attribute, and an anti-expert M⁻, which models text with an undesirable attribute.", "At time step t, we condition each language model M, M⁺, and M⁻ on the prompt x_<t to obtain the logits z_t, z⁺_t, and z⁻_t, respectively.", "The product-of-experts ensemble is given by: P̃(X_t | x_<t) = softmax(z_t + α(z⁺_t - z⁻_t)), (2) where α is a hyperparameter that controls the amount of modification to z_t, and can be interpreted as the strength of control over the base model.", "(Footnote 2: though not explored in this paper, this formulation readily accommodates multiple experts and anti-experts, whose logits can be respectively added or subtracted.)", "Equivalently, P̃(X_t | x_<t) ∝ P(X_t | x_<t) (P⁺(X_t | x_<t) / P⁻(X_t | x_<t))^α. (3)", "Intuitively, a token will only have high probability if it has high probability under both P and P⁺, and low probability under P⁻.", "We can interpret the ratio P⁺(X_t | x_<t) / P⁻(X_t | x_<t) as a scaling coefficient for each token, which is used to modify the original probability predicted for that token.", "2.2 Sampling from DEXPERTS: Sampling fluent output from language models commonly requires truncating the unreliable tail of the probability distribution, as in top-k (Fan et al., 2018) or nucleus (top-p) sampling (Holtzman et al., 2020).", "We adapt this intuition to our method by truncating the logits z output by the base model prior to combining them with the experts.", "Formally, let V' ⊆ V denote the set of tokens that are part of the top-k/top-p vocabulary of the base LM at time step t.", "The truncated logits z' are given by z'[v] = z[v] if v ∈ V', and -∞ otherwise. (4)", "By substituting z with z' in Equation 2, we have P̃'(X_t | x_<t) = softmax(z'_t + α(z⁺_t - z⁻_t)). (5)", "We obtain our next token x_t via pure sampling from the probability distribution P̃'(X_t | x_<t), which has non-zero probability only on tokens in V'.", "In this way, adding in the (anti-)experts can be interpreted as modifying the probability distribution over the candidate tokens in V', without any chance of reintroducing tokens v ∉ V' from the tail of the original probability distribution.",
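A compact sketch of one DEXPERTS decoding step combining Eqs. (2), (4), and (5); the nucleus construction and interfaces are illustrative assumptions, not the released implementation.

```python
import torch

# One DExperts step: build the top-p nucleus V' from the *base* logits
# only (Eq. 4), add the scaled expert/anti-expert difference (Eq. 2),
# and sample from the combined distribution (Eq. 5).
def dexperts_step(z_base, z_expert, z_anti, alpha=2.0, top_p=0.9):
    probs, idx = torch.softmax(z_base, dim=-1).sort(descending=True)
    in_nucleus = probs.cumsum(-1) - probs < top_p   # keeps >= 1 token
    z_trunc = torch.full_like(z_base, float("-inf"))
    z_trunc[idx[in_nucleus]] = z_base[idx[in_nucleus]]
    p = torch.softmax(z_trunc + alpha * (z_expert - z_anti), dim=-1)
    return torch.multinomial(p, num_samples=1).item()
```

Because tokens outside the nucleus are set to -∞ before the softmax, the (anti-)experts can only reweight candidates already inside V', matching the no-reintroduction property described above.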
3 Toxicity Avoidance: Given that large pretrained LMs are at risk of producing toxic content (Sheng et al., 2019; Gehman et al., 2020), steering away from toxic degeneration is crucial for their safe deployment.", "Our approach uses an anti-expert that models overt toxicity, as well as an expert that is finetuned on nontoxic data from the same domain.", "Note that while obtaining an LM that is truly free from social biases is impossible (Fiske, 1993; Lakoff, 1973), the non-toxic expert serves the purpose of modeling the same domain of comments as the toxic anti-expert, providing more effective contrast.", "Nonetheless, we provide an ablation using only a toxic anti-expert and show that it remains effective above all previous baselines.", "We use GPT-2 Large as our base LM.", "For our expert and anti-expert, we finetune several sizes of GPT-2 (Small, Medium, Large) on a dataset of human-annotated comments from the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge (footnote 3: https://bit.ly/3cvG5py); a sketch of the data split follows at the end of this passage.", "We consider an example toxic if at least 50% of annotators marked it as toxic, and nontoxic if none of the annotators marked it as toxic.", "This toxic dataset has 160K comments, and the nontoxic dataset 1.4M comments.", "Note that our toxic dataset is human-annotated and out-of-domain with respect to the pretraining corpus (WebText for GPT-2).", "We report results for α = 2.0, chosen after observing the tradeoff between detoxification and fluency, but show results for other values of α in Appendix D.", "3.2 Evaluation 3.2.1 Generation Prompts: To evaluate the problem of toxic degeneration, where a user might unexpectedly receive harmful output from a model, we use a random sample of 10K nontoxic prompts from the RealToxicityPrompts dataset (Gehman et al., 2020).", "Domain-adaptive pretraining (DAPT; Gururangan et al., 2020): We further pretrain the base model on the non-toxic subset of OpenWebText.", "This dataset is obtained by scoring the full OpenWebText corpus with the toxicity classifier from Perspective API (footnote 4: https://github.com/conversationai/perspectiveapi) and keeping the least toxic 2 percent of documents, a corpus of about 150K documents, or 63M tokens, following the implementation of this baseline from Gehman et al. (2020).
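As flagged above, the expert/anti-expert data split can be sketched as follows, assuming a Jigsaw-style file with a per-comment column giving the fraction of annotators who marked it toxic; the path and column names are hypothetical.

```python
import pandas as pd

# Split annotated comments into anti-expert (toxic) and expert
# (nontoxic) finetuning data, per the thresholds described above.
df = pd.read_csv("jigsaw_comments.csv")                    # hypothetical path
toxic = df.loc[df["toxicity"] >= 0.5, "comment_text"]      # anti-expert data
nontoxic = df.loc[df["toxicity"] == 0.0, "comment_text"]   # expert data
toxic.to_csv("toxic_comments.txt", index=False, header=False)
nontoxic.to_csv("nontoxic_comments.txt", index=False, header=False)
```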
"Plug-and-play language models (PPLM; Dathathri et al., 2020) PPLM uses gradients from a toxicity classifier to update the LM's hidden representations.", "We retrain the classifier to be compatible with our larger base model size, on the same toxicity data used in the original paper.5", "5 https://bit.ly/3yQiCIo", "Due to the extreme computational expense of PPLM (runtimes are shown in Appendix A.4), we evaluate PPLM on a random subset of 1K prompts.", "Generative discriminators (GeDi; Krause et al., 2020) GeDi uses a class-conditioned LM to provide classification probabilities for all possible next tokens via Bayes' rule.", "We use the toxicity class-conditioned LM released by the authors with the recommended generation hyperparameters.", "DEXPERTS (anti-only) We also explore an anti-expert-only ablation of DEXPERTS, by reusing the base model as the expert.", "To be clear, we substitute $z_t^+ = z_t$ in Equation 2, so that we have
$\tilde{P}(X_t \mid x_{<t}) = \mathrm{softmax}\big((1 + \alpha) z_t - \alpha z_t^-\big)$ (6)", "4 https://github.com/conversationai/perspectiveapi", "Table 1 column headings: Model, Toxicity, Fluency, Diversity, Avg.", "Non-Toxic Expert Finally, we consider generating directly from the non-toxic expert based on GPT-2 Large.", "For all baselines, we use nucleus sampling (Holtzman et al., 2020) with p = 0.9 to generate up to 20 tokens.", "Note that for our method, nucleus sampling is done as described in §2.2, by using the nucleus from the base LM.", "Other training and generation details (e.g., hyperparameters) are described in Appendix A. 3.2.3 Automatic Evaluation We evaluate our generations for toxicity, fluency, and diversity.", "Following previous work (Gehman et al., 2020), we characterize generation toxicity using the toxicity score from Perspective API, along two axes: 1) the maximum toxicity over k = 25 generations, and 2) the empirical probability of generating a continuation with toxicity ≥ 0.5 at least once over k = 25 generations.", "Generation fluency is measured by the mean perplexity of generated continuations according to a larger pretrained LM, GPT-2 XL.", "Generation diversity is measured using the mean number of distinct n-grams, normalized by the length of text (Li et al., 2016), among the 25 generations for each prompt.", "We report Dist-1, Dist-2, and Dist-3 scores for distinct uni-, bi-, and trigrams, respectively.", "Results According to automatic metrics shown in Table 1, DEXPERTS substantially outperforms all existing baselines at detoxification.", "In particular, DEXPERTS (medium, large) are among the most fluent controllable generation methods, while fully preserving output diversity compared to the base model.", "Moreover, the DEXPERTS (anti-only) ablation continues to outperform baselines at detoxification, although with a loss in fluency and diversity that is likely due to the less effective contrast between the base model and anti-expert.", "We report the per-generation runtime of each method in Appendix A.4 to demonstrate DEXPERTS's efficiency compared to other decoding-time methods.", "3.2.4 Human Evaluation While automatic toxicity classifiers like Perspective API enable the kind of large-scale evaluation required for systematic comparison of methods, an abundance of work shows that their accuracy is far from ideal (Dixon et al., 2018; Sap et al., 2019; Davidson et al., 2019; Hutchinson et al., 2020), in part due to reliance on spurious features, which we discuss in §8.", "Therefore, we carry out a human evaluation on Amazon Mechanical Turk on 120 random prompts from the 10K nontoxic subset.",
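As a concrete reading of the automatic metrics in §3.2.3 above, here is a small sketch (ours; it assumes toxicity scores in [0, 1] from Perspective API or a comparable classifier, and one common convention for the Dist-n normalization):

```python
def toxicity_metrics(toxicity_scores):
    """toxicity_scores: k = 25 classifier scores, one per continuation
    generated for a single prompt."""
    max_toxicity = max(toxicity_scores)                     # axis 1
    is_toxic_once = any(s >= 0.5 for s in toxicity_scores)  # axis 2
    return max_toxicity, is_toxic_once

def dist_n(generations, n):
    """Distinct n-grams among one prompt's 25 generations, normalized by
    the total number of (whitespace) tokens."""
    ngrams, total = set(), 0
    for text in generations:
        toks = text.split()
        total += len(toks)
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams) / total if total else 0.0
```

Corpus-level numbers would then average these per-prompt values over all 10K prompts.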
"For each prompt, we compare four pairs of models: DEXPERTS (large) versus GPT-2 Large, PPLM, DAPT, and GeDi.", "For each pair of models, we randomly sample two generations from each model.", "This results in a total of 120 prompts × 4 pairings per prompt × 2 generations per pairing = 960 comparisons.", "Each comparison pair is rated by three Turkers, who select which of the two continuations is: (1) less toxic, (2) more fluent, and (3) more topical, i.e., whether the continuation is natural, relevant, and follows logically from the prompt.", "Figure 2: Results of human evaluation for detoxification.", "A screenshot of the user interface is provided in Appendix C. Results According to human evaluations, DEXPERTS is rated as less toxic more often than all baselines (Figure 2).", "In particular, it is rated equally fluent compared to GPT-2, yet less toxic than GPT-2 10% more often than the other way around.", "See Appendix E for examples of generations.", "We next use DEXPERTS to steer GPT-3 Ada.", "Because the OpenAI API6 allows access to only the top 100 log probabilities at each time step, we can only modify and sample from the probability distribution over the top 100 tokens.", "6 https://openai.com/api/", "Nonetheless, results in Table 2 show that DEXPERTS effectively reduces toxicity from GPT-3 to about the same level as when operating on GPT-2.", "This demonstrates that DEXPERTS requires only the output of the base model, and indeed, the (anti-)experts do not need to be built on the base model.", "In practice, gathering large amounts of toxic data may be challenging, especially in applications where we would want to customize the anti-expert LM for differing notions of harmful language.", "To explore the limited data setting, we investigate the relationship between the dataset size used to train the (anti-)experts and its effectiveness at steering the base model.", "Figure 3: Performance of DEXPERTS when (anti-)experts are trained on differently-sized datasets and evaluated at different checkpoints, calculated on a subset of 1K prompts.", "We finetune GPT-2 Large on five different dataset sizes of exactly 40,960, 204.8K, 1.024M, 5.12M, and 10.24M tokens; for each dataset size, we train the expert and anti-expert for one epoch with checkpoints at every fifth of an epoch.", "The performance of each ensemble, at every (anti-)expert checkpoint, is shown in Figure 3.",
"We can see that even with a dataset of 40,960 tokens (about 650 comments), corresponding to about 0.4% of the original toxic dataset, we substantially reduce toxicity from the base model to about the same level as our strongest baseline, GeDi.", "(On one GPU, this corresponds to about 3 minutes of finetuning.)", "Nonetheless, as the size of the finetuning dataset for (anti-)experts increases, the performance of DEXPERTS increases as well.", "As a second application, we consider the well-studied task of controlling the polarity of a text's sentiment (e.g., Li et al., 2018; Sudhakar et al., 2019), steering towards either positive or negative sentiment.", "We use the same pretrained model from §3, GPT-2 Large, as our base LM.", "We finetune GPT-2 (Small, Medium, Large) on a positive sentiment corpus for our positive LM, and on a negative sentiment corpus for our negative LM.", "Table 3 column headings: Target Sentiment, Model, % Positive Sentiment (positive / neutral / negative prompts), Fluency (output ppl.), Diversity.", "We use the Stanford Sentiment Treebank (SST-5; Socher et al., 2013), which contains movie reviews labeled by human raters for sentiment on a scale from 1 (very negative) to 5 (very positive).", "Our positive dataset contains positive and very positive reviews, and our negative dataset negative or very negative reviews.", "Each of these sentiment datasets has about 4K reviews.", "For ease of notation, we consider the positive LM our expert and the negative LM our anti-expert, and use α = 3.2 for steering in each direction.", "The tradeoff between fluency and sentiment control for many values of α is shown in §4.3.", "4.2 Evaluation 4.2.1 Generation Prompts In order to test our method's ability to control sentiment beyond the domain that the sentiment experts are trained on (movie reviews), we collect a dataset of 100K naturally occurring prompts from the OpenWebText Corpus (OWT) (Gokaslan and Cohen, 2019).", "Details are outlined in Appendix B.", "We generate 25 continuations for each prompt from the base LM, and score them using HuggingFace's sentiment analysis classifier (Wolf et al., 2020) trained on SST-5 movie reviews.",
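As an illustration of the sentiment corpora just described, a minimal sketch (ours; the record format and the 1-5 rating thresholds follow the paper's prose but are otherwise assumptions):

```python
def split_sst5(reviews):
    """reviews: iterable of (text, rating) pairs, with ratings from
    1 (very negative) to 5 (very positive), as in SST-5.
    Returns the expert and anti-expert finetuning corpora:
    positive = ratings {4, 5}, negative = ratings {1, 2}."""
    positive = [text for text, rating in reviews if rating >= 4]
    negative = [text for text, rating in reviews if rating <= 2]
    return positive, negative
```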
"Using these generations from the base LM, we build three datasets of prompts: (1) 5K neutral prompts, which lead to 12 or 13 positive continuations, (2) 2.5K negative prompts, which lead to 25 negative continuations, and (3) 2.5K positive prompts, which lead to 24 or 25 positive continuations.", "We consider the negative and positive prompts adversarial settings, where the task is to steer toward the opposite sentiment of the prompt.", "4.2.2 Baselines We consider the same baselines as in §3, along with a new baseline (CTRL; Keskar et al., 2019).", "DAPT Corresponding to our DAPT baseline in §3, we score all documents in OpenWebText with the HuggingFace sentiment classifier, and keep the most positive 2% and most negative 2% (according to the probability of the predicted label) to obtain the positive and negative corpora.", "We perform another round of pretraining on each corpus to obtain a positive LM and a negative LM.", "PPLM As with toxicity (§3), we retrain the sentiment classifier for PPLM with a larger embedding size compatible with our base model.", "The training data used is SST-5.", "Again, we evaluate PPLM on only 10% of the prompts compared to other models, randomly selected: 500 neutral prompts, 250 positive prompts, and 250 negative prompts.", "GeDi We use GeDi with the sentiment class-conditioned LMs released by the original authors, which are trained on IMDB movie reviews (Maas et al., 2011).", "(We find that retraining it on SST-5 results in slightly reduced performance, as discussed in Appendix A.)", "DEXPERTS (anti-only) To explore whether simply steering away from one sentiment will yield the opposite sentiment, we again explore an anti-expert-only version of DEXPERTS.", "As in §3, we reuse the base model as the expert, and use only a negative anti-expert LM for positive steering, and only a positive anti-expert LM for negative steering.", "We use α = 2.0 for this setting.", "Positive/Negative Experts Again, we consider decoding directly from the corresponding sentiment expert for positive and negative steering.", "Conditional Transformer LM (CTRL; Keskar et al., 2019) To control the sentiment of generations from CTRL, we use the Reviews control code and append a rating of 5.0 for positive generations and a rating of 1.0 for negative generations.", "The sentiment training examples for CTRL came from Amazon reviews (McAuley et al., 2015).", "As with the toxicity experiments (§3), we use nucleus sampling with p = 0.9, and include our training and generation details in Appendix A.",
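Relating back to the prompt construction in §4.2.1, here is a sketch of the bucketing rule (ours; classify_positive stands in for the HuggingFace sentiment classifier and is assumed to return a boolean per continuation):

```python
def bucket_prompts(prompt_to_generations, classify_positive):
    """Assign each prompt to a bucket based on how many of its 25
    base-LM continuations are labeled positive."""
    neutral, negative, positive = [], [], []
    for prompt, generations in prompt_to_generations.items():
        n_pos = sum(classify_positive(g) for g in generations)  # out of 25
        if n_pos in (12, 13):
            neutral.append(prompt)    # 5K neutral prompts
        elif n_pos == 0:
            negative.append(prompt)   # 2.5K negative prompts
        elif n_pos >= 24:
            positive.append(prompt)   # 2.5K positive prompts
    return neutral, negative, positive
```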
"4.2.3 Automatic Evaluation We evaluate our generations for the target sentiment, fluency, and diversity.", "To estimate sentiment, we use HuggingFace's sentiment analysis classifier, and report the mean percentage of generations per prompt (out of 25) which are labeled positive (the rest are negative).", "We evaluate fluency and diversity in the same ways as in §3.", "Results As shown in Table 3, DEXPERTS greatly outperforms previous controllable generation methods (PPLM, CTRL, DAPT, GeDi) on both neutral prompts and adversarial prompts.", "The limited performance of CTRL suggests that the effectiveness of class-conditioned training on domain-specific data is limited to the domain of that data; training on Amazon reviews does not allow generalization outside of the reviews domain.", "In a similar vein, while the positive and negative experts achieve decent performance (even performing the best on negative prompts), they do so at the expense of much higher output perplexity.", "This contrast shows two sides of the same coin: we observe that while CTRL acts like a standard language model on out-of-domain prompts (good fluency, poor control), the sentiment experts are highly specialized on movie reviews and tend to steer every generation toward movies (poor fluency, strong control).", "Meanwhile, DAPT is more effective while maintaining fluency, because its training domain is the same as the prompts' domain (i.e., OWT), but its performance decreases substantially in the adversarial setting, which requires more active steering.", "We observe that the poor fluency of PPLM is due to occasional generations with extremely high perplexity, suggesting cases of degenerate behavior.", "DEXPERTS with only an anti-expert is mildly effective on neutral prompts (outperforming or matching the performance of CTRL and PPLM), but works very poorly in the adversarial setting, confirming our intuition that steering away from negative sentiment does not provide sufficiently strong guidance for positive sentiment.", "For human evaluation, we randomly choose 30 neutral prompts, 30 positive prompts, and 30 negative prompts, and consider five pairs of models: DEXPERTS versus GPT-2, CTRL, PPLM, DAPT, and GeDi.", "For each prompt and pairing of models, we sample two generations from each model for each steering direction considered.", "This results in a total of 120 prompts × 5 pairings per prompt × 2 generations per pairing = 1,200 pairs, each rated by 3 MTurk workers.", "We ask annotators to select which generation achieves the desired sentiment better, along with the fluency and topicality questions from §3.2.4.", "Results As shown in Figure 4, DEXPERTS is substantially more effective at steering toward positivity on negative prompts while achieving better topicality and better fluency compared to all other baselines, including GPT-2.", "Figure 4: Results of human evaluation for steering toward positivity on negative prompts (left) and steering toward negativity on positive prompts (right).", "In the opposite setting of steering toward negativity on positive prompts, the gap in sentiment control performance between DEXPERTS and each of GPT-2, CTRL, DAPT, and PPLM is even more pronounced: DEXPERTS is rated better than its comparison 62–78% of the time.", "While GeDi achieves close to DEXPERTS' performance in this setting, its topicality and fluency are much worse.", "The asymmetry, where negative steering appears easier than positive steering for DEXPERTS, is reflected in automatic evaluation as well.",
"We hypothesize that it is easier to derail a positive prompt with negativity than to turn something negative into something positive; but to human readers, these negative continuations may be unexpected (a similar observation was made in previous work; Madotto et al., 2020).", "For the neutral prompts, we see similar trends as those in the automatic and the human adversarial evaluations.", "Due to space constraints, we include those in Appendix D.2.", "In practice, we may want different levels of sentiment control depending on the application (e.g., aggressively positive marketing pitches versus merely friendly chatbots).", "Figure 5 shows the relationship between output sentiment and fluency for different choices of α ∈ [−3.4, 3.4], conditioned on neutral prompts.", "The smooth tradeoff suggests that α can be adjusted by a practitioner or user, depending on their application.", "In our experiments, we pick α = 3.2 because beyond this point the curve becomes less steep, meaning that a greater cost in fluency does not return as great of an increase in the desired sentiment.", "Figure 5: The relationship between output fluency and positivity for different values of α ∈ [−3.4, 3.4].", "We choose α = 3.2 in our experiments.", "Results are calculated on a subset of 1K neutral prompts.", "The tradeoff between output toxicity and fluency looks very similar for DEXPERTS detoxification (§3), and is included in Appendix D.1.", "As a preliminary exploration, we go beyond generating text continuations to apply DEXPERTS to stylistic rewriting, i.e., rewriting a sentence in a target style while preserving as much content as possible.", "We replace the base model with a pretrained autoencoder, BART (Lewis et al., 2020), and use the GPT-2 Large sentiment (anti-)experts from §4 for steering.", "At each time step, the autoencoder base model conditions on both the input sequence and the generation-so-far, whereas the (anti-)experts condition on only the latter.", "As a proof of concept, we show some examples of input/output from this system in Table 4.", "Table 4: Example inputs and outputs for stylistic rewriting (e.g., input: I love cats and seeing them play with yarn.).", "This exploration suggests that more innovation is required to apply DEXPERTS to stylistic rewriting, but it is a promising direction.", "We anticipate future work on the subject.", "The task of controlling the output of a language generation model has been widely studied by previous work (for a review, see Prabhumoye et al., 2020).", "Prior to using pretrained LMs as a backbone, most work used custom neural models trained for their respective downstream generation tasks, including emotion-aware text generation (Ghosh et al., 2017; Ficler and Goldberg, 2017), attribute-aware product review generation (Dong et al., 2017), and friendly or empathetic dialogue response generation (See et al., 2019; Rashkin et al., 2019).", "Since pretrained LMs have shown impressive text generation ability (Radford et al., 2018, 2019), two directions have emerged to control their language generation: training approaches and decoding-time approaches.", "Training approaches include finetuning the pretrained LMs on datasets that contain the desired attributes (Gururangan et al., 2020) as well as creating a class-conditioned pretrained LM trained on text prefixed with attribute-specific control codes (Keskar et al., 2019).", "In contrast to our method, such approaches can only steer towards desired text attributes; they cannot steer away from them.",
"Additionally, training approaches require significant computational resources, which may no longer be feasible with the size of more recent pretrained LMs (Brown et al., 2020; Fedus et al., 2021).", "Decoding-time methods, a more lightweight approach, have been used for controlling the attributes of generated text, as well as for improving its quality (Li et al., 2016; Holtzman et al., 2018; Welleck et al., 2020).", "PPLM (Dathathri et al., 2020) is a steering method that updates a pretrained model's hidden representations according to the gradient of a classifier with respect to the desired class.", "Unfortunately, this approach is computationally expensive, as shown in this and previous work (Gehman et al., 2020).", "Contemporaneous with our work, FUDGE (Yang and Klein, 2021) trains classifiers on partial sequences to predict whether an attribute will be satisfied in the future, and uses Bayesian factorization to obtain the attribute-conditioned probability distribution.", "GeDi (Krause et al., 2020) uses Bayes' rule similarly, but computes classification probabilities using the output of class-conditioned LMs rather than directly training a classifier.", "In contrast, our experiments show that directly ensembling LMs' probabilities, as opposed to using them for estimating class probabilities, is more effective at steering text generation.", "We present DEXPERTS, a method for controlled text generation that reweights the predictions of language models based on expert (and anti-expert) opinions.", "In experiments on two different tasks, detoxification and sentiment control, we show that our method is able to effectively steer the language model towards the desired generations, while preserving the fluency and diversity of the generated text.", "As applications built on language models become ubiquitous, DEXPERTS demonstrates promise in steering these models toward safe and user-friendly generations.", "This research is supported in part by NSF (IIS-1714566), the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI.", "We thank OpenAI, specifically Bianca Martin and Miles Brundage, for providing access to GPT-3 through the OpenAI API Academic Access Program.", "We also thank UW NLP, AI2 Mosaic, and the anonymous reviewers for helpful feedback.", "Our study is motivated by the potential harms of using pretrained language models (Bender et al., 2021), specifically their tendency to generate hateful, offensive, or toxic content (Sheng et al., 2020; Gehman et al., 2020).", "Part of our work requires automatically detecting toxicity in generated texts, for which we use the Perspective API,7 a commercially deployed toxicity detection tool.", "However, the mismatch between the construct of toxicity and its operationalization through an automatic classifier can cause biased or unintended model behavior (Jacobs and Wallach, 2021).", "Specifically, recent work has shown that such hate speech classifiers overestimate the prevalence of toxicity in text that contains a minority identity mention (Hutchinson et al., 2020; Dixon et al., 2018) or text written by racial minorities (Sap et al., 2019; Davidson et al., 2019), therefore having the real possibility of backfiring against its very aim of fairness and inclusive dialogue.",
"To address this limitation, we also perform a human evaluation of toxicity, for which we obtained IRB approval and sought to pay our workers a fair wage (approximately US$7–9/h).", "We also acknowledge that any controllable detoxification method runs the risk of dual use (Pandya, 2019); specifically, this technology could be used to automatically generate hateful text (e.g., extremist texts; McGuffie and Newhouse, 2020).", "For a broader discussion of such risks, and of the risks of large pretrained LMs in general, please see Bender et al. (2021).", "Nevertheless, toxicity in pretrained LMs is an unsolved issue (Sheng et al., 2019; Gehman et al., 2020).", "Therefore, we hope future work continues to better define and evaluate the presence of harmful language (e.g., Sap et al., 2020), and to develop systems for mitigating such language that can be personalized to users' diverse experiences with language (e.g., dealing with reclaimed slurs appropriately; Croom, 2013)." ]
[ "abstain", "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "objective", "other", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "result", "method", "method", "abstain", "result", "objective", "objective", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "Several high-profile events, such as the mass testing of emotion recognition systems on vulnerable sub-populations and using question answering systems to make moral judgments, have highlighted how technology will often lead to more adverse outcomes for those that are already marginalized.", "At issue here are not just individual systems and datasets, but also the AI tasks themselves.", "In this position paper, I make a case for thinking about ethical considerations not just at the level of individual models and datasets, but also at the level of AI tasks.", "I will present a new form of such an effort, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding the data, method, and evaluation.", "I will also present a template for ethics sheets with 50 ethical considerations, using the task of emotion recognition as a running example.", "Ethics sheets are a mechanism to engage with and document ethical considerations before building datasets and systems.", "Similar to survey articles, a small number of carefully created ethics sheets can serve numerous researchers and developers.", "Good design helps everyone.", "It is well established, for example, that designing for accessibility helps society at large.1", "1 https://blog.ai-media.tv/blog/why-designing-for-accessibility-helps-everyone", "As Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) systems become more ubiquitous, their broad societal impacts are receiving more scrutiny than ever before.", "However, several high-profile instances such as face-recognition systems that perform poorly for people with dark skin tones (Buolamwini and Gebru, 2018), machine translation systems that are biased against some genders (Prates et al., 2019), question answering systems that produce moral judgments (Talat et al., 2021), and mass testing of emotion recognition systems on certain sub-populations (ARTICLE19, 2021; Wakefield, 2021), have highlighted how technology is often at odds with the very people it is meant to help, and how it will often lead to more adverse outcomes for those already marginalized.", "This raises uncomfortable questions for us AI researchers, developers, and leaders of technology companies: What role do we play in the harms perpetrated by technology?", "What are the assumptions in our research?", "What are the implications of our choices?", "Are we striking at the barriers to opportunity or are we amplifying societal inequities?", "The answers are often complex and multifaceted.", "While many AI systems have clear benefits, we are increasingly seeing examples such as those discussed above where real-world AI systems are causing harm.", "Academic research (which often feeds into real-world systems) is also seeing growing amounts of criticism: criticisms of physiognomy, racism, bias, discrimination, perpetuating stereotypes, ignoring indigenous world views, and more.", "See Arcas et al. (2017) and Ongweso (2020) for recent examples.", "There have also been criticisms of thoughtlessness (e.g., is automating this task, this way, really going to help people?)
and a seemingly callous disregard for the variability and complexity of human behavior (McQuillan, 2018; Fletcher-Watson et al., 2018; Birhane, 2021).", "This position paper makes the following contributions: (1) It describes recent efforts by the AI community to encourage responsible research, the limitations of those efforts, and the need for thinking about ethical considerations at the level of AI tasks.", "(2) It presents a detailed proposal for a new kind of document, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding the data, method, and evaluation.", "(3) It provides a template for ethics sheets (that includes fifty ethical considerations), with the task of automatic emotion recognition (AER) as a running example.", "NLP tasks, such as AER from text, machine translation, and summarization, are particularly rife with ethical considerations because they deal with language and people.", "Ethics sheets can help in recognizing and communicating the social and psychological complexities of language use; thereby driving the desired design choices in NLP systems.", "More broadly, all AI tasks that deal with people and their artifacts (such as text, images, and video) can benefit from carefully thought out ethics sheets.", "Every year, tens of thousands of people are joining the ranks of AI researchers and developers.", "Ethics sheets can serve them and others as useful introductory documents for AI tasks, guiding research/system design, facilitating the creation of datasheets and model cards, and acting as springboards for new ideas in responsible research.", "If a team builds a new dataset, then it is recommended to create a datasheet or data statement (Gebru et al., 2018; Bender and Friedman, 2018) that lists key details of the dataset such as composition and intended uses.", "It is meant to encourage appropriate use of the data.", "If a team builds a new system, then it is recommended to create a model card (Mitchell et al., 2019) that lists key details of the model such as performance in various contexts and intended use scenarios.", "It is meant to encourage appropriate use of the system.", "For individual papers, we write ethics/impact statements, and conferences have started to institute ethics policies and ethics reviews.", "Limitations: Datasheets and model cards are pivotal inventions that will serve our community well.", "However, they are not without limitations, and the specificity of their scope (on individual pieces of work) places additional constraints: Authors are in a position of conflict of interest; there are strong incentives to present their work in a positive light (for paper acceptance, community buy-in, etc.)
There can be a tendency to produce boilerplate text without a meaningful and critical engagement with the relevant ethical issues.", "While there is important benefit in creating post-production documents that describe societal impact, it is arguably more important to engage with ethical considerations (and publish an ethics-focused document) before building AI systems (and possibly even choosing to not build a system for a particular deployment context based on the analysis).", "Lastly, ethics considerations apply at levels other than individual projects; e.g., at the level of AI tasks.", "A comprehensive engagement with the relevant ethical issues requires a wide literature review, and the resulting analysis to be presented in a dedicated document (and not in add-on sections for individual system papers).", "I am defining AI task to simply mean some task we may want to automate using AI techniques.", "An AI system is a particular AI model built for the task.", "Individual systems have their own unique sets of ethical considerations (depending on the choices that were made when building the systems).", "However, several ethical considerations apply not at the level of individual systems, but at the level of the task.", "For example, consider the task of detecting personality traits from one's utterances.", "Even before we consider a system for the task, we ought to consider questions such as: What are the societal implications of automating personality trait detection?", "How can such a system be used/misused?", "Is there enough credible scientific basis for personality trait identification that we should attempt to do this?", "Which theory of personality traits should such automation rely on?", "What are the implications of that choice?", "And so on.", "In addition, for a given task, there exist ethical considerations latent in the choices commonly made in dataset creation, model development, and evaluation.", "Poor choices lead to more harm.", "Consider these outcomes reported in the popular press: Text Generation: 'Dangerous' AI writes fake news, BBC.2", "2 www.bbc.com/news/technology-49446729", "Image Generation: 'Deepfakes' a political problem already hitting EU, EU Observer.3", "3 https://euobserver.com/opinion/151935", "Automatic Emotion Recognition from Faces: China's emotion recognition market and its implications for human rights, Article19.4", "4 www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf", "Machine Translation: Female historians and male nurses do not exist, Google Translate tells its European users, Algorithm Watch.5", "5 https://algorithmwatch.org/en/google-translate-gender-bias", "Information Extraction: Google apologises for 'ugliest Indian language' search result, BBC.6", "6 www.bbc.com/news/world-asia-india-57355011", "Numerous other such examples have surfaced in just the past few years for a variety of AI tasks.", "Additionally, fields such as NLP and Computer Vision organize themselves in sub-fields by task (e.g., machine translation).", "Laws about AI ethics are also emerging in the context of AI tasks (Commission, 2020), e.g., based on whether the task is high risk.", "Reading relevant literature, engaging with stakeholders, and past experience in developing systems help one to start identifying relevant ethical considerations for an AI task; but that takes time.", "Meanwhile, tens of thousands of new researchers are joining our ranks.", "Pressures to graduate and find good jobs force them to build systems and publish papers in a matter of months.", "Even experienced researchers can find it difficult to keep track of various ethical considerations discussed in a wide assortment of conferences and journals.",
"If one wants to do work on an AI Task, then right at the beginning it is useful to have access to:", "a document that substantively engages with the ethical issues relevant to that task, going beyond individual systems and datasets, and drawing on a body of relevant work.", "Similarly, if one conceptualizes a new AI Task, then it is useful to simultaneously create such a source of information.", "Therefore, I propose that we researchers and developers write such articles, which I will refer to as Ethics Sheets for AI Tasks.", "In some ways, ethics sheets are similar to survey articles for areas of research, except here the focus is on ethical considerations for an AI task.", "Simply put: an ethics sheet for an AI task is a semi-standardized article that aggregates and organizes a wide variety of ethical considerations relevant for that task.", "It: Fleshes out assumptions hidden in how the task is framed, and in the choices often made regarding the data, method, and evaluation.", "Presents ethical considerations unique or especially relevant to the task.", "Presents how common ethical considerations manifest in the task.", "Presents relevant dimensions and choice points, along with tradeoffs for various stakeholders.", "Lists common harm mitigation strategies.", "Communicates societal implications to researchers, developers, and the broader public.", "The sheet should flesh out various ethical considerations that apply at the level of the task.", "It should also flesh out ethical considerations of common theories, methodologies, resources, and practices used in building AI systems for the task.", "Ethics sheets may sometimes suggest that certain applications in specific contexts are appropriate or inappropriate, but largely they are meant to discuss the various considerations to be taken into account when the developer is deciding whether to build or use a particular system, how to build it, and how to assess its societal impact.", "It is meant to help the developer identify what is more appropriate for their given deployment context.", "A good ethics sheet will question some of the assumptions that often go unsaid.", "It will encourage more thoughtfulness: Why should we automate this task?", "What is the degree to which human behavior relevant to this task is inherently ambiguous and unpredictable?", "What are the theoretical foundations?", "What social and cultural forces motivate choices in task design, data, methodology, and evaluation?", "(Science is not immune to these forces; there is no 'view from nowhere'.)", "How is the automation of the task going to impact various groups of people?", "How can the automated systems be abused?", "Is this technology helping everyone or only those with power and advantage?", "etc.", "Thinking about these questions is important if we want to break away from the current paradigm of building things that are divisive (that work well for some and poorly for others) and instead move towards building systems that treat human diversity and variability as a feature (not a bug); systems that truly dismantle barriers to opportunity, and bring diverse groups of people together.", "Thus, questions such as those shown above can be useful in determining what is included in ethics sheets.", "Target audience: The target audience for an ethics sheet includes the various stakeholders of the AI Task.", "The stakeholders may or may not have the time and background to understand the technical intricacies of an AI task.", "However, they build on, use, and make laws about what we create.", "Further, people are impacted by AI systems.", "They should be able to understand the decisions that impact them, understand broad patterns of system behaviour, contest the predictions, and find recourse.", "Ethics sheets can help to that end.", "It is our responsibility to describe our creations in accessible terms, so that others can make informed decisions about them.", "Thus the target audience includes: researchers and developers; educators (esp. those who teach AI and ethics); policy makers and politicians; and people whose data is used, i.e., society at large.", "Owing to differences in backgrounds and needs, it is better to create versions of the Ethics Sheet tailored to stakeholders, for example: one sheet for society at large (with a focus on how system behaviour can impact them and how they can contribute/push back); and one sheet for researchers, developers, and the motivated non-technical reader (with a greater emphasis on system building choices).", "Ethics sheets complement datasheets and model cards: while the latter are post-production documents produced by system/data builders, ethics sheets are meant to be accessed before building systems.", "Similar to traditional survey articles, a small number of carefully created ethics sheets can serve numerous researchers and developers creating systems and data for AI tasks.", "See the FAQ in the Appendix (after references) for a discussion on some practicalities involved with who should create ethics sheets, when they should be created, for which tasks, etc. I discuss below some key characteristics and benefits of ethics sheets, followed by a template and a pointer to an example ethics sheet in the next section.", "A single ethics sheet does not speak for the whole community (just as survey articles do not speak for the whole community).", "No one group can claim authority or provide the authoritative ethics sheet for a task.", "Ethics sheets can be created through large community efforts (through workshops or carefully maintained wikis) and smaller individual and group efforts.", "Efforts led by small teams may miss important perspectives.", "However, community efforts face several logistical and management challenges.", "They also have the tendency to only include agreed-upon, non-controversial ideas that do not threaten existing power structures.", "While each of these approaches to implementing ethics sheets has its pros and cons, a multiplicity of ethics sheets is likely most promising.", "Multiple ethics sheets created (by different teams and approaches) reflect multiple perspectives, viewpoints, and what is considered important to different groups of people.", "We should be wary of a world where we have single authoritative ethics sheets per task and no dissenting voices.", "The set of ethical considerations for a task is not a static list; it needs to be continuously or periodically revisited and updated.", "The considerations can be developed iteratively and organically, in small teams and in large community efforts (say, through dedicated workshops).", "The ethics sheet is not a silver bullet to make things perfect, lead to easy solutions, or solve ethics.", "The goal is to raise awareness of relevant ethical considerations, encourage following of established best practices, and inspire new ideas of responsible research appropriate for one's particular context.", "Preface: Present why and how the sheet came to be written.", "The process followed.", "Who worked on it, along with their professional or lived experience relevant to the subject matter.", "Challenges faced in writing the sheet.", "Changes made, if a revision of an earlier sheet.", "Version number, date published, and contact information.", "Introduce, Define, Set Scope: Introduce the task and some common manifestations of the task.", "Define relevant terminology.", "Set the scope of the ethics sheet (e.g., maybe you are creating a sheet for speech input, but not textual input).", "Ethical Considerations: This is the star of the show.", "Aggregate and organize the ethical considerations associated with the AI task.", "Present the trade-offs associated with choices.", "Present harm mitigation strategies.", "Cite relevant literature.", "Organization of ethical considerations should be based on the primary target audience.", "For example, ethics sheets primarily for researchers and developers may benefit from sub-sections on: Task Design, Data, Method, and Evaluation.", "Task design may benefit from sections for theoretical foundations and 'why automate this task?'.", "Evaluation will benefit from sub-sections that go beyond quantitative metrics.", "Other: Include anything that helps with the goals of the Ethics Sheet.", "Ethics sheets for AI Tasks address a number of concerns raised in the first section of this paper.", "Specifically, their benefits include:", "1. Encourages thoughtfulness regarding why to automate, how to automate, and how to judge success well before building systems.", "2. Fleshes out assumptions in how the task is commonly framed, and in the choices often made regarding data, method, and evaluation.", "3. Presents the trade-offs of relevant choices so that stakeholders can make informed decisions appropriate for their context.", "Ethical considerations often involve a cost-benefit analysis; where we draw the lines may differ depending on our cultural and societal norms.", "4. Identifies points of agreement and disagreement.", "Includes multiple points of view.", "5. Moves us towards consensus and standards.", "6. Helps us navigate system development choices.", "7. Helps develop better datasheets, model cards.", "8. Has citations and pointers; acts as a jumping-off point for further reading.", "9. Helps stakeholders challenge assumptions made by researchers and developers.", "10. Helps stakeholders develop harm mitigation strategies.", "11. Standardized sections and a familiar look and feel make it easier to compile and communicate ethical considerations.", "12. Helps engage the various stakeholders of an AI task with each other.", "13. Multiple ethics sheets created for the same task reflect multiple perspectives, viewpoints, and what is considered important to different groups of people at different times.", "14. Acts as a great introductory document for an AI Task (complements survey articles and task-description papers for shared tasks).", "I present below a template that can serve as a handy starting point in the creation of new ethics sheets, and that further clarifies what can be included in an ethics sheet.", "In the template below I will use Automatic Emotion Recognition (AER) as the running example.", "AER is a particularly interesting, widely applicable, and complex example of AI tasks with notable benefits and risks.", "Thus an ethics sheet for AER can be particularly instructive.", "In her seminal book, Affective Computing, Dr.
Rosalind Picard described Automatic Emotion Recognition (AER) as: \"giving emotional abilities to computers\".", "It is a sweeping interdisciplinary area of study exploring many foundational research questions and many applications (Picard, 2000).", "However, some of the recent commercial and governmental uses of AER have garnered considerable criticism, including: infringing on one's privacy, exploiting vulnerable sub-populations, and even allegations of downright pseudo-science (Wakefield, 2021; ARTICLE19, 2021; Woensel and Nevil, 2019).", "Even putting aside high-profile controversies, emotion recognition impacts people and thus entails ethical considerations (big and small).", "Mohammad (2022) presents an ethics sheet for automatic emotion recognition and sentiment analysis.", "It is a critical reflection of this broad field of study with the aim of facilitating more responsible emotion research and appropriate use of the technology.", "I will use some details from that sheet below to clarify the elements of the generic template.", "The preface is an opportunity to frame the discussion.", "Mohammad (2022) presents rapid-fire questions such as whether it is ethical to do automatic emotion recognition, how automatic recognition can mean many things and can be deployed in many contexts, how emotions are particularly personal, private, and complex; and how the ethics sheet can help in more responsible AER research as well as responsible system development and deployment.", "Modalities: AI tasks may involve various modalities.", "For example, work on AER has made use of facial expressions, gait, skin conductance, blood conductance, force of touch, speech, written text, etc.", "All of these modalities come with benefits, potential harms, and ethical considerations.", "Clarify the task.", "Mohammad (2022) states that emotion recognition is a broad umbrella term used to refer to a number of related tasks such as inferring emotions the speaker is trying to convey, inferring patterns of the speaker's emotions over longer periods of time, tracking the impact of health interventions on one's well-being, inferring the speaker's attitudes/sentiment towards a target product, movie, person, idea, policy, entity, etc.", "Each of these framings has ethical considerations and may be more or less appropriate for a given context.", "For example, framing the task as determining the mental state is especially problematic due to concerns about privacy and reliability.", "Discussing applications of the task is important not only because it is an opportunity to present the benefits of the task, but also because an understanding of the applications is crucial to recognizing various ethical considerations.", "Mohammad (2022) presents a sample of existing applications of AER in public health, commerce, government policy, art and literature, research (social sciences, neuroscience, psychology), and intelligence.", "Note also that all of the benefits come with potential harms and ethical considerations.", "Use of AER for military intelligence and education is especially controversial and laced with ethical considerations.", "The usual approach to building a system for an AI task is to design the task (e.g., for AER, identify the precise emotion task to be automated, identify the emotions of interest, etc.), compile appropriate data (e.g., label some of the data), train an ML model (method) to capture relevant patterns of language from the data, and evaluate the model by examining its predictions on a held-out test
set.", "There are ethical considerations associated with each step of this development process.", "Below is a template of 50 considerations grouped by the associated stage: Task Design, Data, Method, Impact, Privacy & Social Groups (this final category is particularly important and cuts across Task Design, Data, Method, and Impact).", "I present only a high-level summary for each category below.", "See Mohammad (2022) for an instantiation of this generic template for the task of automatic emotion recognition (AER).", "It includes details on how these considerations manifest in AER.", "One can use the template below as a guide (in part or full), skip the considerations that do not apply, and describe how the relevant considerations manifest for their chosen task.", "One should notably include details of key considerations for their task, whether they are included in this template or not.", "One can also cite specific issues already discussed in the ethics sheets for other tasks.", "Summary: This section discusses various ethical considerations associated with the choices involved in the framing of the focus task and the implications of automating the focus task.", "For AER, important considerations included: whether it is even possible to determine one's internal mental state; whether it is ethical to determine such a private state; and who is often left out in the design of existing AER systems.", "Mohammad (2022) also discusses how it is important to consider which formulation of emotions is appropriate for a specific task/project, while avoiding careless endorsement of theories that suggest a mapping of external appearances to inner mental states.", "formulations and their ethical implications.", "2. Theoretical Models and their Implications: Discuss notable theoretical constructs from linguistics, psychology, etc. that underpin the focus AI task.", "Discuss the ethical considerations associated with these constructs.", "3. Meaning and Extra-Linguistic Information: Discuss how nuances of meaning in text, images, etc. and extra-linguistic information play a role in the task; and that systems that make use of limited information may lead to false predictions.", "4. Wellness and Health Implications: Discuss implications of the task design on the wellness and health of people (if any).", "5. Aggregate Level vs. Individual Level Prediction: Discuss whether the goal is to determine something about individuals or groups of people, and how that choice impacts the ethical considerations associated with the task.", "6. Why Automate: Discuss who benefits from this automation; and whether this will shift power to those that need it the most (Kalluri, 2020).", "7. Embracing Diversity: Discuss how design choices impact diverse groups of people.", "8. Participatory/Emancipatory Design: Discuss how people that are impacted by the technology can play a role in shaping task design.", "9. Applications, Dual Use, Misuse: Discuss how task design can enhance applications.", "Discuss prohibited and contentious use case scenarios.", "Discuss how task design can mitigate some of the harms associated with the task.", "(Note that even when systems are used as designed, they can lead to harm.)",
"10. Disclosure of Automation: Discuss the ethical ramifications of disclosing and of not disclosing to the users that the underlying task is automated.", "Summary: This section has three broad themes: implications of using datasets of different kinds, the tension between human variability and machine normativeness, and the ethical considerations regarding the people who have produced the data.", "Notably, Mohammad (2022) discusses how on the one hand there is the tremendous variability in human representation and expression of language and emotions, and on the other hand, there is the inherent bias of modern machine learning approaches to ignore variability.", "Thus, through their behaviour (e.g., by recognizing some forms of emotion/language expression and not recognizing others), AI systems convey to the user what is \"normal\"; implicitly invalidating other forms of emotion/language expression.", "11. Types of data: Discuss notable types of data such as labeled training data, large internet-scraped raw data for language models, lexicons, image repositories, etc. and their ethical implications.", "12. Dimensions of data: Discuss notable dimensions of data such as size, whether it is carefully curated for the research or uncurated data obtained from an online platform, less private/sensitive data or more private/sensitive data, what languages are represented in the data, degree of documentation provided with the data, and so on.", "13. Variability of Expression, Conceptualization: Discuss how variability of human expression (e.g., in text, images, videos, etc.) and representations of meaning impacts the associated task.", "14. Norms of Emotions Expression: Discuss how some task-associated forms of human expression may be considered \"normal\" or \"correct\" by a group of people, and the extent to which other forms of expression are also valid and appropriate.", "Discuss how systems for the task are impacted by various design, data, and method choices when it comes to recognizing various forms of appropriate expressions.", "15. Norms of Attitudes: Discuss how different people may have different attitudes towards other people and entities (some of which may be inappropriate), and how AI systems for the task may produce responses laden with such attitudes.", "16. \"Right\" Label or Many Appropriate Ones: Discuss whether, for the given task, certain training instances can/should be labeled with multiple appropriate responses.", "Discuss implications of choices such as keeping only the majority label from the annotators.", "17. Label Aggregation: Discuss notable approaches to label aggregation, and their implications.", "(See Aroyo and Welty (2015); Checco et al. (2017).)", "18. Training on Historical Data: Discuss implications of training systems on historical data; who is missing from the data; biases in the data.", "19. Training–Deployment Differences: Discuss implications of deploying systems on data that is markedly different from the training data.", "20. Platform Terms of Service: Discuss implications of relevant terms of service associated with platforms from which data was obtained.", "21. Anonymization, Ability to Delete One's Data: Discuss the importance of anonymization, and the ability to control/delete one's data.",
"22. Warnings and Recourse: Discuss appropriate levels of warnings and recourse one should provide when building and deploying systems.", "Summary: Discuss the ethical implications of deploying a given method for the focus task.", "Present the types of methods and their tradeoffs, as well as considerations of who is left out and spurious correlations.", "Mohammad (2022) also discusses green AI and the fine line between emotion management and manipulation.", "24. Types of Methods and their Tradeoffs: Discuss how different methods entail different trade-offs, e.g., less accurate vs. more accurate, white box vs. black box, less data hungry vs. more data hungry, less privacy preserving vs. more privacy preserving, fewer inappropriate biases vs. more inappropriate biases, etc.", "25. Who is Left Out by this Method: Discuss whose voices tend to not be included because of the method and data used.", "26. Spurious Correlations: Discuss the tendency and implications of the chosen method to rely on spurious correlations in the data.", "(See Agrawal et al. (2016); Bissoto et al. (2020).)", "27. Context is Everything: Discuss how greater context can impact system accuracy, and also the corresponding implications on privacy.", "28. Individual Expression Dynamics: Discuss how variability and other characteristics of an individual's expression over time (e.g., their speech patterns) impact the task.", "29. Historical Behavior vs. Future Behavior: Discuss the extent to which past behavior is not indicative of future behavior, and the impact of methods that assume the contrary.", "30. Communication Management, Manipulation: In the case of human interaction systems, discuss whether the system is simply managing communication or if it can be used to nudge a person to a certain behavior.", "31. Green AI: Discuss the energy implications of the chosen method (Strubell et al., 2020; Schwartz et al., 2020).", "Summary: This section discusses ethical considerations associated with the evaluation of systems for the focus task (Metrics), as well as the importance of examining systems through a number of other criteria (Beyond Metrics).", "Notably, Mohammad (2022) discusses interpretability and contestability, because even when systems work as designed, there will be some negative consequences.", "Recognizing and planning for such outcomes is part of responsible development.", "32. Reliability/Accuracy: Discuss commonly used (traditional) metrics for evaluating systems such as accuracy, F-score, and reliability.", "Discuss their limitations.", "33. Demographic Biases: Discuss when and how systems can be unreliable or systematically inaccurate for certain groups of people: races, genders, people with health conditions, people from different countries, etc.", "(See Buolamwini and Gebru (2018); Kiritchenko and Mohammad (2018).)", "34. Sensitive Applications: Discuss whether systems for the task should be used in sensitive scenarios such as impacting one's health, livelihood, or freedom, and if such use is acceptable, then under what conditions.", "Unless a clear case can be made for such uses, it is best to caution against such use of AI systems.", "36. Interpretability, Explainability: Discuss task-specific approaches to interpretability and explainability of systems, and their role in identifying biases and flaws.",
"37. Visualization: Discuss how suitable visualizations (especially interactive ones) can allow users to explore trends in the data and system behavior; and importantly, allow one to drill down to the source data that is driving the trends.", "38. Safeguards and Guard Rails: Discuss notable task-specific safeguards to prevent harm to individuals.", "39. Harms when the System Works as Designed: Discuss how systems that work as designed can still cause harms.", "40. Contestability and Recourse: Discuss best practices in allowing users to contest system predictions, and in terms of appropriate recourse.", "41. Ethics Washing: Discuss how ethics documentation should be used to meaningfully engage with the issues rather than for cosmetic purposes.", "Summary: The privacy section discusses both individual and group privacy.", "Mohammad (2022) points out how the idea of group privacy becomes especially important in the context of soft biometrics determined through AER that are not intended to be able to identify individuals, but rather identify groups of people with similar characteristics.", "The subsection on social groups discusses the need for work that does not treat people as a homogeneous group (ignoring group differences and implicitly favoring the majority group) but rather values disaggregation and explores intersectionality, while minimizing reification and essentialization of social constructs.", "42. Privacy and Personal Control: Discuss privacy implications of the task, and measures to give more control to the user on their data.", "43. Group Privacy and Soft Biometrics: Discuss implications of automating the task on group privacy (Floridi, 2014).", "44. Mass Surveillance vs. Right to Privacy, Freedom of Expression, Right to Protest: Discuss implications of automating the task on the ability to monitor behavior of a large number of people, and trade-offs with the right to privacy, freedom of expression, and the right to protest.", "45. Right Against Self-Incrimination: Automating certain tasks may make it easy for systems to find incriminating information produced by an individual.", "This can work against the right afforded by many countries against self-incrimination.", "Discuss any pertinent considerations.", "46. Right to Non-Discrimination: Discuss whether automating the task can be used to discriminate against certain groups of people.", "Discuss safeguards.", "47. Disaggregation: When building automatic prediction systems, report performance disaggregated for each of the relevant and key demographic groups; a minimal sketch of such disaggregated evaluation follows this list.", "(See work on model cards, Mitchell et al. (2019).)", "Cite work reporting disaggregated results for the task.", "48. Intersectionality: People with multiple group identities are often not seen as prototypical members of any of their groups and thus are subject to what is referred to as intersectional invisibility: omissions of their experiences in historical narratives and cultural representation, lack of support from advocacy groups, and mismatch with existing anti-discrimination frameworks.", "Discuss implications of the task on those with multiple group identities.",
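"To make item 47 concrete, here is a minimal sketch of disaggregated evaluation (an editorial illustration, not from Mohammad (2022); the column names and values are hypothetical):",
```python
# Disaggregated evaluation: report accuracy per demographic group
# rather than a single aggregate number. Column names are hypothetical.
import pandas as pd

# Each row: a model prediction, the gold label, and the (self-reported) group.
results = pd.DataFrame({
    "gold":  ["joy", "anger", "joy", "sadness", "joy", "anger"],
    "pred":  ["joy", "joy",   "joy", "sadness", "anger", "anger"],
    "group": ["A",   "A",     "B",   "B",       "C",     "C"],
})

results["correct"] = results["gold"] == results["pred"]

# Overall accuracy hides group-level differences...
print("overall accuracy:", results["correct"].mean())

# ...so also report accuracy (and support) for each group separately.
per_group = results.groupby("group")["correct"].agg(["mean", "count"])
print(per_group.rename(columns={"mean": "accuracy", "count": "n"}))
```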
"49. Reification and Essentialization: Avoid reinforcing false beliefs that there are innate differences across different groups or that some features are central for one to belong to a social category.", "Appropriately contextualize work on disaggregation; for example, by impressing on the reader that even though constructs such as race are artificial and social in nature, the impact of people's perceptions and behavior around race leads to very real-world consequences.", "50. Attributing People to Social Groups: In order to be able to obtain disaggregated results, sometimes one needs access to demographic information.", "This leads to considerations such as: whether the participants are providing meaningful consent to the collection of such data and whether the data is being collected in a manner that respects their privacy, their autonomy (e.g., can they choose to delete their information later), and dignity (e.g., allowing self-descriptions).", "In this position paper, I discussed how ethical considerations apply not just at the level of individual models and datasets, but also at the level of AI Tasks.", "I presented a new form of documenting ethical considerations, which I call Ethics Sheets for AI Tasks.", "It is a document dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding the data, method, and evaluation.", "I listed various benefits of such ethics sheets and discussed caveats such as how a single ethics sheet does not speak for the whole community.", "I also provided a template sheet and an example, proof-of-concept ethics sheet for automatic emotion recognition.", "Ethics sheets have the potential for engaging various stakeholders of AI tasks towards responsible research and development.", "I hope that this work spurs the wider community to ask and document: What ethical considerations apply to my task?", "Note: See FAQ in the Appendix for practical considerations involved in who should create ethics sheets, when, for what tasks, etc.", "Acknowledgments: I am grateful to Annika Schoene, Isar Nejadgholi, Mohamed Abdalla, and Tara Small for encouragement on the initial idea of Ethics Sheets for AI Tasks, the thoughtful discussions, and comments on earlier drafts.", "Many thanks to Emily Bender, Esma Balkir, Patricia Thaine, Brendan O'Connor, Cyril Goutte, and Sowmya Vajjala for thoughtful comments on an early draft.", "Many thanks to Mallory Feldman, Roman Klinger, Rada Mihalcea, and Peter Turney for thoughtful comments on the ethics sheet for emotion recognition." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision.", "Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions.", "In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively.", "First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions.", "Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods.", "Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module.", "Finally, a bag group is utilized as a training sample when building our relation extractor.", "Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules.", "Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset 1 .", "Relation Extraction is a fundamental task in natural language processing (NLP), which aims to extract semantic relations between entities.", "For example, sentence [ Barack Obama ] e 1 was born in [ Hawaii ] e 2 expresses the relation BornIn between entity pair Barack Obama and Hawaii .", "Conventional relation extraction methods, such as (Zelenko et al., 2002; Culotta and Sorensen, 2004; Mooney and Bunescu, 2006), adopted supervised training and suffered from the lack of 1 The code is available at https://github.com/ZhixiuYe/Intra-Bag-and-Inter-Bag-Attentions .", "large-scale manually labeled data.", "To address this issue, the distant supervision method (Mintz et al., 2009) was proposed, which generated the data for training relation extraction models automatically.", "The distant supervision assumption says that if two entities participate in a relation, all sentences that mention these two entities express that relation.", "It is inevitable that there exists noise in the data labeled by distant supervision.", "For example, the precision of aligning the relations in Freebase to the New York Times corpus was only about 70% (Riedel et al., 2010).", "Thus, the relation extraction method proposed in (Riedel et al., 2010) argued that the distant supervision assumption was too strong and relaxed it to expressed-at-least-once assumption.", "This assumption says that if two entities participate in a relation, at least one sentence that mentions these two entities might express that relation.", "An example is shown by sentences S1 and S2 in Table", "1. 
"This relation extraction method first divided the training data given by distant supervision into bags, where each bag was a set of sentences containing the same entity pair.", "Then, bag representations were derived by weighting sentences within each bag.", "It was expected that the weights of the sentences with incorrect labels were reduced and the bag representations were calculated mainly using the sentences with correct labels.", "Finally, bags were utilized as the samples for training relation extraction models instead of sentences.", "In recent years, many relation extraction methods using neural networks with attention mechanisms (Lin et al., 2016; Ji et al., 2017; Jat et al., 2018) have been proposed to alleviate the influence of noisy training data under the expressed-at-least-once assumption.", "However, these methods still have two deficiencies.", "First, only the target relation of each bag is used to calculate the attention weights for deriving bag representations from sentence embeddings at the training stage.", "Here we argue that the bag representations should be calculated in a relation-aware way.", "For example, the bag B1 in Table 1 contains two sentences S1 and S2.", "When this bag is classified into relation BornIn, the sentence S1 should have a higher weight than S2, but when classified into relation PresidentOf, the weight of S2 should be higher.", "Second, the expressed-at-least-once assumption ignores the noisy bag problem, which means that all sentences in one bag are incorrectly labeled.", "An example is shown by bag B2 in Table 1.", "In order to deal with these two deficiencies of previous methods, this paper proposes a neural network with multi-level attentions for distant supervision relation extraction.", "At the instance/sentence level, i.e., the intra-bag level, all possible relations are employed as queries to calculate the relation-aware bag representations instead of only using the target relation of each bag.", "To address the noisy bag problem, a bag group is adopted as a training sample instead of a single bag.", "Here, a bag group is composed of bags in the training set which share the same relation label.", "The representation of a bag group is calculated by weighting bag representations using a similarity-based inter-bag attention module.", "The contributions of this paper are threefold.", "First, an improved intra-bag attention mechanism is proposed to derive relation-aware bag representations for relation extraction.", "Second, an inter-bag attention module is introduced to deal with the noisy bag problem which is ignored by the expressed-at-least-once assumption.", "Third, our methods achieve better extraction accuracy than state-of-the-art models on the widely used New York Times (NYT) dataset (Riedel et al., 2010).", "Some previous work (Zelenko et al., 2002; Mooney and Bunescu, 2006) treated relation extraction as a supervised learning task and designed hand-crafted features to train kernel-based models.", "Due to the lack of large-scale manually labeled data for supervised training, the distant supervision approach (Mintz et al., 2009) was proposed, which aligned raw texts toward knowledge bases automatically to generate relation labels for entity pairs.", "However, this approach suffered from the issue of noisy labels.", "Therefore, some subsequent studies (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) considered distant supervision relation extraction as a multi-instance learning problem, which extracted relations from a bag of
sentences instead of a single sentence.", "With the development of deep learning techniques (LeCun et al., 2015), many neural-network-based models have been developed for distant supervision relation extraction.", "Zeng et al. (2015) proposed piecewise convolutional neural networks (PCNNs) to model sentence representations and chose the most reliable sentence as the bag representation.", "Lin et al. (2016) employed PCNNs as sentence encoders and proposed an intra-bag attention mechanism to compute the bag representation via a weighted sum of all sentence representations in the bag.", "Ji et al. (2017) adopted a similar attention strategy and combined entity descriptions to calculate the weights.", "Liu et al. (2017) proposed a soft-label method to reduce the influence of noisy instances.", "All these methods represented a bag with a weighted sum of sentence embeddings, and calculated the probability of the bag being classified into each relation using the same bag representation at the training stage.", "In our proposed method, intra-bag attentions are computed in a relation-aware way, which means that different bag representations are utilized to calculate the probabilities for different relation types.", "Besides, these existing methods focused on intra-bag attentions and ignored the noisy bag problem.", "Some data filtering strategies for robust distant supervision relation extraction have also been proposed.", "Feng et al. (2018) and Qin et al. (2018b) both employed reinforcement learning to train an instance selector and to filter out the samples with wrong labels.", "Their rewards were calculated from the prediction probabilities and the performance change of the relation classifier respectively.", "(Figure 1: The framework of our proposed neural network with intra-bag and inter-bag attentions for relation extraction.)", "Qin et al. (2018a) designed an adversarial learning process to build a sentence-level generator via policy-gradient-based reinforcement learning.", "These methods were proposed to filter out the noisy data at the sentence level and also failed to deal with the noisy bag problem explicitly.", "In this section, we introduce a neural network with intra-bag and inter-bag attentions for distant supervision relation extraction.", "Let $g = \{b_1, b_2, \ldots, b_n\}$ denote a group of bags which have the same relation label given by distant supervision, where $n$ is the number of bags within this group.", "Let $b_i = \{x^i_1, x^i_2, \ldots, x^i_{m_i}\}$ denote all sentences in bag $b_i$, where $m_i$ is the number of sentences in bag $b_i$.", "Let $x^i_j = \{w^i_{j1}, w^i_{j2}, \ldots, w^i_{jl_{ij}}\}$ denote the $j$-th sentence in the $i$-th bag, where $l_{ij}$ is its length (i.e., number of words).",
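"As a rough illustration of this grouping (our sketch, not code from the paper), bags and bag groups can be organized as simple Python structures; the names are assumptions:",
```python
# Minimal data structures for multi-instance learning with bag groups.
# A "bag" holds all sentences mentioning one entity pair; a "group"
# holds n bags that share the same distant-supervision relation label.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Bag:
    entity_pair: Tuple[str, str]   # (head entity, tail entity)
    sentences: List[str]           # the m_i sentences x^i_1 ... x^i_{m_i}
    relation: str                  # label given by distant supervision

@dataclass
class BagGroup:
    bags: List[Bag]                # n bags, all with the same relation label

    def __post_init__(self):
        labels = {b.relation for b in self.bags}
        assert len(labels) == 1, "all bags in a group must share one label"

# Example: a group of two bags labeled BornIn (toy data).
g = BagGroup(bags=[
    Bag(("Barack Obama", "Hawaii"), ["Barack Obama was born in Hawaii."], "BornIn"),
    Bag(("A. Einstein", "Ulm"), ["Einstein was born in Ulm.", "Einstein visited Ulm."], "BornIn"),
])
```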
"The framework of our model is shown in Fig. 1, which has three main modules.", "Sentence Encoder: Given a sentence $x^i_j$ and the positions of the two entities within this sentence, CNNs or PCNNs (Zeng et al., 2015) are adopted to derive the sentence representation $s^i_j$.", "Intra-Bag Attention: Given the sentence representations of all sentences within a bag $b_i$ and a relation embedding matrix $R$, attention weight vectors $\alpha^i_k$ and bag representations $b^i_k$ are calculated for all relations, where $k$ is the relation index.", "Inter-Bag Attention: Given the representations of all bags within the group $g$, a weight matrix is further calculated via a similarity-based attention mechanism to obtain the representation of the bag group.", "Each word $w^i_{jk}$ within the sentence $x^i_j$ is first mapped into a $d_w$-dimensional word embedding $e^i_{jk}$.", "To describe the position information of the two entities, the position features (PFs) proposed in (Zeng et al., 2014) are also adopted in our work.", "For each word, the PFs describe the relative distances between the current word and the two entities and are further mapped into two vectors $p^i_{jk}$ and $q^i_{jk}$ of $d_p$ dimensions.", "Finally, these three vectors are concatenated to get the word representation $w^i_{jk} = [e^i_{jk}; p^i_{jk}; q^i_{jk}]$ of $d_w + 2d_p$ dimensions.", "For sentence $x^i_j$, the matrix of word representations $W^i_j \in \mathbb{R}^{l_{ij} \times (d_w + 2d_p)}$ is first input into a CNN with $d_c$ filters.", "Then, piecewise max pooling (Zeng et al., 2015) is employed to extract features from the three segments of the CNN outputs, where the segment boundaries are determined by the positions of the two entities.", "Let $S^i \in \mathbb{R}^{m_i \times 3d_c}$ represent the representations of all sentences within bag $b_i$, and let $R \in \mathbb{R}^{h \times 3d_c}$ denote a relation embedding matrix, where $h$ is the number of relations.", "Different from conventional methods (Lin et al., 2016; Ji et al., 2017), where a unified bag representation was derived for relation classification, our method calculates bag representations $b^i_k$ for bag $b_i$ conditioned on all possible relations as $b^i_k = \sum_{j=1}^{m_i} \alpha^i_{kj} s^i_j$, (1) where $k \in \{1, 2, \ldots, h\}$ is the relation index and $\alpha^i_{kj}$ is the attention weight between the $k$-th relation and the $j$-th sentence in bag $b_i$.", "$\alpha^i_{kj}$ can be further defined as $\alpha^i_{kj} = \exp(e^i_{kj}) / \sum_{j'=1}^{m_i} \exp(e^i_{kj'})$, (2) where $e^i_{kj}$ is the matching degree between the $k$-th relation query and the $j$-th sentence in bag $b_i$.", "In our implementation, a simple dot product between vectors is adopted to calculate the matching degree as $e^i_{kj} = r_k s^{i\top}_j$, (3) where $r_k$ is the $k$-th row of the relation embedding matrix $R$.", "(We also tried $e^i_{kj} = r_k A s^{i\top}_j$, where $A$ was a diagonal matrix, in experiments and achieved similar performance.)", "Finally, the representations of bag $b_i$ compose the matrix $B^i \in \mathbb{R}^{h \times 3d_c}$ in Fig. 1, where each row corresponds to a possible relation type of this bag.",
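"The relation-aware intra-bag attention of Eqs. (1)-(3) can be sketched as follows (our illustration in PyTorch, not the authors' released code; shapes follow the notation above):",
```python
# Relation-aware intra-bag attention, Eqs. (1)-(3): one bag representation
# per relation, instead of a single representation queried by one relation.
import torch
import torch.nn.functional as F

def intra_bag_attention(S: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """
    S: sentence representations of one bag, shape (m_i, 3*d_c)
    R: relation embedding matrix,            shape (h, 3*d_c)
    returns B^i: relation-aware bag representations, shape (h, 3*d_c)
    """
    e = R @ S.T                    # Eq. (3): matching degrees, shape (h, m_i)
    alpha = F.softmax(e, dim=1)    # Eq. (2): per-relation weights over sentences
    B = alpha @ S                  # Eq. (1): weighted sums, one row per relation
    return B

# Toy usage: a bag of 4 sentences, 6 relations, 3*d_c = 8.
B = intra_bag_attention(torch.randn(4, 8), torch.randn(6, 8))
print(B.shape)  # torch.Size([6, 8])
```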
"In order to deal with the noisy bag problem, a similarity-based inter-bag attention module is designed to reduce the weights of noisy bags dynamically.", "Intuitively, if two bags $b_{i_1}$ and $b_{i_2}$ are both labeled as relation $k$, their representations $b^{i_1}_k$ and $b^{i_2}_k$ should be close to each other.", "Given a group of bags with the same relation label, we assign higher weights to those bags which are close to the other bags in this group.", "As a result, the representation of the bag group $g$ can be formulated as $g_k = \sum_{i=1}^{n} \beta_{ik} b^i_k$, (4) where $g_k$ is the $k$-th row of the matrix $G \in \mathbb{R}^{h \times 3d_c}$ in Fig. 1, $k$ is the relation index, and the weights $\beta_{ik}$ compose the attention weight matrix $\beta \in \mathbb{R}^{n \times h}$.", "Each $\beta_{ik}$ is defined as $\beta_{ik} = \exp(\gamma_{ik}) / \sum_{i'=1}^{n} \exp(\gamma_{i'k})$, (5) where $\gamma_{ik}$ describes the confidence of labeling bag $b_i$ with the $k$-th relation.", "Inspired by the self-attention algorithm (Vaswani et al., 2017), which calculates the attention weights for a group of vectors using the vectors themselves, we calculate the weights of bags according to their own representations.", "Mathematically, $\gamma_{ik}$ is defined as $\gamma_{ik} = \sum_{i'=1,\ldots,n,\; i' \neq i} \mathrm{similarity}(b^i_k, b^{i'}_k)$, (6) where the function similarity is a simple dot product in our implementation, i.e., $\mathrm{similarity}(b^i_k, b^{i'}_k) = b^i_k b^{i'\top}_k$. (7)", "Also, in order to prevent the influence of vector length, all bag representations $b^i_k$ are normalized to unit length as $b^i_k = b^i_k / \|b^i_k\|_2$ before calculating Eqs. (4)-(7).", "Then, the score $o_k$ of classifying bag group $g$ into relation $k$ is calculated via $g_k$ and the relation embedding $r_k$ as $o_k = r_k g^\top_k + d_k$, (8) where $d_k$ is a bias term.", "It should be noticed that the same relation embedding matrix $R$ is used for calculating Eq. (3) and Eq. (8).", "Similar to Lin et al. (2016), the dropout strategy (Srivastava et al., 2014) is applied to the bag representation $B^i$ to prevent overfitting.", "First of all, all sentences in the training set that contain the same two entities are accumulated into one bag.", "Then, we tie up every $n$ bags that share the same relation label into a group.", "It should be noticed that a bag group is one training sample in our method.", "Therefore, the model can also be trained in mini-batch mode by packing multiple bag groups into one batch.", "In our implementation, the objective function is defined as the negative log-likelihood of the training set, $J(\theta) = -\sum_{g \in T} \log p(k_g \mid g; \theta)$, where $T$ is the set of all training samples, $k_g$ is the relation label of group $g$, and $\theta$ is the set of model parameters, including the word embedding matrix, the position feature embedding matrix, the CNN weight matrix and the relation embedding matrix.", "The model parameters are estimated by minimizing the objective function $J(\theta)$ through mini-batch stochastic gradient descent (SGD).", "As introduced above, at the training phase of our proposed method, $n$ bags which have the same relation label are accumulated into one bag group and the weighted sum of bag representations is calculated to obtain the representation $G$ of the bag group.", "Due to the fact that the label of each bag is unknown at the test stage, each single bag is treated as a bag group (i.e., $n = 1$) when processing the test set.", "Also, similar to (Qin et al., 2018b), we only apply inter-bag attentions to positive samples, i.e., the bags whose relation label is not NA (NoRelation).", "The reason is that the representations of the bags that express no relations are always diverse and it is difficult to calculate suitable weights for them.", "In our implementation, a pre-training strategy is adopted.", "We first train the model with only intra-bag attentions until convergence.", "Then, the inter-bag attention module is added and the model parameters are further updated until convergence again.",
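"A corresponding sketch of the inter-bag attention in Eqs. (4)-(8) (again our illustration, with the same assumed shapes):",
```python
# Similarity-based inter-bag attention, Eqs. (4)-(8): weight each bag by its
# summed dot-product similarity to the other bags in the group.
import torch
import torch.nn.functional as F

def inter_bag_attention(B: torch.Tensor, R: torch.Tensor, d: torch.Tensor):
    """
    B: per-bag, per-relation representations, shape (n, h, 3*d_c)
    R: relation embedding matrix,             shape (h, 3*d_c)
    d: bias terms,                            shape (h,)
    returns o: scores for the bag group, shape (h,)
    """
    B = F.normalize(B, dim=-1)                    # unit length before Eqs. (4)-(7)
    sim = torch.einsum('ikd,jkd->ijk', B, B)      # pairwise similarities, (n, n, h)
    n = B.size(0)
    sim = sim - torch.eye(n).unsqueeze(-1) * sim  # zero out the i == i' terms
    gamma = sim.sum(dim=1)                        # Eq. (6), shape (n, h)
    beta = F.softmax(gamma, dim=0)                # Eq. (5), weights over bags
    G = torch.einsum('ik,ikd->kd', beta, B)       # Eq. (4), shape (h, 3*d_c)
    o = (R * G).sum(dim=-1) + d                   # Eq. (8), one score per relation
    return o

# Toy usage: a group of 5 bags, 6 relations, 3*d_c = 8.
o = inter_bag_attention(torch.randn(5, 6, 8), torch.randn(6, 8), torch.zeros(6))
print(o.shape)  # torch.Size([6])
```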
strategy can lead to better model performance than considering inter-bag attentions from the very beginning.", "The New York Times (NYT) dataset was adopted in our experiments.", "This dataset was first released by (Riedel et al., 2010) and has been widely used by previous research on distant supervision relation extraction (Liu et al., 2017; Jat et al., 2018; Qin et al., 2018a,b).", "This dataset was generated by aligning Freebase with the New York Times (NYT) corpus automatically.", "There were 52 actual relations and a special relation NA which indicated there was no relation between two entities.", "Following previous studies (Mintz et al., 2009; Liu et al., 2017), we evaluated our models on the held-out test set of the NYT dataset.", "Precision-recall (PR) curves, area under curve (AUC) values and Precision@N (P@N) values (Lin et al., 2016) were adopted as evaluation metrics in our experiments.", "All of the numerical results given by our experiments were the mean values of 10 repetitive trainings, and the PR curves were randomly selected from the repetitions because there was no significant visual difference among them.", "All of the hyperparameters used in our experiments are listed in Table", "2. Most of them followed the hyperparameter settings in (Lin et al., 2016).", "The 50-dimensional word embeddings released by (Lin et al., 2016) 3 were also adopted for initialization.", "The vocabulary contained the words which appeared more than 100 times in the NYT corpus.", "3 https://github.com/thunlp/NRE .", "Two different batch sizes N p and N t were used for pre-training and training respectively.", "In our experiments, a grid search is employed using training set to determine the optimal values of n , N p and N t among n { 3 , 4 , ..., 10 } , N p { 10 , 20 , 50 , 100 , 200 } and N t { 5 , 10 , 20 , 50 } .", "Note that increasing the bag group size n may boost the effect of inter-bag attentions but lead to less training samples.", "The effects of inter-bag attentions would be lost when n =1.", "For optimization, we employed mini-batch SGD with the initial learning rate of 0.1.", "The learning rate was decayed to one tenth every 100,000 steps.", "The pre-trained model with only intra-bag attentions converged within 300,000 steps in our experiments.", "Thus, the initial learning rate for training the model with inter-bag attentions was set as 0.001.", "Eight models were implemented for comparison.", "The names of these models are listed in Table 3, where CNN and PCNN denote using CNNs or piecewise CNNs in sentence encoders respectively, ATT BL means the baseline intra-bag attention method proposed by (Lin et al., 2016), ATT RA means our proposed relation-aware intra-bag attention method, and BAG ATT means our proposed inter-bag attention method.", "At the training stage of the ATT BL method, the relation query vector for attention weight calculation was fixed as the embedding vector associated with the distant supervision label for each bag.", "At the test stage, all relation query vectors were applied to calculate the posterior probabilities of relations respectively and the relation with the highest prob-0.0 0.1 0.2 0.3 0.4 0.5 Recall 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n PCNN+ATT_BL PCNN+ATT_BL+BAG_ATT PCNN+ATT_RA PCNN+ATT_RA+BAG_ATT Figure 3: PR curves of different models using PCNN sentence encoders.", "ability was chosen as the classification result (Lin et al., 2016).", "The means and standard deviations of the AUC values given by the whole PR curves of these models are 
"Following (Lin et al., 2016), we also plotted the PR curves of these models in Fig. 2 and 3 with recall smaller than 0.5 for a visualized comparison.", "From Table 3, Fig. 2 and Fig. 3, we have the following observations.", "(1) Similar to the results of previous work (Zeng et al., 2015), PCNNs worked better than CNNs as sentence encoders.", "(2) When using either CNN or PCNN sentence encoders, ATT_RA outperformed ATT_BL.", "This can be attributed to the fact that the ATT_BL method only considered the target relation when deriving bag representations at training time, while the ATT_RA method calculated intra-bag attention weights using all relation embeddings as queries, which improved the flexibility of bag representations.", "(3) For both sentence encoders and both intra-bag attention methods, the models with BAG_ATT always achieved better performances than the ones without BAG_ATT.", "This result verified the effectiveness of our proposed inter-bag attention method for distant supervision relation extraction.", "(4) The best AUC performance was achieved by combining PCNN sentence encoders with the intra-bag and inter-bag attentions proposed in this paper.",
"The PR curves of several models in previous work and our best model PCNN+ATT_RA+BAG_ATT are compared in Fig. 4, where Mintz (Mintz et al., 2009), MultiR (Hoffmann et al., 2011) and MIMLRE (Surdeanu et al., 2012) are conventional feature-based methods, and (Lin et al., 2016) and (Liu et al., 2017) are PCNN-based ones.", "For a fair comparison with (Lin et al., 2016) and (Liu et al., 2017), we also plotted the curves with only the top 2000 points.", "We can see that our model achieved better PR performance than all the other models.", "ATT_BL+DSGAN (Qin et al., 2018a) and ATT_BL+RL (Qin et al., 2018b) are two recent studies on distant supervision relation extraction with reinforcement learning for data filtering, which reported the AUC values of PR curves composed of the top 2000 points.", "Table 5 compares the AUC values reported in these two papers and the results of our proposed models.", "We can see that introducing the proposed ATT_RA and BAG_ATT methods to baseline models achieved larger improvements than using the methods proposed in (Qin et al., 2018a,b).", "Following (Lin et al., 2016), we evaluated our models on the entity pairs with more than one training sentence.", "One, two and all sentences for each test entity pair were randomly selected to construct three new test sets.", "The P@100, P@200, P@300 values and their means given by our proposed models on these three test sets are reported in Table 4, together with the best results of (Lin et al., 2016) and (Liu et al., 2017).",

Table 4: P@N values (%) of the entity pairs with different numbers of test sentences (one, two, or all sentences per entity pair).

    Method                | one: 100/200/300/mean | two: 100/200/300/mean | all: 100/200/300/mean
    (Lin et al., 2016)    | 73.3/69.2/60.8/67.8   | 77.2/71.6/66.1/71.6   | 76.2/73.1/67.4/72.2
    (Liu et al., 2017)    | 84.0/75.5/68.3/75.9   | 86.0/77.0/73.3/78.8   | 87.0/84.5/77.0/82.8
    CNN+ATT_BL            | 74.2/68.9/65.3/69.5   | 77.8/71.5/68.1/72.5   | 79.2/74.9/70.3/74.8
    CNN+ATT_RA            | 76.8/72.7/67.9/72.5   | 79.6/73.9/70.7/74.7   | 81.4/76.3/72.5/76.8
    CNN+ATT_BL+BAG_ATT    | 78.6/74.2/69.7/74.2   | 82.4/76.2/72.1/76.9   | 83.0/78.0/74.0/78.3
    CNN+ATT_RA+BAG_ATT    | 79.8/75.3/71.0/75.4   | 83.2/76.5/72.1/77.3   | 87.2/78.7/74.9/80.3
    PCNN+ATT_BL           | 78.6/73.5/68.1/73.4   | 77.8/75.1/70.3/74.4   | 80.8/77.5/72.3/76.9
    PCNN+ATT_RA           | 79.4/73.9/69.6/74.3   | 82.2/77.6/72.4/77.4   | 84.2/79.9/73.0/79.0
    PCNN+ATT_BL+BAG_ATT   | 85.2/78.2/71.3/78.2   | 84.8/80.0/74.3/79.7   | 88.8/83.7/77.4/83.9
    PCNN+ATT_RA+BAG_ATT   | 86.8/77.6/73.9/79.4   | 91.2/79.2/75.4/81.9   | 91.8/84.0/78.7/84.8

"Here, P@N means the precision of the relation classification results with the top N highest probabilities in the test set.", "We can see our proposed methods achieved higher P@N values than previous work.", "Furthermore, regardless of whether PCNN or BAG_ATT was adopted, the ATT_RA method outperformed the ATT_BL method on the test set with only one sentence for each entity pair.", "Note that the decoding procedures of ATT_BL and ATT_RA were equivalent when there was only one sentence in a bag.", "Therefore, the improvements from ATT_BL to ATT_RA can be attributed to the fact that ATT_RA calculated intra-bag attention weights in a relation-aware way at the training stage.", "We divided the training set into 5 parts according to the number of sentences in each bag.", "For each bag, the inter-bag attention weights given by the PCNN+ATT_RA+BAG_ATT model were recorded.", "Then, the mean and standard deviation of inter-bag attention weights for each part of the training set were calculated and are shown in Table 7.", "From this table, we can see that the bags with a smaller number of training sentences were usually assigned lower inter-bag attention weights.", "This result was consistent with the finding in (Qin et al., 2018b) that the entity pairs with fewer training sentences were more likely to have incorrect relation labels.", "A test set example of relation /location/location/contains is shown in Table 6.", "The bag group contained 3 bags, which consisted of 2, 1, and 2 sentences respectively.", "We calculated the intra-bag and inter-bag attentions for this bag group using our PCNN+ATT_RA+BAG_ATT model, and the weights of the target relation are also shown in Table 6.", "In this example, the second bag was a noisy bag because the only sentence in this bag didn't express the relation /location/location/contains between the two entities Naugatuck and Connecticut.", "In conventional methods, these three bags were treated equally for model training.", "After introducing the inter-bag attention mechanism, the weight of this noisy bag was reduced significantly, as shown in the last column of Table 6.",
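"The P@N metric used above can be computed as in this small sketch (our illustration, with made-up predictions):",
```python
# P@N: precision among the N predictions with the highest confidence.
def precision_at_n(predictions, n):
    """predictions: list of (score, is_correct) pairs for the test set."""
    top = sorted(predictions, key=lambda p: p[0], reverse=True)[:n]
    return sum(1 for _, correct in top if correct) / len(top)

# Toy usage with 300 fake scored predictions.
import random
random.seed(0)
preds = [(random.random(), random.random() < 0.8) for _ in range(300)]
for n in (100, 200, 300):
    print("P@%d = %.3f" % (n, precision_at_n(preds, n)))
```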
"In this paper, we have proposed a neural network with intra-bag and inter-bag attentions to cope with the noisy sentence and noisy bag problems in distant supervision relation extraction.", "First, relation-aware bag representations are calculated by a weighted sum of sentence embeddings, where the noisy sentences are expected to have smaller weights.", "Further, an inter-bag attention module is designed to deal with the noisy bag problem by calculating the bag-level attention weights dynamically during model training.", "Experimental results on the New York Times dataset show that our models achieved significant and consistent improvements compared with the models using only conventional intra-bag attentions.", "Dealing with the multi-label problem of relation extraction and integrating external knowledge into our model will be the tasks of our future work.", "We thank the anonymous reviewers for their valuable comments." ]
[ "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "other" ]
[ "Shomir Wilson", "Abstract Organisations disclose their privacy practices by posting privacy policies on their websites.", "Even though internet users often care about their digital privacy, they usually do not read privacy policies, since understanding them requires a significant investment of time and effort.", "Natural language processing has been used to create experimental tools to interpret privacy policies, but there has been a lack of large privacy policy corpora to facilitate the creation of large-scale semi-supervised and unsupervised models to interpret and simplify privacy policies.", "Thus, we present the PrivaSeer Corpus of 1,005,380 English language website privacy policies collected from the web.", "The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies, and it surpasses the aggregate of unique websites represented in all other publicly available privacy policy corpora combined.", "We describe a corpus creation pipeline with stages that include a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction.", "We employ an unsupervised topic modelling approach to investigate the contents of policy documents in the corpus and discuss the distribution of topics in privacy policies at web scale.", "We further investigate the relationship between privacy policy domain PageRanks and text features of the privacy policies.", "Finally, we use the corpus to pretrain PrivBERT, a transformer-based privacy policy language model, and obtain state of the art results on the data practice classification and question answering tasks.", "A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users' personal information.", "Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) place specific expectations upon privacy policies.", "However, although many internet users have concerns about their privacy (Madden, 2017), most fail to understand privacy policies (Meiselwitz, 2013).", "Studies show that privacy policies require a considerable investment in time to read (Obar and Oeldorf-Hirsch, 2018) and estimate that it would require approximately 200 hours to read all the privacy policies that an average person would come across every year (McDonald and Cranor, 2008).", "Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy.", "Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies (Wilson et al., 2016; Zimmeck et al., 2019; Ramanath et al., 2014), but issues of accuracy, scalability and generalization remain.", "More importantly, annotations in the privacy policy domain are expensive.", "Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset.", "In contrast, 
"In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored.", "Modern robust language models, such as transformer-based architectures, benefit from increasingly large training sets.", "These models can be used on downstream tasks (Devlin et al., 2019) to improve performance.", "Results have shown that in-domain fine-tuning of such pre-trained language models has produced a significant boost in performance on many tasks (Gururangan et al., 2020) in a variety of domains, suggesting a need for a larger collection of privacy policies to enable similar results in the privacy domain.", "To satisfy the need for a much larger corpus of privacy policies, we introduce the PrivaSeer Corpus of 1,005,380 English language website privacy policies.", "The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies (Amos et al., 2020), and it surpasses the aggregate of unique websites represented in all other publicly available web privacy policy corpora combined.", "We describe the corpus creation pipeline, with stages including a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction.", "We then analyse the lengths and top level domain distribution of the privacy policies in the corpus and use topic modelling to explore the component topics.", "Subsequently, we pretrain PrivBERT, a transformer-based language model, using the corpus and evaluate it on data practice classification and question answering tasks.", "We release the corpus, a search engine for the corpus (Srinath et al., 2021), the document collection pipeline, and a language model to support further research in the privacy domain; all artifacts are available at https://privaseer.", "Related Work.", "Prior collections of privacy policy corpora have led to progress in privacy research.", "Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories.", "The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018).", "Other corpora similar to the OPP-115 Corpus have enabled research on privacy practices.", "The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019).", "Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection.", "Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies.", "Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store.", "Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape.",
"Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.", "Prior work in privacy and human-computer interaction establishes the motivation for studying these documents.", "Although most internet users are concerned about privacy (Madden, 2017), Rudolph et al. (2018) reports that a significant number do not make the effort to read privacy notices because they perceive them to be too time-consuming or too complicated (Obar and Oeldorf-Hirsch, 2018).", "Responding to the opaqueness of these documents, Schaub et al. (2015) introduced methods to ease the design of privacy notices and their integration, and Kelley et al. (2010) designed and tested a \"privacy nutrition label\" approach to present privacy information visually.", "Suggestions to improve the presentation of privacy information have not been adopted by many organisations.", "Apple has begun displaying privacy labels in its app stores, having collected the information from app developers; however, concise privacy information for websites remains an open problem.", "To build the PrivaSeer corpus, we create a pipeline concentrating on focused crawling (Chakrabarti et al., 1999; Diligenti et al., 2000) of privacy policy documents.", "We used Common Crawl (https://commoncrawl.org/), described below, to gather seed URLs to privacy policies on the web.", "We filtered the Common Crawl URLs to gather a set of possible links to web site privacy policies.", "We then crawled the filtered set to obtain candidate privacy policy documents.", "The complete pipeline from the Common Crawl URL dump to the gold standard privacy policy corpus is shown in Figure 1.", "(Figure 1: Corpus creation pipeline.)", "The Common Crawl Foundation has been releasing large monthly internet web crawls along with their web graphs since 2008.", "Monthly crawl archives provide a snapshot of the web by including re-crawls of popular domains and crawls of new domains.", "We downloaded the URL dump of the May 2019 archive (https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-22/cc-index.paths.gz).", "Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content, which were crawled between the 19th and 27th of May, 2019.", "We applied a selection criterion to the downloaded URL dump to filter the URLs of likely privacy policy pages.", "Due to legal requirements, organizations typically include a link to their privacy policy in the footer of the website landing page, commonly with the names Privacy Policy, Privacy Notice, and Data Protection.", "We selected those URLs which had the word \"privacy\" or the words \"data\" and \"protection\" from the Common Crawl URL archive.", "We were able to extract 3.9 million URLs that fit this selection criterion.",
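"As a minimal sketch of this selection criterion (our illustration; the real pipeline operates on Common Crawl's URL index):",
```python
# URL selection: keep URLs whose path mentions "privacy", or both "data"
# and "protection", as likely links to privacy policy pages.
from urllib.parse import urlparse

def is_candidate_policy_url(url: str) -> bool:
    path = urlparse(url).path.lower()
    return "privacy" in path or ("data" in path and "protection" in path)

urls = [
    "https://example.com/privacy-policy",
    "https://example.org/legal/data-protection",
    "https://example.net/about-us",
]
print([u for u in urls if is_candidate_policy_url(u)])
# ['https://example.com/privacy-policy', 'https://example.org/legal/data-protection']
```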
landing page and 5 other websites did not satisfy our URL selection criteria.", "Thus, our crawling technique would cover about 3 https://commoncrawl.s3.amazonaws.com/ crawl-data/CC-MAIN-2019-22/cc-index.paths.gz 92.17% 6.51% of English privacy policies on the web with a 95% confidence interval.", "We crawled the 3.9 million selected URLs using Scrapy 4 for about 48 hours between the 4th and 10th of August 2019, for a few hours each day.", "3.2 million URLs were successfully crawled, henceforth referred to as candidate privacy policies, while 0.4 million led to error pages and 0.3 million URLs were discarded as duplicates.", "Language Detection.", "We focused on privacy policies written in the English language, to enable comparisons with prior corpora of privacy policies.", "To identify the natural language of each candidate document, we used the open-source Python package Langid (Lui and Baldwin, 2012).", "Langid is a Naive Bayes-based classifier pretrained on 97 different languages, designed to achieve consistently high accuracy over a wide range of languages, domains, and lengths of text.", "The complete set of documents was divided into 97 languages and an unknown language category.", "We found that the vast majority of documents were in English.", "We set aside candidate documents that were not identified as English by Langid and were left with 2.1 million candidates.", "Content Extraction.", "Manual inspection of the English language web pages showed that they included content other than the main text: often they had a header, a footer, a navigation menu, and banners.", "We refer to this extra content in a web page as boilerplate .", "Boilerplate draws away from the focus of the main content in a web page and therefore various techniques have been used to remove boilerplate from web pages (Gottron, 2007; Weninger et al., 2016).", "After manual comparison of a number of content extraction tools, we used the open-source Python package boilerpipe (Kohlschutter 4 https://scrapy.org/ et al., 2010) due to its superior performance.", "Boilerpipe effectively strips web pages of boilerplate using shallow text features, structural features and density based features.", "Document Classification.", "Some of the web pages in the English language candidate document set may not have been privacy policies and instead simply satisfied our URL selection criteria.", "To separate privacy policies from other web documents we used a supervised machine learning approach.", "Two researchers in the team labeled 1,600 randomly selected candidate documents based on a preset scheme in consultation with a privacy expert.", "While both the researchers had substantial prior experience with privacy policies, the privacy expert was consulted to eliminate uncertainty in the annotations of a few documents.", "Lack of agreement in the annotations occurred for six documents, which were settled by discussion with the expert.", "Out of 1,600 documents, 1,145 were privacy policies and 455 were not privacy policies.", "We trained four supervised machine learning models using the manually labelled documents with features extracted from the URLs and the words in the web page.", "We trained three random forest models and fine-tuned a transformer based pretrained language model, namely RoBERTa (Liu et al., 2019).", "The three random forest models were trained on three different sets of features: one using the features extracted from the URL, one using the features extracted from the document content, and a combined model using 
features from both.", "For the URL model, the words in the URL path were extracted and the tf-idf of each term was recorded to create the features (Baykan et al., 2009).", "As privacy policy URLs tend to be shorter and have fewer path segments than typical URLs, length and the number of path segments were added as features.", "Since the classes were unbalanced, we over-sampled from the minority class using the synthetic minority over-sampling technique (SMOTE) (Chawla et al., 2002).", "Similarly, for the document model, we used tf-idf features after tokenizing the document using a regex tokenizer and removing stop words.", "The combined model was a combination of the URL and document features.", "To train the RoBERTa model on the privacy policy classification task, we used the sequence classification head of the pretrained language model from HuggingFace (Wolf et al., 2019).", "We used the pretrained RoBERTa tokenizer to tokenize text extracted from the documents.", "Since Roberta accepts a maximum of 512 tokens as input, only the first 512 tokens of text from the documents were used for training while the rest was discarded.", "As shown in the analysis section, the average length of a privacy policy in terms of the number of words is 1,871.", "Thus 512 tokens would take into account about a fourth of an average privacy policy.", "The 1,600 labelled documents were randomly divided into 960 documents for training, 240 documents for validation and 400 documents for testing.", "Using 5-fold cross-validation, we tuned the hyperparameters for the models separately with the validation set and then used the held-out test set to report the test results.", "Due to its size, it was possible for the held out test set to have a biased sample.", "Thus we repeated the sampling and training processes with a 5-fold cross-validation approach.", "Table 1 shows performance of the models after the results from test sets were averaged.", "Since the transformer based model had the best results, we ran it on all the the candidate privacy policies.", "Out of 2.1 million English candidate privacy polices, 1.54 million were classified as privacy policies and the rest were discarded.", "URL Cross Verification.", "Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users.", "As a result, most organisations include a link to their privacy policy in the footer of their website landing page.", "In order to focus PrivaSeer Corpus on privacy policies that users are intended to read, we cross-verified the URLs of the privacy policies in our corpus with those that we obtained by crawling the homepages (landing page) of these domains.", "Between the 8th and 10th November 2019, we crawled the landing pages and pages one hop from the landing pages for all the domains of the URLs in our corpus.", "We then gathered the URLs satisfying our selection criteria and cross-verified them with the URLs in our existing corpus.", "After cross-verifying the URLs, we were left with a set of 1.1 million web pages.", "Duplicate and Near-Duplicate Detection.", "Examination of the corpus revealed that it contained many duplicate and near-duplicate documents.", "We removed exact duplicates by hashing all the raw documents and discarding multiple copies of exact hashes.", "Through manual inspection, we found that a number of privacy policies from different domains had very similar wording, differing only by the organisation or website name.", "We reason that this similarity could be 
due to the use of privacy policy templates or generators.", "We also found abundant examples of near-duplicate privacy policies on the same website.", "We reason that this similarity could be due to the presence of archived versions of privacy policies on the website.", "Since we aimed to collect a comprehensive corpus of contemporary policies, we only removed similar policies (near-duplicates) from same domain domains.", "To remove near-duplicates from within the same domain we used Simhashing (Charikar, 2002).", "Simhashing is a hashing technique in which similar inputs produce similar hashes.", "After creating shingles (Broder et al., 1997) of size three, we created 64 bit document Simhashes and measured document similarity by calculating the Hamming distance (Manku et al., 2007) between document Simhashes of privacy policies within the same domain.", "We then obtained a list of all pairs of similar documents based on a distance threshold (measured based on the number of differing bits) that was determined after manual examination of a number of pairs of privacy policies.", "We then filtered the duplicates based on a greedy approach retaining policies that were longer in length.", "The remaining documents comprised the corpus.", "The PrivaSeer Corpus consists of 1,005,380 privacy policies from 995,475 different web domains.", "Privacy policies in this corpus have a mean word length of about 1,871 words and range between a minimum of 143 words and a maximum of 16,980 words.", "The corpus contains policies from over 800 different top level domains (TLDs).", ".com ,", ".org , and", ".net make up a major share of the corpus covering 63%, 5% and 3% respectively.", "Country-level domains like", ".uk ,", ".au ,", ".ca and", ".du show the geographic variety of the sources of the corpus covering 12%, 4%, and 2% respectively.", "The distribution of popular TLDs (.com, .org, .net) roughly matches internet TLD trends suggesting that the corpus contains a random sample of internet web domains.", "Moreover, CommonCrawl release statistics estimating the representativeness of monthly crawls which support the claim that monthly crawl archives and in turn the PrivaSeer Corpus are a representative sample of the web.", "In addition to monthly crawl dumps, Common Crawl releases web graphs with PageRanks of the domains in a crawl.", "The PageRank values were calculated from the web graph using the Gauss-Seidel algorithm (Arasu et al., 2002).", "PageRank values can be used as a substitute for popularity where higher values suggest more popular domains.", "Readability.", "Readability of a text can be de-fined as the ease of understanding or comprehension due to the style of writing (Klare et al., 1963).", "Along with length, readability plays a role in internet users' decisions to either read or ignore a privacy policy (Ermakova et al., 2015).", "While prior studies on readability have shown that privacy policies are difficult to understand for the average internet user, they were conducted using small samples of policies and therefore may not be representative of the larger internet (Fabian et al., 2017).", "While there are a variety of readability metrics, we calculated the readability of the policies in the corpus using the Flesh-Kincaid Grade Level (FKG) metric for comparison with prior literature and since it is the the most widely used metric.", "The FKG metric presents the readability score as a U.S. 
"We obtained a mean FKG score of 14.87 and a standard deviation of 4.8.", "This score can be interpreted to mean that an average of 14.87 years of education in the U.S. (roughly two years of college education) is required to understand a privacy policy.", "In contrast, Fabian et al. (2017) found a mean FKG score of 13.6 when they conducted an analysis of the readability of privacy policies using 50k documents.", "Topic Modelling.", "Topic modelling is an unsupervised machine learning method that extracts the most probable distribution of words into topics through an iterative process (Wallach, 2006).", "We used topic modelling to explore the distribution of themes of text in our corpus.", "Topic modelling using a large corpus such as PrivaSeer helps investigate the themes present in privacy policies at web scale and also enables the comparison of themes that occur in the rapidly evolving online privacy landscape.", "We used Latent Dirichlet Allocation (LDA) as our approach to topic modelling (Blei et al., 2003).", "Since LDA works well when each input document deals with a single topic, we divided each privacy policy into its constituent paragraphs (Sarne et al., 2019), tokenized the paragraphs using a regex character-matching tokenizer, and lemmatized the individual words using NLTK's WordNet lemmatizer.", "We experimented with topic sizes of 7, 8, 9, 10, 11, 13 and 15.", "We manually evaluated the topic clusters by inspecting the words that most represented the topics.", "We noted that the cohesiveness of the topics decreased as the number of topics increased.", "We chose a topic size of 9, since larger topic sizes produced markedly less coherent topics.", "For each topic, we identified a corresponding entry from the OPP-115 annotation scheme (Wilson et al., 2016), which was created by legal experts to label the contents of privacy policies.", "While Wilson et al. (2016) followed a bottom-up approach and identified different categories from an analysis of data practices in privacy policies, we followed a top-down approach and applied topic modelling to the corpus in order to extract common themes for paragraphs.",
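A hedged sketch of the topic-modelling pipeline described above: split policies into paragraphs, tokenize with a regex, lemmatize with NLTK's WordNet lemmatizer, and fit a 9-topic LDA model. scikit-learn's LDA stands in for whichever implementation was actually used, and the toy input is illustrative.

```python
# Paragraph-level LDA, mirroring the preprocessing described above.
import re
from nltk.stem import WordNetLemmatizer  # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

policies = ["We use cookies to improve our service.\n\n"
            "We share data with advertising partners and analytics providers."]
paragraphs = [p for doc in policies for p in doc.split("\n\n") if p.strip()]

lemmatizer = WordNetLemmatizer()
def preprocess(text):
    tokens = re.findall(r"[a-z]+", text.lower())  # regex character-matching tokenizer
    return " ".join(lemmatizer.lemmatize(t) for t in tokens)

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(preprocess(p) for p in paragraphs)
lda = LatentDirichletAllocation(n_components=9, random_state=0).fit(X)

# Inspect the most representative words per topic, as done when choosing k = 9.
terms = vec.get_feature_names_out()
top_words = {k: [terms[i] for i in comp.argsort()[-10:][::-1]]
             for k, comp in enumerate(lda.components_)}
```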
"The categories identified in the OPP-115 Corpus can be found in Table 2.", "We found that two LDA topics contained vocabulary corresponding with the OPP-115 category First Party Collection/Use , one dealing with purpose and information type collected and the other dealing with collection method.", "Two LDA topics corresponded with the OPP-115 category Third Party Sharing and Collection , one detailing the action of collection, and one explaining its purpose and effects (advertising and analytics).", "One of the LDA topics was composed exclusively of vocabulary related to cookies, which could relate to either first party or third party data collection techniques.", "The OPP-115 categories Privacy Contact Information , Data Security and Policy Change appeared as separate topics, while a topic corresponding to the OPP-115 category International and Specific Audiences appeared to be primarily related to European audiences and GDPR.", "It is likely that the divergence between OPP-115 categories and LDA topics comes from a difference in approaches: the OPP-115 categories represent themes that privacy experts expected to find in privacy policies, which diverge from the actual distribution of themes in this text genre.", "Figure 2 shows the percentage of privacy policies in the corpus that contain each topic.", "From the figure (Figure 2: Topic distribution), we see that information regarding the type and purpose of data collected by first and third party sources constitutes the most common topics.", "About 77% of policies contain language regarding third parties.", "This is consistent with prior research on third party data collection (Libert, 2018).", "In contrast, language regarding advertising and analytics appears in only 38% of policies in the corpus.", "Topics corresponding to data security, policy change and contact information also occur in a majority of privacy policies.", "Language corresponding to the GDPR and European audiences appears in 55% of policies.", "A study of the distribution of privacy policy topics on the web is important since it informs us about real-world trends and the need for resource allocation to enforce privacy regulations.", "Figure 3 shows how the number of topics in privacy policies varies with respect to the PageRank value.", "The whiskers in the plot represent the 95% confidence interval of the means of the number of topics in the privacy policies in each PageRank value bin.", "The PageRank values were binned with a constant width of 0.25 such that each bin had at least 1k privacy policies.", "The plot suggests that more popular domains (as given by PageRank value) tend to address a greater number of topics in their privacy policies.", "This behaviour is consistent with manual inspection and is likely due to a larger and more diverse user base, as well as the greater regulatory scrutiny that accompanies more popular domains.", "For example, popular organisations tend to be multinational and are thereby required to address privacy laws from multiple jurisdictions, such as the GDPR from the European Union and the CCPA from the United States.", "We found a similar pattern between privacy policy length and PageRank value, further supporting our claim that privacy policies of more popular domains tend to address a greater number of topics.",
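The binning analysis above could be reproduced along these lines; the data here is synthetic, and the exact statistical procedure (t-based 95% confidence intervals per bin) is an assumption consistent with the whisker description.

```python
# Bin PageRank values with width 0.25 and compute per-bin means of the number
# of topics with 95% confidence intervals; keep bins with at least 1k policies.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({"pagerank": rng.random(50_000) * 3,
                   "n_topics": rng.integers(1, 10, 50_000)})
df["bin"] = (df["pagerank"] // 0.25) * 0.25  # constant bin width of 0.25

def mean_ci(x, conf=0.95):
    m, se = x.mean(), stats.sem(x)
    h = se * stats.t.ppf((1 + conf) / 2, len(x) - 1)
    return pd.Series({"mean": m, "lo": m - h, "hi": m + h, "n": len(x)})

summary = df.groupby("bin")["n_topics"].apply(mean_ci).unstack()
summary = summary[summary["n"] >= 1000]
```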
"In addition, we found that readability and PageRank follow a similar pattern, where privacy policies of more popular domains (as given by PageRank values) tend to be slightly more difficult to read.", "BERT is a contextualized word representation model that is pretrained using bidirectional transformers (Devlin et al., 2019).", "It was pretrained on the masked language modelling and next sentence prediction tasks and has been shown to achieve state-of-the-art results in many NLP tasks.", "RoBERTa improved upon the results achieved by BERT by making improvements to the training technique (Liu et al., 2019).", "We pretrain PrivBERT starting with the pretrained RoBERTa BASE model (12 layers, 768 hidden size, 12 attention heads, 110M parameters).", "RoBERTa was trained on corpora of books, news articles, Wikipedia and social media comments and works well as a general purpose language model.", "Privacy policies written in legalese differ significantly in language when compared to the corpora used to train BERT and its variants, thereby prompting the need for a separate pretrained language model.", "Prior literature has shown that in-domain language models such as SciBERT (Beltagy et al., 2019) and BioBERT (Lee et al., 2020) perform significantly better on tasks in their respective domains.", "We use the byte pair encoding tokenization technique utilized in RoBERTa and retain its cased vocabulary.", "We did not create a new vocabulary since the two vocabularies are not significantly different and any out-of-vocabulary words can be represented and tuned for the privacy domain using the byte pair encoding vocabulary of RoBERTa.", "We preprocessed the privacy policy documents to create sequences of a maximum length of 512 tokens.", "Inputs significantly shorter than the maximum length occasionally occurred since we did not create sequences that crossed document boundaries.", "We trained PrivBERT using dynamic masked language modelling (Liu et al., 2019) for 50k steps with a batch size of 512, using the gradient accumulation technique, on two NVIDIA Titan RTX GPUs for 8 days with a peak learning rate of 8e-5.", "Other hyperparameters were set similarly to RoBERTa.", "For the data practice classification task, we leveraged the OPP-115 Corpus introduced by Wilson et al. (2016).",
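A condensed sketch of the pretraining recipe above, using HuggingFace's Trainer with dynamic masking. The corpus file name, warmup schedule, per-device batch size, and masking probability (RoBERTa's default 0.15) are assumptions; the 50k steps, effective batch size of 512 via gradient accumulation, and peak learning rate of 8e-5 follow the text.

```python
# Continued pretraining of RoBERTa-base on privacy policies (PrivBERT-style).
from transformers import (RobertaTokenizerFast, RobertaForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tok = RobertaTokenizerFast.from_pretrained("roberta-base")  # keep RoBERTa's BPE vocabulary
model = RobertaForMaskedLM.from_pretrained("roberta-base")

ds = load_dataset("text", data_files={"train": "privacy_policies.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])  # sequences never cross document boundaries

# Masks are re-sampled every epoch/step, i.e. dynamic masked language modelling.
collator = DataCollatorForLanguageModeling(tok, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="privbert", max_steps=50_000, learning_rate=8e-5,
                         per_device_train_batch_size=8, gradient_accumulation_steps=64,
                         warmup_steps=1_000, fp16=True)  # fp16 assumes a GPU
Trainer(model=model, args=args, data_collator=collator, train_dataset=ds).train()
```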
"The OPP-115 Corpus contains manual annotations of 23K fine-grained data practices in 115 privacy policies, annotated by legal experts.", "To the best of our knowledge, this is the most detailed and widely used dataset of annotated privacy policies in the research community.", "The OPP-115 Corpus contains paragraph-sized segments annotated according to one or more of the twelve coarse-grained categories of data practices.", "We fine-tuned PrivBERT on the OPP-115 Corpus to predict the coarse-grained categories of data practices.", "We divided the corpus in a 3:1:1 ratio for training, validation and testing respectively.", "Since each segment in the corpus could belong to more than one category and there are twelve categories in total, we treated the problem as a multi-class, multi-label classification problem.", "After manually tuning hyperparameters, we trained the model with a dropout of 0.15 and a learning rate of 2.5e-5.", "Table 2 shows the results for the data practice classification task, comparing the performance of RoBERTa, PrivBERT and Polisis (Harkous et al., 2018), a CNN-based classification model.", "We report reproduced results for Polisis since the original paper takes into account both the presence and absence of a label while calculating the score for each label (Nejad et al., 2020).", "Due to the unbalanced nature of the dataset, we report the macro-average and micro-average scores.", "PrivBERT achieves state-of-the-art results, improving not only the macro-average F1 score of RoBERTa by about 4% but also the F1 score for every category in the task.", "PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents.", "While crowdworkers were asked to come up with privacy-related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within the respective privacy policies that answered the questions asked by the crowdworkers.", "The goal of the question answering task is to identify the set of sentences in the privacy policy that have information relevant to the question.", "Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing, where each question in the test set is annotated by at least three experts.", "We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify whether the sentence is evidence for the question or not.", "We trained the model with a dropout of 0.2 and a learning rate of 3e-6, with the positive and negative classes weighted in the ratio 8:1 during training.", "We used sentence-level F1 as the evaluation metric, as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences.", "Table 3 shows the results for the answer sentence selection task comparing the performance between BERT and PrivBERT.", "Results from BERT are as reported by Ravichander et al. (2019).",
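A minimal sketch of the multi-label fine-tuning setup above (twelve coarse-grained OPP-115 categories, dropout 0.15, learning rate 2.5e-5). Here roberta-base stands in for the PrivBERT checkpoint, and the example segment and label assignment are hypothetical.

```python
# Multi-label data-practice classification: one sigmoid output per category,
# trained with binary cross-entropy (applied automatically for this problem_type).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")  # stand-in for PrivBERT
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=12,
    problem_type="multi_label_classification",
    hidden_dropout_prob=0.15)

segment = "We may share your information with advertising partners."
labels = torch.zeros(12)
labels[2] = 1.0  # hypothetical index for Third Party Sharing/Collection
enc = tok(segment, return_tensors="pt", truncation=True, max_length=512)

optim = torch.optim.AdamW(model.parameters(), lr=2.5e-5)
model.train()
out = model(**enc, labels=labels.unsqueeze(0))  # BCEWithLogitsLoss under the hood
out.loss.backward()
optim.step()
```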
"PrivBERT achieves state-of-the-art results, improving on the results of BERT by about 6%.", "Table 3 (Performance comparison on the answer sentence selection task): BERT: Precision 0.442, Recall 0.348, F1 0.39; PrivBERT: Precision 0.483, Recall 0.424, F1 0.452.", "PrivBERT has therefore been shown to achieve state-of-the-art results in two significantly disparate tasks in the privacy domain, suggesting that it can be used to improve performance on various real-world tasks and applications in the privacy domain.", "We created the PrivaSeer Corpus, which is the first large-scale corpus of contemporary website privacy policies and consists of just over 1 million documents.", "We designed a novel pipeline to build the corpus, which included web crawling, language detection, document classification, duplicate removal, document cross-verification, content extraction, and near-duplicate removal.", "Topic modelling showed the distribution of themes of privacy practices in policies, corresponding to the expectations of legal experts in some ways, but differing in others.", "The positive relationship between the PageRank of a domain and the number of topics covered in its policy indicates that more popular domains have a slightly greater coverage of these topics.", "We hypothesize that this is because more popular domains tend to have a larger and more diverse user base, prompting the privacy policies to address laws from various jurisdictions.", "Prior research on readability, based on small corpora of privacy policies, had found that they were generally hard to understand for the average internet user.", "Our large-scale analysis using the Flesch-Kincaid readability metric was consistent with prior findings.", "We found that, on average, about 14.87 years of education, or roughly two years of U.S. college education, was required to understand a privacy policy.", "We pretrained PrivBERT, a language model for the privacy domain, based on RoBERTa.", "We evaluated PrivBERT on the data practice classification and the question answering tasks and achieved state-of-the-art results.", "We believe that the PrivaSeer Corpus will help advance research techniques to automate the extraction of salient details from privacy policies.", "PrivBERT will help improve results on various tasks in the privacy domain and help build stable and reliable privacy-preserving technology.", "This should benefit internet users, regulators, and researchers in many ways.", "This work was partly supported by a seed grant from the College of Information Sciences and Technology at the Pennsylvania State University.", "We also acknowledge Adam McMillen for technical support." ]
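The sentence-level F1 used above can be sketched as token-overlap precision and recall between the predicted and gold evidence sentences; the exact tokenization and the handling of multiple expert annotations per question are assumptions.

```python
# Token-overlap F1 between predicted and gold evidence sentences.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

pred_sents = ["We share data with advertisers."]
gold_sents = ["We may share your data with advertising partners."]
f1 = token_f1(" ".join(pred_sents), " ".join(gold_sents))
```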
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "result", "method", "abstain", "abstain", "other", "other" ]
[ "Coreference resolution is essential for natural language understanding and has been long studied in NLP.", "In recent years, as the format of Question Answering (QA) became a standard for machine reading comprehension (MRC), there have been data collection efforts, e.g., Dasigi et al. (2019), that attempt to evaluate the ability of MRC models to reason about coreference.", "However, as we show, coreference reasoning in MRC is a greater challenge than earlier thought; MRC datasets do not reflect the natural distribution and, consequently, the challenges of coreference reasoning.", "Specifically, success on these datasets does not reflect a model's proficiency in coreference reasoning.", "We propose a methodology for creating MRC datasets that better reflect the challenges of coreference reasoning and use it to create a sample evaluation set.", "The results on our dataset show that state-of-the-art models still struggle with these phenomena.", "Furthermore, we develop an effective way to use naturally occurring coreference phenomena from existing coreference resolution datasets when training MRC models.", "This allows us to show an improvement in the coreference reasoning abilities of state-of-the-art models.", "1 1 Introduction Machine reading comprehension is the ability to read and understand the given passages and answer questions about them.", "Coreference resolution is the task of finding different expressions that refer to the same real-world entity.", "The tasks of coreference resolution and machine reading comprehension have moved closer to each other.", "Converting coreference-related datasets into an MRC format 1 The code and the resulting dataset are available at https://github.com/UKPLab/ coref-reasoning-in-qa .", "improves the performance on some coreference-related datasets (Wu et al., 2020b; Aralikatte et al., 2019).", "There are also various datasets for the task of reading comprehension on which the model requires to perform coreference reasoning to answer some of the questions, e.g., DROP (Dua et al., 2019), DuoRC (Saha et al., 2018), MultiRC (Khashabi et al., 2018), etc.", "Quoref (Dasigi et al., 2019) is a dataset that is particularly designed for evaluating coreference understanding of MRC models.", "Figure 1 shows a QA sample from Quoref in which the model needs to resolve the coreference relation between his and John Motteux to answer the question.", "Recent large pre-trained language models reached high performance on Quoref.", "However, our results and analyses suggest that this dataset contains artifacts and does not reflect the natural distribution and, therefore, the challenges of coreference reasoning.", "As a result, high performances on Quoref do not necessarily reflect the coreference reasoning capabilities of the examined models and answering questions that require coreference reasoning might be a greater challenge than current scores suggest.", "In this paper, we propose two solutions to address this issue.", "First, we propose a methodology for creating MRC datasets that better reflect the coreference reasoning challenge.", "We release a sample challenging evaluation set containing 200 examples by asking an annotator to create new question-answer pairs using our methodology and based on existing passages in Quoref.", "We show that this dataset contains fewer annotation artifacts, and its distribution of biases is closer to a coreference resolution dataset.", "The performance of state-of-the-art models on Quoref considerably drops on our evaluation set 
suggesting that (1) coreference reasoning is still an open problem for MRC models, and (2) our methodology opens a promising direction to create future challenging MRC datasets.", "Second, we propose to directly use coreference resolution datasets for training MRC models to improve their coreference reasoning.", "We automatically create a question whose answer is a coreferring expression m 1 using the BART model (Lewis et al., 2020).", "We then consider this question, m 1 's antecedent, and the corresponding document as a new (question, answer, context) tuple.", "This data helps the model learn to resolve the coreference relation between m 1 and its antecedent in order to answer the question.", "We show that incorporating this additional data improves the performance of the state-of-the-art models on our new evaluation set.", "Our main contributions are as follows: We show that Quoref does not reflect the natural challenges of coreference reasoning and propose a methodology for creating MRC datasets that better reflect this challenge.", "We release a sample challenging dataset that is manually created by an annotator using our methodology.", "The results of state-of-the-art MRC models on our evaluation set show that, despite the high performance of MRC models on Quoref, answering questions based on coreference reasoning is still an open challenge.", "We propose an approach to use existing coreference resolution datasets for training MRC models.", "We show that, while coreference resolution and MRC datasets are independent and belong to different domains, our approach improves the coreference reasoning of state-of-the-art MRC models.", "One of the known drawbacks of many NLP datasets is that they contain artifacts, i.e., cases in which the conditional distribution of the target label based on specific attributes of the training domain diverges when testing on other domains.", "Models tend to exploit these easy-to-learn patterns in the early stages of training (Arpit et al., 2017; Liu et al., 2020; Utama et al., 2020b), and therefore, they may not focus on learning harder patterns of the data that are useful for solving the underlying task.", "As a result, overfitting to dataset-specific artifacts limits the robustness and generalization of NLP models.", "There are two general approaches to tackle such artifacts: (1) adversarial filtering of biased examples, i.e., examples that contain artifacts, and (2) debiasing methods.", "In the first approach, potentially biased examples are discarded from the dataset, either after the dataset creation (Zellers et al., 2018; Yang et al., 2018a; Le Bras et al., 2020; Bartolo et al., 2020), or while creating the dataset (Dua et al., 2019; Chen et al., 2019; Nie et al., 2020).", "In the second approach, they first recognize examples that contain artifacts, and use this knowledge in the training objective to either skip or downweight biased examples (He et al., 2019; Clark et al., 2019a), or to regularize the confidence of the model on those examples (Utama et al., 2020a).", "The use of this information in the training objective improves the robustness of the model on adversarial datasets (He et al., 2019; Clark et al., 2019a; Utama et al., 2020a), i.e., datasets that contain counterexamples in which relying on the bias results in an incorrect prediction.", "In addition, it can also improve in-domain performance as well as generalization across various datasets that represent the same task (Wu et al., 2020a; Utama et al., 2020b).", "While there is an emerging trend 
of including adversarial models in data collection, their effectiveness has not yet been compared with that of debiasing methods, e.g., whether they are still beneficial when we use debiasing methods or vice versa.", "There are a few studies on the joint understanding of coreference relations and reading comprehension.", "Wu et al. (2020b) propose to formulate coreference resolution as a span-prediction task by generating a query for each mention using the surrounding context, thus converting coreference resolution to a reading comprehension problem.", "They leverage the plethora of existing MRC datasets for data augmentation and improve the generalization of the coreference model.", "In parallel to Wu et al. (2020b), Aralikatte et al. (2019) also cast ellipsis and coreference resolution as reading comprehension tasks.", "They leverage the existing neural architectures designed for MRC for ellipsis resolution and outperform the previous best results.", "In a similar direction, Hou (2020) proposes to cast bridging anaphora resolution as question answering and presents a question answering framework for this task.", "However, none of the above works investigate the impact of using coreference data on QA.", "Dua et al. (2020) use Amazon Mechanical Turkers to annotate the corresponding coreference chains of the answers in the passages of Quoref for 2,000 QA pairs.", "They then use this additional coreference annotation for training a model on Quoref.", "They show that including these additional coreference annotations improves the overall performance on Quoref.", "The method proposed by Dua et al. (2020) requires annotating additional coreference relations on every new coreference-aware QA dataset.", "Contrary to this, our approach uses existing coreference resolution datasets, and therefore, applies to any new QA dataset without introducing any additional cost.", "For creating the Quoref dataset, annotators first identify coreferring expressions and then ask questions that connect the two coreferring expressions.", "Dasigi et al. (2019) use a BERT-base model (Devlin et al., 2019) that is fine-tuned on the SQuAD dataset (Rajpurkar et al., 2016) as an adversarial model to exclude QA samples that the adversarial model can already answer.", "The goal of using this adversarial model is to avoid including question-answer pairs that can be solved using surface cues.", "They claim that most examples in Quoref cannot be answered without coreference reasoning.", "If we fine-tune a RoBERTa-large model on Quoref, it achieves an F1 score of 78, while the estimated human performance is around 93 F1 (Dasigi et al., 2019).", "This high performance, given that RoBERTa can only predict continuous span answers while Quoref also contains discontinuous answers, indicates that either (1) Quoref presents coreference-aware QA very well so that the model can properly learn coreference reasoning from the training data, (2) pretrained transformer-based models have already learned coreference reasoning during their pre-training, e.g., as suggested by Tenney et al. (2019) and Clark et al. 
(2019b), or (3) coreference reasoning is not necessarily required for solving most examples.", "In this section, we investigate whether Quoref contains the known artifacts of QA datasets, and therefore, whether models can solve some of the QA pairs without performing coreference reasoning.", "Figure 2 shows such an example, where simple lexical cues are enough to answer the question despite the fact that the coreference expressions Frankie and his were included in the corresponding context.", "We investigate five artifacts (biases) as follows: Random named entity: the majority of answers in Quoref are person names.", "To evaluate this artifact, we randomly select a PERSON named entity from the context as the answer.", "Wh-word (Weissenborn et al., 2017): to recognize the QA pairs that can be answered by only using the interrogative adverbs from the question, we train a model on a variation of the training dataset in which questions only contain interrogative adverbs.", "Empty question (Sugawara et al., 2020): to recognize QA pairs that are answerable without considering the question, we train a QA model only on the contexts and without questions.", "Semantic overlap (Jia and Liang, 2017): for this artifact, we report the ratio of the QA pairs whose answers lie in the sentence of the context that has the highest semantic similarity to the question.", "We use Sentence-BERT (Reimers and Gurevych, 2019) to find the most similar sentence.", "Short distance reasoning: for this bias, we train a model using only the sentence of the context that is the most similar to the question, instead of the whole context.", "We exclude the question-answer pairs in which the most similar sentence does not contain the answer.", "This model will not learn to perform coreference reasoning when the related coreferring pairs are not in the same sentence.", "We use spaCy (Honnibal and Johnson, 2015) for NER.", "E.g., this can indicate the bias of the model to select the most frequent named entity in the context as the answer.", "For wh-word, empty question, and short distance reasoning, we use the TASE model (Segal et al., 2020) to learn the bias.", "Biased examples are then those that can be correctly solved by these models.", "We only change the training data for biased example detection, if necessary, and the development set is unchanged.", "The Quoref column in Table 1 reports the proportion of biased examples in the Quoref development set.", "We also investigate whether these biases have similar ratios in a coreference resolution dataset.", "We use the CoNLL-2012 coreference resolution dataset (Pradhan et al., 2012a) and convert it to a reading comprehension format, i.e., CoNLL bart in Section 5.", "This data contains question-answer pairs in which the question is created based on a coreferring expression in CoNLL-2012, and the answer is its closest antecedent.", "We split this data into training and test sets and train bias models on the training split.", "The CoNLL bart column in Table 1 shows the bias proportions on this data.", "As we see, short distance reasoning is the most prominent bias in the Quoref dataset.", "However, the ratio of such biased examples is only around 10% in CoNLL-2012.", "Therefore, apart from the examples that can be solved without coreference reasoning, the difficulty of the required coreference reasoning in the remaining examples is also not comparable with naturally occurring coreference relations in a coreference resolution dataset.", "As a result, high performance on Quoref does not necessarily indicate that the model is adept at performing coreference reasoning.",
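The semantic overlap probe above might look as follows with Sentence-BERT; the specific SBERT checkpoint and the sentence splitting are assumptions.

```python
# Flag a QA pair as "semantic overlap"-biased if the answer lies in the context
# sentence most similar to the question, per Sentence-BERT cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the SBERT model used

def is_semantic_overlap_biased(question, context_sentences, answer):
    q = model.encode(question, convert_to_tensor=True)
    s = model.encode(context_sentences, convert_to_tensor=True)
    best = util.cos_sim(q, s)[0].argmax().item()
    return answer in context_sentences[best]

ctx = ["John Motteux was a publisher.", "He later sold the company."]
print(is_semantic_overlap_biased("Who was a publisher?", ctx, "John Motteux"))
```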
"We report the bias ratios of CoNLL dec in Section 5 in the appendix.", "E.g., about 20% of examples can be answered without considering the question.", "There is a growing trend of using adversarial models for data creation to make the dataset more challenging or to discard examples that can be solved using surface cues (Bartolo et al., 2020; Nie et al., 2020; Yang et al., 2018a; Zellers et al., 2018; Yang et al., 2018b; Dua et al., 2019; Chen et al., 2019; Dasigi et al., 2019).", "Quoref is also created using an adversarial data collection method to discard examples that can be solved using simple lexical cues.", "The assumption is that it is hard to avoid simple lexical cues by which the model can answer questions without coreference reasoning.", "Therefore, an adversarial model ( A ) is used to discard examples that contain such lexical cues.", "While this adversarial filtering removes examples that are easy to solve by A , it does not ensure that the remaining examples do not contain shortcuts that are not explored by A .", "First, the adversarial model in Quoref is trained on another dataset, i.e., SQuAD.", "Thus, the failure of A on Quoref examples may be due to (1) Quoref having different lexical cues than those in SQuAD, or (2) domain shift.", "Second, and more importantly, as argued by Dunietz et al. (2020), making the task challenging by focusing on examples that are more difficult for existing models is not a solution for more useful reading comprehension.", "As put by them: the dominant MRC research paradigm is like trying to become a professional sprinter by glancing around the gym and adopting any exercises that look hard.", "We instead propose a methodology for creating question-answer pairs as follows: Annotators should create a question that connects the referring expression m 1 to its antecedent m 2 so that (1) m 2 is more informative than m 1 , and (2) m 1 and m 2 reside in different sentences.", "Proper names are more informative than common nouns, and they are more informative than pronouns (Lee et al., 2013).", "Candidate passages for creating QA pairs are selected according to their number of named entities and pronouns.", "The number of distinct named entities is an indicator of the number of entities in the text.", "Therefore, there would be more candidate entities for resolving referring expressions.", "The number of pronouns indicates that we have enough candidate m 1 s that have more informative antecedents.", "We provide this guideline to a student from the Computer Science department for generating new QA pairs from the existing passages in the Quoref development set.", "We use Quoref passages to ensure that the source of performance differences on our dataset vs. 
Quoref is not due to domain differences.", "This results in 200 new QA pairs.", "Table 2 presents examples from our dataset.", "Table 3 shows the results of the examined biases on our dataset.", "By comparing Table 3 and Table 1, we observe that the examined biases are less strong in our dataset, and their distribution is closer to those in CoNLL-2012.", "As we will see in Table 5, the performance of state-of-the-art models on Quoref drops more than 10 points, i.e., 13-18 points, on our challenge dataset.", "We examine 50 randomly selected examples from our challenge set, and they were all answerable by a human.", "Table 3 (Proportion of biased examples in our dataset): random named entity 3.03, wh-word 13.64, empty question 11.62, semantic overlap 24.50, short-distance reasoning 35.35.", "While we do not have access to many coreference annotations for the task of coreference-aware MRC, there are various datasets for the task of coreference resolution.", "Coreference resolution datasets contain the annotation of expressions that refer to the same entity.", "In this paper, we hypothesize that we can directly use coreference resolution corpora to improve the coreference reasoning of MRC models.", "We propose an effective approach to convert coreference annotations into QA pairs so that models learn to perform coreference resolution by answering those questions.", "In our experiments, we use the CoNLL-2012 dataset (Pradhan et al., 2012b), which is the largest annotated dataset with coreference information.", "The existing approach to convert coreference annotations into (question, context, answer) tuples, which is used to improve coreference resolution performance (Wu et al., 2020b; Aralikatte et al., 2019), is to use the sentence of the anaphor as a declarative query, and its closest antecedent as the answer.", "The format of these queries is not compatible with questions in MRC datasets, and therefore, the impact of this data on MRC models may be limited.", "In this work, we instead generate questions from those declarative queries using an automatic question generation model.", "We use the BART model (Lewis et al., 2020), which is one of the state-of-the-art text generation models.", "Below we explain the details of each of these two approaches for creating QA data from CoNLL-2012.", "Table 4 shows examples from both approaches.", "CoNLL dec : Wu et al. (2020b) and Aralikatte et al. (2019) choose a sentence that contains an anaphor as a declarative query, the closest non-pronominal antecedent of that anaphor as the answer, and the corresponding document of the expressions as the context.", "We use the code provided by Aralikatte et al. (2019).", "We remove the tuples in which the anaphor and its antecedent are identical.", "The reason is that (1) Quoref already contains many examples in which the coreference relation is between two mentions with the same string, and (2) even after removing such examples, CoNLL dec contains around four times more QA pairs than the Quoref training data.", "CoNLL bart : we use a fine-tuned BART model (Lewis et al., 2020) released by Durmus et al. (2020) for question generation and apply it to the declarative queries in CoNLL dec .",
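The CoNLL dec -style conversion above can be sketched as follows; the cluster data structure and the pronoun list are assumptions, and mentions are simplified to surface strings.

```python
# Convert coreference clusters into (query, answer, context) tuples: the
# anaphor's sentence is the declarative query, its closest non-pronominal
# antecedent is the answer; string-identical pairs are dropped.
PRONOUNS = {"he", "she", "it", "they", "his", "her", "its", "their", "him", "them"}

def clusters_to_qa(document_sentences, clusters):
    """clusters: list of mention lists, each mention = (sent_idx, surface_string),
    ordered by position in the document."""
    examples = []
    context = " ".join(document_sentences)
    for cluster in clusters:
        for i, (sent_idx, anaphor) in enumerate(cluster[1:], start=1):
            # closest preceding non-pronominal antecedent
            antecedent = next((m for _, m in reversed(cluster[:i])
                               if m.lower() not in PRONOUNS), None)
            if antecedent and antecedent.lower() != anaphor.lower():
                examples.append({"query": document_sentences[sent_idx],
                                 "answer": antecedent,
                                 "context": context})
    return examples

sents = ["Louis Mellis wrote Gangster No. 1.", "He is known for crime films."]
print(clusters_to_qa(sents, [[(0, "Louis Mellis"), (1, "He")]]))
```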
"The BART model specifies potential answers by masking noun phrases or named entities in the query and then generates questions for each masked text span.", "We only keep questions whose answer, i.e., the masked expression, is a coreferring expression, and replace that answer with its closest non-pronominal antecedent.", "We only keep questions in which the masked expression and its antecedent are not identical.", "Such QA pairs force the model to resolve the coreference relation between the two coreferring expressions to answer the generated questions.", "We use two recent models from the Quoref leaderboard: RoBERTa (Liu et al., 2019) and TASE (Segal et al., 2020), of which TASE has the state-of-the-art results.", "We use RoBERTa-large from HuggingFace (Wolf et al., 2020).", "TASE casts MRC as a sequence tagging problem to handle questions with multi-span answers.", "It assigns a tag to every token of the context indicating whether the token is a part of the answer.", "We use the TASE IO +SSE setup, which is a combination of their multi-span architecture and single-span extraction with IO tagging. We use the same configuration and hyper-parameters for TASE IO +SSE as described in Segal et al. (2020).", "We train all models for two epochs in all experiments.", "The only difference between TASE in our experiments and the reported results in Segal et al. (2020) is the number of training epochs.", "For a fair comparison, we train all models for the same number of iterations.", "We use the F1 score that calculates the number of shared words between predictions and gold answers for evaluation.", "Training Strategies.", "To include the additional training data that we create from CoNLL-2012 using the coreference-to-QA conversion methods, we use two different strategies: Joint : we concatenate the training examples from Quoref and the CoNLL-to-QA converted datasets.", "Therefore, the model is jointly trained on the examples from both datasets.", "Transfer : since the CoNLL-to-QA data is automatically created and is noisy, we also examine a sequential fine-tuning setting in which we first train the model on the CoNLL-to-QA converted data, and then fine-tune it on Quoref.", "Quoref : the official development and test sets of Quoref, i.e., Quoref dev and Quoref test , respectively.", "Our challenge set : our new evaluation set described in Section 4.", "Contrast set : the evaluation set by Gardner et al. 
(2020) that is created based on the official Quoref test set.", "For creating this evaluation set, the authors manually performed small but meaningful perturbations to the test examples in a way that changes the gold label.", "This dataset is constructed to evaluate whether models' decision boundaries align with true decision boundaries when they are measured around the same point.", "MultiRC : the Multi-Sentence Reading Comprehension set (Khashabi et al., 2018) is created in a way that answering questions requires a more complex understanding of multiple sentences.", "Therefore, coreference reasoning can be one of the sources for improving the performance on this dataset.", "Note that MultiRC is from a different domain than the rest of the evaluation sets.", "To use the MultiRC development set, which is in a multi-choice answer selection format, we convert it to a reading comprehension format by removing QA pairs whose answers cannot be extracted from the context.", "The Contrast set and MultiRC datasets are not designed to explicitly evaluate coreference reasoning.", "However, we include them among our evaluation sets to have a broader view of the impact of using our coreference data in QA.", "Table 6 reports the statistics of these QA datasets.", "In addition, it reports the number of examples in the CoNLL dec and CoNLL bart datasets that we create by converting the CoNLL-2012 training data into QA examples.", "Since the question generation model cannot generate a standard question for every declarative sentence, CoNLL bart contains a smaller number of examples.", "We also include the statistics of SQuAD in Table 6, as we use it for investigating whether the resulting performance changes are due to using more training data or to using coreference-aware additional data.", "Table 5 presents the results of evaluating the impact of using coreference annotations to improve coreference reasoning in MRC.", "We report the results for both of the examined state-of-the-art models, i.e., TASE and RoBERTa-large, using both training settings: (1) training the model jointly on Quoref and CoNLL-to-QA converted data (Joint), and (2) pre-training the model on CoNLL-to-QA data first and fine-tuning it on Quoref (Transfer).", "Baseline represents the results of the examined models that are only trained on Quoref.", "CoNLL bart represents the results of the models that are only trained on the CoNLL bart data.", "Transfer-SQuAD reports the results of the sequential training when the model is first trained on the SQuAD training dataset (Rajpurkar et al., 2016) and is then fine-tuned on Quoref.", "Based on the results of Table 5, we make the following observations.", "First, the most successful setting for improving coreference reasoning, i.e., improving the performance on our challenge evaluation set, is Transfer-CoNLL bart .", "Pre-training the TASE model on CoNLL bart improves its performance on all of the examined evaluation sets.", "However, it only improves the performance of RoBERTa on our challenge set.", "Second, SQuAD contains well-formed QA pairs while CoNLL bart and CoNLL dec contain noisy QA pairs.", "Also, SQuAD and Quoref are both created based on Wikipedia articles, and therefore, have similar domains.", "However, the genres of the documents in CoNLL-2012 include newswire, broadcast news, broadcast conversations, telephone conversations, 
weblogs, magazines, and the Bible, which are very different from those in Quoref.", "Table 7 (F1 score differences of various TASE and RoBERTa models on the Quoref dev and our dataset splits created based on the semantic overlap and short distance reasoning biases; each row reports dev / Ours pairs for the semantic overlap, non-semantic overlap, short reasoning, and non-short reasoning subsets): TASE Baseline 81.69 / 77.2, 84.86 / 62.96, 94.84 / 89.04, 72.95 / 54.15; Joint-CoNLL dec +2.07 / -5.80, -0.30 / +1.19, +0.94 / -1.65, -0.34 / +0.03; Joint-CoNLL bart +0.86 / -3.00, +0.03 / +4.82, +0.64 / +1.20, -0.16 / +3.80; Transfer-CoNLL dec +1.29 / +8.56, +0.83 / +5.94, +1.07 / +8.82, +0.83 / +5.37; Transfer-CoNLL bart +1.74 / +1.23, +0.85 / +8.26, +1.54 / +4.70, +0.60 / +7.51; Transfer-SQuAD +0.84 / +0.58, +1.19 / +0.10, -0.91 / +2.3, +0.33 / +2.15; RoBERTa baseline 78.09 / 67.39, 80.19 / 63.36, 90.04 / 84.23, 68.97 / 53.48; Joint-CoNLL dec -5.55 / -10.04, -4.15 / -6.55, -2.53 / -7.32, -6.54 / -7.46; Joint-CoNLL bart 0.00 / +1.94, -1.28 / +2.97, -0.48 / -1.36, -1.43 / +4.95; Transfer-CoNLL dec -4.20 / +1.79, -6.02 / -6.27, -3.52 / -7.00, -7.65 / -2.77; Transfer-CoNLL bart -1.02 / -0.55, -1.58 / +3.18, -0.95 / -5.06, -1.94 / +6.27; Transfer-SQuAD +1.32 / -1.08, +0.25 / +1.05, +0.45 / -9.46, +0.6 / +5.99.", "As a result, pretraining on SQuAD has a positive impact on the majority of datasets.", "However, this impact is less pronounced on our challenge dataset, as it requires coreference reasoning while this skill is not present in SQuAD examples.", "Finally, while using the sentence of coreferring mentions as a declarative query (CoNLL dec ) is the common method for converting coreference resolution datasets into a QA format in previous studies, our results show that using CoNLL bart has a more positive impact than using CoNLL dec .", "To analyze what kind of examples benefit more from incorporating the coreference data, we split Quoref dev and our dataset into different subsets based on the semantic overlap and short distance reasoning biases, which are the most common types of biases in both datasets.", "The semantic overlap column in Table 7 represents the results on the subset of the data in which answers reside in the most similar sentence of the context, and the non-semantic overlap column contains the rest of the examples in each of the examined datasets.", "The short reasoning column presents the results on the subset of the data containing examples that can be solved by the short distance reasoning bias model, and non-short reasoning presents the results on the rest of the examples.", "Table 7 shows the performance differences of the TASE and RoBERTa models on these four subsets for each of the two datasets.", "Surprisingly, the performance of the baseline models is lower on the semantic overlap subset compared to the non-semantic overlap subset on Quoref dev .", "This can indicate that examples in the non-semantic overlap subset of Quoref dev contain other types of biases that make QA less challenging on this subset.", "The addition of the coreference resolution annotations in all four training settings reduces the performance gap of the TASE model between the semantic overlap and non-semantic overlap subsets for both datasets.", "Incorporating coreference data for RoBERTa, on the other hand, has a positive impact when using the CoNLL bart data and on the harder subsets of our challenge evaluation set, i.e., non-semantic overlap and non-short reasoning.", "Finally, there is still a large performance gap between the short reasoning and non-short reasoning subsets.", "In our coreference-to-QA conversion methods, we consider the closest antecedent of each anaphor as the answer.", "A promising direction for future work is to also create QA pairs based on longer-distance coreferring expressions, e.g., to create two QA pairs based on each anaphor, one in which the answer is 
the closest antecedent, and the other with the first mention of the entity in the text as the answer.", "We show that the high performance of recent models on the Quoref dataset does not necessarily indicate that they are adept at performing coreference reasoning, and that QA based on coreference reasoning is a greater challenge than current scores suggest.", "We then propose a methodology for creating a dataset that better presents the coreference reasoning challenge for MRC.", "We provide our methodology to an annotator and create a sample dataset.", "Our analysis shows that our dataset contains fewer biases compared to Quoref, and the performance of state-of-the-art Quoref models drops considerably on this evaluation set.", "To improve the coreference reasoning of QA models, we propose to use coreference resolution datasets to train MRC models.", "We propose a method to convert coreference annotations into an MRC format.", "We examine the impact of incorporating this coreference data on improving the coreference reasoning of QA models using two top-performing QA systems from the Quoref leaderboard.", "We show that using coreference datasets improves the performance of both examined models on our evaluation set, indicating their improved coreference reasoning.", "The results on our evaluation set suggest that there is still room for improvement, and reading comprehension with coreference understanding remains a challenge for existing QA models, especially if the coreference relation is between two distant expressions.", "This work has been supported by the German Research Foundation (DFG) as part of the QASciInf project (grant GU 798/18-3), and the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.", "Dan Roth's work is partly supported by contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).", "The authors would like to thank Michael Bugert, Max Glockner, Yevgeniy Puzikov, Nils Reimers, Andreas Ruckle, and the anonymous reviewers for their valuable feedback." ]
[ "abstain", "abstain", "result", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "result", "method", "objective", "method", "objective", "abstain", "objective", "objective", "method", "result", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "other", "objective", "other", "objective", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "method", "other", "method", "other", "abstain", "other", "objective", "method", "other", "method", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "method", "other", "other", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "objective", "method", "result", "objective", "objective", "result", "result", "result", "other", "other", "other" ]
[ "Multi-document summarization (MDS) has made significant progress in recent years, in part facilitated by the availability of new, dedicated datasets and capacious language models.", "However, a standing limitation of these models is that they are trained against limited references and with plain maximum-likelihood objectives.", "As for many other generative tasks, reinforcement learning (RL) offers the potential to improve the training of MDS models; yet, it requires a carefully-designed reward that can ensure appropriate leverage of both the reference summaries and the input documents.", "For this reason, in this paper we propose fine-tuning an MDS baseline with a reward that balances a reference-based metric such as ROUGE with coverage of the input documents.", "To implement the approach, we utilize RELAX (Grathwohl et al., 2018), a contemporary gradient estimator which is both low-variance and unbiased, and we fine-tune the baseline in a few-shot style for both stability and computational efficiency.", "Experimental results over the Multi-News and WCEPMDS datasets show significant improvements of up to +0 .", "95 pp average ROUGE score and +3 .", "17 pp METEOR score over the baseline, and competitive results with the literature.", "In addition, they show that the coverage of the input documents is increased, and evenly across all documents.", "Multi-document summarization (MDS) aims to consolidate salient points of information across a set of documents into a concise summary.", "The main requirement for the summary is that it adequately represent the document set, with low redundancy and high coverage across all documents, while at the same time being readable and fluent.", "Combined with this, is the need to develop techniques that can handle the significant memory complexity required to tackle MDS.", "Recently, the release of dedicated datasets (Fabbri et al., 2019; Gholipour Ghalandari et al., 2020), and intelligently designed Transformer models (Liu et al., 2018; Liu and Lapata, 2019; Beltagy et al., 2020), have helped drive advancements in multi-document summarization, generally improving the accuracy and fluency of the predicted summaries.", "However, aspects such as the requirement to cover as much salient information from the input documents as possible, whilst still maintaining low repetition and low redundancy, have certainly been less explored to date (Nayeem et al., 2018; Mao et al., 2020).", "Within the sphere of contemporary neural MDS models, two main lines of investigation can be iden-tified: graph-based approaches (Li et al., 2020; Pasunuru et al., 2021), and concatenation approaches (Liu et al., 2018; Zhang et al., 2020a).", "The former are approaches that rely on the construction of graphs to capture the interand intra-document relations.", "While powerful, they need to elicit the relations explicitly.", "The latter instead assume that all the input documents within a document set can be simply concatenated, possibly with document separators and tags, such that the relations can be discovered by the model.", "Like ordinary summarization, also MDS comes in two remarkably different styles: extractive , where the generated summaries consist of verbatim sentences from the original input documents (Nallapati et al., 2017), and abstractive , where the model is instead encouraged to generate a paraphrased understanding of the input documents.", "The intrinsic appeal of abstractive summaries and the advent of sequence-to-sequence models have increasingly shifted the trend 
toward abstractive summarization (See et al., 2017; Paulus et al., 2018; Fabbri et al., 2019; Lewis et al., 2020; Zhang et al., 2020a).", "As far as models are concerned, abstractive MDS has made increasing use of transformers, both conventional (Lewis et al., 2020; Zhang et al., 2020a) and modified to accommodate the characteristic input length of multi-document sets (Beltagy et al., 2020; Zaheer et al., 2020).", "Similarly to general summarization, the majority of MDS models are trained using the negative log-likelihood (NLL) as the training objective, which aims to maximize the conditional log-likelihood of the tokens of a given reference summary.", "Despite its speed and efficacy, the NLL exhibits both the wrong-objective problem (Ding and Soricut, 2017), where the model is trained on a convenient objective rather than a desirable one, and the well-known exposure bias problem (Bengio et al., 2015; Ranzato et al., 2016).", "To alleviate these issues, reinforcement learning has been adopted in summarization, as in other language generation tasks, to train the model with a more appropriate objective (Li et al., 2019; Parnell et al., 2021).", "However, its effective use for MDS requires a reward function that can appropriately balance the reference summary and the multiple input documents in the document set.", "For this reason, in this paper we propose exploring a reward that combines a reference-based metric such as ROUGE with a coverage term over the input documents.", "To implement the reinforcement learning approach, we employ a contemporary gradient estimator of the policy gradient, RELAX (Grathwohl et al., 2018), which is both low-variance and unbiased.", "In addition, to limit the computation and the risk of parameter drift, we apply the objective to fine-tune an NLL-pretrained model in a few-shot manner.", "In light of the above, this paper makes the following contributions: 1. a reward for reinforcement learning that combines a ROUGE score and a multi-document coverage score, to simultaneously adhere to both the reference summaries and the input documents; 2. a reinforcement learning implementation that leverages a low-variance and unbiased gradient estimator of the policy gradient, RELAX; 3. experimental results and a comprehensive analysis over two MDS datasets (Multi-News and WCEP), showing the empirical effectiveness of the proposed approach.", "The rest of this paper is organized as follows: first the related work is reviewed in Section 2, and then the proposed approach is introduced in Section 3. 
Section 4 describes the experimental set-up and main results, while Section 5 presents a more detailed analysis of the main components of the proposed approach.", "Eventually, Section 6 summarizes our findings and concludes the paper.", "Early work in multi-document summarization (MDS) that pre-dates the neural era (Mani and Bloedorn, 1997; Erkan and Radev, 2004; Christensen et al., 2013) was shaped around the notion of MDS as a collection of graph structures.", "As approaches in language generation naturally evolved into neural-based ones (Rush et al., 2015; Ranzato et al., 2016), later improved with the emergence of large, pre-trained language models (Devlin et al., 2019; Lewis et al., 2020; Zhang et al., 2020a), the effort shifted to integrating these graph structures into the models, often building on top of strong single-document summarization (SDS) baselines (Lebanoff et al., 2018; Zhang et al., 2018).", "Concurrently, the growing interest in multi-document summarization has led to the development of dedicated, multi-document datasets such as WikiSum (Liu et al., 2018), Multi-News (Fabbri et al., 2019), Wikipedia Current Events Portal (WCEP) (Gholipour Ghalandari et al., 2020) and others.", "The typical amount of input data that comes with these datasets has increased the pressure on the models to be able to handle larger inputs.", "For instance, WCEP has up to 100 documents in each document set, and 63.7 on average.", "As such, the standard transformers used to develop successful SDS models such as BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020a) have proved inadequate for MDS due to their limited maximum input length (in the order of 10^3 tokens) and quadratic memory complexity (Beltagy et al., 2020).", "In turn, this has prompted the development of long transformer models such as Longformer (Beltagy et al., 2020) (built upon BART) and BigBird (Zaheer et al., 2020) (built upon PEGASUS) which, thanks to their smart attention layers that scale linearly with the input length, have opened up the possibility of presenting the input documents at once, allowing these re-designed attention mechanisms to discover both inter- and intra-document relations.", "Document summarization, like other language generation tasks, has often been criticized for using maximum-likelihood training objectives that may prove limitative for the eventual performance of the models (Ding and Soricut, 2017).", "For this reason, reinforcement learning has been employed as an alternative, to directly optimize the models over evaluation metrics and explicitly reward the quality of the model's predictions.", "Reinforcement learning approaches have used metrics such as ROUGE-1, ROUGE-2 and ROUGE-L F1 (Paulus et al., 2018) as rewards, and also more contemporary scoring functions such as BERTScore (Zhang et al., 2020b), often mixed with maximum-likelihood objectives.", "When applying reinforcement learning to MDS, we contend that the reward should not simply be a ROUGE score against the reference summary, since this would dismiss key characteristics of the task such as inter-document information transfer.", "For instance, Mao et al. 
(2020) have leveraged maximal marginal relevance (Carbonell and Goldstein, 1998) to mollify higher-order information redundancy between the input documents.", "Several other performance measures could potentially be included in the reward, such as extractive fragment coverage and density (Grusky et al., 2018) and MINT (Dreyer et al., 2021), but to the best of our knowledge they have never been utilized as, or for, training objectives.", "To address this gap, in this paper we propose leveraging a modified coverage reward to improve information coverage across all the documents in the input set, jointly with a principled policy gradient estimator (RELAX) and a performing long transformer model (the BART Longformer Encoder-Decoder, or BART-LED), in the hope of benefiting from the synergy between these components.", "In this section, we present the details of the proposed approach, including the reinforcement learning framework (Section 3.1), the multi-document coverage reward (Section 3.2), and the overall training objective (Section 3.3).", "Given a set of documents in input, simply noted as x , and a summary with T tokens, y = { y 1 , . . . , y T } , the predictive distribution, also known as policy in reinforcement learning, can be noted as p ( y t | y 1 , . . . , y t 1 , x ) .", "The policy gradient theorem (Sutton et al., 1999) states that an estimator for the gradient of the reinforcement learning risk can be expressed as: = r T (cid:88) t =1 log p ( y st | y s 1 , . . . , y st 1 , x ) (1) where y s 1 , . . . , y sT is a sequence sampled from the policy, r is a function that rewards its quality, and collectively denotes all the policy's parameters.", "This estimator is the well-known REINFORCE (Williams, 1992) and is a baseline of reinforcement learning.", "At its turn, the gradient can be easily turned into a loss function to be used with automatic differentiation: LREINFORCE = r T (cid:88) t =1 log p ( y st | y s 1 , . . . , y st 1 , x ) = r log p ( y s ) (2) The sampled sequence in (2), y s = { y s 1 , . . . , y sT } , can be obtained with any usual sampling approach such as teacher-forcing, student-forcing, or scheduled sampling (Bengio et al., 2015).", "While the samples can be drawn from a standard categorical distribution, in our experiments we utilize the Gumbel-Softmax re-parameterization (Jang et al., 2017) to obtain the categorical samples from transformed samples of a Gumbel distribution.", "The reason for the re-parameterization is that the Gumbel-Softmax samples are needed for the RELAX estimator that we introduce in the following.", "For a generic sample, y st , the re-parameterization can be concisely expressed as: y st = argmax ( z t ) z t Gumbel-Softmax ( p t , ) (3) where z t is a Gumbel-Softmax sample of size equal to that of the vocabulary that acts as a soft prediction, p t is the probability vector over the vocabulary at slot t , is a temperature parameter controlling the sparsity of z t , and argmax ( z t ) returns the index of z t 's largest value.", "This reparameterization is provenly equivalent to directly sampling y st from Cat ( p t ) (the reader can refer to Jang et al. 
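As an illustration of Eqs. (1)-(3), the following is a minimal PyTorch sketch of the REINFORCE loss computed on a Gumbel-Softmax sample; the per-step `logits` tensor and its [T, V] shape are our own assumptions for exposition, not the authors' released interface.

```python
# Hedged sketch of Eqs. (1)-(3): Gumbel-Softmax sampling of y^s and the
# REINFORCE loss -r * log p(y^s). Assumes `logits` of shape [T, V] holds the
# policy's per-step vocabulary logits (an assumption, not the paper's API).
import torch
import torch.nn.functional as F

def reinforce_loss(logits: torch.Tensor, reward: float, tau: float = 1.0):
    # Eq. (3): soft predictions z_t from the Gumbel-Softmax; hard tokens via argmax.
    z = F.gumbel_softmax(logits, tau=tau, hard=False)   # [T, V] soft samples
    y_s = z.argmax(dim=-1)                              # [T] sampled tokens y^s_t
    # log p(y^s_t | y^s_<t, x), gathered at the sampled token indices.
    log_p = F.log_softmax(logits, dim=-1)
    log_p_ys = log_p.gather(-1, y_s.unsqueeze(-1)).squeeze(-1).sum()
    # Eq. (2): the scalar reward scales the summed log-likelihood of the sample.
    return -reward * log_p_ys, y_s, z
```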
"REINFORCE is an unbiased estimator of the theoretical gradient, but it typically suffers from a high variance which can affect the convergence and effectiveness of training.", "To curb its high variance, techniques based on control variates and the subtraction of simple baselines have been proposed and even applied to summarization (Rennie et al., 2017; Paulus et al., 2018).", "However, our early experiments showed that these approaches were not promising for the given task.", "In addition, some of these estimators introduce a bias, i.e. a mean difference with respect to the theoretical gradient.", "More recently, the RELAX gradient estimator has been shown to empirically outperform REINFORCE, thanks to its ability to reduce the variance while remaining unbiased (Grathwohl et al., 2018).", "The corresponding RELAX loss can be expressed as: $L_{\text{RELAX}} = -[r - c_\phi(\tilde{z})] \log p(y^s) - c_\phi(z) + c_\phi(\tilde{z})$ (4).", "In (4), $c_\phi(\tilde{z})$ is a control variate with parameters $\phi$ which is expected to correlate tightly with the reward to reduce the variance, and the term $-c_\phi(z) + c_\phi(\tilde{z})$ ensures that the overall gradient remains an unbiased estimator of the theoretical gradient.", "Variable $z = \{z_1, \ldots, z_T\}$ denotes the sequence of the Gumbel-Softmax samples, while variable $\tilde{z}$ denotes the sequence of samples from a Gumbel-Softmax distribution conditioned on the observed values of $y^s$.", "Operationally, $z_t$ is sampled first, unconditionally, then $y^s_t$ is derived with the argmax, and finally $\tilde{z}_t$ is sampled from a suitably conditioned Gumbel-Softmax distribution; details can be found in Grathwohl et al. (2018), Appendix B (Categorical).", "Overall, the RELAX estimator is both unbiased and low-variance.", "The control variate in our experiments is a simple two-layer feed-forward network that is constructed to correlate with the ROUGE scoring function.", "We obtain this by feeding the concatenation of the soft predictions, $z$ (or, in turn, $\tilde{z}$), and the reference summary, $y$, as input to the control variate.", "This allows the model to learn to score the soft predictions and their targets in a way that mimics the ROUGE prediction-reference score.", "In detail, the architecture consists of two fully-connected linear layers, each followed by a ReLU activation function, and a final sigmoid activation function that normalizes the output of the last layer.", "Finally, the output of the sigmoid is averaged to produce the control variate.", "We release our code to permit complete reproducibility of our experiments: https://github.",
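The RELAX loss of Eq. (4) and the control variate described above can be sketched as follows; the hidden size, the exact activation placement, and the way the conditional sample z̃ is obtained are assumptions on our part (z̃ should be drawn from the conditioned Gumbel-Softmax as in Grathwohl et al. (2018)).

```python
# Hedged sketch of the RELAX loss (Eq. 4) and the two-layer control variate.
import torch

class ControlVariate(torch.nn.Module):
    """Scores concatenated soft predictions and reference embeddings.
    The in_dim/hidden sizes and ReLU-vs-sigmoid placement are assumptions
    (Section 5 of the paper reports hidden=256 for Multi-News)."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.fc1 = torch.nn.Linear(in_dim, hidden)
        self.fc2 = torch.nn.Linear(hidden, 1)

    def forward(self, z_and_ref: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc1(z_and_ref))
        # Sigmoid normalizes the last layer's output, then it is averaged.
        return torch.sigmoid(self.fc2(h)).mean()

def relax_loss(log_p_ys, reward, c_z, c_z_tilde):
    # [r - c_phi(z~)] acts as a constant coefficient of log p(y^s), so it is
    # detached; the -c_phi(z) + c_phi(z~) terms keep the estimator unbiased,
    # with gradients flowing through the differentiable Gumbel-Softmax samples.
    return -(reward - c_z_tilde).detach() * log_p_ys - c_z + c_z_tilde
```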
"The design of an effective reward is another key aspect of a reinforcement learning objective.", "In our work, we have aimed to design an overall reward that could simultaneously remain faithful to:", "a) the reference summary, to ensure adequate generation performance, and", "b) the input documents, to cover as many important details as possible, and hopefully, support generalization.", "Relying solely on the reference summaries, given the large input size, does not seem to promise sufficient guidance, and our experiments have confirmed that.", "To implement the reward, we have chosen to use ROUGE-L F1 for the references and a multi-document coverage score for the input documents that we describe hereafter.", "Several quantitative measures of coverage exist in the literature, and have found ample use in describing the properties of summarization datasets and the performance of models.", "For our work, we have adopted the extractive fragment coverage (EFC) of Grusky et al. (2018).", "The EFC measures the percentage of words in a summary that are part of extractive fragments within an input document, which are simply multi-word phrases shared between the input document and the summary.", "It is a simple precision-type measurement that looks at how much of the prediction is in the input document.", "Noting an individual document as $D$, a summary as $y$ and an extractive fragment as $f$, the EFC can be expressed as: $EFC(y, D) = \frac{1}{|y|}\sum_{f \in \mathcal{F}(y, D)} |f|$ (5), where the $|\cdot|$ operator is used to denote length.", "To promote an even improvement in coverage across the input documents, we propose a multi-document extension of the EFC that reaches its highest value when the coverage across the input documents is evenly distributed.", "Let us note the input document set here as $\mathcal{D}$, and the EFC coverage vector over the document set as $cov(y, \mathcal{D})$.", "We also note the sample mean of a vector $x$ as $\mu(x)$, the sample standard deviation as $\sigma(x)$, and their ratio (the inverse coefficient of variation) as $c^{-1}_v(x)$.", "This allows us to compute a normalized coverage score for a summary, $c^{-1}_v(cov(y, \mathcal{D}))$, which takes larger values the more the scores are uniform across the document set.", "In addition, inspired by Kryscinski et al. (2018), we define a reward that pits the normalized coverage score of the prediction, $y^s$, against that of the reference, $y$: $r_{cov} = \frac{c^{-1}_v(cov(y^s, \mathcal{D})) - c^{-1}_v(cov(y, \mathcal{D}))}{c^{-1}_v(cov(y^s, \mathcal{D}))}$ (6).", "Finally, to ensure that short summaries are not unfairly rewarded with high coverage scores, we normalize the reward by the length ratio of the prediction and the reference: $r_{cov} \leftarrow r_{cov}\,\frac{|y^s|}{|y|}$ (7).", "Overall, the $r_{cov}$ reward regards a prediction as good if it enjoys high average coverage of the input documents, the coverage is evenly distributed, and the prediction is of sufficient length.", "The reference summary acts as a baseline, making the reward additive if the prediction outperforms the reference, and subtractive if otherwise.", "Since ROUGE-L F1 and the coverage reward are not necessarily on the same scale, to obtain the final reward, $r$, we perform a convex combination with a scaling coefficient, $\beta$: $r = \text{ROUGE-L F1}(y^s, y) + \beta\,r_{cov}$ (8).", "3.3 Overall Training Objective", "As training strategy, we first train the model with the negative log-likelihood and choose the best model with a criterion based on the validation performance.", "After that, the model is fine-tuned with the reinforcement learning objective.", "In many past works, the reinforcement learning objective has been used mixed with the NLL for stability (Paulus et al., 2018; Li et al., 2019; Parnell et al., 2021).", "However, we assume that the model has already warmed up to the training data during its NLL pretraining stage, and only use either $L_{\text{REINFORCE}}$ (2) or $L_{\text{RELAX}}$ (4) for fine-tuning.", "To prevent excessive drifting from the NLL pre-trained model, we limit the fine-tuning to a few (1,000) shots and a relatively low learning rate ($3 \times 10^{-6}$).",
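A compact sketch of the reward computation in Eqs. (5)-(8) is given below; the extractive-fragment matching of Grusky et al. (2018) is abstracted away as precomputed fragment lengths, and the epsilon guards are our own numerical-stability choices.

```python
# Hedged NumPy sketch of the multi-document coverage reward (Eqs. 5-8).
import numpy as np

def efc(summary_len: int, fragment_lengths) -> float:
    # Eq. (5): share of summary words inside extractive fragments F(y, D);
    # computing the fragments themselves (Grusky et al., 2018) is omitted here.
    return sum(fragment_lengths) / max(summary_len, 1)

def inv_cv(scores: np.ndarray) -> float:
    # c_v^{-1} = mean/std: larger when coverage is both high and uniform.
    return float(np.mean(scores)) / (float(np.std(scores)) + 1e-8)

def final_reward(cov_pred, cov_ref, len_pred, len_ref,
                 rouge_l_f1: float, beta: float = 1.0) -> float:
    a = inv_cv(np.asarray(cov_pred))   # normalized coverage of the prediction
    b = inv_cv(np.asarray(cov_ref))    # reference coverage acts as a baseline
    r_cov = (a - b) / (a + 1e-8)                 # Eq. (6)
    r_cov *= len_pred / max(len_ref, 1)          # Eq. (7): length ratio
    return rouge_l_f1 + beta * r_cov             # Eq. (8)
```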
"We have carried out multiple experiments over two MDS datasets in the news domain: Multi-News (Fabbri et al., 2019) and Wikipedia Current Events Portal (WCEP) (Gholipour Ghalandari et al., 2020).", "For WCEP, we specifically use the WCEP-100 version, which exclusively limits the number of articles within a document set to 100.", "We have chosen these datasets as they cover an ample spread of summary lengths and numbers of input documents, with Multi-News having longer reference summaries on average.", "Appendix A.2 reports the datasets' main statistics as presented in the original papers (Fabbri et al., 2019; Gholipour Ghalandari et al., 2020).", "Like most previous works, we use the F1 variants of the ROUGE-N scores (Lin, 2004) for performance evaluation.", "In our use of ROUGE, we choose not to stem the predictions and the references during scoring.", "Since we use the ROUGE-L F1 score in our reward, to avoid circularity we also include METEOR (Lavie and Agarwal, 2007) in the performance evaluation.", "Unlike our ROUGE implementation, METEOR uses stemming, synonyms, and other paraphrastic matching in the n-gram matching stage.", "In a recent study, both ROUGE and METEOR have displayed high correlation with a number of desirable summarization properties such as coherence, consistency, fluency, and relevance (Fabbri et al., 2021).", "We have implemented our approach on top of BART-LED (Beltagy et al., 2020).", "We utilize the generous maximum encoding length (16384 tokens) of this long-input transformer, by concatenating all the documents in a document set to form a single input to the model.", "The individual documents are separated by an [END] token, and the input is truncated to the maximum length.", "For every experiment, we report the average of three independently-initialized training runs.", "For each result, we have also run a nonparametric bootstrap test for statistical significance, and highlighted the results that are significantly different from the baseline.", "In the reward, the hyperparameter $\beta$ has been set to 1.0 with a validation described in Appendix A.3.", "All other hyperparameters are described in Appendix A.1.", "Multi-News .", "Table 1 compares the results over the Multi-News test set for the baseline, our proposed approaches and previous work from the literature.", "We first note that our BART-LED model has performed as a strong baseline, with its results being comparable to those of BART-Long (Pasunuru et al., 2021), which is based on the same BART Longformer architecture.", "In detail, BART-Long has reported a higher ROUGE-1 score, our baseline has reported a higher ROUGE-L score, and both have reported similar ROUGE-2 scores.", "Therefore, we regard our performance as comparable on the whole, with the differences most likely due to different hyperparameters.", "Amongst our results, the models fine-tuned with REINFORCE have achieved worse results than the baseline.", "This is evidence that a vanilla implementation of the policy gradient is not necessarily better than a standard NLL objective.", "Conversely, the models fine-tuned with RELAX have surpassed both the NLL baseline and virtually all the previous work.", "The best results have been achieved with the inclusion of the coverage term, with an improvement of +0.36 ROUGE-2 pp over the NLL baseline and a marked improvement of +0.92 METEOR pp.", "In addition, both results have reported a p-value < 0.01.", "These results give evidence to both the improved performance provided by the RELAX gradient estimator and the usefulness of the coverage term.", "In Appendix B, we also provide a qualitative example which shows that the increase in METEOR score is most likely given by the positive impact of the coverage term, which has allowed the model to retrieve relevant phrases from the input documents.", "WCEP .", "Table 2 shows the results over the WCEP test set.", "The trend is similar to that over Multi-News, but the improvements with the proposed models have been even more pronounced.",
"In the first place, the NLL baseline has set a very strong performance compared to the previous work, showing the full potential of a long-input model such as the Longformer for MDS.", "As for Multi-News, the best results have been achieved with the RELAX gradient estimator, with improvements of up to +1.32 ROUGE-1 pp and +3.17 METEOR pp over the NLL baseline.", "The inclusion of the coverage term with RELAX has not been able to increase the ROUGE scores, but has increased METEOR by +1.64 pp.", "Again, we attribute this to the model's improved coverage of the input documents, which leads to an increased number of matches under METEOR's more relaxed matching scheme.", "A qualitative example is discussed in Appendix B.", "5 Analysis", "In this section, we present a more detailed analysis of the impact of the coverage term, the few-shot fine-tuning, and the RELAX gradient estimator using the Multi-News validation set as reference.", "For a further insight into the coverage reward, we also include an analysis of its trajectory during training.", "All the selected hyperparameters are listed in Appendix A.1.", "Our rationale for including a coverage term in the reward is to ensure coverage of the input documents beyond what can be driven by the reference summaries alone.", "We note that this may or may not translate into an improvement of the evaluation metrics, but it seems to add intrinsic value to the summaries nevertheless.", "For this reason, we further analyze the impact of the coverage term hereafter.", "Figure 1 shows the average EFC coverage (5) for the documents in the input sets, indexed by the document position in the set (first, second, etc.).", "The figure shows that the inclusion of the coverage term with RELAX has led to a marked increase of the coverage, almost evenly distributed across all the documents in the input set.", "In particular, the document in the last position has achieved the largest coverage improvement.", "In turn, Figure 2 shows the average ROUGE score for the documents in the input sets, obtained by averaging the ROUGE-1, ROUGE-2, and ROUGE-L scores computed between the predicted summary and the document (NB: not the reference summary).", "The figure shows that the improvements in ROUGE score across the document set are similar to those in EFC coverage, rather evenly distributed, and with an improvement of over +4 pp for the document in the last position.", "This is further evidence that the normalized coverage reward (7) is able to drive the model towards predictions that cover the input set more uniformly.", "To explore the behavior of the few-shot fine-tuning, we compare the validation-set performance on Multi-News with varying numbers of training examples, from 10 to 2000.", "The model's configuration is the best one, with RELAX and the coverage term in the reward.", "Table 3 shows that the performance is the highest with 1000 examples, and starts to drop beyond this number.", "This is an important observation, as it shows that the reinforcement learning objective may lead to undesirable parameterizations beyond a point, and that the number of fine-tuning samples has to be treated as a hyperparameter.", "The RELAX gradient estimator introduces two new hyperparameters: the temperature parameter, $\tau$, and the control variate, $c_\phi$.", "Hereafter, we discuss their impact and design.", "Temperature parameter .", "The RELAX gradient estimator uses a temperature parameter, $\tau$, in the Gumbel-Softmax sampling (3).", "This parameter 
is maintained in log scale for convenience and is learnable alongside all other parameters; yet, its initial value can have a significant impact on the final model.", "To explore its behavior, Figure 3 shows the trajectory of the parameter $\log \tau$ over 1000 Multi-News training steps for different initializations (0.25, 0.5 and 1.0).", "The trajectories show that, irrespective of its initial value, $\log \tau$ converges to a stable value within approximately 400 training steps.", "For the initializations at 0.25 and 1.0, within the first 200-300 training steps $\log \tau$ drifts significantly (approximately 0.25 units) from its initial value.", "Conversely, with the intermediate initialization at 0.5, the value remains substantially stable over the whole trajectory.", "Since limiting drift during fine-tuning is generally desirable, we have initialized $\log \tau$ to 0.5 in all experiments.", "Control variate size .", "Many different architectures could be used for the control variate, but given our choice described in Section 3.1, the main parameter is the feed-forward layers' hidden size.", "To explore its impact, Table 4 shows the average values of the ROUGE score and the coverage score over the Multi-News validation set with different hidden sizes (128, 256, and 512).", "The ROUGE score is computed between the prediction and the reference and is the average of ROUGE-1/2/L, while the coverage score is the average EFC of all the input documents.", "Figure 3: Trajectory of the $\log \tau$ temperature parameter over 1000 Multi-News training steps for different initializations.", "The values in Table 4 show that, the larger the control variate, the more the model is able to increase the coverage score.", "However, the average ROUGE score drops beyond a size of 256.", "We speculate that this behavior is due to the larger scale of the coverage reward, as by providing more capacity to the network, we allow the control variate to increasingly correlate with the multi-document coverage reward rather than the ROUGE reward.", "To strike a satisfactory trade-off, we have therefore chosen 256 as the hidden size for all experiments with Multi-News, and carried out an equivalent selection for WCEP.", "In a reinforcement learning framework, it could be useful to monitor the value of the reward over the training steps.", "Typically, the reward should exhibit an upward trajectory, since the reward should tend to increase as the model learns to make better predictions.", "In our case, we look to explore the impact of our coverage reward on the coverage distribution over the input documents.", "In particular, we want to verify whether the coverage reward is able to promote predictions that cover the input documents more evenly, which should translate into a decreased standard deviation.", "To this aim, Figure 4 shows a plot of the standard deviation of the coverage scores (EFC) across the input document set against the training step.", "The trajectories show that both REINFORCE and RELAX have been able to decrease the standard deviation of the predictions to approximately 0.05 units from initial values of 0.08-0.09.", "The drop in standard deviation occurs quite quickly during training, coinciding with the improvement in the reward value of the predictions.", "Comparing REINFORCE with RELAX also shows that RELAX has been able to achieve lower standard deviation values throughout the training, with the exception of the very start.", "In this paper, we have proposed fine-tuning a multi-document summarization model with a reward that 
balances the use of the reference summaries with the coverage of the input documents within a reinforcement learning framework.", "The rationale for the proposed reward is that the reference summaries alone may not be sufficient for an effective fine-tuning of the model in the presence of very large inputs such as those typical of MDS datasets.", "Another key component of the proposed approach is the use of a modern gradient estimator of the policy gradient, RELAX.", "The experimental results over two news-based MDS datasets, Multi-News and WCEP, have shown that the proposed approach has been able to achieve a marked improvement of ROUGE and METEOR scores compared to its NLL-pretrained baseline, and prove competitive against most existing approaches.", "In addition, the proposed approach has been able to increase the coverage of the input documents, and evenly across the entire document set.", "As future work, we aim to explore ways to prevent or mitigate the model's drift with larger numbers of training steps, and explore alternative architectures and configurations for the control variate of the RELAX estimator." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "other", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "objective", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective" ]
[ "Although deep neural networks are effective at extracting high-level features, classification methods usually encode an input into a vector representation via simple feature aggregation operations (e.g. pooling).", "Such operations limit the performance.", "For instance, a multi-label document may contain several concepts.", "In this case, one vector can not sufficiently capture its salient and discriminative content.", "Thus, we propose Hyperbolic Capsule Networks (HYPERCAPS) for Multi-Label Classification (MLC), which have two merits.", "First, hyperbolic capsules are designed to capture fine-grained document information for each label, which has the ability to characterize complicated structures among labels and documents.", "Second, Hyperbolic Dynamic Routing (HDR) is introduced to aggregate hyperbolic capsules in a label-aware manner, so that the label-level discriminative information can be preserved along the depth of neural networks.", "To efficiently handle large-scale MLC datasets, we additionally present a new routing method to adaptively adjust the capsule number during routing.", "Extensive experiments are conducted on four benchmark datasets.", "Compared with the state-of-the-art methods, HYPERCAPS significantly improves the performance of MLC, especially on tail labels.", "The main difference between Multi-Class Classification (MCC) and Multi-Label Classification (MLC) is that datasets in MCC have only several mutually exclusive classes, while datasets in MLC contain many more correlated labels.", "MLC allows label co-occurrence in one document, which indicates that the labels are not disjointed.", "In addition, a large fraction of the labels are the infrequently occurring tail labels (Bhatia et al., 2015), which is also referred to as the power-law label distribution.", "Figure 1 illustrates the label distribution of EUR-LEX57K (Chalkidis et al., 2019).", "A multi-label document usually has several head and tail labels, and hence contains several concepts about both its head and tail labels simultaneously.", "Recent works for text classification, such as CNN-KIM (Kim, 2014) and FASTTEXT (Joulin et al., 2017), focus on encoding a document into a fixed-length vector as the distributed document representation (Le and Mikolov, 2014).", "These encoding-based deep learning methods use simple operations (e.g. 
pooling) to aggregate features extracted by neural networks and construct the document vector representation.", "A Fully-Connected (FC) layer is usually applied upon the document vector to predict the probability of each label.", "And each row in its weight matrix can be interpreted as a label vector representation (Du et al., 2019b).", "In this way, the label probability can be predicted by computing the dot product between label and document vectors, which is proportional to the scalar projection of the label vector onto the document vector, as shown in Figure 2.", "For example, the label movie should have the largest scalar projection onto a document about movies.", "Figure 2: Illustration of the FC layer in the encoding-based methods.", "However, even if the learned label representation of music can be distinguished from movie, it may also have a large scalar projection onto the document.", "Moreover, multi-label documents always contain several concepts about multiple labels, such as a document about sport movies.", "However, the document vector representation is identical for all the labels, and training instances for tail labels are inadequate compared to head labels.", "The imbalance between head and tail labels makes it hard for the FC layer to make predictions, especially on tail labels.", "In this case, one vector can not sufficiently capture its salient and discriminative content.", "Therefore, the performance of constructing the document vector representation via simple aggregation operations is limited for MLC.", "Capsule networks (Sabour et al., 2017; Yang et al., 2018a) have recently been proposed to use dynamic routing in place of pooling and have achieved better performance for classification tasks.", "In fact, capsules are fine-grained features compared to the distributed document representation, and dynamic routing is a label-aware feature aggregation procedure.", "(Zhao et al., 2019) improves the scalability of capsule networks for MLC.", "However, they only use CNN to construct capsules, which capture local contextual information (Wang et al., 2016).", "Effectively learning the document information about multiple labels is crucial for MLC.", "Thus we propose to connect CNN and RNN in parallel to capture both local and global contextual information, which would be complementary to each other.", "Nevertheless, Euclidean capsules necessitate designing a non-linear squashing function.", "Inspired by hyperbolic representation learning methods, which demonstrate that the hyperbolic space has more representation capacity than the Euclidean space (Nickel and Kiela, 2017; Ganea et al., 2018a), Hyperbolic Capsule Networks (HYPERCAPS) are proposed.", "Capsules are constrained in the hyperbolic space, which does not require the squashing function.", "Hyperbolic Dynamic Routing (HDR) is introduced to aggregate hyperbolic capsules in a label-aware manner.", "Moreover, in order to fit the large label set of MLC and improve the scalability of HYPERCAPS, adaptive routing is presented to adjust the number of capsules participating in the routing procedure.", "The main contributions of our work are therefore summarized as follows: We propose to connect CNN and RNN in parallel to simultaneously extract local and global contextual information, which would be complementary to each other.", "HYPERCAPS with HDR are formulated to aggregate features in a label-aware manner, and hyperbolic capsules benefit from the representation capacity of the hyperbolic space.", "Adaptive routing is furthermore presented to 
improve the scalability of HYPERCAPS and fit the large label set of MLC.", "Extensive experiments on four benchmark MLC datasets demonstrate the effectiveness of HYPERCAPS, especially on tail labels.", "In order to make neural networks work in the hyperbolic space, the formalism of the Möbius gyrovector space is adopted (Ganea et al., 2018b).", "An $n$-dimensional Poincaré ball $\mathbb{B}^n$ is a Riemannian manifold defined as $\mathbb{B}^n = \{x \in \mathbb{R}^n \mid \|x\| < 1\}$, with its tangent space around $p \in \mathbb{B}^n$ denoted as $T_p\mathbb{B}^n$ and the conformal factor as $\lambda_p := \frac{2}{1 - \|p\|^2}$.", "The exponential map $\exp_p : T_p\mathbb{B}^n \to \mathbb{B}^n$ for $w \in T_p\mathbb{B}^n \setminus \{0\}$ is consequently defined as $\exp_p(w) = p \oplus \big(\tanh\big(\frac{\lambda_p}{2}\|w\|\big)\frac{w}{\|w\|}\big)$.", "To work with hyperbolic capsules, Möbius operations in the Poincaré ball also need to be formulated.", "Möbius addition for $u, v \in \mathbb{B}^n$ is defined as $u \oplus v = \frac{(1 + 2\langle u, v\rangle + \|v\|^2)\,u + (1 - \|u\|^2)\,v}{1 + 2\langle u, v\rangle + \|u\|^2\|v\|^2}$ (2), where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product.", "Thus Möbius summation can be formulated as $\bigoplus_{i=m}^{n} p_i = p_m \oplus \cdots \oplus p_n$, $p_i \in \mathbb{B}^n$ (3).", "Möbius scalar multiplication for $k \in \mathbb{R}$ and $p \in \mathbb{B}^n \setminus \{0\}$ is defined as $k \otimes p = \tanh\big(k \tanh^{-1}(\|p\|)\big)\frac{p}{\|p\|}$ (4).", "And $k \otimes p = 0$ when $p = 0 \in \mathbb{B}^n$.", "The definition of Möbius matrix-vector multiplication for $M \in \mathbb{R}^{m \times n}$ and $p \in \mathbb{B}^n$ when $Mp \neq 0$ is as follows: $M \otimes p = \tanh\big(\frac{\|Mp\|}{\|p\|}\tanh^{-1}(\|p\|)\big)\frac{Mp}{\|Mp\|}$ (5).", "And $M \otimes p = 0$ when $Mp = 0$.",
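These operations translate directly into code; below is a self-contained PyTorch sketch of Eqs. (2), (4), (5) and the exponential map, where the EPS clamping is our own numerical-stability assumption rather than part of the formal definitions.

```python
# Hedged PyTorch sketch of the Mobius operations in the Poincare ball.
import torch

EPS = 1e-5

def mobius_add(u, v):                                   # Eq. (2)
    uv = (u * v).sum(-1, keepdim=True)
    uu = (u * u).sum(-1, keepdim=True)
    vv = (v * v).sum(-1, keepdim=True)
    num = (1 + 2 * uv + vv) * u + (1 - uu) * v
    return num / (1 + 2 * uv + uu * vv).clamp_min(EPS)

def mobius_scalar_mul(k, p):                            # Eq. (4)
    norm = p.norm(dim=-1, keepdim=True).clamp(EPS, 1 - EPS)
    return torch.tanh(k * torch.atanh(norm)) * p / norm

def mobius_matvec(M, p):                                # Eq. (5), M: [m, n]
    Mp = p @ M.t()
    p_n = p.norm(dim=-1, keepdim=True).clamp(EPS, 1 - EPS)
    Mp_n = Mp.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.tanh(Mp_n / p_n * torch.atanh(p_n)) * Mp / Mp_n

def exp_map(p, w):
    # exp_p(w) = p (+) (tanh(lambda_p / 2 * ||w||) * w / ||w||)
    lam = 2.0 / (1 - (p * p).sum(-1, keepdim=True)).clamp_min(EPS)
    w_n = w.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return mobius_add(p, torch.tanh(lam / 2 * w_n) * w / w_n)
```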
, u L } .", "Bidirectional GRU (Chung et al., 2014) is adopted to incorporate forward and backward global contextual information and construct the global hyperbolic capsules.", "Forward and backward hidden states at time-step t are obtained by h t = GRU ( h t 1 , e t ) , h t = GRU ( h t +1 , e t ) .", "Each of the total 2 T hidden states can be taken as a global hyperbolic capsule using the exponential map, i.e. g t = exp 0 ( h t ) , and equally for the backward capsules.", "The global hyperbolic capsule set is denoted as { u 1 , . . . , u G } .", "As discussed in (Zhao et al., 2019), the routing procedure is computational expensive for a large number of capsules.", "Compressing capsules into a smaller amount can not only relieve the computational complexity, but also merge similar capsules and remove outliers.", "Therefore, hyperbolic compression layer is introduced.", "Each compressed local hyperbolic capsule is calculated as a weighted Mobius summation over all the local hyperbolic capsules.", "For instance, u l = M u k { u 1 ,..., u L } r k u k B d , (9) where r k is a learnable weight parameter.", "And likewise for compressing global hyperbolic capsules.", "Let set { u 1 , . . . , u P } denote the compressed local and global hyperbolic capsules together, which are then aggregated in a label-aware manner via HDR.", "The purpose of Hyperbolic Dynamic Routing (HDR) is to iteratively aggregate local and global hyperbolic capsules into label-aware hyperbolic capsules, whose activations stand for probabilities of the labels.", "With the acquirement of the compressed local and global hyperbolic capsule set { u 1 , . . . , u P } in layer (cid:96) , let { v 1 , . . . , v Q } denote the label-aware hyperbolic capsule set in the next layer (cid:96) +1 , where Q equals to the number of labels.", "Following (Sabour et al., 2017), the compressed hyperbolic capsules are firstly transformed into a set of prediction capsules { u j | 1 , . . . , u j | P } for the j -th label-aware capsule, each of them is calculated by u j | i = W ij u i B d , (10) where W ij is a learnable parameter.", "Then v j is calculated as a weighted Mobius summation over all the prediction capsules by v j = M u j | i { u j | 1 ,..., u j | P } c ij u j | i , (11) where c ij denotes the coupling coefficient that indicates the connection strength between u j | i and v j .", "The coupling coefficient c ij is iteratively updated during the HDR procedure and computed by the routing softmax c ij = exp( b ij ) (cid:80) k exp( b ik ) , (12) where the logits b ij are the log prior probabilities between capsule i and j , which are initialized as 0 .", "Once the label-aware hyperbolic capsules are produced, each b ij is then updated by b ij = b ij + K ( d B ( v j , u j | i )) , (13) where d B ( , ) denotes the Poincare distance , which can be written as d B ( u , v ) = cosh 1 (1 + 1 2 u v (cid:107) u v (cid:107) 2 ) .", "(14)", "And K is a Epanechnikov kernel function (Wand and Jones, 1994) with K = (cid:40) x, x [0 , ) 0 , x (15) where is the maximum Poincare distance between two points in the Poincare ball, which is d B ( p , 0 ) with (cid:107) p (cid:107) = 1 (cid:15) ( (cid:15) = 10 5 ) to avoid numerical errors. HDR is summarized in Algorithm 1. Different from the routing procedure described in (Sabour et al., 2017), HDR does not require the squashing function since all the hyperbolic capsules are constrained in the Poincare ball. 
"4.2 Adaptive Routing", "The large number of labels in MLC is one major source of the computational complexity of the routing procedure.", "Since most of the labels are unrelated to a document, calculating the label-aware hyperbolic capsules for all the unrelated labels is redundant.", "Therefore, an encoding-based adaptive routing layer is used to efficiently decide the candidate labels for the document.", "The adaptive routing layer produces the candidate probability of each label by $c = \sigma\big(W_c \frac{1}{T}\sum_{e_i \in E} e_i + b_c\big)$ (16), where $\sigma$ denotes the Sigmoid function.", "$W_c$ and the bias $b_c$ are learnable parameters updated by minimizing the binary cross-entropy loss (Liu et al., 2017) $L_c = -\sum_{j=1}^{Q}\big(y_j \log(c_j) + (1 - y_j)\log(1 - c_j)\big)$ (17), where $c_j \in [0, 1]$ is the $j$-th element in $c$ and $y_j \in \{0, 1\}$ denotes the ground truth about label $j$.", "The adaptive routing layer selects the candidate labels during test.", "Label-aware hyperbolic capsules are then constructed via HDR to predict probabilities of these candidate labels.", "During the training process, negative sampling is used to improve the scalability of HYPERCAPS.", "Let $N^+$ denote the true label set and $N^-$ denote the set of randomly selected negative labels; the loss function is derived as $L_f = -\big(\sum_{j \in N^+}\log(a_j) + \sum_{j \in N^-}\log(1 - a_j)\big)$ (18), where $a_j = \sigma(d_{\mathbb{B}}(v_j, 0))$ is the activation of the $j$-th label-aware capsule, which is proportional to the distance from the origin of the Poincaré ball.",
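The adaptive routing probability of Eq. (16) and the two losses of Eqs. (17)-(18) can be sketched as below; W_c, b_c and the positive/negative index sets are illustrative assumptions.

```python
# Hedged sketch of the adaptive routing scores and losses (Eqs. 16-18).
import torch

def candidate_probs(E, W_c, b_c):
    # Eq. (16): sigmoid of a linear map over the mean word embedding.
    return torch.sigmoid(W_c @ E.mean(dim=0) + b_c)

def adaptive_routing_loss(c, y):
    # Eq. (17): binary cross-entropy over all Q labels.
    return -(y * torch.log(c + 1e-8)
             + (1 - y) * torch.log(1 - c + 1e-8)).sum()

def capsule_loss(a, pos_idx, neg_idx):
    # Eq. (18): a_j = sigmoid(d_B(v_j, 0)); positives N+ and sampled negatives N-.
    return -(torch.log(a[pos_idx] + 1e-8).sum()
             + torch.log(1 - a[neg_idx] + 1e-8).sum())
```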
"The proposed HYPERCAPS is evaluated on four benchmark datasets with label numbers varying from 54 to 4,271.", "We compare with the state-of-the-art methods in terms of widely used metrics.", "Performance on tail labels is also compared to demonstrate the superiority of HYPERCAPS for MLC.", "An ablation test is also carried out to analyse the contribution of each component of HYPERCAPS.", "Datasets Experiments are carried out on four publicly available MLC datasets, including the small-scale AAPD (Yang et al., 2018b) and RCV1 (Lewis et al., 2004), and the large-scale ZHIHU (https://www.biendata.com/competition/zhihu/data/) and EUR-LEX57K (Chalkidis et al., 2019).", "Labels are divided into head and tail sets according to their number of training instances, i.e. labels that have less than the average number of training instances are assigned to the tail label set.", "Their statistics can be found in Table 1.", "Table 1: Statistics of the datasets: $N_{train}$ and $N_{test}$ are the numbers of training and test instances, $W_{train}$ and $W_{test}$ are their average word numbers, $\bar{L}$ is the average label number per instance, $\bar{I}$ is the average number of training instances per label, $\#H$ and $\#T$ are the numbers of head and tail labels, $\bar{I}_H$ and $\bar{I}_T$ are their average numbers of training instances respectively.
Dataset | N_train | N_test | W_train | W_test | L | I | #H | I_H | #T | I_T
AAPD | 49,356 | 6,484 | 163.34 | 164.14 | 2.41 | 2,199.03 | 17 | 5,002.23 | 37 | 911.08
RCV1 | 23,149 | 781,265 | 259.47 | 269.23 | 3.21 | 715.50 | 27 | 2,209.44 | 76 | 184.76
ZHIHU | 2,699,969 | 299,997 | 38.14 | 35.56 | 2.32 | 3,165.92 | 442 | 7,144.31 | 1,557 | 2,036.54
EUR-LEX57K | 51,000 | 6,000 | 726.46 | 725.37 | 5.06 | 53.45 | 711 | 273.72 | 3,560 | 9.46", "Evaluation metrics We use the rank-based evaluation metrics which have been widely adopted for MLC tasks (Bhatia et al., 2015; Liu et al., 2017), i.e. Precision@k (P@k for short) and nDCG@k, which are respectively defined as $P@k = \frac{1}{k}\sum_{j \in \text{rank}_k(a)} y_j$ (19) and $nDCG@k = \frac{\sum_{j \in \text{rank}_k(a)} y_j / \log(j+1)}{\sum_{j=1}^{\min(k, \|y\|_0)} 1/\log(j+1)}$ (20), where $y_j \in \{0, 1\}$ denotes the ground truth about label $j$, $\text{rank}_k(a)$ denotes the indices of the candidate label-aware hyperbolic capsules with the $k$ largest activations in descending order, and $\|y\|_0$ is the true label number for the document instance.", "The final results are averaged over all the test instances.",
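For concreteness, here is a sketch of the two metrics in Eqs. (19)-(20) for a single test instance; the base-2 logarithm and the 1-based ranks inside the discount are our assumptions, as the equations do not fix the log base.

```python
# Hedged NumPy sketch of P@k and nDCG@k (Eqs. 19-20).
import numpy as np

def precision_at_k(a, y, k):
    topk = np.argsort(-a)[:k]                  # rank_k(a): top-k activations
    return float(y[topk].sum()) / k            # Eq. (19)

def ndcg_at_k(a, y, k):
    topk = np.argsort(-a)[:k]
    dcg = sum(y[j] / np.log2(r + 2) for r, j in enumerate(topk))
    ideal = sum(1.0 / np.log2(r + 2) for r in range(min(k, int(y.sum()))))
    return dcg / ideal if ideal > 0 else 0.0   # Eq. (20)
```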
"Baselines To demonstrate the effectiveness of HYPERCAPS on the benchmark datasets, six comparative text classification methods are chosen as the baselines.", "FASTTEXT (Joulin et al., 2017) is a representative encoding-based method which uses average pooling to construct document representations and an MLP to make the predictions.", "SLEEC (Bhatia et al., 2015) is a typical label-embedding method for MLC, which uses k-nearest neighbors search to predict the labels.", "XML-CNN (Liu et al., 2017) employs CNN as a local n-gram feature extractor and a dynamic pooling technique as the aggregation method.", "SGM (Yang et al., 2018b) applies the seq2seq model with an attention mechanism, which captures the global contextual information.", "REGGNN (Xu et al., 2019) uses a combination of CNN and LSTM with a dynamic gate that controls the information from these two parts.", "NLP-CAP (Zhao et al., 2019) is a capsule-based approach for MLC, which reformulates the routing algorithm.", "NLP-CAP uses only CNN to construct capsules, and it applies the squashing function onto capsules.", "Implementation Details All the words are converted to lower case and padding is used to handle the various lengths of the text sequences.", "The maximum length of AAPD, RCV1 and EUR-LEX57K is set to 500, while the maximum length of ZHIHU is 50.", "To compose the word vector representations, pre-trained 300-dimensional GLOVE (Pennington et al., 2014) word embeddings are used for AAPD, RCV1 and EUR-LEX57K, while ZHIHU uses its specified 256-dimensional word embeddings.", "The dimension of the Poincaré ball is set to 32 with a radius $1 - \epsilon$ ($\epsilon = 10^{-5}$) to avoid numerical errors.", "Multiple one-dimensional convolutional kernels (with window sizes of 2, 4, 8) are applied in the local hyperbolic capsule layer.", "The number of compressed local and global hyperbolic capsules is 128.", "The adaptive routing layer is not applied on the small-scale datasets AAPD and RCV1.", "The maximum candidate label number is set to 200 for the large-scale datasets ZHIHU and EUR-LEX57K.", "For the baselines, hyperparameters recommended by their authors are adopted.", "The proposed HYPERCAPS is evaluated on the four benchmark datasets by comparing with the six baselines in terms of P@k and nDCG@k with k = 1, 3, 5.", "Results on all the labels averaged over the test instances are shown in Table 2.", "nDCG@1 is omitted since it gives the same value as P@1.", "It is notable that HYPERCAPS obtains competitive results on the four datasets.", "The encoding-based FASTTEXT is generally inferior to the other baselines as it applies average pooling, which ignores word order for the construction of document representations.", "The typical MLC method SLEEC takes advantage of label correlations by embedding the label co-occurrence graph.", "However, SLEEC uses TF-IDF vectors to represent documents, thus word order is also ignored.", "XML-CNN uses a dynamic pooling technique to aggregate the local contextual features extracted by CNN, while SGM uses an attention mechanism to aggregate the global contextual features extracted by LSTM.", "REGGNN is generally superior to both of them as it combines the local and global contextual information dynamically and takes label correlations into consideration using a regularized loss.", "However, the two capsule-based methods NLP-CAP and HYPERCAPS consistently outperform all the other methods owing to dynamic routing, which aggregates the fine-grained capsule features in a label-aware manner.", "Moreover, NLP-CAP only uses CNN to extract the local contextual information, while HYPERCAPS benefits from the parallel combination of local and global contextual information.", "In addition, NLP-CAP applies the non-linear squashing function for capsules in the Euclidean space, while HDR is designed for hyperbolic capsules, which take advantage of the representation capacity of the hyperbolic space.", "Therefore, HYPERCAPS outperforms NLP-CAP as expected.", "This result further confirms that the proposed HYPERCAPS with HDR is effective to learn the label-aware hyperbolic capsules for MLC.", "In MLC, tail labels have low occurrence frequency and hence are hard to predict compared to head labels.", "The performance on tail labels of the four benchmark datasets is evaluated in terms of nDCG@k with k = 1, 3, 5.", "Figure 4 shows the results of the five deep learning based MLC methods, i.e. 
XML-CNN, SGM, REGGNN, NLP-CAP and HYPERCAPS.", "nDCG@1 is smaller than nDCG@3 on AAPD, RCV1 and ZHIHU since most of their test instances contain fewer than three tail labels.", "It is remarkable that HYPERCAPS outperforms all the other methods on tail labels.", "REGGNN takes advantage of the local and global contextual information and label correlations, thus it outperforms XML-CNN and SGM.", "The two capsule-based methods NLP-CAP and HYPERCAPS are both superior to the other methods, which indicates that the label-aware dynamic routing is effective for the prediction on tail labels.", "In addition, the fact that HYPERCAPS significantly improves the prediction performance compared to NLP-CAP implies that the representation capacity of the hyperbolic space and the combination of local and global contextual information are helpful for learning on tail labels.", "The results demonstrate the superiority of the proposed HYPERCAPS on tail labels for MLC.", "An ablation test would be informative to analyze the effect of varying different components of the proposed HYPERCAPS, which can be taken apart as local Euclidean capsules only (denoted as L), global Euclidean capsules only (denoted as G), a combination of the local and global Euclidean capsules (denoted as L + G), and a combination of the local and global hyperbolic capsules (denoted as L + G + H).", "Euclidean capsules (in L, G and L + G) are aggregated via the original dynamic routing (Sabour et al., 2017), while hyperbolic capsules (in L + G + H) are aggregated via our HDR.", "Figure 5 shows the results on EUR-LEX57K in terms of P@k with k = 1, 3, 5.", "In order to make the comparison fair, the number of total compressed capsules is equally set to 256 for all the four models.", "Adaptive routing is also applied with the maximum candidate label number set equally to 200.", "Generally, the proposed combination of local and global contextual information contributes to the effectiveness of the model (L + G).", "Therefore, it is practical to combine the local and global contextual information via dynamic routing.", "HDR furthermore improves the performance by making use of the representation capacity of the hyperbolic space.", "Overall, each of the components benefits the performance of HYPERCAPS for MLC.", "In summary, extensive experiments are carried out on four MLC benchmark datasets with various scales.", "The results demonstrate that the proposed HYPERCAPS can achieve competitive performance compared with the baselines.", "In particular, the effectiveness of HYPERCAPS is shown on tail labels.", "The ablation test furthermore confirms that the combination of local and global contextual information is practical and HYPERCAPS benefits from the representation capacity of the hyperbolic space.", "Multi-label classification (MLC) aims at assigning multiple relevant labels to one document.", "The MLC label set is large compared to Multi-Class Classification (MCC).", "Besides, the correlations of labels (e.g. 
hierarchical label structures (Banerjee et al., 2019)) and the existence of tail labels make MLC a hard task (Bhatia et al., 2015).", "As data sparsity and scalability issues arise with the large number of labels, XML-CNN (Liu et al., 2017) employs CNN as an efficient feature extractor, whereas it ignores label correlations, which are often used to deal with tail labels.", "The traditional MLC method SLEEC (Bhatia et al., 2015) makes use of label correlations by embedding the label co-occurrence graph.", "The seq2seq model SGM (Yang et al., 2018b) uses the attention mechanism to consider the label correlations, while REGGNN (Xu et al., 2019) applies a regularized loss specified for label co-occurrence.", "REGGNN additionally chooses to dynamically combine the local and global contextual information to construct document representations.", "Capsule networks are recently proposed to address the representation limitations of CNN and RNN.", "The concept of capsule is first introduced by (Hinton et al., 2011).", "(Sabour et al., 2017) replaces the scalar output features of CNN with vector capsules and pooling with dynamic routing.", "(Hinton et al., 2018) proposes the EM algorithm based routing procedure between capsule layers.", "(Gong et al., 2018) proposes to regard dynamic routing as an information aggregation procedure, which is more effective than pooling.", "(Yang et al., 2018a) and (Du et al., 2019a) investigate capsule networks for text classification.", "(Zhao et al., 2019) then presents a capsule compression method and reformulates the routing procedure to fit for MLC.", "Our work is different from the predecessors as we design the Hyperbolic Dynamic Routing (HDR) to aggregate the parallel combination of local and global contextual information in the form of hyperbolic capsules, which are constrained in the hyperbolic space without the requirement of a non-linear squashing function.", "In addition, adaptive routing is proposed to improve the scalability for large numbers of labels.", "Recent research on representation learning (Nickel and Kiela, 2017) indicates that hyperbolic space is superior to Euclidean space in terms of representation capacity, especially in low dimension.", "(Ganea et al., 2018b) generalizes operations for neural networks in the Poincaré ball using the formalism of the Möbius gyrovector space.", "Some works lately demonstrate the superiority of the hyperbolic space for several natural language processing tasks, such as textual entailment (Ganea et al., 2018a), machine translation (Gulcehre et al., 2019) and word embedding (Tifrea et al., 2019).", "Our work presents the Hyperbolic Capsule Networks (HYPERCAPS) for MLC.", "References
Siddhartha Banerjee, Cem Akkaya, Francisco Perez-Sorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6295–6300.
Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. 2015. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems 28, pages 730–738.
Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019. Large-scale multi-label text classification on EU legislation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6314–6322.
Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning.
Chunning Du, Haifeng Sun, Jingyu Wang, Qi Qi, Jianxin Liao, Chun Wang, and Bing Ma. 2019a. Investigating capsule network and semantic feature on hyperplanes for text classification. Pages 456–465.
Cunxiao Du, Zhaozheng Chin, Fuli Feng, Lei Zhu, Tian Gan, and Liqiang Nie. 2019b. Explicit interaction model towards text classification. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 6359–6366.
Octavian Ganea, Gary Becigneul, and Thomas Hofmann. 2018a. Hyperbolic entailment cones for learning hierarchical embeddings. In Proceedings of the 35th International Conference on Machine Learning, pages 1646–1655.
Octavian Ganea, Gary Becigneul, and Thomas Hofmann. 2018b. Hyperbolic neural networks. In Advances in Neural Information Processing Systems 31, pages 5345–5355.
Jingjing Gong, Xipeng Qiu, Shaojing Wang, and Xuanjing Huang. 2018. Information aggregation via dynamic routing for sequence encoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2742–2752.
Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, and Nando de Freitas. 2019. Hyperbolic attention networks. In International Conference on Learning Representations.
Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. 2011. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44–51. Springer.
Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. 2018. Matrix capsules with EM routing. In International Conference on Learning Representations.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751.", "We present the Hyperbolic Capsule Networks (HYPERCAPS) with Hyperbolic Dynamic Routing (HDR) and adaptive routing for Multi-Label Classification (MLC).", "The proposed HYPERCAPS takes advantage of the parallel combination of fine-grained local and global contextual information and the label-aware feature aggregation method HDR to dynamically construct label-aware hyperbolic capsules for tail and head labels.", "Adaptive routing is additionally applied to improve the scalability of HYPERCAPS by controlling the number of capsules during the routing procedure.", "Extensive experiments are carried out on four benchmark datasets.", "Results compared with the state-of-the-art methods demonstrate the superiority of HYPERCAPS, especially on tail labels.", "As recent works explore the superiority of hyperbolic space to Euclidean space for several natural language processing tasks, we intend to couple with the hyperbolic neural networks (Ganea et al., 2018b) and hyperbolic word embedding methods such as POINCARÉ GLOVE (Tifrea et al., 2019) in the future.", "This work was supported in part by the National Natural Science Foundation of China under Grant 61822601, 61773050, and 61632004; the Beijing Natural Science 
Foundation under Grant Z180006; National Key Research and Development Program (2017YFC1703506); the Fundamental Research Funds for the Central Universities (2019JBZ110).", "We thank the anonymous reviewers for their valuable feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Dialogue State Tracking (DST) aims to keep track of users' intentions during the course of a conversation.", "In DST, modelling the relations among domains and slots is still an under-studied problem.", "Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains.", "To address these issues, we propose a novel D ynamic S chema G raph F usion Net work ( DSGFNet ), which generates a dynamic schema graph to explicitly fuse the prior slot-domain membership relations and dialogue-aware dynamic slot relations.", "It also uses the schemata to facilitate knowledge transfer to new domains.", "DSGFNet consists of a dialogue utterance encoder, a schema graph encoder, a dialogue-aware schema graph evolving network, and a schema graph enhanced dialogue state decoder.", "Empirical results on benchmark datasets (i.e., SGD, MultiWOZ2.1, and MultiWOZ2.2), show that DSGFNet outperforms existing methods.", "Task-oriented dialogue systems can help users accomplish different tasks (Huang et al., 2020), such as flight reservation, food ordering, and appointment scheduling.", "Conventionally, task-oriented dialogue systems consist of four modules (Zhang et al., 2020c): natural language understanding (NLU), dialogue state tracking (DST), dialogue manager (DM), and natural language generation (NLG).", "In this paper, we will focus on the DST module.", "The goal of DST is to extract users' goals or intentions as dialogue states and keep these states updated over the whole dialogue.", "In order to track users' goals, we need to have a predefined domain knowledge referred to as a schema, which consists of slot Work in part done while at University College London.", "names and their descriptions.", "Figure 1 gives an example of DST in a sample dialogue.", "Many models have been developed for DST due to its importance in task-oriented dialogue systems.", "Traditional approaches use deep neural networks or pre-trained language models to encode the dialogue context and infer slot values from it (Zhong et al., 2018; Ramadan et al., 2018; Wu et al., 2019; Ren et al., 2019; Zhang et al., 2020a; Hu et al., 2020; Gao et al., 2020; Zhang et al., 2020a,b).", "These models predict slot values without considering the relations among domains and slots.", "However, domains and slots in a dialogue are unlikely to be entirely independent, and ignoring the relations among domains and slots may lead to sub-optimal perfor-115 mance.", "To address this issue, several recent works have been proposed to model the relations among domains and slots in DST.", "Some of them introduce predefined schema graphs to incorporate prior slot-domain membership relations, which are defined based on human experience in advance (Chen et al., 2020; Zhu et al., 2020).", "The others use an attention mechanism to capture dialogue-aware dynamic slot relations (Feng et al., 2021; Heck et al., 2020).", "The dialogue-aware dynamic relations are the logical relations of slots across domains, which are highly related to specific dialogue contexts.", "However, existing DST models that involve the relations among domains and slots suffer from two major issues: (1) They fail to fuse the prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly; and (2) They fail to consider their generalizability to new domains.", "In practical scenarios, task-oriented dialogue systems 
need to support a large and constantly increasing number of new domains.", "To tackle these issues, we propose a novel approach named DSGFNet (Dynamic Schema Graph Fusion Network).", "For the first issue, DSGFNet dynamically updates the schema graph consisting of the predefined slot-domain membership relations with the dialogue-aware dynamic slot relations.", "To incorporate the dialogue-aware dynamic slot relations explicitly, DSGFNet adds three new edge types to the schema graph: co-reference relations , co-update relations , and co-occurrence relations .", "For the second issue, to improve its generalizability, DSGFNet employs a unified model containing schema-agnostic parameters to make predictions.", "Specifically, our proposed DSGFNet comprises four components: a BERT-based dialogue utterance encoder to contextualize the current turn dialogue context and history, a BERT-based schema graph encoder to generalize to unseen domains and model the prior slot-domain membership relations on the schema graph, a dialogue-aware schema graph evolving network to augment the dialogue-aware dynamic slot relations on the schema graph, and a schema graph enhanced dialogue state decoder to extract value spans from the candidate elements considering the evolved schema graph.", "The contributions of this paper can be summarized as follows: We improve DST by proposing a dynamic, explainable, and general schema graph which explicitly models the relations among domains and slots based on both prior knowledge and the dialogue context, no matter whether the domains and slots are seen or not.", "We develop a fusion network, DSGFNet, which effectively enhances DST by generating a schema graph out of the combination of prior slot-domain membership relations and dialogue-aware dynamic slot relations.", "We conduct extensive experiments on three benchmark datasets (i.e., SGD, MultiWOZ2.1, and MultiWOZ2.2) to demonstrate the superiority of DSGFNet and the importance of the relations among domains and slots in DST.", "Recent DST approaches mainly focus on encoding the dialogue contexts with deep neural networks (e.g., convolutional and recurrent networks) and inferring the values of slots independently (Zhong et al., 2018; Ramadan et al., 2018; Wu et al., 2019; Ren et al., 2019; Zhang et al., 2020a; Hu et al., 2020; Gao et al., 2020).", "With the prevalence of pre-trained language models, such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), a great variety of DST approaches have been developed on top of these pre-trained models (Zhang et al., 2020a,b; Lin et al., 2020).", "The relations among domains and slots are not considered in the above approaches.", "However, the prior slot-domain membership relations can facilitate the sharing of domain knowledge, and the dialogue-aware dynamic slot relations can aid dialogue history understanding.", "Ignoring these relations may lead to sub-optimal performance.", "To fill in this gap, several new DST approaches, which involve the relations among domains and slots, have been proposed.", "Some of them leverage a graph structure to capture the slot-domain membership relations (Lin et al., 2021; Chen et al., 2020; Zhu et al., 2020; Zeng and Nie, 2020; Ouyang et al., 2020).", "Specifically, a predefined schema graph is employed to represent the slot-domain membership relations.", "However, they fail to incorporate the dialogue-aware dynamic slot relations into the schema graph.", "The other approaches utilize the attention mechanism to learn dialogue-aware 
dynamic slot relation features in order to facilitate information flow among slots (Zhou and Small, 2019; Feng et al., 2021; Heck et al., 2020; Hu et al., 2020; Ye et al., 2021).", "The code is available at https://github.com/sweetalyssum/DSGFNet .", "However, these approaches ignore the slot-domain membership relations defined by prior knowledge.", "Since both the prior slot-domain membership relations and dialogue-aware dynamic slot relations can enhance DST performance, our approach is developed to combine them in an effective way.", "Given that a deployed dialogue system may encounter an ever-increasing number of new domains that have limited training data available, the DST module should be capable of generalizing to unseen domains.", "Recent DST approaches have focused on using zero-shot learning to achieve this goal (Rastogi et al., 2020; Noroozi et al., 2020).", "These approaches exploit the natural language descriptions of schemata to transfer knowledge across domains.", "However, they ignore the relations among domains and slots.", "In this work, we propose a unified framework to fuse the prior slot-domain membership relations and dialogue-aware dynamic slot relations, no matter whether the domains are seen or not.", "The proposed DSGFNet consists of four components: (1) a BERT-based dialogue utterance encoder that aims to contextualize the tokens of the current turn and the dialogue history; (2) a schema graph encoder that is able to generalize to unseen domains and shares information among predefined slot-domain membership relations; (3) a dialogue-aware schema graph evolving network that adds the dialogue-aware dynamic slot relations into the", "schema graph; and (4) a schema graph enhanced dialogue state decoder that extracts the value span from the candidate elements based on the evolved schema graph.", "Figure 2 illustrates the architecture.", "This encoder takes as input the current and previous dialogue utterances.", "Specifically, the input is a sequence of tokens with length $K$, i.e., $[t_1, \ldots, t_K]$.", "Here, we set the first token $t_1$ to [CLS]; subsequent are the tokens in the current dialogue utterance and the ones in the previous dialogue utterances, which are separated by [SEP].", "We employ BERT (Devlin et al., 2019) to obtain contextual token embeddings.", "The output is a tensor of all the token embeddings $B = [b_1, \ldots, b_K]$, with one embedding for each token.", "To make use of the slot-domain membership relations defined by prior domain knowledge, we construct a schema graph based on the predefined ontology.", "An example is shown in Figure 2.", "In this schema graph, each node represents either a domain or a slot, and all the slot nodes are connected to their corresponding domain nodes.", "In order to allow information propagation across domains, all the domain nodes are connected with each other.", "Schema-Agnostic Embedding Initializer.", "To generalize to unseen domains, DSGFNet initializes the schema graph node embeddings via a schema-agnostic projection.", "Inspired by zero-shot learning (Romera-Paredes and Torr, 2015), we propose a schema-agnostic embedding initializer to project schemata across domains into a unified semantic distribution.", "Specifically, we feed a natural language description of one slot/domain into BERT, using the output of [CLS] as the semantic embeddings for this slot/domain.",
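A minimal PyTorch sketch of the schema-agnostic embedding initializer described above: each slot/domain description is run through BERT and the [CLS] output is taken as its initial node embedding, with no domain-specific parameters. The class name `SchemaEmbeddingInitializer` and the use of the HuggingFace `transformers` API are illustrative assumptions, not taken from the paper's released code.

```python
import torch
from transformers import BertModel, BertTokenizer

class SchemaEmbeddingInitializer(torch.nn.Module):
    """Encode slot/domain descriptions with BERT; take [CLS] as node init."""

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.bert = BertModel.from_pretrained(model_name)

    def forward(self, descriptions):
        # descriptions: list of N + M natural language slot/domain descriptions
        batch = self.tokenizer(descriptions, padding=True, truncation=True,
                               return_tensors="pt")
        outputs = self.bert(**batch)
        # [CLS] embedding of each description -> I with shape (N + M, hidden)
        return outputs.last_hidden_state[:, 0, :]
```

Because no parameter depends on a particular schema, the same module can embed descriptions from domains never seen in training, which is the property the paper relies on for zero-shot generalization.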
"The semantic embeddings for the set of slots and domains are $I = [i_1, \ldots, i_{N+M}]$, where $N$ and $M$ are the number of slots and domains, respectively.", "We constrain the schema embedding initializer not to have any domain-specific parameters so that it can generalize to unseen domains.", "Slot-Domain Membership Relation Reasoning Network.", "To involve the prior slot-domain membership relations into the schema graph node embeddings, DSGFNet propagates information among slots and domains over the schema graph.", "We add a self-loop to each node because the nodes need to propagate information to themselves.", "Inspired by the GAT model (Velickovic et al., 2018), we propose a slot-domain membership relation reasoning network to propagate information over the schema graph.", "For each node, we first compute attention scores for its neighbours.", "These attention scores are used to weigh the importance of each neighboring node.", "Formally, the attention scores are calculated as follows: $h_{i,j} = \mathrm{ReLU}(W[i_i, i_j])$ (1), $\alpha_{i,j} = \frac{\exp(h_{i,j})}{\sum_{k \in N_i} \exp(h_{i,k})}$ (2), where $W$ is a matrix of parameters and $N_i$ is the neighborhood of the $i$-th node.", "The normalized attention coefficients and the activation function are used to compute a non-linear weighted combination of the neighbours.", "This is used to compute the tensor of the schema graph node embeddings $G = (g_1, \ldots, g_{N+M})$: $g_i = \mathrm{ReLU}\big(\sum_{j \in N_i} \alpha_{i,j} i_j\big)$ (3), where $i \in \{1, \ldots, N+M\}$.", "To explore the higher-order connectivity information of slots across domains, we stack $l$ layers of the reasoning network.", "Each layer takes the node embeddings from the previous layer as input, and outputs the updated node embeddings to the next layer.", "We propose a schema graph evolving network to incorporate the dialogue-aware dynamic slot relations into the schema graph, which is composed of the following two layers.", "Schema-Dialogue Fusion Layer.", "Since the dynamic slot relations are related to the dialogue context, we need to fuse the dialogue context information into the schema graph.", "We adopt the multihead attention (Vaswani et al., 2017) to achieve this goal.", "The mathematical formulation is: $H = \mathrm{MultiHead}(Q = g_i, K = B, V = B)$ (4), $g'_i = H W_a$ (5), where $W_a$ is the learnable parameter matrix of a linear projection after the multi-head attention, and $g'_i$ is the dialogue-aware schema graph node embedding.", "Dynamic Slot Relation Completion Layer.", "This layer aims to augment the dynamic slot relations on the schema graph based on the dialogue-aware node embeddings.", "To involve the dialogue-aware dynamic slot relations into DST explicitly, DSGFNet defines three types of dynamic slot relations: (1) co-reference relations occur when a slot value has been mentioned earlier in the dialogue and has been assigned to another slot; (2) co-update relations occur when slot values are updated together at the same dialogue turn; and (3) co-occurrence relations occur when slots with a high co-occurrence probability in a large dialogue corpus appear together in the current dialogue.", "Specifically, we feed the dialogue-aware slot node representations into a multi-layer perceptron followed by a 4-way softmax function to identify the relations between slot pairs, which include the none relation and the three dynamic relations mentioned above.", "Formally, given the $i$-th and $j$-th dialogue-aware slot node embeddings $g'_i$ and $g'_j$, we obtain an adjacency matrix of the dynamic slot relations for all slot pairs as follows: $A(i, j) = \arg\max(\mathrm{softmax}(\mathrm{MLP}([g'_i ; g'_j])))$ (6).", "With $A$, we add dynamic slot relation edges to the schema graph.",
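A sketch of one layer of the slot-domain membership relation reasoning network, following Equations 1-3 as reconstructed above: pairwise attention scores over each node's neighbourhood, then a ReLU of the attention-weighted sum of neighbour embeddings. This is an assumed minimal implementation, not the paper's code; `adj` is taken to be the 0/1 adjacency matrix of the schema graph with self-loops added, as the text describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationReasoningLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(2 * dim, 1)  # plays the role of W in Equation 1

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        n, d = nodes.shape
        # h_{i,j} = ReLU(W[i_i, i_j]) for every node pair (Equation 1)
        pairs = torch.cat([nodes.unsqueeze(1).expand(n, n, d),
                           nodes.unsqueeze(0).expand(n, n, d)], dim=-1)
        scores = F.relu(self.w(pairs)).squeeze(-1)
        # Restrict the softmax to each node's neighbourhood N_i (Equation 2)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)
        # g_i = ReLU(sum_{j in N_i} alpha_{i,j} i_j) (Equation 3)
        return F.relu(alpha @ nodes)
```

Stacking $l$ such layers, each consuming the previous layer's output, gives the higher-order propagation across domains that the text describes.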
"To decode the slot values by means of incorporating the slot-domain membership relations and dialogue-aware dynamic slot relations which are captured by the evolved schema graph, we propose a schema graph enhanced dialogue state decoder.", "To learn a more comprehensive slot node embedding, we need to fuse multiple relations on the evolved schema graph.", "DSGFNet divides the different relations on the schema graph into sub-graphs $R_s$, $R_r$, $R_u$, $R_o$, which represent the slot-domain membership relation, co-reference relation, co-update relation, and co-occurrence relation, respectively.", "For each sub-graph $R_i$, its node embeddings $s_i$ are obtained by attending over the neighbors, which is the same as the method used in Section 3.2.", "Considering that different relation types have different contributions to the node interactions for different dialogue contexts (Wang et al., 2019), we aggregate these different sub-graphs via an attention mechanism as follows: $S = [s_s; s_r; s_u; s_o]$ (7), $\alpha = \mathrm{softmax}(S \tanh(W_s b_{[CLS]} + b_s))$ (8), $s = \alpha S$ (9), where $W_s$, $b_s$ are learnable weights and $b_{[CLS]}$ is the output of the BERT-based dialogue utterance encoder.", "Each slot value is extracted by a value predictor based on the corresponding fused slot node embedding $s$.", "The value predictor is a trainable nonlinear classifier followed by two parallel softmax layers to predict start and end positions in the candidate elements $C$, which are composed of the dialogue context $B$ and the slots' candidate value vocabulary $V$: $C = [B; V]$ (10), $[l_s, l_e] = r_d \tanh(s W_d C + b_d)$ (11), $p_s = \mathrm{softmax}(l_s)$ (12), $p_e = \mathrm{softmax}(l_e)$ (13), where $r_d$, $W_d$, and $b_d$ are trainable parameters.", "Note that if the end position is before the start position, the resulting span will simply be None.", "If the start position is in the slots' candidate value vocabulary, the resulting span will only pick the candidate value in this position.", "During training, we use the ground-truth dynamic slot relation graph to optimize the dialogue state decoder.", "Cross-entropy between the predicted value span $[p_s, p_e]$ and the ground-truth value span is utilized to measure the loss of the value span prediction, $L_s$.", "The dynamic slot relation identifier is optimized by the cross-entropy loss $L_r$ between the predicted dynamic relation $A$ and the ground-truth dynamic slot relations.", "We train the dialogue state decoder and the dynamic slot relation identifier together; the joint loss $L$ is computed as follows: $L = \lambda L_r + (1 - \lambda) L_s$ (14), where $\lambda \in [0, 1]$ is a balance coefficient.", "During inference, the predicted dynamic slot relation $A$ is used to predict the value span as the dialogue state.", "We conduct experiments on three task-oriented dialogue benchmark datasets: SGD (Rastogi et al., 2020), MultiWOZ2.2 (Zang et al., 2020), and MultiWOZ2.1 (Eric et al., 2020).", "Among them, SGD is by far the most challenging dataset, containing over 16,000 conversations between a human user and a virtual assistant across 16 domains.", "Unlike the other two datasets, it also includes unseen domains in the test set.", "MultiWOZ2.2 and MultiWOZ2.1 are smaller human-human conversation benchmark datasets, which contain over 8,000 multi-turn dialogues across 8 and 7 domains, respectively.", "MultiWOZ2.2 is a revised version of MultiWOZ2.1, which is re-annotated with a different set of annotators and also canonicalized entity names.", "Details of the datasets are provided in Table 1.", "We compare with the following existing models, which are divided into two categories.",
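Before turning to the baselines, a sketch of the joint objective of Equation 14 as reconstructed above. It assumes `p_start` and `p_end` are the softmax outputs of Equations 12-13 and that relation logits come from the MLP of Equation 6; the function name and argument layout are illustrative, not from the released code.

```python
import torch
import torch.nn.functional as F

def joint_loss(p_start, p_end, gold_start, gold_end,
               relation_logits, gold_relations, lam: float = 0.5):
    """L = lam * L_r + (1 - lam) * L_s (Equation 14); lam = 0.5 in the paper."""
    # L_s: cross-entropy of the predicted start/end distributions vs. gold span
    # (p_start/p_end assumed to be probabilities, hence log + NLL).
    span_loss = (F.nll_loss(torch.log(p_start), gold_start) +
                 F.nll_loss(torch.log(p_end), gold_end))
    # L_r: cross-entropy of the 4-way dynamic slot relation prediction.
    relation_loss = F.cross_entropy(
        relation_logits.view(-1, relation_logits.size(-1)),
        gold_relations.view(-1))
    return lam * relation_loss + (1.0 - lam) * span_loss
```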
"(1) Models that can predict dialogue state on unseen domains: SGD-baseline (Rastogi et al., 2020), a schema-guided paradigm that predicts states for unseen domains; FastSGT (Noroozi et al., 2020), a BERT-based model that uses multi-head attention projections to analyze dialogue; Seq2Seq-DU (Feng et al., 2021), a sequence-to-sequence framework which decodes dialogue states in a flattened format.", "(2) Models that cannot predict dialogue state on unseen domains: TRADE (Wu et al., 2019), a generation model which generates dialogue states from utterances using a copy mechanism; DS-DST (Zhang et al., 2020a), a dual strategy that classifies over a picklist or finds values from a slot span; TripPy (Heck et al., 2020), an open-vocabulary model which copies values from the dialogue context or from slot values in the previous dialogue state; SOM-DST (Kim et al., 2020), a selective overwriting mechanism which first predicts a state operation on each of the slots and then overwrites with new values; MinTL-BART (Lin et al., 2020), a plug-and-play pre-trained model which jointly learns dialogue state tracking and dialogue response generation; SST (Chen et al., 2020), a graph model which fuses information from utterances and a static schema graph; PPTOD (Su et al., 2021), a multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora.", "Our evaluation metrics are consistent with prior works on these datasets.", "We compute the Joint Goal Accuracy (Joint GA) on all test sets for straightforward comparison with the state-of-the-art methods.", "Joint GA is defined as the ratio of dialogue turns for which all slots have been filled with the correct values according to the ground truth.", "We use the BERT model (i.e., BERT-base, uncased) to encode utterances and schema descriptions.", "The BERT models are fine-tuned in the training process.", "The maximum length of an input sequence is set to 512.", "The hidden size of the schema graph encoder and the schema graph evolving network is set to 256.", "The dropout probability is 0.3.", "The balance coefficient $\lambda$ is 0.5.", "Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate (LR) of 2e-5.", "We conduct training with a warm-up proportion of 10% and let the LR decay linearly after the warm-up phase.", "Tables 2, 3, and 4 show the performance of DSGFNet as well as the baselines on the three datasets respectively.", "It is shown that DSGFNet achieves state-of-the-art performance on unseen domains of SGD, all domains of SGD, and MultiWOZ2.2.", "All improvements observed compared to the baselines are statistically significant according to a two-sided paired t-test (p ≤ 0.05).", "The performance on MultiWOZ2.1 is comparable with the state-of-the-art.", "Most notably, DSGFNet improves the performance on SGD most significantly, compared to the runner-up; SGD has unseen domains and more complex schemata.", "It indicates that DSGFNet can facilitate knowledge transfer to new domains and improve relation construction among complex schemata.", "We conjecture that this is due to DSGFNet containing the schema-agnostic encoder and the dynamic schema graph.", "The following analysis provides a better understanding of our model's strengths.", "TRADE and SST use the original MultiWOZ datasets.", "The other models use the data preprocessed by TripPy.",
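The Joint GA metric defined above is simple enough to state in a few lines; the sketch below is an assumed reference implementation (the function name and the per-turn dict representation are illustrative), counting a turn as correct only when every slot matches the ground truth.

```python
def joint_goal_accuracy(predictions, ground_truths):
    """predictions / ground_truths: lists of {slot: value} dicts, one per turn."""
    correct = sum(1 for pred, gold in zip(predictions, ground_truths)
                  if pred == gold)
    return correct / max(len(ground_truths), 1)
```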
"We conduct an ablation study on DSGFNet to quantify the contributions of various factors: the usage of slot-domain membership relations, dynamic slot relations, and multiple relation aggregation.", "The results indicate that the dynamic schema graph of DSGFNet is indispensable for DST.", "To check the effectiveness of the slot-domain membership relations, we remove the schema graph by replacing the prior slot-domain relation adjacency matrix with an identity matrix $I$.", "Results in Table 5 show that the joint goal accuracy of DSGFNet without the slot-domain membership relations decreases markedly on unseen domains of SGD, all domains of SGD, MultiWOZ2.2, and MultiWOZ2.1.", "It indicates that the schema graph, which contains slot-domain membership relations, can facilitate knowledge sharing among domains and slots no matter whether the domain is seen or not.", "To investigate the effectiveness of the dialogue-aware dynamic slot relations in the schema graph, we eliminate the evolving network of DSGFNet.", "Table 5 shows the results on unseen domains of SGD, all domains of SGD, MultiWOZ2.2, and MultiWOZ2.1 in terms of joint goal accuracy.", "One can observe that without the dynamic slot relations the performance deteriorates considerably.", "In addition, there is a more marked performance degradation compared with the results of the slot-domain membership relations.", "It indicates that the dynamic slot relations are more essential for DST, as they can facilitate the understanding of the dialogue context.", "To validate the effectiveness of the schema graph relation aggregation mechanism in the dialogue state decoder, we directly concatenate all sub-graph representations instead of calculating a weighted sum via the sub-graph attention.", "Figure 3: F1 and Accuracy of DSGFNet and BERT for dynamic relation prediction on unseen domains of SGD, all domains of SGD, MultiWOZ2.2, and MultiWOZ2.1.", "As shown in Table 5, the performance of the models without the relation aggregation layer in terms of joint goal accuracy decreases markedly compared to DSGFNet.", "It indicates that the attention to different types of relations affects the dialogue understanding ability.", "In order to test the discriminative capability of DSGFNet for dynamic slot relations, we evaluate the performance of the schema graph evolving network.", "Since baselines cannot predict the dynamic slot relations explicitly, we compare DSGFNet with a BERT-based classification approach.", "Following the classification task in BERT, the input sequence starts with [CLS], followed by the tokens of the dialogue context and slot pairs, separated by [SEP], and the [CLS] representation is fed into an output layer for classification.", "Figure 3 shows the results on unseen domains of SGD, all domains of SGD, MultiWOZ2.2, and MultiWOZ2.1 in terms of F1 and Accuracy.", "From the results, we observe that DSGFNet outperforms BERT significantly.", "We conjecture that this is due to the exploitation of the schema graph with slot-domain membership relations in DSGFNet.", "In addition, since BERT without the schema encoder cannot handle unseen domains, there is a significant performance degradation on SGD, which contains a large number of unseen domains in the test set.", "To better illustrate the effectiveness of augmenting slot relations on the schema graph, we study how different dynamic slot relations affect the DST performance.", "Table 7 presents the joint goal accuracy of DSGFNet with different dynamic relations on unseen domains of SGD, all domains of SGD, MultiWOZ2.2, and 
MultiWOZ2.1.", "One can see that the performance of DSGFNet with each type of dynamic slot relation surpasses that without any dynamic slot relations considerably.", "Thus, all types of dynamic slot relations in the schema graph are helpful for dialogue understanding.", "Furthermore, the performance of DSGFNet with the co-occurrence relation is superior to the performance with the other two dynamic slot relations.", "We conjecture that this is due to the fact that a large percentage of the dynamic relations are co-occurrence relations, which therefore have a considerable effect on DST.", "To demonstrate the effectiveness of automatically completing each type of slot relation on the schema graph, we replace the four automatically-completed sub-graphs in DSGFNet with four fully-connected graphs.", "As shown in Table 7, the performance of the model with the fully-connected graphs in terms of joint goal accuracy decreases significantly compared to DSGFNet (two-sided paired t-test, p < 0.05).", "We believe that this is caused by the noise introduced by the redundancy captured by the relations between all pairs of slots.", "In addition, sampling the relations using our strategy can also reduce the memory requirements when the number of slots and domains is large.", "We conduct a qualitative analysis of the results of DSGFNet and Seq2seq-DU on SGD.", "We find that DSGFNet can make a more accurate inference of dialogue states by using the dynamic schema graph.", "For example, as shown in Table 6, city-location is predicted as a co-reference relation, city-date and number of seats-ride type are predicted as co-update relations, and city-date is predicted as a co-occurrence relation.", "Based on the dynamic schema graph, DSGFNet propagates information involving slot-domain membership relations and dynamic slot relations.", "Thus, it infers slot values more correctly.", "In contrast, since Seq2seq-DU ignores the dynamic slot relations, it cannot properly infer the values of location and ride type, which have dynamic slot relations with other slots.", "We have proposed a new approach to DST, referred to as DSGFNet, which effectively fuses prior slot-domain membership relations and dialogue-aware dynamic slot relations on the schema graph.", "To incorporate the dialogue-aware dynamic slot relations into DST explicitly, DSGFNet identifies co-reference, co-update, and co-occurrence relations.", "To improve the generalization ability, DSGFNet employs a schema-agnostic graph attention network to share information.", "Experimental results show that DSGFNet outperforms the existing methods in DST on three benchmark datasets, including unseen domains of SGD, all domains of SGD, MultiWOZ2.1, and MultiWOZ2.2.", "For future work, we intend to further enhance our approach by utilizing more complex schemata and data augmentation techniques.", "This project was funded by the EPSRC Fellowship titled Task Based Information Retrieval and grant reference number EP/P024289/1." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other" ]
[ "Multi-task learning (MTL) has been studied recently for sequence labeling.", "Typically, auxiliary tasks are selected specifically in order to improve the performance of a target task.", "Jointly learning multiple tasks in a way that benefit all of them simultaneously can in-crease the utility of MTL.", "In order to do so, we propose a new LSTM cell which contains both shared parameters that can learn from all tasks, and task-specific parameters that can learn task specific information.", "We name it a Shared-Cell Long-Short Term Memory (SC-LSTM).", "Experimental results on three sequence labeling benchmarks (named-entity recognition, text chunking, and part-of-speech tagging) demonstrate the effectiveness of our SC-LSTM cell.", "As one of the fundamental tasks in NLP, sequence labeling has been studied for years.", "Before the blooming of neural network methods, handcrafted features were widely used in traditional approaches like CRFs, HMMs, and maximum entropy classifiers (Lafferty et al., 2001; McCallum et al., 2000; McCallum and Li, 2003; Florian et al., 2003).", "However, applying them to different tasks or domains is hard.", "Recently, instead of using handcrafted features, end-to-end neural network based systems have been developed for sequence labeling tasks, such as LSTM-CNN (Chiu and Nichols, 2015), LSTM-CRF (Huang et al., 2015; Lample et al., 2016), and LSTM-CNN-CRF (Ma and Hovy, 2016).", "These models utilize LSTM to encode the global information of a sentence into a word-level representation of its tokens, which avoids manual feature engineering.", "Moreover, by incorporating a character-level representation of tokens, these models further improve.", "In many such studies, though, neural network models are trained toward a single task in a supervised way by making use of relatively small annotated training material.", "Jointly learning multiple tasks can reduce the risk of over-fitting to one task, and many attempts have been made at doing so for sequence labeling tasks (Caruana, 1997; Collobert and Weston, 2008; Collobert et al., 2011).", "Results so far are not conclusive.", "Some works have reported negative results overall.", "For instance in their pioneering work, Collobert et al. 
(2011) observed that training their model on NER, POS tagging and chunking altogether led to a slight decrease in performance compared to a similar model trained on each task separately.", "Søgaard and Goldberg (2016) study chunking and CCG supertagging, coupled with an additional POS tagging task.", "They do report gains on both target tasks over single-task models, but results varied depending on where the additional task was taken care of in their architecture.", "The authors actually reported a failure to leverage other labelling tasks, and concluded that combined tasks should be sufficiently similar to the target one for significant gains to be observed.", "Similarly, Alonso and Plank (2017) achieved significant improvements for only 1 out of 5 tasks considered.", "Also of interest is the work of Changpinyo et al. (2018), where the authors investigate the classical shared encoder-based MTL framework (Collobert et al., 2011; Collobert and Weston, 2008) on 11 sequence labeling datasets including POS, NER, and chunking.", "They report that chunking is beneficial to NER, while POS tagging can be harmful.", "We present in Section 2 the two major approaches proposed for multi-task learning and discuss their limitations.", "We describe our approach in Section 3, and present our experimental settings and results in Sections 4 and 5 respectively.", "We further analyze our approach in Section 6, discuss related works in Section 7, and conclude in Section 8.", "We are given a set of $K$ tasks (in our case named-entity recognition, text chunking and part-of-speech tagging) that we want to train jointly in an end-to-end fashion.", "Each task $k$ has an associated training set $S^k = \{(x^k_i, y^k_i)\}_{i \in [1, n_k]}$ of $n_k$ examples, where $x^k_i$ and $y^k_i$ are sequences of size $m_i$ of tokens and tags respectively.", "We wish to learn a single function $F$ which maps any token input sequence $x_i$ to its task-specific labels, where the mapping defines a probabilistic distribution for each involved task: $p(y^1_i, \ldots
, y^K_i) = F(x_i)$.", "There are two kinds of neural-based MTL methods.", "The first one, LSTM-s hereafter, uses an identical representation for all tasks, as proposed in (Collobert et al., 2011).", "This is illustrated in Figure 1 (left), where 3 layers of LSTMs are being stacked.", "While different tasks directly interact with all parameters of the model, this increases the risk of optimization conflicts when gold-standard labels from different tasks have no significant correlation.", "The second class of multi-task architectures is depicted in the middle part of Figure 1, and is named LSTM-d hereafter.", "In this configuration, each LSTM layer feeds a task-specific classifier and serves as input to the next stacked LSTM layer (Søgaard and Goldberg, 2016).", "The underlying assumption is that tasks may be ordered in such a way that easier tasks are learned first, the target tasks being the latest ones considered, thus benefiting from the hidden states of the lower layers.", "One drawback, however, is that one must decide which task to consider first, a decision which may impact the overall performance.", "Furthermore, using the hidden state of lower layers limits the representation that can be learned for that task.", "We believe that one reason for the lack of consistent benefits of MTL in the labelling literature is that the proposed models share all or part of their parameters for extracting hidden states, which leads to optimization conflicts when different tasks require different features.", "(Stacking several layers typically delivers better performance than having just one.)", "(In practice also, LSTM layers are replaced by biLSTM ones.)", "We believe it would be helpful if the model had the ability to learn a task-specific representation (Ammar et al., 2016; Östling and Tiedemann, 2016; Kiperwasser and Ballesteros, 2018) at the same time.", "This observation led us to design a new LSTM cell which allows, at almost no additional computation cost, to efficiently train a single RNN-based model where task-specific labelers clearly outperform their singly-tasked counterparts.", "Actually, by training our model on NER, chunking and POS tagging, we report state-of-the-art (or highly competitive) results on each task, without using external knowledge (such as gazetteers, which have been shown to be important for NER), or hand-picking tasks to combine.", "Our solution is depicted in the right part of Figure 1 and detailed in the next section.", "It is actually very similar to the LSTM-s one, except that each LSTM layer passes on task-specific hidden representations that learn the peculiarities of individual tasks.", "In the last layer, each classifier is fed with a concatenation of a global representation (as in LSTM-s) and the task-specific one.", "By doing so, we keep the advantages of both aforementioned approaches, where one task can have its task-specific representation as in LSTM-d, while not enforcing any task order, further giving the model the freedom to learn the specificities of each task.", "For this architecture to work, we need to modify the classical LSTM cell, which is described in the next section.", "An LSTM cell (Hochreiter and Schmidhuber, 1997) is made up of four functional gates which control the input and output of the memory state $c_t$: a forget gate $f_t$ controls what information to remove from the memory state of the last time step, an input gate $i_t$ controls the information to add to the current memory, and an output gate $o_t$ controls what information to release from the current memory 
state.", "This mechanism is formalized in Equation 1 where x t , h t is the input vector and hidden vector at time step t , is the sigmoid function, c is the new candidate state, c t is the memory state, which encodes information of the current input and history information, and indicates the element-wise product.", "W f , U f , W i , U i , W o , U o , W c , U c are weight matrices that are being learned.", "f t p W f h t 1 ` U f x t q , (1) i t p W i h t 1 ` U i x t q , o t p W o h t 1 ` U o x t q , c tanh p W c h t 1 ` U c x t q , c t i t c ` f t c t 1 , h t o t tanh p c t q , Figure 2: Structure of an SC-LSTM cell.", "The overall structure of our cell is depicted in Figure 2.", "On top of a standard LSTM cell, we add one cell per task with its own parameters.", "The standard LSTM cell is thus shared among the K task-specific cells, therefore the name we choose for this new cell, which stands for Shared-Cell LSTM.", "Task-specific cells are each parametrized by an output gate o kt which learns to select the useful information from the shared memory cell c t and outputs q kt .", "This is formally described in Equation 2, where W k and U k are two extra weight matrices that parametrize the k th task, and q kt has to be understood as a task-specific hidden representation since parameters of k th task-specific cell are only updated by supervision from task k .", "o kt p W k q kt 1 ` U k x kt q (2) q kt o kt tanh p c t q In order to make use of both shared and task-specific information (Kim et al., 2016; Peng et al., 2017; Hershcovich et al., 2018), for the k th task, we concatenate the output of the shared cell h t and of the task-specific one q kt to generate the final latent representation, as noted in Equation 3, where is the concatenation operation.", "In practice, we stack SC-LSTM layers.", "The top-most layer uses s kt as a representation of the current input, while cells in lower layers pass the current shared hidden state h t to the upper SC-LSTM cell.", "The use of s in the topmost layer only is arbitrary and should be investigated.", "The training material available may be gathered from different datasets S k which means that input sequences differ from one task to another.", "Therefore in practice, we build K dataloaders, and the training is achieved in a stochastic manner by looping over the tasks at each epoch, as detailed in Algorithm 1.", "The loss function (cid:15) we minimize is a linear combination of task-specific loss functions, where the weighting coefficients ( k in Equation 4) are hyper-parameters.", "We seek to minimize cross-entropy of the predicted and true distributions, therefore task-specific loss functions are defined according to Equation 5.", "where y ki,j is the prediction of the k th softmax classifier parametrized by a projection matrix W k and a bias vector b k :", "where s ki , j stands for j th token of the i th sequence for task k .", "In an SC-LSTM cell with k tasks, we will add k matrices and k bias vectors, compared with a vanilla LSTM cell, which increases the capacity of the resulting model.", "This extra calculation is conducted in parallel with the original LSTM computations.", "We test several baseline systems and our SC-LSTM model on three well-established sequence labeling benchmarks: CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) for named-entity recognition, CoNLL2000 (Tjong Kim Sang and Buchholz, 2000) for chunking, and the more recent Universal Dependency dataset (Nivre et al., 2016) for part-of-speech tagging, 2 and on which recent MTL 
"We test several baseline systems and our SC-LSTM model on three well-established sequence labeling benchmarks: CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) for named-entity recognition, CoNLL2000 (Tjong Kim Sang and Buchholz, 2000) for chunking, and the more recent Universal Dependency dataset (Nivre et al., 2016) for part-of-speech tagging, on which recent MTL investigations have been conducted (Alonso and Plank, 2017; Changpinyo et al., 2018).", "Following recent works (Peters et al., 2017; Liu et al., 2017), sections 15-18 of the Wall Street Journal are used for training, and we randomly sampled 1000 sentences in the training set as the development set.", "Section 20 is used for tests.", "Table 1 presents the main characteristics of the training, development and test sets we used.", "We used bidirectional LSTM or SC-LSTM as our encoders for the vector representation of words.", "Bidirectional LSTM can capture the global information of the whole sentence, thanks to the encoding of a sequence in a recurrent way.", "The vector representation of words consists of three parts: word embedding, character-level representation, and contextual representation.", "Previous works have proven that pre-trained word embeddings like Word2vec (Mikolov et al., 2013), SENNA (Collobert et al., 2011), or Glove (Pennington et al., 2014) have a positive impact on sequence labeling tasks.", "We used Glove embeddings of dimension 100, which are fine-tuned during training (the Glove6B embedding file is available at https://nlp.stanford.edu/projects/glove/).", "Character-level information has been proven useful for the three sequence labeling tasks (Santos and Zadrozny, 2014; Ma and Hovy, 2016; Lample et al., 2016), and some works further show its effectiveness (Reimers and Gurevych, 2017; Yang et al., 2018).", "In order to encode character sequences, we used a CNN.", "The character embedding look-up table is initialized by randomly sampling from the uniform distribution in the range [-0.1, 0.1].", "The third part of the input vector is the contextual embedding.", "Most of the recent works found that contextualized features such as ELMo or BERT (Peters et al., 2018; Devlin et al., 2018) can greatly boost performance.",
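A sketch of the three-part word representation just described: a pre-trained word embedding, a character-level CNN feature, and a precomputed contextual vector, concatenated per token. Class name, kernel size, and dimensions are illustrative assumptions; only the uniform [-0.1, 0.1] character-table initialization is stated in the text.

```python
import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    def __init__(self, vocab_size, n_chars, char_dim=30, char_filters=30,
                 word_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # init from Glove
        # Character look-up table initialized from U(-0.1, 0.1), as in the text.
        self.char_emb = nn.Embedding(n_chars, char_dim)
        nn.init.uniform_(self.char_emb.weight, -0.1, 0.1)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3,
                                  padding=1)

    def forward(self, word_ids, char_ids, contextual):
        # char_ids: (num_tokens, max_chars); contextual: e.g. ELMo vectors.
        chars = self.char_emb(char_ids).transpose(1, 2)  # (tokens, dim, chars)
        char_feat = torch.relu(self.char_cnn(chars)).max(dim=-1).values
        # Final token vector: [word embedding ; char CNN ; contextual]
        return torch.cat([self.word_emb(word_ids), char_feat, contextual],
                         dim=-1)
```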
(2018).", "Conditional random field (CRF) classifiers can consider the dependency of output tags and has been proven useful for tasks like NER or chunking.", "We consider CRF layers in our models.", "To compare the effectiveness of three MTL models, we first test our SC-LSTM and other vanilla LSTM based models without CNN based character-level information extractor and contextual embeddings.", "In this case, the input will be a concatenation of word embedding and capitalization features 4 .", "To compare with the state-of-art models, we further implemented three more variants: SC-LSTM-CNN-CRF which makes use of CNN-based character level features with a CRF layer on top, very similar to (Ma and Hovy, 2016), and SC-LSTM-LM-CNN, a variant which considers contextualized word embeddings as in (Peters et al., 2018).", "For SC-LSTM-LM-CNN, we used mainly the configuration advocated in (Peters et al., 2018) except for the hidden size of SC-LSTM, which we report in Table 2.", "We further weighted the NER task in the objective function of Equation 4 to 3 ( NER ), the weights of the other tasks where set to 1.", "5 We trained this model SC-LSTM-LM-CNN using the Adam optimizer (Kingma and Ba, 2014) with default setting.", "Such a model spends typically less than 30 epochs to converge.", "We choose the mini-batch size of 10 and used the gradient clipping threshold 5.", "All models are implemented using the Pytorch library (Paszke et al., 2017).", "Several tagging schemas have been proposed for conducting chunking and NER tasks.", "We used the most common BIO one in this work.", "4 Eight features encode capitalization patterns, such as AllUpper, InitialUpper, etc. 5 We found the loss of the NER task to be around 1/3 of the loss of the chunking and POS tasks.", "In this section, we start by comparing MTL approaches based on LSTM and SC-LSTM cells.", "We then report the performance of variants of our approach we implemented and compare it to state-of-the-art models.", "We compare bidirectional LSTM (STL), SC-LSTM, and baseline MTL models.", "The results are shown in Table 3.", "Training Bi-LSTM models on each task separately (STL) was first conducted as a point of comparison (line 1).", "For LSTM-s and LSTM-d, we regard one task as the main task and the others as auxiliary tasks; a setting consistent with Sgaard and Goldberg (2016).", "Because LSTM-s and LSTM-d always fail to achieve stable and competitive results on the three tasks we considered at the same time, we report the best performance we could obtain for each task (line 2 and 3) specifically.", "On the contrary, our SC-LSTM model is trained once jointly on all tasks, and only one model is being tested in the end, which is much easier and more realistic of a real deployment (line 4).", "The results show that our SC-LSTM model improves the performance of the three tasks simultaneously compared with LSTM (STL), and outperforms the other two MTL methods.", "By joint learning three tasks, both LSTM-s and LSTM-d can boost the chunking task significantly, but both fail to improve NER and POS tasks.", "This is consistent with observations made in (Collobert et al., 2011; Sgaard and Goldberg, 2016).", "We also observe that our SC-LSTM model also benefits the chunking task the most.", "We will analyze this further in Section 6.", "To further demonstrate the effectiveness of our SC-LSTM model, we compared different variants with state-of-the-art approaches, that we classify into three broad categories:", "Single sequence labeling where models are trained 
"To further demonstrate the effectiveness of our SC-LSTM model, we compared different variants with state-of-the-art approaches, which we classify into three broad categories:", "Single sequence labeling, where models are trained without the supervision of other tasks.", "Specifically, we compare our results to the LSTM-CRF model of Lample et al. (2016) and the LSTM-CNN-CRF of Ma and Hovy (2016), since those are state-of-the-art singly-tasked sequence labelers.", "Multi-tasked sequence labelers, where models leverage the supervision of other tasks.", "We compare our model with the representative approaches of Luo et al. (2015); Søgaard and Goldberg (2016); Collobert and Weston (2008); Collobert et al. (2011).", "Models with a language model.", "Recently, several studies using contextualized word embeddings achieved great success in a number of tasks.", "Some recent studies (Peters et al., 2017; Rei, 2017; Peters et al., 2018; Devlin et al., 2018) are particularly considered.", "Results for the CoNLL 2003 dataset are reported in Table 4.", "We observe that our SC-LSTM-LM-CNN model outperforms all approaches but Devlin et al. (2018) and Akbik et al. (2018).", "The latter work is using the development set as training material, which avoids a direct comparison.", "Table 4 (F1-score on the CoNLL03 NER dataset): Collobert et al. (2011) 89.59; Chiu and Nichols (2015) 91.62; Huang et al. (2015) 88.83; Luo et al. (2015) 91.20; Ma and Hovy (2016) 91.21; Lample et al. (2016) 90.94; Shen et al. (2017) 90.89; Yang et al. (2017) 91.20; Rei (2017) 86.26; Liu et al. (2017) 91.71; Peters et al. (2017) 91.93; Zhang et al. (2018) 91.2; Liu et al. (2018) 91.95; Peters et al. (2018) 92.22; Clark et al. (2018) 92.60; Akbik et al. (2018) 93.09; Devlin et al. (2018) 92.80; SC-LSTM 89.96; SC-LSTM-CNN-CRF 91.37; SC-LSTM-LM-CNN 92.60.", "The former model (BERT) is achieving great success by leveraging a huge amount of unannotated data as well as computation resources we could not afford in this study.", "We are however pleased that our model is leveraging contextual embeddings, with a 0.38 absolute F1 improvement over the results of Peters et al. (2018).", "We leave as future work to investigate whether our MTL model can leverage BERT embeddings.", "We compared a number of models on the CoNLL2000 chunking dataset.", "A few of them (Ma and Hovy, 2016; Lample et al., 2016; Peters et al., 2018) were not tested on this benchmark, and we reimplemented them.", "We also trained the companion toolkits of those models, but (as detailed in the next section) got slightly lower results for some reason.", "Table 5 reports the performance of the many approaches we tested.", "We observe that our SC-LSTM-LM-CNN architecture achieves a new state-of-the-art F1 score, with over 1 absolute point over the competitive approach of Peters et al. (2017), and an improvement of 0.4% over the current state-of-the-art method of Clark et al. (2018).", "We conducted experiments on the Universal Dependency POS English dataset and present the results in Table 6.", "The only study we found that reports results on the UD v1.3 benchmark we used here is (Bjerva et al., 2016), and we report the results they published.", "For Liu et al. (2017) and Peters et al. (2018), we used the available companion toolkits that we trained ourselves with the default settings.", "We re-implemented the other approaches.", "Again, we observe that SC-LSTM-LM-CNN outperforms all other approaches we tested.", "The absolute improvement in F1 score over the current state-of-the-art of Peters et al. (2018) is 0.21%.", "In order to further validate our implementations, we also ran the toolkits of Ma and Hovy (2016) and Lample et al. 
(2016) and obtained slightly lower results.", "We conducted a number of investigations in order to understand better why our multi-task learning model is effective.", "We report in Figure 3 the convergence of the different MTL models on the development set.", "To obtain those curves, we collected the F1-score on the NER and chunking tasks as well as the accuracy of the POS task, and averaged them after each epoch.", "We clearly see that the SC-LSTM model converges faster than the other ones.", "It achieves higher performance after the first epoch, and after about ten epochs, it shows a smooth performance curve, while the LSTM-s and LSTM-d models still fluctuate.", "This indicates that our model can learn the hidden representations of multiple tasks in a faster and smoother way than the other two methods.", "Besides, we observe in Figures 3c and 3d that combinations of tasks involving chunking typically show a smooth training curve, contrary to Figure 3b where the NER and POS tasks are combined.", "The fact that the training regimen fluctuates in the latter case for both LSTM-s and LSTM-d suggests that conflicts between those two tasks happen during optimisation, which we do not observe for our model.", "Also, Figure 3a illustrates that combining the three tasks altogether leads to comparably better performance of our model over LSTM-s and LSTM-d.", "We analyzed which task is benefited or harmed by the others under the three MTL settings we considered and present the results in Figure 4.", "We find that jointly learning chunking with NER or POS leads to better performance for all MTL models (see Figure 4b).", "This is in particular the case for our SC-LSTM model, which records the largest gain, especially when all tasks are being trained on.", "Figure 4c shows the results obtained on POS.", "Only our SC-LSTM model achieves a meaningful improvement.", "We however observe that the NER task tends to hurt the performance of POS, since in most cases, the performance of POS+NER is lower than the one obtained with POS+chunking.", "Clearly, the combination of different tasks has a different effect on the final performance of each task.", "The chunking task seems compatible with the NER and POS tasks, and it boosts the other two tasks in all three MTL settings, which is consistent with the results of Changpinyo et al. 
(2018).", "Directly jointly training on the POS and NER datasets tends to reduce the performance in LSTM-s and LSTM-d, which is also consistent with the conclusion in (Changpinyo et al., 2018).", "In conclusion, all of the results show that our SC-LSTM model is effective at capturing the mutual benefits of all combined tasks.", "Since it performs consistently better in various settings, we believe our model to be more robust.", "There are many works that use extra knowledge to improve the performance of sequence labeling tasks.", "Many works have focussed on jointly learning two tasks, often with one being considered as the main task, the other being the auxiliary one (Søgaard and Goldberg, 2016; Bjerva et al., 2016; Alonso and Plank, 2017).", "For instance, chunking, combinatory categorical grammar supertagging, NER, super senses (SemCor), or multiword expression + supersense is taken as the main task, while POS is the auxiliary task in (Søgaard and Goldberg, 2016).", "Exceptions to this line of work include (Collobert et al., 2011), which evaluates four tasks: POS, chunking, NER and semantic role labeling; and (Kiperwasser and Ballesteros, 2018), which considers a machine translation task with POS and dependency parsing.", "Niehues and Cho (2017) consider machine translation with POS and NER tasks; Zhang and Weiss (2016) show that jointly learning a POS tagger and a dependency parser is effective.", "Miwa and Bansal (2016) jointly trained models for entity detection and relation extraction in the field of relation extraction.", "Other works try to leverage language models to improve the performance of sequence labeling tasks.", "Notably, Liu et al. (2017) propose a model which uses a neural language model to learn character-level knowledge, and conducts sequence labeling to guide the language model towards specific tasks.", "Others (Peters et al., 2017, 2018; Devlin et al., 2018) use neural language models pre-trained on a large unlabeled corpus to learn context-sensitive representations of words, and leverage this representation in the sequence labeling model.", "More related to the present work are studies that analyze the effectiveness of different combinations of sequence labeling tasks in a multi-task learning setting.", "In particular, Changpinyo et al. (2018) conduct an investigation on 11 sequence labeling tasks, while Alonso and Plank (2017) evaluate 5 tasks but report significant gains for only one task.", "In this paper, we propose a simple yet powerful LSTM cell that leverages both shared and task-", "Figure 4: Results of different task groups on each test set: (a) NER, (b) chunking, and (c) POS.", "Horizontal lines show results for single-task models (NER: 89.39, Chunking: 94.44, and POS: 95.46).", "Johannes Bjerva, Barbara Plank, and Johan Bos.", "2016.", "Semantic tagging with deep residual networks.", "arXiv preprint arXiv:1609.07053 .", "Rich Caruana.", "1997.", "Multitask learning.", "Machine Learning , 28(1):41–75.", "Soravit Changpinyo, Hexiang Hu, and Fei Sha.", "2018.", "Multi-task learning for sequence tagging: An empirical study.", "arXiv preprint arXiv:1808.04151 .", "Jason PC Chiu and Eric Nichols.", "2015.", "Named entity recognition with bidirectional LSTM-CNNs.", "arXiv preprint arXiv:1511.08308 .", "Kevin Clark, Minh-Thang Luong, Christopher D. 
Manning, and Quoc Le.", "2018.", "Semi-supervised sequence modeling with cross-view training.", "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 1914–1925.", "Association for Computational Linguistics.", "Ronan Collobert and Jason Weston.", "2008.", "A unified architecture for natural language processing: Deep neural networks with multitask learning.", "In Proceedings of the 25th International Conference on Machine Learning , pages 160–167.", "ACM.", "Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.", "2011.", "Natural language processing (almost) from scratch.", "Journal of Machine Learning Research , 12(Aug):2493–2537.", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", "2018.", "BERT: Pre-training of deep bidirectional transformers for language understanding.", "arXiv preprint arXiv:1810.04805 .", "specific parameters in a multi-task setting designed for sequence labelling.", "We conduct extensive experiments to compare both single-task learning and multi-task learning models.", "We analyzed the influence of grouping different tasks under various multi-task settings.", "Experiments demonstrate the effectiveness of our model for sequence labeling tasks.", "We report new state-of-the-art results on both the POS and chunking tasks, and close to state-of-the-art performance on NER, without exploiting external resources or diving into dedicated feature engineering.", "Despite those positive outcomes, several issues with multi-task learning for sequence labeling remain open.", "In particular, we only considered 3 tasks here, and therefore plan to test our approach on more tasks, perhaps understanding better why some tasks are less useful to others.", "Also, there are several ways we could have used our SC-LSTM cell in our models which we would like to investigate further.", "In particular, in this work, we only used the task-specific hidden states in the last layer of the model, which can obviously be revisited." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "objective", "objective", "abstain", "method", "objective", "method" ]
[ "We apply a generative segmental model of task structure, guided by narration, to action segmentation in video.", "We focus on unsupervised and weakly-supervised settings where no action labels are known during training.", "Despite its simplicity, our model performs competitively with previous work on a dataset of naturalistic instructional videos.", "Our model allows us to vary the sources of supervision used in training, and we find that both task structure and narrative language provide large benefits in segmentation quality.", "Finding boundaries in a continuous stream is a crucial process for human cognition (Martin and Tversky, 2003; Zacks and Swallow, 2007; Levine et al., 2019; Unal et al., 2019).", "To understand and remember what happens in the world around us, we need to recognize the action boundaries as they unfold and also distinguish the important actions from the insignificant ones.", "This process, referred to as temporal action segmentation , is also an important first step in systems that ground natural language in videos (Hendricks et al., 2017).", "These systems must identify which frames in a video depict actions which amounts to distinguishing these frames from background ones and identify which actions ( e.g. , boiling potatoes) each frame depicts.", "Despite recent advances (Miech et al., 2019; Sun et al., 2019), unsupervised action segmentation in videos remains a challenge.", "The recent availability of large datasets of naturalistic instructional videos provides an opportunity for modeling of action segmentation in a rich task context (Yu et al., 2014; Zhou et al., 2018; Zhukov et al., 2019; Miech et al., 2019; Tang et al., 2019); Work begun while DF was interning at DeepMind.", "in these videos, a person teaches a specific high-level task ( e.g. , making croquettes) while describing the lower-level steps involved in that task ( e.g. , boiling potatoes).", "However, the real-world nature of these datasets introduces many challenges.", "For example, more than 70% of the frames in one of the YouTube instructional video datasets, CrossTask (Zhukov et al., 2019), consist of background regions ( e.g. 
, the video presenter is thanking their viewers), which do not correspond to any of the steps for the video's task.", "These datasets are interesting because they provide (1) narrative language that roughly corresponds to the activities demonstrated in the videos and (2) structured task scripts that define a strong signal of the order in which steps in a task are typically performed.", "As a result, these datasets provide an opportunity to study the extent to which task structure and language can guide action segmentation.", "Interestingly, young children can segment actions without any explicit supervision (Baldwin et al., 2001; Sharon and Wynn, 1998), by tapping into similar cues action regularities and language descriptions (Levine et al., 2019).", "While previous work mostly focuses on building action segmentation models that perform well on a few metrics (Richard et al., 2018; Zhukov et al., 2019), we aim to provide insight into how various modeling choices impact action segmentation.", "How much do unsupervised models improve when given implicit supervision from task structure and language, and which types of supervision help most?", "Are discriminative or generative models better suited for the task?", "Does explicit structure modeling improve the quality of segmentation?", "To answer these questions, we compare two existing models with a generative hidden semi-Markov model, varying the degree of supervision.", "that our model and models from past work both benefit substantially from the weak supervision provided by task structure and narrative language, even on top of rich features from state-of-the-art pretrained action and object classifiers.", "Our analysis also shows that: (1) Generative models tend to do better than discriminative models of the same or similar model class at learning the full range of step types, which benefits action segmentation; (2) Task structure affords strong, feature-agnostic baselines that are difficult for existing systems to surpass; (3) Reporting multiple metrics is necessary to understand each model's effectiveness for action segmentation; we can devise feature-agnostic baselines that perform well on single metrics despite producing low-quality action segments.", "Typical methods (Rohrbach et al., 2012; Singh et al., 2016; Xu et al., 2017; Zhao et al., 2017; Lea et al., 2017; Yeung et al., 2018; Farha and Gall, 2019) for temporal action segmentation consist of assigning action classes to intervals of videos and rely on manually-annotated supervision.", "Such annotation is difficult to obtain at scale.", "As a result, recent work has focused on training such models with less supervision: one line of work assumes that only the order of actions happening in the video is given and use this weak supervision to perform action segmentation (Bojanowski et al., 2014; Huang et al., 2016; Kuehne et al., 2017; Richard et al., 2017; Ding and Xu, 2018; Chang et al., 2019).", "Other approaches weaken this supervision and use only the set of actions that occur in each video (Richard et al., 2018), or are fully unsupervised (Sener and Yao, 2018; Kukleva et al., 2019).", "Instructional videos have gained interest over the past few years (Yu et al., 2014; Sener et al., 2015; Malmaud et al., 2015; Alayrac et al., 2016; Zhukov et al., 2019) since they enable weakly-supervised modeling: previous work most similar to ours consists of models that localize actions in narrated videos with minimal supervision (Alayrac et al., 2016; Sener et al., 2015; Elhamifar and Naing, 
2019; Zhukov et al., 2019).", "We present a generative model of action segmentation that incorporates duration modeling, narration and ordering constraints, and can be trained in all of the above supervision conditions by maximizing the likelihood of the data; while these past works have had these individual components, they have not yet all been combined.", "We use the recent CrossTask dataset (Zhukov et al., 2019) of instructional videos.", "To our knowledge, CrossTask is the only available dataset that has tasks from more than one domain, includes background regions, provides step annotations and naturalistic language.", "Other datasets lack one of these; e.g. they focus on one domain (Kuehne et al., 2014) or do not have natural language (Tang et al., 2019) or step annotations (Miech et al., 2019).", "An example instance from the dataset is shown in Figure 1, and we describe each aspect below.", "Tasks Each video comes from a task , e.g. make a latte , with tasks taken from the titles of selected WikiHow articles, and videos curated from YouTube search results for the task name.", "We focus on the primary section of the dataset, containing 2,700 videos from 18 different tasks.", "Steps and canonical order Each task has a set of steps : lower-level action step types, e.g. , steam milk and pour milk , which are typically completed when performing the task.", "Step names consist of a few words, typically naming an action and an object it is applied to.", "The dataset also provides a canonical step order for each task: an ordering, like a script (Schank and Abelson, 1977; Chambers and Jurafsky, 2008), in which a task's steps are typically performed.", "For each task, the set of step types and their canonical order were hand-constructed by the dataset creators based on section headers in the task's WikiHow article.", "Annotations Each video in the primary section of the dataset is annotated with labeled temporal segments identifying where steps occur.", "(In the weak supervision setting, these step segment labels are used only in evaluation, and never in training.)", "A given step for a task can occur multiple times, or not at all, in any of the task's videos.", "Steps in a video also need not occur in the task's canonical ordering (although in practice our results show that this ordering is a helpful inductive bias for learn-ing).", "Most of the frames in videos (72% over the entire corpus) are background not contained in any step segment.", "Narration Videos also have narration text (tran-scribed by YouTube's automatic speech recognition system) which typically consists of a mix of the Regions background pourmixtureintopan fl ip pancake background background Video Narration \"hey folks here welcome to my kitchen [...] folks my pan is nice and hot [...] just change the angle to show you [...] let cook [...] sit on towel [...] big old stack [...] Timestep Time (in s) Step background fl ip pancake rm pancake background Figure 1: An example video instance from the CrossTask dataset (Sec. 3).", "task demonstrator describing their actions and talking about unrelated topics.", "Although narration is temporally aligned with the video, and steps ( e.g. , pour milk ) are sometimes mentioned, these mentions often do not occur at the same time as the step they describe ( e.g. , let the milk cool before pouring it ).", "Zhukov et al. 
(2019) guide weakly-supervised training using the narration by defining a set of narration constraints for each video, which identify where in the video steps are likely to occur, using similarity between the step names and temporally-aligned narration (see Sec. 6.1).", "Our generative model of the video features and labeled task segments is a first-order semi-Markov model.", "We use a semi-Markov model for the action segmentation task because it explicitly models temporal regions of the video, their duration, their probable ordering, and their features.", "[Footnote 1: Semi-Markov models have also been shown to be successful in the similar domain of speech recognition (e.g., Pylkkönen and Kurimo, 2004).]", "It can be trained in an unsupervised way, without labeled regions, to maximize the likelihood of the features.", "Timesteps: Our atomic unit is a one-second region of the video, which we refer to as a timestep.", "A video with T timesteps has feature vectors x_{1:T}.", "The features x_t at timestep t are derived from the video, its narration, or both, and in our work (and past work on the dataset) are produced by pre-trained neural models which summarize some non-local information in the region containing each timestep, which we describe in Sec. 6.3.", "Regions: Our model segments a video with T timesteps into a sequence of regions, each of which consists of a consecutive number of timesteps (the region's duration).", "The number of regions K in a video and the duration d_k of each region can vary; the only constraint is that the sum of the durations equals the video length: ∑_{k=1}^{K} d_k = T.", "Each region has a label r_k, which is either one of the task's step labels (e.g., pour milk) or a special label BKG indicating the region is background.", "In our most general, unconstrained model, a given task step can occur multiple times (or not at all) as a region label in any video for the task, allowing step repetitions, dropping, and reordering.", "Structure: We define a first-order Markov (bigram) model over these region labels: P(r_{1:K}) = P(r_1) ∏_{k=2}^{K} P(r_k | r_{k-1}) (1), with tabular conditional probabilities.", "While region labels are part of the dataset, they are primarily used for evaluation: we seek models that can be trained in the unsupervised and weakly-supervised conditions where labels are unavailable.", "This model structure, while simple, affords a dynamic program allowing efficient enumeration over both all possible segmentations of the video into regions and assignments of labels to the regions, allowing unsupervised training (Sec. 4.1).", "Duration: Our model, following past work (Richard et al., 2018), parameterizes region durations using Poisson distributions, where each label type r has its own mean duration λ_r: d_k ∼ Poisson(λ_{r_k}).", "These durations are constrained so that they partition the video: e.g., region r_2 begins at timestep d_1 (after region r_1), and the final region r_K ends at the final timestep T.", "Timestep labels: The region labels r_{1:K} (step, or background) and region durations d_{1:K} together give a sequence of timestep labels l_{1:T} for all timesteps, where a timestep's label is equal to the label for the region it is contained in.", "Feature distribution: Our model's feature distribution p(x_t | l_t) is a class-conditioned multivariate Gaussian distribution: x_t ∼ Normal(μ_{l_t}, Σ), where l_t is the step label at timestep t.", "(We note that the assignment of labels to steps is latent and unobserved during unsupervised and weakly-supervised training.)", "We use a separate learned mean μ_l for each label type l, both steps and background.", "Labels are atomic and task-specific, e.g., the step type pour milk when it occurs in the task make a latte does not share parameters with the step add milk when it occurs in the task make pancakes.", "[Footnote 2: We experimented with sharing steps, or step components, across tasks in initial experiments, but found that it was helpful to have task-specific structural probabilities.]", "We use a diagonal covariance matrix Σ which is fixed to the empirical covariance of each feature dimension.", "[Footnote 3: We found that using a shared diagonal covariance matrix outperformed using full or unshared covariance matrices.]", "4.1 Training: In the unsupervised setting, labels l are unavailable at training (used only in evaluation).", "We describe training in this setting, as well as two supervised training methods which we use to analyze properties of the dataset and compare model classes.", "Unsupervised: We train the generative model as a hidden semi-Markov model (HSMM).", "We optimize the model's parameters to maximize the log marginal likelihood of the features for all video instance features x^(i) in the training set: L_ML = ∑_{i=1}^{N} log P(x^(i)_{1:T_i}) (2).", "Applying the semi-Markov forward algorithm (Murphy, 2002; Yu, 2010) allows us to marginalize over all possible sequences of step labels to compute the log marginal likelihood for each video as a function of the model parameters, which we optimize directly using backpropagation and mini-batched gradient descent with the Adam (Kingma and Ba, 2015) optimizer (a sketch of this forward pass follows this block).", "[Footnote 4: This is the same as mini-batched Expectation Maximization using gradient descent on the M-objective (Eisner, 2016).]", "See Appendix A for optimization details.", "Generative supervised: Here the labels l are observed; we train the model as a generative semi-Markov model (SMM) to maximize the log joint likelihood: L_JL = ∑_{i=1}^{N} log P(l^(i)_{1:T_i}, x^(i)_{1:T_i}) (3).", "We maximize this likelihood over the entire training set using the closed-form solution given the dataset's sufficient statistics (per-step feature means, average durations, and step transition frequencies).", "Discriminative supervised: To train the SMM model discriminatively in the supervised setting, we use gradient descent to maximize the log conditional likelihood: L_CL = ∑_{i=1}^{N} log P(l^(i)_{1:T_i} | x^(i)_{1:T_i}) (4).", "5 Benchmarks: We identify five modeling choices made in recent work: imposing a fixed ordering on steps (not allowing step reordering); allowing for steps to repeat in a video; modeling the duration of steps; using the language (narrations) associated with the video; and using a discriminative/generative model.", "We picked the recent models of Zhukov et al. (2019) and Richard et al. (2018) since they have non-overlapping strengths (see Table 1).", "ORDEREDDISCRIM: This work (Zhukov et al., 2019) uses a discriminative classifier which gives a probability distribution over labels at each timestep: p(l_t | x_t).", "Inference finds an assignment of steps to timesteps that maximizes ∑_t log p(l_t | x_t) subject to the constraints that: all steps are predicted exactly once; steps occur in the fixed canonical ordering defined for the task; and one background region occurs between each step.", "Unsupervised training of the model alternates between inferring labels using the dynamic program, and updating the classifier to maximize the probability of these inferred labels.", "[Footnote 5: To allow the model to predict step regions with duration longer than a single timestep, we modify this classifier to also predict a background class, and incorporate the scores of the background class into the dynamic program.]", "ACTIONSETS: This work (Richard et al., 2018) uses a generative model which has structure similar to ours, but uses dataset statistics (e.g., average video length and number of steps) to learn the structure distributions, rather than setting parameters to maximize the likelihood of the data.", "As in our model, region durations are modeled using a class-conditional Poisson distribution.", "The feature distribution is modeled using Bayesian inversion of a discriminative classifier (a multi-layer perceptron) with an estimated label prior.", "The structural parameters of the model (durations and class priors) are estimated using the length of each video, and the number of possible step types.", "As originally presented, this model depends on knowing which steps occur in a video at training time; for fair comparison, we adapt it to the same supervision conditions of Zhukov et al. (2019) by enforcing the canonical step ordering for the task during both training and evaluation.", "We compare models on the CrossTask dataset across supervision conditions.", "We primarily evaluate the models on action segmentation (Sec. 1).", "Past work on the dataset (Zhukov et al., 2019) has focused on a step recognition task, where models identify individual timesteps in videos that correspond to possible steps; for comparison, we also report performance for all models on this task.", "In all settings, the task for a given video is known (and hence the possible steps), but the settings vary in the availability of other sources of supervision: step labels for each timestep in a video, and constraints from language and step ordering.", "Models are trained on a training set and evaluated on a separate held-out testing set, consisting of different videos (from the same tasks).", "Fully unsupervised: No labels for timesteps are available during training.", "The only supervision is the number of possible step types for each task (and, as in all settings, which task each video is from).", "In evaluation, the task for a given video (and hence the possible steps, but not their ordering) is known.", "We follow past work in this setting (Sener et al., 2015; Sener and Yao, 2018) by finding a mapping from model states to region labels that maximizes label accuracy, averaged across all videos in the task.", "See Appendix C for details.", "Weakly supervised: No labels for timesteps are available, but two supervision types are used in the form of constraints (Zhukov et al., 2019): (1) Step ordering constraints: Step regions are constrained to occur in the canonical step ordering (see Sec. 3) for the task, but steps may be separated by background.", "We constrain the structure prior distribution p(r_1) and transition distribution p(r_{k+1} | r_k) of the HSMM to enforce this ordering.", "For p(r_1), we only allow non-zero probability for the background region, BKG, and for the first step in the task's ordering.", "p(r_k | r_{k-1}) constrains each step type to only transition to the next step in the constrained ordering, or to BKG.", "[Footnote 6: To enforce ordering when steps are separated by BKG, we annotate BKG labels with the preceding step type (but all BKG labels for a task share feature and duration parameters, and are merged for evaluation).]", "As step ordering constraints change the parameters of the model, when we use them we enforce them during both training and testing.", "While this obviates most of the learned structure of the HSMM, the duration model (as well as the feature model) is still learned.", "(2) Narration constraints: These give regions in the video where each step type is likely to occur.", "Zhukov et al. (2019) obtained these using similarities between word vectors for the transcribed narration and the words in the step labels, and a dynamic program to produce constraint regions that maximize these similarities, subject to the step ordering matching the canonical task ordering.", "See Zhukov et al. for details.", "We enforce these constraints in the HSMM by penalizing the feature distributions to prevent any step labels that occur outside of one of the allowed constraint regions for that step.", "Following Zhukov et al., we only use these narration constraints during training.", "[Footnote 7: We also experiment with using features derived from transcribed narration in Appendix G.]", "6.2 Evaluation: We use three metrics from past work, outlined here and described in more detail in Appendix D.", "To evaluate action segmentation, we use two varieties of the standard label accuracy metric (Sener and Yao, 2018; Richard et al., 2018): all label accuracy, which is computed on all timesteps, including background and non-background, as well as step label accuracy: accuracy only for timesteps that occur in a non-background region (according to the ground-truth annotations).", "Since these two accuracy metrics are defined on individual frames, they penalize models if they don't capture the full temporal extent of actions in their predicted segmentations.", "Our third metric is step recall, used by past work on the CrossTask dataset (Zhukov et al., 2019) to measure step recognition (defined in Sec.
6).", "This metric evaluates the fraction of step types which are correctly identified by a model when it is allowed to predict only one frame per step type, per video.", "A high step recall indicates a model can accurately identify at least one representative frame of each action type in a video.", "We also report three other statistics to analyze the predicted segmentations: (1) Sequence similarity: the similarity of the sequence of region labels predicted in the video to the groundtruth, using inverse Levenshtein distance normalized to be between 0 and 100.", "See Appendix D for more details.", "(2) Predicted background percentage: the percentage of timesteps for which the model predicts the background label.", "Models with a higher percentage than the ground truth background percentage (72%) are overpredicting background.", "(3) Number of segments: the number of step segments predicted in a video.", "Values higher than the ground truth average (7.7) indicate overly-fragmented steps.", "Sequence similarity and number of segments are particularly relevant for measuring the effects of structure, as they do not factor over individual timesteps (as do the all label and step label accuracies and step recall).", "We average values across the 18 tasks in the evaluation set (following Zhukov et al., 2019).", "For our features x 1: T , we use the same base features as Zhukov et al. (2019), which are produced by convolutional networks pre-trained on separate activity, object, and audio classification datasets.", "See Appendix B for details.", "In our generative models, we apply PCA (following Kuehne et al., 2014 and Richard et al., 2018) to project features to 300 dimensions and decorrelate dimensions (see Appendix B for details).", "8 7 Results We first define several baselines based on dataset statistics (Sec. 7.1), which we will find to be strong in comparison to past work.", "We then analyze each 8 This reduces the number of parameters that need to be learned in the emission distributions, both by reducing the dimensionality and allowing a diagonal covariance matrix.", "In early experiments we found PCA improved performance.", "aspect of our proposed model on the dataset in a supervised training setting (Sec. 7.2), removing some error sources of unsupervised learning and evaluating whether a given model fits the dataset (Liang and Klein, 2008).", "Finally, we move to our main setting, the weakly-supervised setting of past work, incrementally adding step ordering and narration constraints (see Sec. 6.1) to evaluate the degree to which each helps (Sec. 
7.3).", "Results are given in Table 2 for models trained on the CrossTask training set of primary tasks, and evaluated on the held-out validation set.", "We will describe and analyze each set of results in turn.", "See Figure 2 for a plot of models' performance on two key metrics, and Appendix I for example predictions.", "Table 2 (top block) shows baselines that do not use video (or narration) features, but predict steps according to overall statistics of the training data.", "These demonstrate characteristics of the data, and the importance of using multiple metrics.", "Predict background (B1) Since most timesteps are background, a model that predicts background everywhere can obtain high overall label accuracy, showing the importance of also using step label accuracy as a metric for action segmentation.", "Sample from the training distribution (B2) For each timestep in each video, we sample a label from the empirical distribution of step and background label frequencies for the video's task in the training data.", "Ordered uniform (B3) For each video, we predict step regions in the canonical step order, separated by background regions.", "The length of each region is set so that all step regions in a video have equal duration, and the percentage of background timesteps is equal to the corpus average.", "See Uniform in Figure 3a for sample predictions.", "Sampling each timestep label independently from the task distribution (row B2), and using a uniform step assignment in the task's canonical ordering with background (B3) both obtain similar step label accuracy, but the ordered uniform baseline improves substantially on the step recall metric, indicating that step ordering is a useful inductive bias for step recognition .", "Models in the unstructured block of Table 2 are classification models applied independently to all timesteps, allowing us to compare the performance of the feature models used as components in our structured models.", "We find that a Gaussian mixture model (row S3), which is used as the feature model in the HSMM, obtains comparable step recall and substantially higher step label accuracy than a discriminative linear classifer (row S1) similar to the one used in Zhukov et al. (2019), which is partially explained by the discriminative classifier overpredicting the background class (comparing Predicted Background % for those two rows).", "Using a higher capacity discriminative classifier, a neural net with a single hidden layer (MLP), improves performance over the linear model on several metrics (row S2); however, the MLP still overpredicts background, substantially underperforming the Gaussian mixture on the step label accuracy metric.", "In the structured block of Table 2, we compare the full models which use step constraints (Zhukov et al., 2019) or learned transition distributions (the SMM) to model task structure.", "The structured models learn (or in the case of Zhukov et al., enforce) orderings over the steps, which greatly improve their sequence similarity scores when compared to the unstructured models, and decrease step fragmentation (as measured by num. 
segments).", "Figure 3a shows predictions for a typical video, demonstrating this decreased fragmentation.", "9 9 We also perform an ablation study to understand the effect of the duration model.", "(a) Step segmentations in the full supervision condition for a video from the make kimchi fried rice task, comparing the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the Gaussian mixture (GMM) and semi-Markov (SMM) models.", "(b) Step segmentations in the noor weak-supervision conditions for a video from the make pancakes task, comparing the ground truth (GT) to predictions from our model without (HSMM) and with constraint supervision (HSMM+Narr+Ord) and from Zhukov et al. (2019) (ORDEREDDISCRIM ).", "We see two trends in the supervised results: (1) Generative models obtain substantially higher step label accuracy than discriminative models of the same or similar class.", "This is likely due to the fact that the generative models directly parameterize the step distribution.", "(See Appendix E.) (2) Structured sequence modeling naturally improves performance on sequence-level metrics (se-quence similarity and number of segments predicted) over the unstructured models.", "However, none of the learned structured models improve on the strong ordered uniform baseline (B3) which just predicts the canonical ordering of a task's steps (interspersed with background regions).", "This will motivate using this canonical ordering as a constraint in unsupervised learning.", "Overall, the SMM models obtain strong action segmentation performance (high step label accuracy without fragmenting segments or overpredicting background).", "Here models are trained without supervision for the labels l 1: T .", "We compare models trained without any constraints, to those that use constraints from step ordering and narration, in the Unand Weakly Supervised block of Table 2.", "Example outputs are shown in Appendix I. 
Our generative HSMM model affords training without any constraints (row U1).", "This model has high step label accuracy (compared to the other unsupervised models) but low all label accuracy, and similar scores for both metrics.", "This hints, and other metrics confirm, that the model is not adequately distinguishing steps from background: the percentage of predicted background is very low (31%) compared to the ground truth (72%, row GT).", "See HSMM in Figure 3b for predictions for a typical video.", "These results are attributable to features within a given video (even across step types) being more similar than features of the same step type in different videos (see Appendix H for feature visualizations).", "The induced latent model states typically capture this inter-video diversity, rather than distinguishing steps across tasks.", "We next add in constraints from the canonical step ordering, which our supervised results showed to be a strong inductive bias.", "Unlike in the fully unsupervised setting, the HSMM model with ordering (HSMM+Ord, row U4) learns to distinguish steps from background when constrained to predict each step region once in a video, with predicted background timesteps (70.6%) close to the ground truth (72%).", "However, performance of this model is still very low on the task metrics, comparable to or underperforming the ordered uniform baseline with background (row B3) on all metrics.", "This constrained step ordering setting also allows us to apply ACTIONSETS (Richard et al., 2018) and ORDEREDDISCRIM (Zhukov et al., 2019).", "ACTIONSETS obtains high step label accuracy, but substantially underpredicts background, as evidenced by both the all label accuracy and the low predicted background percentage.", "The tendency of ORDEREDDISCRIM to overpredict background, which we saw in the supervised setting (row S4), is even more pronounced in this weakly-supervised setting (row U3), resulting in scores very close to the predict background baseline (B1).", "Next, we use narration constraints (U5), which are enforced only during training time, following Zhukov et al. (2019).", "Narration constraints substantially improve all label accuracy (comparing U1 and U5).", "However, the model overpredicts background, likely because it doesn't enforce each step type to occur in a given video.", "Overpredicting background causes step label accuracy and step recall to decrease.", "Finally, we compare the HSMM and ORDEREDDISCRIM models when using both narration constraints (in training) and ordering constraints (in training and testing) in the ordering + narration block.", "Both models benefit substantially from narration on all metrics when compared to using only ordering supervision, more than doubling their performance on step label accuracy and step recall (comparing U6 and U7 to U3 and U4).", "Our weakly-supervised results show that: (1) Both action segmentation metrics, all label accuracy and step label accuracy, are important to evaluate whether models adequately distinguish meaningful actions from background.", "(2) Step constraints derived from the canonical step ordering provide a strong inductive bias for unsupervised step induction.", "Past work requires these constraints, and the HSMM, when trained without them, does poorly, learning to capture diversity across videos rather than to identify steps.", "(3) However, ordering supervision alone is not sufficient to allow these models to learn better segmentations than a simple baseline that just uses the ordering to assign labels (ordered uniform); narration is also required.", "Finally, we compare our full model to the ORDEREDDISCRIM model of Zhukov et al. (2019) in the primary data evaluation setup from that work: averaging results over 20 random splits of the primary data (Table 3).", "This is a low-data setting which uses only 30 videos per task as training data in each split.", "Accordingly, both models have lower performance, although the relative ordering is the same: higher step label accuracy for the HSMM, and higher all label accuracy and step recall for ORDEREDDISCRIM.", "Although in this low-data setting models overpredict background even more, this problem is less pronounced for the HSMM: 97.4% of timesteps for ORDEREDDISCRIM are predicted background (explaining its high all label accuracy), versus 87.1% for HSMM.", "We find that unsupervised action segmentation in naturalistic instructional videos is greatly aided by the inductive bias given by typical step orderings within a task, and narrative language describing the actions being done.", "While some results are more mixed (with the same supervision, different models are better on different metrics), we do observe that across settings and metrics, step ordering and narration increase performance.", "Our results also illustrate the importance of strong baselines: without weak supervision from step orderings and narrative language, even state-of-the-art unsupervised action segmentation models operating on rich video features underperform feature-agnostic baselines.", "We hope that future work will continue to evaluate broadly.", "While action segmentation in videos from diverse domains remains challenging (videos contain both a large variety of types of depicted actions and high visual variety in how the actions are portrayed), we find that structured generative models provide a strong benchmark for the task due to their abilities to capture the full diversity of action types (by directly modeling distributions over action occurrences), and to benefit from weak supervision.", "Future work might explore methods
for incorporating richer learned representations both of the diverse visual observations in videos, and the narration that describes them, into such models.", "Thanks to Dan Klein, Andrew Zisserman, Lisa Anne Hendricks, Aishwarya Agrawal, Gabor Melis, Angeliki Lazaridou, Anna Rohrbach, Justin Chiu, Susie Young, the DeepMind language team, and the anonymous reviewers for helpful feedback on this work.", "DF is supported by a Google PhD Fellowship." ]
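As promised in the training discussion above, here is a minimal NumPy sketch of the semi-Markov forward algorithm used to compute the log marginal likelihood of Eq. (2), with the Poisson duration model and shared-diagonal-covariance Gaussian feature model described in the sentences block. This is an illustrative reimplementation, not the authors' code; the function name, the argument layout, and the truncation of durations to at most max_dur timesteps are assumptions made for clarity.

    import numpy as np
    from scipy.special import logsumexp
    from scipy.stats import norm, poisson

    def hsmm_log_marginal(x, log_init, log_trans, dur_mean, mu, var, max_dur):
        # x: (T, D) features; L labels; log_init: (L,) log P(r_1);
        # log_trans: (L, L), log_trans[i, j] = log P(r_k = j | r_{k-1} = i);
        # dur_mean: (L,) Poisson means; mu: (L, D) label means; var: (D,) shared diagonal cov.
        T, _ = x.shape
        L = log_init.shape[0]
        # per-timestep Gaussian log-likelihood under each label, with prefix sums so
        # a region's emission score is a difference of two cumulative sums
        ll = np.stack([norm.logpdf(x, mu[l], np.sqrt(var)).sum(-1) for l in range(L)], axis=1)
        cum = np.vstack([np.zeros(L), np.cumsum(ll, axis=0)])            # (T+1, L)
        # duration log-probabilities, truncated to 1..max_dur (a simplifying assumption)
        log_dur = np.stack([poisson.logpmf(np.arange(1, max_dur + 1), dur_mean[l])
                            for l in range(L)], axis=1)                  # (max_dur, L)
        # alpha[t, l]: log prob of x_{1:t} whose last region has label l and ends at t
        alpha = np.full((T + 1, L), -np.inf)
        for t in range(1, T + 1):
            for d in range(1, min(max_dur, t) + 1):
                s = t - d                                                # region covers timesteps s+1..t
                seg = cum[t] - cum[s] + log_dur[d - 1]                   # (L,) emission + duration score
                # marginalize over the previous region's label (or use the initial distribution)
                prev = log_init if s == 0 else logsumexp(alpha[s][:, None] + log_trans, axis=0)
                alpha[t] = np.logaddexp(alpha[t], prev + seg)
        return logsumexp(alpha[T])                                       # log P(x_{1:T})

In the paper's actual setup this quantity is computed with differentiable operations so that Adam can backpropagate through the log marginal likelihood; the NumPy version above only illustrates the recurrence being marginalized over segmentations and labelings.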
[ "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "method", "result", "abstain", "other", "other" ]
[ "We introduce neural finite state transducers (NFSTs), a family of string transduction models defining joint and conditional probability distributions over pairs of strings.", "The probability of a string pair is obtained by marginalizing over all its accepting paths in a finite state transducer.", "In contrast to ordinary weighted FSTs, however, each path is scored using an arbitrary function such as a recurrent neural network, which breaks the usual conditional independence assumption (Markov property).", "NFSTs are more powerful than previous finite-state models with neural features (Rastogi et al., 2016).", "We present training and inference algorithms for locally and globally normalized variants of NFSTs.", "In experiments on dierent transduction tasks, they compete favorably against seq2seq models while oer-ing interpretable paths that correspond to hard monotonic alignments.", "Weighted finite state transducers (WFSTs) have been used for decades to analyze, align, and transduce strings in language and speech processing (Roche and Schabes, 1997; Mohri et al., 2008).", "They form a family of ecient, interpretable models with well-studied theory.", "A WFST describes a function that maps each string pair ( x , y ) to a weightoften a real number representing p ( x , y ) or p ( y | x ) .", "The WFST is a labeled graph, in which each path a represents a sequence of operations that describes how some x and some y could be jointly generated, or how x could be edited into y .", "Multiple paths for the same ( x , y ) pair correspond to dierent analyses (labeled alignments) of that pair.", "However, WFSTs can only model certain functions, known as the rational relations (Berstel and Reutenauer,", "1988).The weight of a path is simply the product of the weights on its arcs.", "This means s1 : s0 s2 <BOS> : <EOS> a: <i-a><V><o-> b:b <i-b><C><o-b> a: <i-a> : <o-> th: <i-t><i-h><C><o > Figure 1: A marked finite-state transducer T .", "In this paper, we propose neural finite state transducers (NFSTs), in which the weight of each path is instead given by some sort of neural network, such as an RNN.", "Thus, the weight of an arc can depend on the context in which the arc is used.", "By abandoning the Markov property, we lose exact dynamic programming algorithms, but we gain expressivity: the neural network can capture dependencies among the operations along a path.", "For example, the RNN might give higher weight to a path if it is internally consistent: it might thus prefer to transcribe a speaker's utterance with a path that maps similar sounds in similar contexts to similar phonemes, thereby adapting to the speaker's accent.", "Consider a finite-state transducer T as in Figure 1 (see Appendix A for background).", "Using the composition operator , we can obtain a new FST, x T , whose accepting paths correspond to the accepting paths of T that have input string x .", "Similarly, the accepting paths of T y correspond to the accepting paths of T that have output string y .", "Finally, x T y extracts the paths that have both properties.", "We define a joint probability distribution over ( x , y ) pairs by marginalizing over those paths: p ( x , y ) = (cid:88) a x T y p ( a ) = 1 Z ( T ) (cid:88) a x T y p ( a ) (1) where p ( a ) is the weight of path a and Z ( T ) = (cid:80) a T p ( a ) is a normalization constant.", "We define p ( a ) (cid:44) exp G ( a ) with G ( a ) being some parametric scoring function.", "In our experiments, we will adopt a fairly simple left-to-right RNN architecture (2.2), but 
one could easily substitute fancier architectures.", "We will also consider defining G by a locally normalized RNN that ensures Z ( T ) = 1 .", "In short, we use the finite-state transducer T to compactly define a set of possible paths a .", "The number of paths may be exponential in the size of T , or infinite if T is cyclic.", "However, in contrast to WFSTs, we abandon this combinatorial structure in favor of neural nets when defining the probability distribution over a .", "In the resulting marginal distribution p ( x , y ) given in equation (1), the path a that aligns x and y is a latent variable.", "This is also true of the resulting conditional distribution p ( y | x ) .", "We explore training and inference algorithms for various classes of NFST models (3).", "Classical WFSTs (Mohri et al., 2008) and BiRNN-WFSTs (Rastogi et al., 2016) use restricted scoring functions and so admit exact dynamic programming algorithms.", "For general NFSTs, however, we must resort to approximate computation of the model's training gradient, marginal probabilities, and predictions.", "In this paper, we will use sequential importance sampling methods (Lin and Eisner, 2018), leaving variational approximation methods to future work.", "Defining models using FSTs has several benefits: Output-sensitive encoding Currently popular models of p ( y | x ) used in machine translation and morphology include seq2seq (Sutskever et al., 2014), seq2seq with attention (Bahdanau et al., 2015; Luong et al., 2015), the Transformer (Vaswani et al., 2017).", "These models first encode x as a vector or sequence of vectors, and then condition the generation of y on this encoding.", "The vector is determined from x only.", "This is also the case in the BiRNN-WFST (Rastogi et al., 2016), a previous finite-state model to which we compare.", "By contrast, in our NFST, the state of the RNN as it reads and transduces the second half of x is influenced by the first halves of both x and y and their alignment.", "Inductive bias Typically, a FST is constructed with domain knowledge (possibly by compiling a regular expression), so that its states reflect interpretable properties such as syllable boundaries or linguistic features.", "Indeed, we will show below how to make these properties explicit by marking the FST arcs.", "The NFST's path scoring function then sees these marks and can learn to take them into account.", "The NFST also inherits any hard constraints from the FST: if the FST omits all ( x , y ) paths for some illegal x , y , then p ( x , y ) = 0 for any parameter vector (a structural zero).", "Interpretability Like a WFST, an NFST can explain why it mapped x to y in terms of a latent path a , which specifies a hard monotonic labeled alignment.", "The posterior distribution p ( a | x , y ) specifies which paths a are the best explanations (e.g., Table 5).", "We conduct experiments on three tasks: grapheme-to-phoneme, phoneme-to-grapheme, and action-to-command (Bastings et al., 2018).", "Our results on these datasets show that our best models can improve over neural seq2seq and previously proposed hard alignment models.", "An NFST is a pair ( T , G ) , where T is an unweighted FST with accepting paths A and G A R is a function that scores these paths.", "As explained earlier, we then refer to p ( a ) = exp G ( a ) as the weight of path a A .", "A weighted relation between input and output strings is given by p ( x , y ) , which is defined to be the total weight of all paths with input string x and output string y , where 
Σ and Δ are the input and output alphabets of T.", "The real parameter vector θ can be adjusted to obtain different weighted relations.", "[Table 1: Comparison between WFSTs, BiRNN-WFSTs (Rastogi et al., 2016), and NFSTs. WFSTs: trained by dynamic programming, no long-term output-output dependency, left-to-right factorization. BiRNN-WFSTs: trained by dynamic programming, no long-term output-output dependency, left-to-right factorization. Local NFSTs: trained by importance sampling, long-term output-output dependency, left-to-right factorization. Global NFSTs: trained by importance sampling, long-term output-output dependency, no left-to-right factorization.]", "We can normalize p̃ to get a probability distribution as shown in equation (1).", "To obtain contextual mark scores as desired, one simple architecture is a recurrent neural network: G_θ(ω) ≜ ∑_{t=1}^{|ω|} g_θ(s_{t-1}, ω_t) (2); s_t = f_θ(s_{t-1}, ω_t), with s_0 = 0 (3), where s_{t-1} ∈ ℝ^d is the hidden state vector of the network after reading ω_1 ⋯ ω_{t-1}.", "Weighted FST.", "A WFST over the (+, ×) semiring can be regarded as the special case in which G_θ(a) ≜ ∑_{t=1}^{|a|} g_θ(a_t).", "This is a sum of scores assigned to the arcs in a = a_1 a_2 ⋯.", "Marked FST.", "Our innovation is to allow the arcs' scores to depend on their context in the path.", "Now θ no longer associates a fixed score with each arc.", "Rather, we assume that each arc a in the FST comes labeled with a sequence of marks from a mark alphabet Ω, as illustrated in Figure 1.", "The marks reflect the FST constructor's domain knowledge about what arc a does (see §4.2 below).", "We now define G_θ(a) = G_θ(ω(a)), where ω(a) = ω(a_1) ω(a_2) ⋯ is the concatenated sequence of marks from the arcs along path a.", "It is sometimes helpful to divide marks into different classes.", "An arc can be regarded as a possible edit that aligns an input substring with an output substring in the context of transitioning from one FST state to another.", "The arc's input marks describe its input substring, its output marks describe its output substring, and the remaining marks may describe other properties of the arc's aligned input-output pair or the states that it connects.", "Recall that an FST encodes domain knowledge.", "Its paths represent alignments between input and output strings, where each alignment specifies a segmentation of x and y into substrings labeled with FST states.", "Decorating the arcs with marks furnishes the path scoring model with domain-specific information about the alignments.", "RNN scoring.", "If θ merely associated a fixed score with each mark, then the marked FST would be no more powerful than the WFST.", "The g function defines the score of reading ω_t in this left context, and f defines how doing so updates the state.", "In our experiments, we chose f to be the GRU state update function (Cho et al., 2014).", "We defined g_θ(s, ω_t) ≜ (Ws + b)^⊤ emb(ω_t).", "The parameter vector θ specifies the GRU parameters, W, b, and the mark embeddings emb(·).", "One could easily substitute much fancier architectures, such as a stacked BiLSTM with attention (Tilk and Alumäe, 2016), or a Transformer (Vaswani et al., 2017).", "In hopes of improving the inductive bias of the learner, we partitioned the hidden state vector into three sub-vectors: s_t = [s_t^a; s_t^x; s_t^y].", "The mark scoring function g_θ(s_{t-1}, ω_t) was as before, but we restricted the form of f, the state update function.", "s_t^a encodes all past marks and depends on the full hidden state so far: s_t^a = f^a(s_{t-1}, ω_t).", "However, we make s_t^x encode only the sequence of past input marks, ignoring all others.", "Thus, s_t^x = f^x(s_{t-1}^x, ω_t) if ω_t is an input mark, and s_t^x = s_{t-1}^x otherwise.", "Symmetrically, s_t^y encodes only the sequence of past output marks.", "This architecture is somewhat like Dyer et al. (2016), which also uses different sub-vectors to keep track of different aspects of the history.", "A difficulty with the general model form in equation (1) is that the normalizing constant Z(T) = ∑_{a ∈ T} p̃(a) must sum over a large set of paths, in fact an infinite set if T is cyclic.", "This sum may diverge for some values of the parameter vector θ, which complicates training of the model (Dreyer, 2011).", "Even if the sum is known to converge, it is in general intractable to compute it exactly.", "Thus, estimating the gradient of Z(T) during training involves approximate sampling from the typically high-entropy distribution p(a).", "The resulting estimates are error-prone because the sample size tends to be too small and the approximate sampler is biased.", "A standard solution in the WFST setting (e.g., Cotterell et al., 2014) is to use a locally normalized model, in which Z(T) is guaranteed to be 1.", "[Footnote 1: Provided that every state in T is co-accessible, i.e., has a path to a final state.]", "The big summation over all paths a is replaced by small summations, which can be computed explicitly over just the outgoing edges from a given state.", "Formally, we define the unnormalized score of arc a_i in the context of path a in the obvious way, by summing over the contextual scores of its marks: g_θ(a_i) ≜ ∑_{t=j+1}^{k} g_θ(s_{t-1}, ω_t) (4), where j = |ω(a_1) ⋯ ω(a_{i-1})| and k = |ω(a_1) ⋯ ω(a_i)|.", "Its normalized score is then g_{θ,T}(a_i) ≜ log( exp g_θ(a_i) / ∑_{a′} exp g_θ(a′) ), where a′ ranges over all arcs in T (including a_i itself) that emerge from the same state as a_i does.", "We can now score the paths in T using G_{θ,T}(a) = ∑_{i=1}^{|a|} g_{θ,T}(a_i) (5).", "This gives rise to a proper probability distribution p_θ(a) ≜ p̃_θ(a) = exp G_{θ,T}(a) over the paths of T.", "No global normalization constant is necessary.", "However, note that the scoring function now requires T as an extra subscript, because it is necessary when scoring a to identify the competitors in T of each arc a_i.", "Thus, when p(x, y) is found as usual by summing up the probabilities of all paths in x ∘ T ∘ y, each path is still scored using its arcs' competitors from T.", "This means that each state in x ∘ T ∘ y must record the state in T from which it was derived.", "Many algorithms for working with probability distributions, including our training and decoding algorithms below, rely on conditional sampling.", "In general, we would like to sample a path of T given the knowledge that its input and output strings fall into sets X and Y respectively.", "[Footnote 2: When X or Y is larger than a single string, it is commonly all of Σ* or Δ* respectively, in which case conditioning on it gives no information.]", "If X and Y are regular languages, this is equivalent to defining T′ = X ∘ T ∘ Y and sampling from p(a | T′) ≜ p_θ(a) / ∑_{a′ ∈ T′} p_θ(a′) (6).", "Due to the nonlinearity of G, the denominator of equation (6) is generally intractable.", "If T′ is cyclic, it cannot even be computed by brute-force enumeration.", "Thus, we fall back on normalized importance sampling, directly adopting the ideas of Lin and Eisner (2018) in our more general FST setting.", "We employ a proposal distribution q:
a | T (cid:48) ) , where Z = (cid:80) Mm (cid:48) =1 p ( a ( m (cid:48) ) ) q ( a ( m (cid:48) ) ) , and q is a locally normalized distribution over paths a T (cid:48) .", "In this paper we further parametrize q as q ( a ; T (cid:48) ) = T (cid:89) t =1 q t ( a t | a 1 ...t 1 ; , T (cid:48) ) , (8) q t ( a | a : t 1 ; , T (cid:48) ) exp( g ( s t 1 , a t ; , T ) + C ) , where C (cid:44) C ( s (cid:48) t , X, Y, ) R , s (cid:48) t (cid:44) f ( s t 1 , ( a )) is a compatibility function that is typically modeled using a neural network.", "In this paper, one the following three cases are encountered: X = x , is a string, and Y = : in this case T (cid:48) = x T .", "We let C = C x ( s (cid:48) t , RNN x ( x , i, ); ) , where i is the length of the input prefix in a 1 ...t", ".a , RNN x ( x , i, ) is the hidden state of the i -th position after reading x (not a nor ) backwards, and C x ( , ) is a feed-forward network that takes the concatenated vector of all arguments, and outputs a real scalar.", "We describe the parametrization of C x in Appendix C.1.", "2 When X or Y is larger than a single string, it is commonly all of or respectively, in which case conditioning on it gives no information.", "X = , and Y = y is a string: in this case T (cid:48) = T y .", "We let C = C y ( s (cid:48) t , RNN y ( y , j, ); ) , where j is the length of the output prefix in a 1 ...t", ".a , and RNN y , C y are similarly defined as in RNN x and C x .", "X and Y are both strings X = x , Y = y : in this case we let C = C xy ( s (cid:48) t , RNN x ( x , i, ) , RNN y ( y , j, ); ) .", "Given a path prefix a : t 1 , q t ( a | a : t 1 ; , T (cid:48) ) is defined over arcs a such that a : t 1", ".a is a valid path prefix in T (cid:48) .", "To optimize with regard to q , we follow (Lin and Eisner, 2018) and seek to find = argmin KL [ p || q ] , where p is the approximate distribution defined in equation (7), which is equivalent to maximizing the log-likelihood of q ( a ) when a is distributed according to the approximation p .", "In this paper, we consider joint training.", "The loss function of our model is defined as the negative log joint probability of string pair ( x , y ) : L ( x , y ) = log p ( x , y ) = log (cid:88) a x T y p ( a ) .", "(9) Since p is an exponential family distribution, the gradients of L can be written as (Bishop, 2006) L ( x , y ) = E a p ( | x T y ) [ log p ( a )] , (10) where p ( | x T y ) is a conditioned distribution over paths.", "Computing equation (10) requires sampling from p ( | x T y ) , which, as we discuss in 3.1, is often impractical.", "We therefore approximate it with L ( x , y ) = E a p ( | x T y ) [ log p ( a )] E a p ( | x T y ) [ log p ( a )] (11) = M (cid:88) m =1 w ( m ) G ( a ( m ) ) , (12) where q is a proposal distribution parametrized as in equation (8) (discussed in 3.1,) a (1) . . . a ( M ) q are i.i.d. samples of paths in x T y , and w ( m ) is the importance weight of the m -th sample satisfying w ( m ) exp G ( a ( m ) ) q ( a ( m ) ) , (cid:80) Mm =1 w ( m ) = 1 .", "Pseudocode for calculating equation (12) is listed in Algorithm 1.", "Algorithm 1 Compute approximate gradient for updating G Require: G : A R is an NFST scoring function, q is a distribution over paths, M N is the sample size 1: function Get-Gradient( G , M , q ) 2: for m in 1 . . . M do 3: a ( m ) q 4: w ( m ) exp G ( a ( m ) ) q ( a ) 5: end for 6: Z (cid:80) Mm =1 w ( m ) 7: for m in 1 . . . 
M$ do; 8: $w^{(m)} \leftarrow w^{(m)} / \hat{Z}$; 9: end for; 10: return $\sum_{m=1}^{M} w^{(m)} \nabla G(a^{(m)})$; 11: end function. 3.3 Decoding most probable strings. Besides finding good paths in a conditioned distribution, as we discuss in §3.1, we are also often interested in finding good output strings, which is conventionally referred to as the decoding problem; we define it as finding the best output string $y^* \triangleq \operatorname{argmax}_{y \in L(Y)} p_Y(y \mid T')$, where $p_Y(y \mid T') \triangleq \sum_{a \in T' \circ y} p(a) / \sum_{a' \in T'} p(a')$ (13).", "$\hat{y} \triangleq \operatorname{argmax}_y \hat{p}_Y(y \mid T')$ is a consistent estimator of $y^*$, which can directly be used to find the best string.", "However, making this estimate accurate might be expensive: it requires sampling many paths in the machine $T'$, which is usually cyclic and therefore has infinitely many more paths than $T' \circ y_k$, which has finitely many paths when $A$ is acyclic.", "On the other hand, for the task of finding the best string among a pool of candidates, we do not need to compute (or approximate) the denominator in equation (13), since $y^* = \operatorname{argmax}_{y \in L(Y)} \sum_{a \in T' \circ y} p(a)$ (14).", "As in the case for paths, the language $L(Y)$ is usually infinitely large.", "However, given an output candidate $y_k \in L' \subseteq L(Y)$, we can approximate the summation in equation (14) using importance sampling: $\sum_{a \in T' \circ y_k} p(a) = \mathbb{E}_{a \sim q(\cdot \mid T' \circ y_k)}\big[ p(a) / q(a \mid T' \circ y_k) \big]$ (15). [Algorithm 2: Training procedure for $G$.]", "where $q(\cdot \mid T' \circ y_k)$ is a proposal distribution over paths in $T' \circ y_k$.", "In this paper we parametrize $q(\cdot \mid T' \circ y_k)$ following the definition in equation (8).", "When $L'$ is finitely large, we reduce the decoding task to a reranking task.", "To populate $L'$, one possibility is to marginalize over paths in the approximate distribution $\hat{p}(a \mid T')$ discussed in §3.1 to obtain an estimate $\hat{p}_Y(y \mid T')$, and use its support as $L'$.", "Note that it is possible to populate the candidate pool in other ways, each with its advantages and drawbacks: for example, one can use a top-$k$ path set from a weighted (Markovian) FST.", "This approach guarantees exact computation, and the pool quality would no longer depend on the quality of the smoothing distribution $q$.", "However, it is also a considerably weaker model and may yield uninspiring candidates.", "In the common case where the conditioned machine $T' = X \circ T \circ Y$ has $X = x$ as the input string, and $Y$ is the universal acceptor that accepts $\Delta^*$, one can obtain a candidate pool from seq2seq models: seq2seq models can capture long-distance dependencies between input and output strings, and are typically fast to train and decode from.", "However, they are not applicable in the case where $L(Y) \neq \Delta^*$.", "Experimental details of decoding are further discussed in §4.3.", "Our experiments mainly aim to: (1) show the effectiveness of NFSTs on transduction tasks; (2) illustrate how prior knowledge can be introduced into NFSTs and improve performance; (3) demonstrate the interpretability of our model.", "Throughout, we experiment on three tasks:", "(i) grapheme-to-phoneme,", "(ii) phoneme-to-grapheme, and", "(iii) actions-to-commands.", "We compare with competitive string transduction baseline models on these tasks.", "Grapheme-to-phoneme and phoneme-to-grapheme (G2P/P2G) refer to the transduction between words' spellings and phonemic transcriptions.", "English
has a highly irregular orthography (Venezky, 2011), which necessitates the use of rich models for this task.", "We use a portion of the standard CMUDict dataset: the Sphinx-compatible version of CMUDict (Weide, 2005).", "As for metrics, we choose the widely used exact match accuracy and edit distance.", "Action-to-command (A2C) refers to the transduction between an action sequence and imperative commands.", "We use NACS (Bastings et al., 2018) in our experiments.", "As for metrics, we use exact match accuracy (EM).", "Note that in the A2C setting, a given input can yield different outputs, e.g., I_JUMP I_WALK I_WALK corresponds to both 'jump and walk twice' and 'walk twice after jump'.", "NACS is a finite set of action-command pairs; we consider a predicted command to be correct if it is in the finite set and its corresponding actions are exactly the input.", "We evaluate on the length setting proposed by Bastings et al. (2018), where we train on shorter sequences and evaluate on longer sequences.", "NFSTs require an unweighted FST $T$ which defines a scaffold for the relation it recognizes.", "In this paper we experiment with two versions of $T$: the first is a simple 'general' design $T_0$, which contains only three states $\{q_0, q_1, q_2\}$, where the only arc between $q_0$ and $q_1$ consumes the mark <BOS>, and the only arc between $q_1$ and $q_2$ consumes the mark <EOS>.", "$T_0$ has exactly one accepting state, which is $q_2$.", "To ensure that $T_0$ defines a relation for all possible string pairs $(x, y)$, we add to it all self-loop arcs at $q_1$ of the form $a = (q_1, q_1, \omega, \sigma, \delta)$ with $(\sigma, \delta) \in (\Sigma \cup \{\epsilon\}) \times (\Delta \cup \{\epsilon\})$.", "To recognize transduction rules defined in the Wikipedia English IPA Help page, we define $T_{\mathrm{IPA}}$, which has all states and arcs of $T_0$, plus additional states and arcs to handle multi-grapheme and multi-phoneme transductions defined in the IPA Help (footnote 3): for example, the transduction th $\to$ $\theta$ is encoded as two arcs $(q_1, q_3, \omega, \mathrm{t}, \theta)$ and $(q_3, q_1, \omega', \mathrm{h}, \epsilon)$.", "Because of the lack of good prior knowledge that can be added to the A2C experiments, we only use the general FST in those experiments.", "Nor do we encode the special marks that we are going to introduce below (footnote 4).", "4.2.1 Design of mark sequences. As with regular WFSTs, the arcs can often be hand-engineered to incorporate prior knowledge.", "Recall that, as we describe in §2.2, each arc is associated with a mark sequence.", "In this paper, we will always derive the mark sequence on an arc $a = (s', s, \omega', \sigma, \delta)$ of the transducer $T$ as $\omega = [\sigma, \omega', \delta, s]$, where $\omega'$ can be engineered to reflect FST- and application-specific properties of a path, such as the IPA Help list we mentioned earlier.", "One way to encode such knowledge into mark sequences is to have special mark symbols in mark sequences for particular transductions.", "In this paper we experiment with two schemes of marks: IPA Help (IPA).", "We define the IPA mark $\omega_{\mathrm{IPA}} \in \{\mathrm{C}, \mathrm{V}\}$, where the symbol C indicates that this arc is part of a transduction rule listed in the consonant section of the Wikipedia English IPA Help page.", "Similarly, the mark V indicates that the transduction rule is listed in the vowel section.", "(Footnote 4) The NACS dataset was actually generated from a regular transducer, which we could in principle use, but doing so would make the transduction fully deterministic and probably not interesting/hard enough.", "Phoneme Classes (Phone).", "We define Phone marks $\omega_{\mathrm{Phone}} = \rho(\delta)$, where $\rho$ is a lookup function that returns the phoneme class of $\delta$ as defined by the CMUDict dataset (footnote 5).", "In this paper we
experiment with the following three FST and mark configurations for the G2P/P2G experiments: -IPA -Phone, in which case $\omega' = \epsilon$ for all arcs.", "$T = T_0$.", "+IPA -Phone, in which case $\omega' = [\omega_{\mathrm{IPA}}]$ when the transduction rule is found in the IPA Help list, and otherwise $\omega' = \epsilon$.", "$T = T_{\mathrm{IPA}}$.", "+IPA +Phone, in which case $\omega' = [\omega_{\mathrm{IPA}}, \omega_{\mathrm{Phone}}]$ when the transduction rule is found in the IPA Help list, and otherwise $\omega' = [\omega_{\mathrm{Phone}}]$.", "$T = T_{\mathrm{IPA}}$.", "As we said earlier, we only use $T = T_0$ with no special marks for the A2C experiments.", "Experimental results on these different configurations are in §5.3.", "We experiment with the following methods to decode the most probable strings:", "Approximate Posterior (AP).", "We approximate the posterior distribution over output strings $p_Y(y \mid T')$, and pick $\hat{y} = \operatorname{argmax}_y \hat{p}_Y(y \mid T')$ as the output.", "Reranking AP.", "As we discuss in §3.3, improving $\hat{y}$ by taking more path samples in $T'$ may be expensive.", "The reranking method uses the support of $\hat{p}_Y$ as a candidate pool $L'$, and for each $y_k \in L'$ we estimate equation (15) using path samples in $T' \circ y_k$.", "Reranking External.", "This decoding method uses $k$-best lists from external models.", "In this paper, we make use of sequence-to-sequence baseline models as the candidate pool $L'$.", "Reranking AP + External.", "This decoding method uses the union of the support of $\hat{p}_Y$ and the $k$-best lists from the sequence-to-sequence baseline models as the candidate pool $L'$.", "In this paper, we take 128 path samples per candidate for all Reranking methods.", "(Footnote 5: https://github.com/cmusphinx/cmudict/blob/master/cmudict.phones) 5 Results. 5.1 Baselines. We compare NFSTs against the following baselines: BiRNN-WFSTs, proposed by Rastogi et al. (2016), are weighted finite-state transducers whose weights encode input string features by the use of recurrent neural networks.", "As we note in Table 1, they can be seen as a special case of NFSTs, where the Markov property is kept, but where exact inference is still possible.", "Seq2seq models are the standard toolkit for transduction tasks.", "We make use of the attention mechanism proposed by Luong et al. (2015), which accomplishes 'soft alignments' that do not enforce a monotonic alignment constraint.", "Neuralized IBM Model 1 is a character transduction model recently proposed by Wu et al.
(2018), which marginalizes over non-monotonic hard alignments between input and output strings.", "Like Luong et al. (2015), they did not enforce monotonic alignment constraints; but unlike them, they did not make use of the input feeding mechanism (footnote 6), where past alignment information is fed back into the RNN decoder.", "This particular omission allows Wu et al. (2018) to do exact inference with a dynamic programming algorithm.", "All baseline systems are tuned on the validation sets.", "The seq2seq models employ GRUs, with word and RNN embedding size 500 and a dropout rate of", "0.3.", "They are trained with the Adam optimizer (Kingma and Ba, 2014) over 50 epochs.", "The Neuralized IBM Model 1 models are tuned as described in Wu et al. (2018).", "Table 2 indicates that the BiRNN-WFST models (Rastogi et al., 2016) perform worse than the other models.", "Their Markovian assumption helps enable dynamic programming, but restricts their expressive power, which greatly hampers the BiRNN-WFST's performance on the P2G/G2P tasks.", "The NACS task also relies highly on output-output interactions, and the BiRNN-WFST performs very poorly there.", "Table 3 shows results from different decoding methods on the G2P/P2G tasks, configuration +IPA+Phone.", "AP performs significantly worse than Reranking AP, suggesting that the estimate $\hat{y}$ suffers from the variance problem.", "Interestingly, of the decoding methods that employ external models, Reranking External performs better than Reranking AP + External, despite having a smaller candidate pool.", "We think there is some product-of-experts effect in Reranking External, since the external model may not be biased in the same way as our model is.", "But such benefits vanish when candidates from AP are also in the pool: our learned approximation learns the bias in the model, hence the worse performance of Reranking AP + External.", "This suggests an interesting regularization trick in practice: populating the candidate pool using external models to hide our model bias.", "However, when we compare our method against non-NFST baseline methods we do not make use of such tricks, to ensure a fairer comparison.", "In Table 4 we see that combining both +IPA and +Phone improves model generalizability over the general FST (-IPA -Phone).", "We also note that using only the IPA marks leads to degraded performance [Table 4: Average exact match accuracy (%, higher is better) and edit distance (lower is better) on G2P and P2G. EM Accuracy (Dev/Test) and Edit Distance (Dev/Test): -IPA -Phone 31.8/29.3, 1.38/1.373; +IPA -Phone 31.3/29.2, 1.367/1.431; +IPA +Phone 32.7/31.8, 1.319/1.332.]", "compared to the general FST baseline.", "This is a surprising result; one explanation is that the IPA marks are not defined on all paths that transduce the intended input-output pairs: NFSTs are capable of recognizing phoneme-grapheme alignments in different paths (footnote 7), but only one such path is marked by +IPA.", "But we leave a more thorough analysis to future work.", "Recently, there has been work relating finite-state methods and neural architectures.", "For example, Schwartz et al. (2018) and Peng et al.
(2018) have shown the equivalence between some neural models and WFSAs.", "The most important difference of our work is that, in addition to classifying strings, NFSTs can also transduce strings.", "Moreover, NFSTs allow free topology in FST design, and break the Markovian assumption.", "In addition to the models we compare against in §4, we note that Aharoni and Goldberg (2017) and Deng et al. (2018) are also similar to our work, in that they also marginalize over latent alignments, although they do not enforce the monotonicity constraint.", "Work that discusses globally normalized sequence models is relevant to our work.", "In this paper, we discuss a training strategy that bounds the partition function; other ways to train a globally normalized model (not necessarily probabilistic) include (Wiseman and Rush, 2016; Andor et al., 2016).", "On the other hand, our locally normalized FSTs bear resemblance to Dyer et al. (2016), which was also locally normalized, and also employed importance sampling for training.", "Neural finite-state transducers (NFSTs) are able to model string pairs, respecting their monotonic alignment while also enjoying RNNs' power to handle non-finite-state phenomena.", "They compete favorably (footnote 7: this is discussed further in Appendix B.2)", "with state-of-the-art neural models on transduction tasks.", "At the same time, it is easy to inject domain knowledge into NFSTs for inductive bias, and they offer interpretable paths.", "In this paper, we have used rather simple architectures for our RNNs; one could experiment with multiple layers and attention.", "One could also experiment with associating marks differently with arcs; the marks are able to convey useful domain information to the RNNs.", "For example, in a P2G or G2P task, all arcs that cross a syllable boundary might update the RNN state using a syllable mark.", "We envision using regular expressions to build the NFSTs, and embedding marks in the regular expressions as a way of sending useful features to the RNNs to help them evaluate paths.", "In this paper, we have studied NFSTs as standalone systems.", "But as probabilistic models, they can be readily embedded in a bigger picture: it should be directly feasible to incorporate a globally/locally normalized NFST into a larger probabilistic model (Finkel and Manning, 2009; Chiang et al., 2010).", "The path weights of NFSTs could be interpreted simply as scores, rather than log-probabilities.", "One would then decode by seeking the 1-best path with input $x$, e.g., via beam search or Monte Carlo Tree Search.", "In this setting, one might attempt to train the NFST using methods similar to the max-violation structured perceptron or the structured SVM.", "This work has been generously supported by a Google Faculty Research Award and by Grant No. 1718846 from the National Science Foundation, both to the last author.", "Hao Zhu is supported by the Tsinghua University Initiative Scientific Research Program.", "We thank Shijie Wu for providing us with the IBM Neuralized Model 1 experiment results." ]
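The gradient estimator in Algorithm 1 and equation (12) above can be sketched compactly. The following is a minimal Python illustration of self-normalized importance sampling; the callables `sample_q`, `score_G`, `grad_G`, and `logq` are hypothetical stand-ins for the proposal sampler, the NFST path score, its parameter gradient, and the proposal log-probability, and this is not the authors' implementation.

```python
import numpy as np

def get_gradient(score_G, grad_G, sample_q, logq, M):
    """Self-normalized importance-sampling gradient estimate (eq. 12).

    Assumed interfaces (hypothetical, for illustration only):
      sample_q() -> one path a^(m) drawn from the proposal q
      score_G(a) -> G(a), the unnormalized path score (a log-weight)
      grad_G(a)  -> gradient of G(a) w.r.t. the parameters (ndarray)
      logq(a)    -> log q(a), the proposal's log-probability of path a
    """
    paths = [sample_q() for _ in range(M)]
    # w^(m) is proportional to exp G(a^(m)) / q(a^(m)); compute in log-space.
    logw = np.array([score_G(a) - logq(a) for a in paths])
    logw -= logw.max()              # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()                    # normalize so that sum_m w^(m) = 1
    # Return the weighted combination sum_m w^(m) * grad G(a^(m)).
    return sum(wm * grad_G(a) for wm, a in zip(w, paths))
```

The normalization step is what makes the estimator biased but consistent, matching the discussion of normalized importance sampling above.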
[ "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their abilities of language understanding.", "For medical domains, the background knowledge sources are especially useful, due to the massive medical terms and their complicated relations are difficult to understand in text.", "In this work, we introduce SMedBERT, a medical PLM trained on large-scale medical corpora, incorporating deep structured semantics knowledge from neighbours of linked-entity.", "In SMedBERT, the mention-neighbour hybrid attention is proposed to learn heterogeneous-entity information, which infuses the semantic representations of entity types into the homogeneous neighbouring entity structure.", "Apart from knowledge integration as external features, we propose to employ the neighbors of linked-entities in the knowledge graph as additional global contexts of text mentions, allowing them to communicate via shared neighbors, thus enrich their semantic representations.", "Experiments demonstrate that SMedBERT significantly outperforms strong baselines in various knowledge-intensive Chinese medical tasks.", "It also improves the performance of other tasks such as question answering, question matching and natural language inference.", "1 1 Introduction Pre-trained Language Models (PLMs) learn effective context representations with self-supervised tasks, spotlighting in various NLP tasks (Wang et al., 2019a; Nan et al., 2020; Liu et al., 2020a).", "In addition, Knowledge-Enhanced PLMs (KEPLMs) (Zhang et al., 2019; Liu et al., 2020b; Wang et al., 2019b) further benefit language understanding by Corresponding author.", "In the literatures, a majority of KEPLMs (Zhang et al., 2020a; Hayashi et al., 2020; Sun et al., 2020) inject information of entities corresponding to mention-spans from Knowledge Graphs (KGs) into contextual representations.", "However, those KEPLMs only utilize linked-entity in the KGs as auxiliary information, which pay little attention to the neighboring structured semantics information of the entity linked with text mentions.", "In the medical context, there exist complicated domain knowledge such as relations and medical facts among medical terms (Rotmensch et al., 2017; Li et al., 2020), which are difficult to model using previous approaches.", "To address this issue, we consider leveraging structured semantics knowledge in medical KGs from the two aspects.", "(1) Rich semantic information from neighboring structures of linked-entities, such as entity types and relations, are highly useful for medical text understanding.", "As in Figure 1, (novel coronavirus) can be the cause of many diseases, such as (pneumonia) and (respiratory syndrome).", "2 (2) Additionally, we leverage neighbors of linked-entity as global contexts to complement plain-text contexts used in (Mikolov et al., 2013a; Pennington et al., 2014).", "The structure knowledge contained in neighbouring entities can act as the knowledge bridge between mention-spans, facilitating the interaction of different mention representations.", "Hence, PLMs can learn better representations for rare medical terms.", "In this paper, we introduce SMedBERT, a KEPLM pre-trained over large-scale medical corpora and medical KGs.", "To the best of our knowledge, SMedBERT is the first PLM with structured semantics knowledge injected in the medical domain.", "Specifically, the contributions of SMedBERT mainly include two modules: Mention-neighbor Hybrid Attention: We fuse the 
embeddings of the node and type of linked-entity neighbors into contextual target mention representations.", "The type-level and node-level attentions help to learn the importance of entity types and of the neighbors of the linked entity, respectively, in order to reduce the knowledge noise injected into the model.", "The type-level attention transforms the homogeneous node-level attention into a heterogeneous learning process over neighboring entities.", "Mention-neighbor Context Modeling: We propose two novel self-supervised learning tasks for promoting interaction between a mention-span and its corresponding global context, namely masked neighbor modeling and masked mention modeling.", "The former enriches the representations of context neighboring entities based on the well-trained target mention-span, while the latter focuses on gathering that information back from the neighboring entities to a masked target, such as a low-frequency mention-span that is poorly represented (Turian et al., 2010).", "In the experiments, we compare SMedBERT against various strong baselines, including mainstream KEPLMs pre-trained over our medical resources.", "The underlying medical NLP tasks include: named entity recognition, relation extraction, question answering, question matching and natural language inference.", "The results show that SMedBERT consistently outperforms all the baselines on these tasks.", "(Footnote 2) Although we focus on Chinese medical PLMs here.", "The proposed method can be easily adapted to other languages, which is beyond the scope of this work.", "PLMs in the Open Domain.", "PLMs have gained much attention recently, proving successful for boosting the performance of various NLP tasks (Qiu et al., 2020).", "Early works on PLMs focus on feature-based approaches to transform words into distributed representations (Collobert and Weston, 2008; Mikolov et al., 2013b; Pennington et al., 2014; Peters et al., 2018).", "BERT (Devlin et al., 2019) (as well as its robustly optimized version RoBERTa (Liu et al., 2019b)) employs bidirectional transformer encoders (Vaswani et al., 2017) and self-supervised tasks to generate context-aware token representations.", "Further improvements in performance are mostly based on the following three types of techniques: self-supervised tasks (Joshi et al., 2020), transformer encoder architectures (Yang et al., 2019) and multi-task learning (Liu et al., 2019a).", "Knowledge-Enhanced PLMs.", "As existing BERT-like models only learn knowledge from plain corpora, various works have investigated how to incorporate knowledge facts to enhance the language understanding abilities of PLMs.", "KEPLMs are mainly divided into the following three types.", "(1) Knowledge-enhanced by Entity Embedding: ERNIE-THU (Zhang et al., 2019) and KnowBERT (Peters et al., 2019) inject linked entities as heterogeneous features learned by KG embedding algorithms such as TransE (Bordes et al., 2013).", "(2) Knowledge-enhanced by Entity Description: E-BERT (Zhang et al., 2020a) and KEPLER (Wang et al., 2019b) add extra description text of entities to enhance semantic representations.", "(3) Knowledge-enhanced by Triplet Sentence: K-BERT (Liu et al., 2020b) and CoLAKE (Sun et al., 2020) convert triplets into sentences and insert them into the training corpora without pre-trained embeddings.", "Previous studies on KG embedding (Nguyen et al., 2016; Schlichtkrull et al., 2018) have shown that utilizing the surrounding facts of an entity yields more informative embeddings, which is the focus of our work.",
"PLMs in the Medical Domain.", "PLMs in the medical domain can be generally divided into three categories.", "(1) BioBERT (Lee et al., 2020), Blue-BERT (Peng et al., 2019), SCIBERT (Beltagy et al., 2019) and ClinicalBert (Huang et al., 2019) apply continual learning on medical domain texts, such as PubMed abstracts, PMC full-text articles and MIMIC-III clinical notes.", "(2) PubMedBERT (cid:68)(cid:437)(cid:367)(cid:410)(cid:349)(cid:882)(cid:44)(cid:286)(cid:258)(cid:282)(cid:3)(cid:94)(cid:286)(cid:367)(cid:296)(cid:882)(cid:4)(cid:410)(cid:410)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374) (cid:38)(cid:286)(cid:286)(cid:282)(cid:3)(cid:38)(cid:381)(cid:396)(cid:449)(cid:258)(cid:396)(cid:282)(cid:3)(cid:62)(cid:258)(cid:455)(cid:286)(cid:396) (cid:100)(cid:381)(cid:364)(cid:286)(cid:374)(cid:3)(cid:47)(cid:374)(cid:393)(cid:437)(cid:410)(cid:3) (cid:68)(cid:454) (cid:100)(cid:882)(cid:28)(cid:374)(cid:272)(cid:381)(cid:282)(cid:286)(cid:396) (cid:60)(cid:882)(cid:28)(cid:374)(cid:272)(cid:381)(cid:282)(cid:286)(cid:396) (cid:69)(cid:454) (cid:68)(cid:437)(cid:367)(cid:410)(cid:349)(cid:882)(cid:44)(cid:286)(cid:258)(cid:282)(cid:3)(cid:94)(cid:286)(cid:367)(cid:296)(cid:882)(cid:4)(cid:410)(cid:410)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374) (cid:44)(cid:455)(cid:271)(cid:396)(cid:349)(cid:282)(cid:3)(cid:4)(cid:410)(cid:410)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374)(cid:3)(cid:69)(cid:286)(cid:410)(cid:449)(cid:381)(cid:396)(cid:364) (cid:44)(cid:286)(cid:410)(cid:286)(cid:396)(cid:381)(cid:336)(cid:286)(cid:374)(cid:286)(cid:381)(cid:437)(cid:400)(cid:3)(cid:47)(cid:374)(cid:296)(cid:381)(cid:396)(cid:373)(cid:258)(cid:410)(cid:349)(cid:381)(cid:374)(cid:3)(cid:38)(cid:437)(cid:400)(cid:349)(cid:381)(cid:374) (cid:87)(cid:396)(cid:286)(cid:882)(cid:410)(cid:396)(cid:258)(cid:349)(cid:374)(cid:349)(cid:374)(cid:336)(cid:3)(cid:100)(cid:258)(cid:400)(cid:364)(cid:400) (cid:68)(cid:62)(cid:68)(cid:3) (cid:68)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374)(cid:882)(cid:374)(cid:286)(cid:349)(cid:336)(cid:346)(cid:271)(cid:381)(cid:396)(cid:3)(cid:18)(cid:381)(cid:374)(cid:410)(cid:286)(cid:454)(cid:410)(cid:3)(cid:68)(cid:381)(cid:282)(cid:286)(cid:367)(cid:349)(cid:374)(cid:336) (cid:894)(cid:258)(cid:895) (cid:94)(cid:68)(cid:286)(cid:282)(cid:17)(cid:28)(cid:90)(cid:100)(cid:3)(cid:4)(cid:396)(cid:272)(cid:346)(cid:349)(cid:410)(cid:286)(cid:272)(cid:410)(cid:437)(cid:396)(cid:286) (cid:60)(cid:374)(cid:381)(cid:449)(cid:367)(cid:286)(cid:282)(cid:336)(cid:286)(cid:3)(cid:47)(cid:374)(cid:393)(cid:437)(cid:410) (cid:68)(cid:286)(cid:282)(cid:349)(cid:272)(cid:258)(cid:367)(cid:3)(cid:60)(cid:39) (cid:7136) (cid:2000) (cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3)(cid:3) (cid:11253) (cid:8706) (cid:3) (cid:708) (cid:18)(cid:75)(cid:115)(cid:47)(cid:24)(cid:882)(cid:1005)(cid:1013) (cid:709) (cid:69)(cid:286)(cid:349)(cid:336)(cid:346)(cid:271)(cid:381)(cid:396)(cid:349)(cid:374)(cid:336)(cid:28)(cid:374)(cid:410)(cid:349)(cid:410)(cid:349)(cid:286)(cid:400) (cid:100)(cid:455)(cid:393)(cid:286) (cid:69)(cid:381)(cid:282)(cid:286) (cid:69)(cid:381)(cid:282)(cid:286) (cid:87)(cid:28)(cid:87)(cid:90) (cid:100) (cid:455) (cid:393) (cid:286) (cid:74) (cid:202) 
(cid:44)(cid:455)(cid:271)(cid:396)(cid:349)(cid:282)(cid:3)(cid:4)(cid:410)(cid:410)(cid:856)(cid:3)(cid:920)(cid:3)(cid:47)(cid:374)(cid:296)(cid:381)(cid:396)(cid:856)(cid:3)(cid:38)(cid:437)(cid:400)(cid:349)(cid:381)(cid:374)(cid:3)(cid:68)(cid:381)(cid:282)(cid:437)(cid:367)(cid:286)(cid:400) (cid:202) (cid:68)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374)(cid:882)(cid:374)(cid:286)(cid:349)(cid:336)(cid:346)(cid:271)(cid:381)(cid:396)(cid:3)(cid:18)(cid:381)(cid:374)(cid:410)(cid:286)(cid:454)(cid:410)(cid:3)(cid:68)(cid:381)(cid:282)(cid:286)(cid:367)(cid:349)(cid:374)(cid:336) (cid:3843)(cid:68)(cid:258)(cid:400)(cid:364)(cid:286)(cid:282)(cid:3)(cid:69)(cid:286)(cid:349)(cid:336)(cid:346)(cid:271)(cid:381)(cid:396)(cid:3)(cid:68)(cid:381)(cid:282)(cid:286)(cid:367)(cid:349)(cid:374)(cid:336) (cid:20) (cid:85) (cid:22) (cid:85) (cid:21) (cid:85) (cid:364)(cid:374)(cid:381)(cid:449)(cid:367)(cid:286)(cid:282)(cid:336)(cid:286)(cid:3)(cid:272)(cid:381)(cid:374)(cid:410)(cid:286)(cid:454)(cid:410) (cid:364)(cid:374)(cid:381)(cid:449)(cid:367)(cid:286)(cid:282)(cid:336)(cid:286)(cid:3)(cid:272)(cid:381)(cid:374)(cid:410)(cid:286)(cid:454)(cid:410) (cid:23) (cid:85) (cid:896)(cid:68)(cid:4)(cid:94)(cid:60)(cid:897) (cid:312) (cid:68)(cid:258)(cid:400)(cid:364)(cid:286)(cid:282)(cid:3)(cid:68)(cid:286)(cid:374)(cid:410)(cid:349)(cid:381)(cid:374)(cid:3)(cid:68)(cid:381)(cid:282)(cid:286)(cid:367)(cid:349)(cid:374)(cid:336) (cid:87)(cid:62)(cid:68)(cid:400) (cid:410)(cid:381)(cid:364)(cid:286)(cid:374)(cid:400) (cid:20) (cid:80) (cid:79) (cid:22) (cid:80) (cid:79) (cid:21) (cid:80) (cid:79) Figure 2: Model overview of SMedBERT.", "(Gu et al., 2020) learns weights from scratch using PubMed data to obtain an in-domain vocabulary, alleviating the out-of-vocabulary (OOV) problem.", "This training paradigm needs the support of large-scale domain data and resources.", "(3) Some other PLMs use domain self-supervised tasks for pretraining.", "For example, MC-BERT (Zhang et al., 2020b) masks Chinese medical entities and phrases to learn complex structures and concepts.", "Disease-BERT (He et al., 2020) leverages the medical terms and its category as the labels to pre-train the model.", "In this paper, we utilize both domain corpora and neighboring entity triplets of mentions to enhance the learning of medical language representations.", "In the PLM, we denote the hidden feature of each token { w 1 , ..., w N } as { h 1 , h 2 , ..., h N } where N is the maximum input sequence length and the total number of pre-training samples as M .", "Let E be the set of mention-span e m in the training corpora.", "Furthermore, the medical KG consists of the entities set E and the relations set R .", "The triplet set is S = { ( h, r, t ) | h E , r R , t E} , where h is the head entity with relation r to the tail entity t .", "The embeddings of entities and relations trained on KG by TransR (Lin et al., 2015) are represented as ent and rel , respectively.", "The neighboring entity set recalled from KG by e m is denoted as N e m = { e 1 m , e 2 m , ..., e Km } where K is the threshold of our PEPR algorithm.", "We denote the number of entities in the KG as Z .", "The dimensions of the hidden representation in PLM and the KG embeddings are d 1 and d 2 , respectively.", "The main architecture of the our model is shown in Figure 2.", "SMedBERT mainly includes three components: (1) Top-K entity sorting determine which K neighbour entities to use for each mention.", "(2) Mention-neighbor hybrid attention aims to 
infuse the structured semantics knowledge into encoder layers, which includes type attention, node attention and gated position infusion module.", "(3) Mention-neighbor context modeling includes masked neighbor modeling and masked mention modeling aims to promote mentions to leverage and interact with neighbour entities.", "Previous research shows that simple neighboring entity expansion may induce knowledge noises during PLM training (Wang et al., 2019a).", "In order to recall the most important neighboring entity set from the KG for each mention, we extend the Personalized PageRank (PPR) (Page et al., 1999) algorithm to filter out trivial entities.", "3 Recall that the iterative process in PPR is V i = (1 ) A V i 1 + P where A is the normalized adjacency matrix, is the damping factor, P is uniformly distributed jump probability vector, and V is the iterative score vector for each entity.", "PEPR specifically focuses on learning the weight for the target mention span in each iteration.", "It 3 We name our algorithm to be Personalized Entity PageRank, abbreviated as PEPR.", "where T is the sum of frequencies of all entities.", "t e m is the frequency of e m in the corpora.", "After sorting, we select the topK entity set N e m .", "Besides the embeddings of neighboring entities, SMedBERT integrates the type information of medical entities to further enhance semantic representations of mention-span.", "Different types of neighboring entities may have different impacts.", "Given a specific mention-span e m , we compute the neighboring entity type attention.", "Concretely, we calculate hidden representation of each entity type as h = (cid:80) e im E m h e im .", "E m are neighboring entities of e m with the same type and h e im = ent (cid:0) e im (cid:1) R d 2 .", "where f sp is the self-attentive pooling (Lin et al., 2017) to generate the mention-span representation h e m R d 1 and the ( h i , h i +1 , . . . , h j ) is the hidden representation of tokens ( w i , w i +1 , . . . 
, w_j)$ in the mention-span $e_m$, as produced by the PLM.", "$h'_{e_m} \in \mathbb{R}^{d_2}$ is obtained with the non-linear activation function $\sigma(\cdot)$, namely GELU (Hendrycks and Gimpel, 2016), and the learnable projection matrix $W_{be} \in \mathbb{R}^{d_1 \times d_2}$.", "LN is the LayerNorm function (Ba et al., 2016).", "Then, we calculate each type attention weight using the type representation $h_\tau \in \mathbb{R}^{d_2}$ and the transformed mention-span representation $h'_{e_m}$: $\alpha'_\tau = \tanh(h'_{e_m} W_t + h_\tau W_{t'}) W_a$ (3), where $W_t \in \mathbb{R}^{d_2 \times d_2}$, $W_{t'} \in \mathbb{R}^{d_2 \times d_2}$ and $W_a \in \mathbb{R}^{d_2 \times 1}$.", "Finally, the neighboring entity type attention weights are obtained by normalizing the attention scores $\alpha'_\tau$ over all entity types $\mathcal{T}$.", "Apart from entity type information, different neighboring entities also have different influences.", "Specifically, we devise the neighboring entity node attention to capture the different semantic influences of the neighboring entities on the target mention-span and to reduce the effect of noise.", "We calculate the entity node attention using the mention-span representation $h'_{e_m}$ and the type-weighted neighboring entity representations $h_{e^i_m}$ as: $\beta'_{e_m e^i_m} = (h'_{e_m} W_q)(h_{e^i_m} W_k)^{\top} / \sqrt{d_2}$ (4) and $\beta_{e_m e^i_m} = \exp(\beta'_{e_m e^i_m}) / \sum_{e^i_m \in \mathcal{N}_{e_m}} \exp(\beta'_{e_m e^i_m})$ (5), where $W_q \in \mathbb{R}^{d_2 \times d_2}$ and $W_k \in \mathbb{R}^{d_2 \times d_2}$ are the attention weight matrices.", "The representations of all neighboring entities in $\mathcal{N}_{e_m}$ are aggregated into $\check{h}'_{e_m} \in \mathbb{R}^{d_2}$: $\hat{h}'_{e_m} = \sum_{e^i_m \in \mathcal{N}_{e_m}} \beta_{e_m e^i_m} (h_{e^i_m} W_v + b_v)$ (6) and $\check{h}'_{e_m} = \mathrm{LN}(\hat{h}'_{e_m} + \sigma((\hat{h}'_{e_m} W_{l1} + b_{l1}) W_{l2}))$ (7), where $W_v \in \mathbb{R}^{d_2 \times d_2}$, $W_{l1} \in \mathbb{R}^{d_2 \times 4 d_2}$, $W_{l2} \in \mathbb{R}^{4 d_2 \times d_2}$.", "$b_v \in \mathbb{R}^{d_2}$ and $b_{l1} \in \mathbb{R}^{4 d_2}$ are the bias vectors.", "$\check{h}'_{e_m}$ is the mention-neighbor representation from the hybrid attention module.", "Knowledge-injected representations may divert texts from their original meanings.", "We further reduce knowledge noise via gated position infusion: $h'_{e_m f} = \sigma([h'_{e_m} \,\|\, \check{h}'_{e_m}] W_{mf} + b_{mf})$ (8) and $\tilde{h}'_{e_m f} = \mathrm{LN}(h'_{e_m f} W_{bp} + b_{bp})$ (9), where $W_{mf} \in \mathbb{R}^{2 d_2 \times 2 d_2}$, $W_{bp} \in \mathbb{R}^{2 d_2 \times d_1}$, $b_{mf} \in \mathbb{R}^{2 d_2}$, $b_{bp} \in \mathbb{R}^{d_1}$.", "$h'_{e_m f} \in \mathbb{R}^{2 d_2}$ is the span-level infusion representation.", "$\|$ denotes the concatenation operation.", "$\tilde{h}'_{e_m f} \in \mathbb{R}^{d_1}$ is the final knowledge-injected representation for mention $e_m$.", "We generate the output token representation $h_{if}$ by (footnote 4): $g_i = \tanh([h_i \,\|\, \tilde{h}'_{e_m f}] W_{ug} + b_{ug})$ (10) and $h_{if} = \sigma([h_i \,\|\, g_i \odot \tilde{h}'_{e_m f}] W_{ex} + b_{ex}) + h_i$ (11), where $W_{ug}, W_{ex} \in \mathbb{R}^{2 d_1 \times d_1}$.", "$b_{ug}, b_{ex} \in \mathbb{R}^{d_1}$.", "$\odot$ denotes element-wise multiplication.", "(Footnote 4) We find that restricting the knowledge infusion position to tokens helps improve performance.", "To fully exploit the structured semantic knowledge in the KG, we further introduce two novel self-supervised pre-training tasks, namely Masked Neighbor Modeling (MNeM) and Masked Mention Modeling (MMeM).", "Formally, let $r$ be the relation between the mention-span $e_m$ and a neighboring entity $e^i_m$: $h_{mf} = f_{sp}([h_{if} \,\|\, h_{(i+1)f} \,\|\, \cdots \,\|\, h_{jf}])$ and $h'_{mf} = \sigma(h_{mf} W_{sa})$ (12), where $h_{mf}$ is the mention-span hidden feature based on the tokens' hidden representations $(h_{if}, h_{(i+1)f}, \ldots, h_{jf})$.", "$h_r = \mathrm{rel}(r) \in \mathbb{R}^{d_2}$ is the representation of relation $r$, and $W_{sa} \in \mathbb{R}^{d_1 \times d_2}$ is a learnable projection matrix.", "The goal of MNeM is to leverage the structured semantics in surrounding entities while preserving the knowledge of relations between entities.", "Consider the objective function of skip-gram with negative sampling (SGNS) (Mikolov et al., 2013a) and the score function of TransR (Lin et al., 2015): $\mathcal{L}_S = \log f_s(w, c) + k \, \mathbb{E}_{c_n \sim P_D}[\log f_s(-w, c_n)]$ (13) and $f_{tr}(h, r, t) = -\| h M_r + r - t M_r \|$ (14), where $w$ in $\mathcal{L}_S$ is the target word of context $c$.", "$f_s$ is the compatibility function measuring how well the target word fits into the context.", "Inspired by SGNS, and following the general energy-based framework (LeCun et al., 2006), we treat mention-spans in the corpora as target words, and the neighbors of the corresponding entities in the KG as contexts that provide additional global context.", "We employ Sampled-Softmax (Jean et al., 2015) as the criterion $\mathcal{L}_{\mathrm{MNeM}}$ for the mention-span $e_m$: $\mathcal{L}_{\mathrm{MNeM}} = -\sum_{\mathcal{N}_{e_m}} \log \frac{\exp(f_s(\xi))}{\exp(f_s(\xi)) + K \, \mathbb{E}_{e_n \sim Q(e_n)}[\exp(f_s(\xi'))]}$ (15), where $\xi$ denotes the triplet $(e_m, r, e^i_m)$, $e^i_m \in \mathcal{N}_{e_m}$.", "$\xi'$ denotes the negative triplets $(e_m, r, e_n)$, and $e_n$ is a negative entity sampled with $Q(e^i_m)$, detailed in Appendix B. To preserve the knowledge of relations between entities, we define the compatibility function as: $f_s(e_m, r, e^i_m) = \lambda \, \frac{h_{mf} M_r + h_r}{\| h_{mf} M_r + h_r \|} \cdot \frac{(h_{e^i_m} M_r)^{\top}}{\| h_{e^i_m} M_r \|}$ (16), where $\lambda$ is a scale factor.", "Assuming the norms of both $h_{mf} M_r + h_r$ and $h_{e^i_m} M_r$ are 1, we have $f_s(e_m, r, e^i_m) = \lambda \Leftrightarrow f_{tr}(h_{mf}, h_r, h_{e^i_m}) = 0$ (17), which indicates that the proposed $f_s$ is equivalent to $f_{tr}$.", "Because $\| h_{e_n} M_r \|$ needs to be calculated for each $e_n$, the computation of the score function $f_s$ is costly.", "Hence, we transform part of the formula of $f_s$ as follows: $(h_{mf} M_r + h_r)(h_{e_n} M_r)^{\top} = [h_{mf} \; 1] \big[ \begin{smallmatrix} M_r \\ h_r \end{smallmatrix} \big] \big[ \begin{smallmatrix} M_r \\ h_r \end{smallmatrix} \big]^{\top} [h_{e_n} \; 0]^{\top} = [h_{mf} \; 1] \, M^P_r \, [h_{e_n} \; 0]^{\top}$ (18); in this way, we eliminate the computation of transforming each $h_{e_n}$.", "Finally, to compensate for the offset introduced by the negative sampling function $Q(e^i_m)$ (Jean et al., 2015), we complement $f_s(e_m, r, e^i_m)$ as: $\lambda \, \frac{[h_{mf} \; 1] M^P_r}{\| [h_{mf} \; 1] M^P_r \|} \cdot \frac{[h_{e^i_m} \; 0]^{\top}}{\| h_{e^i_m} \|} - \log Q(e^i_m)$ (19). 3.4.2 Masked Mention Modeling. In contrast to MNeM, MMeM transfers the semantic information in neighboring entities back to the masked mention $e_m$.", "where $Y_m$ is the ground-truth representation of $e_m$ and $h_{ip} = p(w_i) \in \mathbb{R}^{d_2}$.", "$p$ is the pre-trained embedding of BERT on our medical corpora.", "The mention-span representation obtained by our model is $h_{mf}$.", "For a sample $s$, the MMeM loss $\mathcal{L}_{\mathrm{MMeM}}$ is calculated via mean squared error: $\mathcal{L}_{\mathrm{MMeM}} = \sum_{m_i \in \mathcal{M}_s} \| h_{m_i f} - Y_{m_i} \|^2$ (21), where $\mathcal{M}_s$ is the set of mentions of sample $s$.", "In SMedBERT, the training objectives mainly consist of three parts, including the self-supervised losses proposed in previous works and the mention-neighbor context modeling losses proposed in our work.", "Our model can be applied to medical text pre-training directly in different languages, as long as high-quality medical KGs can be obtained.", "The total loss is as follows: $\mathcal{L}_{total} = \mathcal{L}_{EX} + \lambda_1 \mathcal{L}_{\mathrm{MNeM}} + \lambda_2 \mathcal{L}_{\mathrm{MMeM}}$ (22), where $\mathcal{L}_{EX}$ is the sum of the sentence-order prediction (SOP) (Lan et al.,
2020) and masked language modeling losses.", "$\lambda_1$ and $\lambda_2$ are the hyperparameters.", "Pre-training Data.", "The pre-training corpora after pre-processing contain 5,937,695 text segments with 3,028,224,412 tokens (4.9 GB).", "The KG embeddings are trained by TransR (Lin et al., 2015) on two trusted data sources, namely Symptom-In-Chinese from OpenKG (footnote 5) and DXY-KG (footnote 6), containing 139,572 and 152,508 entities, respectively.", "The numbers of triplets in the two KGs are 1,007,818 and 3,764,711.", "The pre-training corpora and the KGs are further described in Appendix A.1.", "Task Data.", "We use four large-scale datasets in ChineseBLUE (Zhang et al., 2020b) to evaluate our model, which are benchmarks for Chinese medical NLP tasks.", "Additionally, we test models on four datasets from real application scenarios provided by the DXY company (footnote 7) and CHIP (footnote 8), i.e., Named Entity Recognition (DXY-NER), Relation Extraction (DXY-RE, CHIP-RE) and Question Answering (WebMedQA (He et al., 2019)).", "For other information on the downstream datasets, we refer readers to Appendix A.2.", "In this work, we compare SMedBERT with general PLMs, domain-specific PLMs, and KEPLMs with knowledge embeddings injected, pre-trained on our Chinese medical corpora:", "General PLMs: We use three Chinese BERT-style models, namely BERT-base (Devlin et al., 2019), BERT-wwm (Cui et al., 2019) and RoBERTa (Liu et al., 2019b).", "All the weights are initialized from (Cui et al., 2020).", "Domain-specific PLMs: As very few PLMs in the Chinese medical domain are available, we consider the following models.", "MC-BERT (Zhang et al., 2020b) is pre-trained over a Chinese medical corpus via masking tokens of different granularities. (Footnotes: 5 http://www.openkg.cn/dataset/symptom-in-chinese; 6 https://portal.dxy.cn/; 7 https://auth.dxy.cn/accounts/login; 8 http://www.cips-chip.org.cn:8088/home.) [Table 1: Results of the unsupervised semantic similarity task (Acc@1), reported over D1 / D2 / D3: SGNS-char-med 27.21% / 27.16% / 21.72%; SGNS-word-med 24.64% / 24.95% / 20.37%; GLOVE-char-med 27.24% / 27.12% / 21.91%; GLOVE-word-med 24.41% / 23.89% / 20.56%; BERT-open 29.79% / 29.41% / 21.83%; BERT-wwm-open 29.75% / 29.55% / 21.97%; RoBERTa-open 30.84% / 30.56% / 21.98%; MC-BERT 30.63% / 30.34% / 22.65%; BioBERT-zh 30.84% / 30.69% / 22.71%; ERNIE-med 30.97% / 30.78% / 22.99%; KnowBERT-med 30.95% / 30.77% / 23.07%; SMedBERT 31.81% / 32.14% / 24.08%. 'med' refers to models continually pre-trained on medical corpora, and 'open' means open-domain corpora; 'char' and 'word' refer to the token granularity of the input samples.]", "We also pre-train BERT using our corpora, denoted as BioBERT-zh.", "KEPLMs: We employ two SOTA KEPLMs continually pre-trained on our medical corpora as our baseline models, including ERNIE-THU (Zhang et al., 2019) and KnowBERT (Peters et al., 2019).", "For a fair comparison, KEPLMs that use additional resources other than KG embeddings are excluded (see Section 2), and all the baseline KEPLMs are injected with the same KG embeddings.", "The detailed parameter settings and training procedure are in Appendix B.
4.3 Intrinsic Evaluation. To evaluate the semantic representation ability of SMedBERT, we design an unsupervised semantic similarity task.", "Specifically, we extract all entity pairs with equivalence relations in the KGs as positive pairs.", "For each positive pair, we use one of the entities as the query entity and the other as the positive candidate, which is used to sample other entities as negative candidates.", "We denote this dataset as D1.", "Besides, the entities in the same positive pair often have many neighbours in common.", "We select positive pairs with large proportions of common neighbours as D2.", "Additionally, to verify the ability of SMedBERT to enhance low-frequency mention representations, we extract all positive pairs with at least one low-frequency mention as D3.", "There are 359,358, 272,320 and 41,583 samples in total for D1, D2 and D3, respectively.", "We describe the [Table 2: Performance on Named Entity Recognition (NER) and Relation Extraction (RE) tasks in terms of F1. NER columns: cMedQANER (Dev/Test), DXY-NER (Dev/Test), Average (Test); RE columns: CHIP-RE (Test), DXY-RE (Dev/Test), Average (Test). BERT-open: 80.69%/83.12%, 79.12%/79.03%, 81.08%; 85.86%, 94.18%/94.13%, 90.00%. BERT-wwm-open: 80.52%/83.07%, 79.48%/79.29%, 81.18%; 86.01%, 94.35%/94.38%, 90.20%. RoBERTa-open: 80.92%/83.29%, 79.27%/79.33%, 81.31%; 86.19%, 94.64%/94.66%, 90.43%. BioBERT-zh: 80.72%/83.38%, 79.52%/79.45%, 81.42%; 86.12%, 94.54%/94.64%, 90.38%. MC-BERT: 81.02%/83.46%, 79.79%/79.59%, 81.53%; 86.09%, 94.74%/94.73%, 90.41%. KnowBERT-med: 81.29%/83.75%, 80.86%/80.44%, 82.10%; 86.27%, 95.05%/94.97%, 90.62%. ERNIE-med: 81.22%/83.87%, 80.82%/80.87%, 82.37%; 86.25%, 94.98%/94.91%, 90.58%. SMedBERT: 82.23%/84.75%, 83.06%/82.94%, 83.85%; 86.95%, 95.73%/95.89%, 91.42%.]", "details of collecting data and embedding words in Appendix C.
In these experiments, we compare SMedBERT with three types of models: classical word embedding methods (SGNS (Mikolov et al., 2013a), GLOVE (Pennington et al., 2014)), PLMs and KEPLMs.", "We compute the similarity between the representation of the query entity and all the other entities, retrieving the most similar one.", "The evaluation metric is top-1 accuracy (Acc@1).", "Experiment results are shown in Table 1.", "From the results, we observe that: (1) SMedBERT greatly outperforms all baselines, especially on dataset D2 (+1.36%), where most positive pairs have many shared neighbours, demonstrating the ability of SMedBERT to utilize semantic information from the global context.", "(2) On dataset D3, SMedBERT improves performance significantly (+1.01%), indicating that our model is effective at enhancing the representations of low-frequency mentions.", "We first evaluate our model on NER and RE tasks, which are closely related to entities in the input texts.", "Table 2 shows the performance on the medical NER and RE tasks.", "For the NER and RE tasks, we can observe from the results: (1) Compared with PLMs trained on open-domain corpora, KEPLMs with medical corpora and knowledge facts achieve better results.", "(2) The performance of SMedBERT is greatly improved compared with the strongest baseline on the two NER datasets (+0.88%, +2.07%), and by (+0.68%, +0.92%) on the RE tasks.", "We also evaluate SMedBERT on the QA, QM and NLI tasks, and the performance is shown in Table 3.", "We can observe that SMedBERT improves performance consistently on these datasets (+0.90% on QA, +0.89% on QM and +0.63% on NLI).", "In general, it can be seen from Table 2 and Table 3 that injecting domain knowledge, especially structured semantic knowledge, can improve the results greatly.", "In this experiment, we explore the model performance on the NER and RE tasks with different entity hit ratios, which control the proportion of knowledge-enhanced mention-spans in the samples.", "[Figure 3: Entity hit ratio results of SMedBERT and ERNIE in NER and RE tasks.] The average", "number of mention-spans in the samples is about 40.", "Figure 3 illustrates the performance of SMedBERT and ERNIE-med (Zhang et al., 2019).", "From the results, we can observe that: (1) The performance improves significantly at the beginning and then stays stable as the hit ratio increases, showing that heterogeneous knowledge is beneficial for language understanding, and indicating that too many knowledge facts are unhelpful for further improving model performance, due to knowledge noise (Liu et al., 2020b).", "(2) Compared with previous approaches, our SMedBERT model improves performance more, and more stably.", "We further evaluate the model performance under different $K$ over the test sets of DXY-NER and DXY-RE.", "Figure 4 shows the model results with $K \in \{5, 10, 20, 30\}$.", "In our settings, SMedBERT achieves the best performance across the different tasks around $K = 10$.", "The results of SMedBERT show that model performance first increases and then decreases as $K$ grows.", "This phenomenon also reflects the knowledge noise problem: injecting too much knowledge from neighboring entities may hurt performance.", "In Table 4, we choose three important model components for our ablation study and report the test
module, and mention-neighbor context modeling respectively, which includes two masked language model loss L MNeM and L MMeM .", "From the result, we can observe that: (1) Without any of the three mechanisms, our model performance can also perform competitively with the strong baseline ERNIE-med (Zhang et al., 2019).", "(2) Note that after removing the hybrid attention module, the performance of our model has the greatest decline, which indicates that injecting rich heterogeneous knowledge of neighboring entities is effective.", "In this work, we address medical text mining tasks with the structured semantics KEPLM proposed named SMedBERT.", "Accordingly, we inject entity type semantic information of neighboring entities into node attention mechanism via heterogeneous feature learning process.", "Moreover, we treat the neighboring entity structures as additional global contexts to predict the masked candidate entities based on mention-spans and vice versa.", "The experimental results show the significant improvement of our model on various medical NLP tasks and the intrinsic evaluation.", "There are two research directions that can be further explored: (1) Injecting deeper knowledge by using farther neighboring entities as contexts; (2) Further enhancing Chinese medical long-tail entity semantic representation.", "We would like to thank anonymous reviewers for their valuable comments.", "This work is supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000904, and Alibaba Group through Alibaba Research Intern Program." ]
[ "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "method", "result", "abstain", "other", "other" ]