[ "The mainstream machine learning paradigms for NLP often work with two underlying presumptions.", "First, the target task is predefined and static; a system merely needs to learn to solve it exclusively.", "Second, the supervision of a task mainly comes from a set of labeled examples.", "A question arises: how to build a system that can keep learning new tasks from their instructions?", "This work defines a new learning paradigm ConTinTin ( Contin ual Learning from T ask In structions), in which a system should learn a sequence of new tasks one by one, each task is explained by a piece of textual instruction.", "The system is required to", "(i) generate the expected outputs of a new task by learning from its instruction,", "(ii) transfer the knowledge acquired from upstream tasks to help solve downstream tasks (i.e., forward-transfer), and", "(iii) retain or even improve the performance on earlier tasks after learning new tasks (i.e., backward-transfer).", "This new problem is studied on a stream of more than 60 tasks, each equipped with an instruction.", "Technically, our method InstructionSpeak contains two strategies that make full use of task instructions to improve forward-transfer and backward-transfer: one is to learn from negative outputs, the other is to re-visit instructions of previous tasks.", "To our knowledge, this is the first time to study ConTinTin in NLP.", "In addition to the problem formulation and our promising approach, this work also contributes to providing rich analyses for the community to better understand this novel learning problem.", "The main goal of machine learning algorithms lies in seeking supervision for solving a target task.", "Traditionally, the supervision is extracted from a set of labeled examples.", "The learner constructs a decision function that generalizes beyond the seen examples.", "Work was done at Salesforce Research.", "While this paradigm has been tremendously successful for many NLP problems, an inherent drawback exists in it: the learner can only be as good as the provided data (Goldwasser and Roth, 2014).", "Learning, therefore, relies on annotating a large volume of training data, an expensive and time-consuming process.", "To alleviate the costly demand for task-specific annotation (referred as S 0 here-after), the human learning process suggests at least two sources of alternative supervision: one is to accumulate knowledge from tasks learned in the past ( S 1 ) (Richard, 1970; Thrun and Mitchell, 1995; Chomsky, 2002); the other is to learn from natural instructions ( S 2 ) describing a high-level story about target tasks (Goldwasser and Roth, 2014).", "Unfortunately, we rarely see the joint power of S 1 and S 2 .", "In this work, we present a new learning paradigm ConTinTin contin ual Learning from t ask in structions.", "In ConTinTin , each task is given an instruction describing the target concept directly and a few instances exemplifying it.", "The system is required to incrementally learn a stream of tasks, so that the knowledge gained in the past can be used to address subsequent tasks.", "Apparently, this new problem tries to integrate the S 1 and S 2 into a single learning paradigm while decreasing the necessity of S 0 .", "More specifically, ConTinTin is expected to carry the properties listed in Table 1. 
Table 1: Expected properties of ConTinTin.
  Instruction-driven supervision: each task is explained by an instruction and a couple of instances exemplifying it.
  Fixed model capacity: the system's structure and parameter size are constant regardless of its learning status.
  Knowledge maintenance: the system is not inclined to catastrophic forgetting.
  Forward transfer: the system uses knowledge acquired from upstream tasks to help solve downstream tasks.
  Backward transfer: the system uses knowledge acquired from downstream tasks to help solve upstream tasks.

Our dataset is restructured from NATURAL-INSTRUCTIONS (Mishra et al., 2021). NATURAL-INSTRUCTIONS is a benchmark that studies whether a model can make appropriate use of natural language instructions to answer inputs accordingly. It comprises 61 tasks; each task is associated with a piece of instruction consisting of Title, Definition, Caution, Prompt, Things to avoid, Examples, etc. NATURAL-INSTRUCTIONS originally focuses on conventional supervised learning: a subset of the 61 tasks serves as training tasks, and the remaining tasks are evaluated in a batch. In order to fit the formulation of ConTinTin, we reorganize the 61 tasks in NATURAL-INSTRUCTIONS: a few tasks (say, k of them) act as training tasks, and the remaining 61−k tasks form an ordered list of new tasks. The learner is expected to first learn from the k training tasks how to use instructions to solve problems; then it evolves task by task along the new task chain.
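The reorganization itself is a simple random split; a minimal sketch (with `tasks` standing in for the 61 instruction-equipped task objects, all names hypothetical):

```python
import random

def make_contintin_split(tasks, k, seed=None):
    """Split the instruction-equipped tasks into k training tasks S and an
    ordered chain U of the remaining tasks (61 - k for NATURAL-INSTRUCTIONS)."""
    rng = random.Random(seed)
    shuffled = rng.sample(tasks, len(tasks))
    # S keeps its labeled examples; tasks in U expose only their instructions.
    return shuffled[:k], shuffled[k:]
```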
Our system InstructionSpeak is based on BART (Lewis et al., 2020), with two proposed strategies aiming at making the best use of instructions. The first strategy, NEGATIVETRAINING, makes use of unfavorable clues, such as Things to avoid, from the instruction to promote task understanding and forward-transfer. The second strategy, HISTORYTRAINING, revisits instructions of earlier tasks during continual learning to alleviate the catastrophic forgetting issue in backward-transfer. We evaluate InstructionSpeak on a wide range of transferring distances (from 1 to 40), which shows that InstructionSpeak can generally help both forward-transfer and backward-transfer.[1] Overall, this work makes three-fold contributions. First, ConTinTin is formulated and studied for the first time in the NLP community. Second, we propose InstructionSpeak, a promising approach to ConTinTin. Third, we conduct intensive analyses, aiming to give a better understanding of this new challenge.

Footnote 1: "Transferring distance" refers to the number of tasks between the model at a new status and the model at an earlier status.

This section retrospects continual learning and learning from task instructions, two machine learning paradigms that try to explore supervisions S_1 and S_2, respectively.

Continual learning.[2] This learning problem was mainly studied in computer vision or robotics domains, and most work concentrated on mitigating catastrophic forgetting (McCloskey and Cohen, 1989; Serrà et al., 2018; Hofmanninger et al., 2020). Continual learning can be summarized into three categories: class continual learning (CCL), domain continual learning (DCL), and task continual learning (TCL).

Footnote 2: Continual learning in the literature is also referred to as lifelong learning (Silver and Mercer, 2002), incremental learning (Solomonoff, 1989), sequential learning (McCloskey and Cohen, 1989), and never-ending learning (Carlson et al., 2010).

CCL learns a sequence of classes (e.g., visual object categories, text labels, etc.) to build one overall multi-label classifier for all the classes seen so far (Yan et al., 2021). For example, Wang et al. (2019) studied incrementally learning new relations for two entity mentions in an input sentence, where each relation has many labeled examples. Xia et al. (2021) proposed few-shot CCL, in which multiple rounds of new text tags (e.g., intents or relations expressed in the input text) are encountered sequentially, and each new tag is accompanied by only a couple of examples. DCL essentially studies the same task but in different domains: the system is expected to evolve while learning from a stream of datasets of the same task with different data distributions. Typical work in NLP includes sentiment classification (Chen et al., 2015; Xia et al., 2017), conversational agents (Lee, 2017), text classification, and question answering (d'Autume et al., 2019), etc. TCL tries to learn distinct tasks sequentially. Systems in (Sun et al., 2020a,b) incrementally learned among five disparate NLP tasks. Jin et al. (2021) further extended the size of the task stream (one benchmark has 26 tasks, the other covers 55) and studied TCL in a few-shot scenario. It is worth mentioning that all the listed work in TCL consistently transformed all tasks into a question answering format (as pointed out in (McCann et al., 2018), many NLP tasks can be formulated as question answering); thus TCL in this literature was actually converted into DCL. Similar to (Xia et al., 2021; Jin et al., 2021), our work also focuses on low-resource continual learning; in contrast, our learning problem belongs to TCL, while each task in our formulation is expressed by instructions instead of labeled examples.

Learning from textual instructions. This learning paradigm was first presented by Goldwasser and Roth (2014). They investigated the challenges on the Solitaire card game, where an instruction is a short sentence such as "you can move any top card to a free cell if it is empty"; the instruction is mapped into a logical expression via semantic parsing so that an automated agent can understand and execute it. More recent work examined the ability of large-scale pretrained language models to follow natural language instructions of varying complexity. For example, Efrat and Levy (2020) tested GPT-2 (Radford et al., 2019) on understanding instructions like listing nouns, outputting the n-th word or character, and real-world MTurk instructions for annotating some popular datasets. They concluded that GPT-2 works poorly when the supervision comes from those instructions. A dominant instruction format nowadays is the prompt, which is mostly a short piece of text describing the core concept of the task. Representative work includes (Radford et al., 2019; Schick and Schütze, 2020, 2021), etc.; please refer to the survey (Liu et al., 2021) for more details. While these prompt-based results are encouraging, such prompts are often too simplistic, whereas many real NLP problems cannot be effectively formulated as short prompts or a few positive examples. Motivated by this, Mishra et al. (2021) collected more than 60 distinct NLP tasks with real-world MTurk instructions, and claimed that pretrained language models, such as BART and GPT-3 (Brown et al., 2020), benefit from instructions to generalize across tasks.
To our knowledge, the only work somewhat resembling ours is (Rostami et al., 2020), in which task descriptions were incorporated into lifelong learning for zero-shot transfer. We differ in three aspects: (i) they focused on robot control problems, (ii) their tasks come from a single domain, and (iii) in addition to the associated instruction, they assumed that each task has a large number of labeled examples.

A system in our ConTinTin setting comprises two stages, as illustrated in Figure 1. The first stage describes its starting status before learning the first new task; the second stage describes how it evolves continually with a sequence of instruction-equipped unseen tasks. To make this easier to understand, we first introduce the evolution process, then the initialization process.

Evolution process. ConTinTin tries to build a model M that is able to deal with unseen tasks (U) appearing consecutively by understanding merely the instruction of each task. We denote the task sequence as U = [u_1, u_2, ..., u_i, ...]. Each task u_i has a piece of textual description d_{u_i} and a set of evaluation instances {(x_{u_i}^j, y_{u_i}^j)}_{j=1}^n, where y_{u_i}^j is the expected output for the input x_{u_i}^j. An example of d_{u_i} will be shown in Section 3.3. We denote the model M, having learned [u_1, ..., u_i], as M_i. For each task u_i, M_i is required to generate the outputs for the inputs x_{u_i}^j based on the instruction in d_{u_i}.

Initialization process. How does the system initially acquire the ability to understand instructions and learn continually? We prepare a few training tasks (S = [s_1, s_2, ..., s_k]) to equip the machine with the ability to annotate task instances given instructions. Each training task s_i also has its instruction d_{s_i} and n labeled examples {(x_{s_i}^j, y_{s_i}^j)}_{j=1}^n. Note that we want k to be small; otherwise, if ConTinTin required a large number of training tasks at the initialization stage, there would be no point in using instructions to alleviate the burden of data annotation.

Forward-transfer evaluation. With this metric, we attempt to quantify the effectiveness of learning more prior tasks before solving a target task. Intuitively, the more prior tasks, the better the downstream performance. We define the metric g_i^→ (hereafter, the superscript → refers to forward-transfer and ← to backward-transfer): the average gained performance over all new tasks in U when each of them is learned after k+i−1 previous tasks, compared with learning them merely after k−1 tasks (i is the transferring distance). As Algorithm 1 shows, computing g_i^→ needs two loops.

Algorithm 1: Computing the transfer gain g_i^→
Require: hyperparameters m and i
Ensure: g_i^→
 1: for task t in U do
 2:   for j < m times do
 3:     k = random.randint(1, |U| − i)
 4:     sample [u_1, ..., u_{k−1}, t, u_{k+1}, ..., u_{k+i}] from U
 5:     M_k = M evolves over [u_1, ..., u_{k−1}, t]
 6:     M_{k+i} = M evolves over [u_1, ..., u_{k+i}]
 7:     g_{i,t}^j = M_{k+i}(t) − M_k(t)
 8:   end for
 9:   g_{i,t} = (1/m) Σ_{j=1}^m g_{i,t}^j
10: end for
11: g_i = (1/|U|) Σ_{t∈U} g_{i,t}

First, iterate over all tasks in U, selecting one task t as (i) the k-th task, randomly sampling its upstream tasks [u_1, ..., u_{k−1}] from the remaining tasks in U to obtain one online learning score M_k(t), or as (ii) the (k+i)-th task, for another online learning score M_{k+i}(t).
M_{k+i}(t) − M_k(t) is one instance of the forward-transfer score, which indicates how much improvement the extra upstream tasks of size i bring to the target task t. For this particular task t, the sampling of upstream tasks is repeated m times and the average is taken as the final score for t, denoted g_{i,t}^→. Second, the same procedure is applied to all tasks in U, and g_{i,t}^→ is averaged over all t to get the g_i^→ value. g_i^→ thus measures the expected performance gain our system obtains when it has continually learned i more tasks. For forward-transfer, we expect g_i^→ to be positive and to increase as i gets larger.

Backward-transfer evaluation. In contrast to the forward-transfer evaluation, we define g_i^← as the backward-transfer metric, which tells how much better our system can handle a task learned i steps ago, compared with its performance on the same task last time. As Algorithm 2 describes, two loops calculate g_i^←. First, for a given task t from U, put t at a random position k in the task chain, followed by i other tasks. Subtract its performance when the model first learned it (i.e., M_k(t)) from its performance after the model finished learning all k+i tasks in the chain (i.e., M_{k+i}(t)). This operation generates one score for the chain; repeating the process m times gives an average gain g_{i,t}^← for the task t. Second, average g_{i,t}^← over all t to get the g_i^← value. If a system can always make use of downstream tasks to help upstream tasks, g_i^← should be positive; otherwise, g_i^← will be negative due to catastrophic forgetting.
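To make the evaluation concrete, here is a minimal Python transcription of Algorithm 1 under the setup above; `init_model`, `evolve`, and `evaluate` are hypothetical stand-ins for pretraining on S, continual training over a task chain, and ROUGE-L scoring on a task's evaluation instances. Algorithm 2 follows the same skeleton, comparing the two checkpoints of t as described.

```python
import random

def transfer_gain(U, init_model, evolve, evaluate, i, m=10):
    """Monte-Carlo estimate of the expected transfer gain g_i over U
    (a direct transcription of Algorithm 1)."""
    per_task = []
    for t in U:                                # outer loop over target tasks
        gains = []
        for _ in range(m):                     # repeat the chain sampling m times
            k = random.randint(1, len(U) - i)  # position of t in the chain
            others = random.sample([u for u in U if u is not t], k - 1 + i)
            chain = others[:k - 1] + [t] + others[k - 1:]
            m_k = evolve(init_model(), chain[:k])  # model right after learning t
            m_ki = evolve(init_model(), chain)     # model after i further tasks
            gains.append(evaluate(m_ki, t) - evaluate(m_k, t))
        per_task.append(sum(gains) / m)        # g_{i,t}
    return sum(per_task) / len(U)              # g_i
```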
There are no NLP datasets for ConTinTin in particular. This work is based on NATURAL-INSTRUCTIONS (Mishra et al., 2021) after data reorganization. Next, we first introduce NATURAL-INSTRUCTIONS, then describe our revised version specific to our problem. NATURAL-INSTRUCTIONS was constructed in the following pipeline: Mishra et al. (2021) first collected some popular NLP benchmarks (e.g., CosmosQA (Huang et al., 2019), Quoref (Dasigi et al., 2019), Winogrande (Sakaguchi et al., 2020), etc.) together with their crowdsourcing instructions, through engaging with their authors. Since all the crowdsourcing instructions include multiple steps to guide annotators in gathering task instances, they further broke the raw crowdsourcing instructions down into their individual steps, generating a larger number of subtasks that are minimal and standalone. In the end, a total of 61 tasks were obtained, covering six categories: 13 question generation tasks (QG), 16 answer generation tasks (AG), 12 classification tasks (CF), 8 incorrect answer generation tasks (IAG), 10 minimal modification tasks (MM), and 2 verification tasks (VF). An instruction example is presented in Figure 2.

Our data split. The k tasks in S have instructions and keep their labeled example sets. The remaining 61−k tasks are treated as the unseen task set U. Each task in U has only its instruction; its labeled example set is used for evaluation rather than model training. It is noteworthy that the task order in continual learning can influence the final performance. We therefore do not attempt to release a fixed split of S and U; in the experiments, we randomly generate them multiple times to form different task chains and report the average performance.

Most prior studies on continual learning focused on backward-transfer (Serrà et al., 2018; d'Autume et al., 2019) while paying less attention to forward-transfer performance. Next, we introduce our approach to promoting both. The big story of our strategies lies in a better understanding of the textual instruction of u_i. The two concrete strategies are as follows.

NEGATIVETRAINING: distinguishing favorable and unfavorable clues in instructions. Unfavorable clues, such as the red items in Figure 2, are essential for humans to make decisions, yet they have not been successfully leveraged by machine learning. For example, Mishra et al. (2021) found that discarding negative examples can even improve performance. We believe this indicates that the approach failed to learn from negative examples, rather than those examples being truly useless. How, then, can we make machines extract effective supervision from negative samples?

First, we introduce a method that we tried but that did not work well: minimizing the probability of generating the negative output. Maximizing the probability of the gold output is widely used in text generation, so it sounds intuitive to minimize it for unwanted output, as in (He and Glass, 2020). We tried joint training that maximizes positive examples while minimizing negative ones, which is even worse than maximizing the positives alone. Since many negative outputs contain tokens that also appear in the gold answers, we suspect that minimizing their probabilities makes it harder for the model to decode the correct output.

After further study of those negative examples and their explanations, we decided to treat the negative examples as positive and move the negative learning phase to pretraining, i.e., pretrain on negative examples first, then finetune on positive examples. The inspiration comes from the fact that negative examples, despite the tag "negative", can still provide useful information about the expected output. Consider the negative example in Figure 2: its output "C", i.e., "color and shape of the rock", is discouraged merely because it does not follow some rules of automatic evaluation, not because it is really wrong. As a first step, optimizing the system to generate such so-called negative outputs is still better than any general-purpose pretrained BART.

For each unseen task in U, we directly adopt its negative examples if available. For the k training tasks in S, positive instances (including positive examples in instructions and the labeled task instances) far outnumber the negative examples, so we use the model pretrained on S to predict on all inputs of S; if an output does not equal the gold output, we treat the (input, predicted output) pair as a negative example. This means we have a loose definition of what a negative output is: it is negative once it is not equal to the ground truth. Since the model pretrained on S can already guarantee generation quality, the generated negative outputs are mostly related to the gold outputs (measured by ROUGE metrics).[3]

Footnote 3: We also tried to build a negative-output generator given the available negative examples in instructions. This type of negative output was planned for pretraining on both S and U. However, due to the tiny number of negative examples in instructions (most tasks have at most 2, and a couple have none), the learned negative-output generator yields largely unreasonable outputs.
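Putting NEGATIVETRAINING together, a minimal sketch of the pseudo-negative collection step on S; the `labeled_examples` and `generate` interfaces are hypothetical stand-ins.

```python
def collect_pseudo_negatives(model, train_tasks):
    """Loose definition of 'negative': any prediction on S that does not
    exactly match the gold output becomes an (input, prediction) negative."""
    negatives = []
    for task in train_tasks:                    # the k training tasks in S
        for source, gold in task.labeled_examples:
            prediction = model.generate(source) # model already pretrained on S
            if prediction != gold:
                negatives.append((source, prediction))
    return negatives

# NEGATIVETRAINING then treats these as positives in a pretraining phase:
# pretrain on `negatives` first, finetune on the true positive examples.
```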
HISTORYTRAINING: revisiting instructions of previous tasks. To mitigate catastrophic forgetting, many prior works on continual learning store a couple of labeled examples of upstream tasks to replay. In our ConTinTin formulation, each new task is described merely by its instruction, so instead of storing examples of previous tasks, we keep their instructions. When learning the i-th task in U, our model first learns all the instructions of the prior i−2 tasks in a batch with a lower learning rate. Revisiting precedent instructions is cost-effective, since each instruction is as short as a couple of conventionally annotated examples but carries much more supervision. Overall, our two strategies work jointly to enhance forward-transfer and backward-transfer performance.

Our system InstructionSpeak is based on BART, treating all tasks as a text-to-text problem. The full input format of the encoder is:

[Input] input string [Title] title string [Prompt] prompt string [Definition] definition string [Avoid] things-to-avoid string [Caution] caution string [POS1] [Input] input string [Output] output string [Explanation] explanation string ... [POSn] [Input] input string [Output] output string [Explanation] explanation string

Note that we put the input at the beginning of this template to prevent it from being discarded by long-text truncation. When pretraining on the training tasks S, the full input pattern is used; when continually learning on U, since the input at the beginning comes from positive or negative examples of the instruction, we do not include the positive examples in the input template (i.e., the [POS*] part is dropped).

Given S and U, the whole learning pipeline of InstructionSpeak is: (i) pretrain on S to get model M; (ii) use M to make predictions on S to collect the negative example set S′; (iii) pretrain on S′ and finetune on S to get the boosted model M′, which is the starting model status for continual learning on U; (iv) for the i-th unseen task u_i in U, tune M′ on the instructions of all earlier tasks [u_1, ..., u_{i−2}] in a batch; (v) tune on the negative examples of u_i, if available; (vi) tune on the positive examples of u_i.

Setup. We use the pretrained BART-base model released by Huggingface. Hyperparameters: m = 10 in Algorithms 1-2; k = 5 for the task set S; max input length 1024 tokens; learning rate 5e-5 and 3 epochs, as suggested by (Mishra et al., 2021), for most phases of training (except 5e-6 and one epoch for HISTORYTRAINING); batch size 5 for training on S and 2 for continual learning on U. For each unseen task in U, we randomly select 1k labeled examples for performance evaluation. Note that the official evaluation metric for NATURAL-INSTRUCTIONS is ROUGE-L (Lin, 2004); according to the definitions of our evaluation metrics, the g_i^→ and g_i^← numbers carry the same meaning as ROUGE-L.
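Returning to the encoder input format above, the following is one plausible linearization of an instance plus its instruction into that template (the instruction field names are hypothetical; the marker order follows the text):

```python
def build_encoder_input(input_str, instruction, include_pos_examples=True):
    """Linearize an instance and its task instruction into the encoder input.
    The instance input comes first so it survives truncation at 1024 tokens."""
    parts = [
        f"[Input] {input_str}",
        f"[Title] {instruction['title']}",
        f"[Prompt] {instruction['prompt']}",
        f"[Definition] {instruction['definition']}",
        f"[Avoid] {instruction['things_to_avoid']}",
        f"[Caution] {instruction['caution']}",
    ]
    if include_pos_examples:  # dropped when continually learning on U
        for n, ex in enumerate(instruction["examples"], start=1):
            parts.append(f"[POS{n}] [Input] {ex['input']} "
                         f"[Output] {ex['output']} [Explanation] {ex['explanation']}")
    return " ".join(parts)
```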
Baselines. There are no prior systems that fit the formulation of ConTinTin exactly. In addition, as the ConTinTin properties in Table 1 indicate, ConTinTin ideally prefers a fixed model capacity; therefore, we do not compare with systems that incorporate extra memory modules or adaptors, such as (d'Autume et al., 2019; Jin et al., 2021; Ke et al., 2021). The following systems are considered. Seq-finetune: first pretrain a BART on S, then fine-tune it on U sequentially; it pays no special attention to catastrophic forgetting. Multi-task: first pretrain a BART on S, then train on the instructions of all tasks in U simultaneously; acting as the upper bound of continual learning, it does not distinguish between forward-transfer and backward-transfer. LAMOL (Sun et al., 2020a): a state-of-the-art system that uses pretrained language models for task continual learning. All tasks are converted into QA and a single language model is used for continual learning; before training on a new task, the language model first generates pseudo-examples for previous tasks, and those pseudo-examples are mixed with the examples of the new task to train the language model. The original language model in LAMOL is the smallest pretrained GPT-2; we replace it with BART for a fair comparison.

Table 2: The main results of ConTinTin (each cell: mean ± standard deviation of the ROUGE-L gain).

Forward-transfer:
  Method                     g_1^→        g_10^→       g_20^→       g_30^→       g_40^→
  Seq-finetune               1.44±7.15    3.28±19.46   −3.74±8.73   2.90±16.42   −0.36±17.23
  LAMOL (Sun et al., 2020a)  −1.34±4.46   1.41±13.55   3.31±14.32   −5.40±20.44  −0.03±12.68
  Our InstructionSpeak       2.16±6.46    5.06±20.87   2.29±18.03   4.07±7.95    4.39±14.56
    w/o NEGATIVETRAINING     −2.89±13.12  1.06±17.21   1.33±13.09   2.21±14.42   1.78±17.90
    w/o HISTORYTRAINING      1.88±17.73   3.32±12.76   4.41±20.24   3.22±16.66   2.97±14.93

Backward-transfer:
  Method                     g_1^←        g_10^←       g_20^←       g_30^←       g_40^←
  Seq-finetune               1.57±3.28    0.04±12.46   −0.19±21.75  −6.48±19.17  −9.46±19.57
  LAMOL (Sun et al., 2020a)  2.67±12.52   2.21±7.98    9.42±12.88   6.33±20.13   7.21±14.81
  Our InstructionSpeak       1.44±9.28    5.21±18.20   7.33±13.48   14.99±20.21  12.31±16.53
    w/o NEGATIVETRAINING     2.21±12.23   3.37±13.23   11.44±11.03  10.36±21.34  8.94±19.41
    w/o HISTORYTRAINING      4.74±16.54   −2.78±19.38  −0.83±12.93  1.35±15.95   3.49±14.05

  Multi-task (upper bound): 7.98±20.47

We have three threads of observations. Firstly, consider the forward-transfer and backward-transfer evaluations. For forward-transfer, no system beats multi-task learning, but in backward-transfer InstructionSpeak even outperforms the multi-task competitor; this is because multi-task learning, though widely treated as the upper bound for continual learning, is trained on all U tasks for only 3 epochs, whereas our method, equipped with HISTORYTRAINING, actually learns earlier U tasks many times during continual learning. Despite a few exceptions, generally speaking, both the forward- and backward-transfer performance increase as the transferring distance grows from 1 to 40. Secondly, the ablation study verifies the effectiveness of our two strategies: NEGATIVETRAINING plays the leading role in forward-transfer while doing a moderate favor to backward-transfer, and a totally opposite phenomenon is observed for HISTORYTRAINING, which clearly contributes to the backward-transfer evaluation while influencing forward-transfer only to some extent. Thirdly, the standard deviations are mostly large; this should be due to the fact that the 61 tasks in NATURAL-INSTRUCTIONS span 6 distinct categories, and each category benefits from model generalization to a different degree.
To further figure out the exact performance of our system on different task categories, we report on the standard split of NATURAL-INSTRUCTIONS as Mishra et al. (2021) did: they have a fixed set of 12 tasks for testing (2 for each category), with all remaining tasks as training data. Since their 12 test tasks have no order, for each test category we place it as the sixth (resp. first) task in the chain for forward-transfer (resp. backward-transfer). Once the position of the test category is fixed, we randomly order the remaining five categories in the sequence 10 times and report the average performance. Thus, each test category has two numbers for every continual learning approach: one for forward-transfer, the other for backward-transfer. In addition, we also report our system InstructionSpeak without continual learning (w/o CL), i.e., using the system pretrained on the 49 tasks in S to predict.

Table 3 lists the results of all continual learning systems on NATURAL-INSTRUCTIONS. We notice that (i) the results of different task categories vary a lot; for example, minimal modification tasks (MM) easily get a ROUGE-L score above 80, while it is pretty challenging to obtain a ROUGE-L score over 10 for verification (VF); (ii) classification tasks (CF) seem to suffer from backward-transfer. We suspect CF is too sensitive to classification-specific supervision, such as label spaces; continual learning on many subsequent tasks of different categories misleads the model in solving CF. This is further supported by looking at the results of three systems: InstructionSpeak w/o CL, (Mishra et al., 2021), and InstructionSpeak forward-transfer. The first two systems start predicting on U once they finish training on S. Note that the CF category in U has 10 counterpart CF tasks in S; this means the first two systems, although they never learned the CF tasks in U, still obtained enough supervision for this category from S. That is why all three systems get high performance on CF; once they are tuned on more different categories, this supervision increasingly disappears.

In addition to the results in Tables 2-3, we are further interested in the following two questions.

Q1: how many training tasks does a system need to learn from instructions? Recall that apart from U in the evolution process, we use k tasks (S = [s_1, s_2, ..., s_k]) to initialize the model. S can have at most 20 tasks (due to the limited size of NATURAL-INSTRUCTIONS), and our system used only 5 of them. Here, we further explore the model's behavior as k varies. Figure 3 depicts the influence of k on forward-transfer: larger k values (i.e., more training tasks to initialize the model) consistently improve performance. We think more training tasks tend to teach the model to better understand task instructions, which further improves the model's transferability when it learns i more tasks to report g_i^→ on a downstream task u_i. We notice that NATURAL-INSTRUCTIONS v2 has over 1.7k tasks; we leave exploring the potential of increasing the number of training tasks as future work.

Q2: how do tasks of different categories in U benefit? In Section 3.3, we mentioned that all tasks can be organized into six categories; we check their separate performances here. Note that Algorithms 1-2 obtain the final score by averaging over all tasks in U; here we instead average over the tasks belonging to the same category to get category-wise forward-transfer and backward-transfer performance. From Figure 4(a) and Figure 4(b), we notice that:
(i) tasks of distinct categories indeed demonstrate different performance on both the forward-transfer and backward-transfer evaluations; (ii) the phenomena in the two evaluations are similar: some categories consistently benefit more, such as classification, answer generation, and question generation, while some keep obtaining worse scores, such as the minimal modification and verification categories. We think this discrepancy originates from two factors: one is how many tasks a particular category has; the other is how similar or relevant the tasks in that category are to tasks of other categories. Intuitively, a category with more tasks occupying the task chain and resembling other tasks, such as classification, answer generation, and question generation, can be solved more easily when the model comes up to it or comes back to it.

This work introduced a novel learning problem: continual learning from task instructions. The goal is to explore the potential of existing pretrained language models in solving new tasks by understanding instructions rather than labeled examples. With our problem formulation and a well-performing system, we pave the way for future study of this challenge in the community.

We thank Daniel Khashabi from AI2 and Swaroop Mishra from ASU for help during this work.
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "other" ]
[ "Representations of events described in text are important for various tasks.", "In this work, we present SWCC : a S imultaneous W eakly supervised C ontrastive learning and C lustering framework for event representation learning.", "SWCC learns event representations by making better use of co-occurrence information of events.", "Specifically, we introduce a weakly supervised contrastive learning method that allows us to consider multiple positives and multiple negatives, and a prototype-based clustering method that avoids semantically related events being pulled apart.", "For model training, SWCC learns representations by simultaneously performing weakly supervised contrastive learning and prototype-based clustering.", "Experimental results show that SWCC outperforms other baselines on Hard Similarity and Transitive Sentence Similarity tasks.", "In addition, a thorough analysis of the prototype-based clustering method demonstrates that the learned prototype vectors are able to implicitly capture various relations between events.", "Our code will be available at https://github.", "com/gaojun4ever/SWCC4Event .", "Distributed representations of events, are a common way to represent events in a machine-readable form and have shown to provide meaningful features for various tasks (Lee and Goldwasser, 2018; Rezaee and Ferraro, 2021; Deng et al., 2021; Martin et al., 2018; Chen et al., 2021).", "Obtaining effective event representations is challenging, as it requires representations to capture various relations between events.", "Figure 1 presents four pairs of events with different relations.", "Two events may share the same event attributes (e.g. event types and sentiments), and there may also be a causal or temporal relation between two events.", "Early works (Weber et al., 2018) exploit easily accessible co-occurrence relation of events to learn event representations.", "Although the use of co-occurrence relation works well, it is too coarse for deep understanding of events, which requires fine-grained knowledge (Lee and Goldwasser, 2019).", "Recent works focus on fine-grained knowledge, such as discourse relations (Lee and Goldwasser, 2019; Zheng et al., 2020) and commonsense knowledge (e.g. sentiments and intents) (Sap et al., 2019; Ding et al., 2019).", "Concretely, Lee and Goldwasser (2019) and Zheng et al. (2020) leverage 11 discourse relation types to model event script knowledge.", "Ding et al. 
Ding et al. (2019) incorporate manually labeled commonsense knowledge (intents and sentiments) into event representation learning. However, the types of fine-grained event knowledge are so diverse that we cannot enumerate all of them, and the currently adopted fine-grained knowledge covers only a small subset of event knowledge. In addition, some manually labeled knowledge (Sap et al., 2019; Hwang et al., 2021) is costly and difficult to apply to large datasets.

In our work, we observe that there is a rich amount of information in co-occurring events, but previous works did not make good use of such information. Based on existing works on event relation extraction (Xue et al., 2016; Lee and Goldwasser, 2019; Zhang et al., 2020; Wang et al., 2020), we find that the co-occurrence relation, which refers to two events appearing in the same document, can be seen as a superset of the currently defined explicit discourse relations. To be specific, these relations are often indicated by discourse markers (e.g., "because", capturing the causal relation) (Lee and Goldwasser, 2019); therefore, two related events must exist in the same sentence or document. More than that, the co-occurrence relation also includes other implicit event knowledge; for example, events that occur in the same document may share the same topic and event type.

To learn event representations, previous works (Granroth-Wilding and Clark, 2016; Weber et al., 2018) based on co-occurrence information usually exploit instance-wise contrastive learning approaches related to the margin loss, which consists of an anchor, a positive, and a negative sample, where the anchor is more similar to the positive than to the negative. However, they share two common limitations: (1) such margin-based approaches struggle to capture the essential differences between events with different semantics, as they consider only one positive and one negative per anchor; (2) randomly sampled negatives may contain samples semantically related to the anchor, which are undesirably pushed apart in the embedding space. This problem arises because instance-wise contrastive learning approaches treat randomly selected events as negative samples regardless of their semantic relevance.

We are motivated to address the above issues with the goal of making better use of co-occurrence information of events. To this end, we present SWCC: a Simultaneous Weakly supervised Contrastive learning and Clustering framework for event representation learning, where we exploit document-level co-occurrence information of events as weak supervision and learn event representations by simultaneously performing weakly supervised contrastive learning and prototype-based clustering. To address the first issue, we build our approach on the contrastive framework with the InfoNCE objective (van den Oord et al., 2019), a self-supervised contrastive learning method that uses one positive and multiple negatives. Further, we extend InfoNCE to a weakly supervised contrastive learning setting, allowing us to consider multiple positives and multiple negatives per anchor (as opposed to previous works, which use only one positive and one negative). Co-occurring events are then incorporated as additional positives, weighted by a normalized co-occurrence frequency. To address the second issue, we introduce a prototype-based clustering method to avoid semantically related events being pulled apart. Specifically, we impose a prototype for each cluster, which is a representative embedding for a group of semantically related events. Then we cluster the data while enforcing consistency between the cluster assignments produced for different augmented representations of an event. Unlike instance-wise contrastive learning, our clustering method focuses on cluster-level semantic concepts by contrasting between representations of events and clusters.

Overall, we make the following contributions: (1) we propose a simple and effective framework (SWCC) that learns event representations by making better use of co-occurrence information of events, and experimental results show that our approach outperforms previous approaches on several event-related tasks; (2) we introduce a weakly supervised contrastive learning method that allows us to consider multiple positives and multiple negatives, and a prototype-based clustering method that avoids semantically related events being pulled apart; (3) we provide a thorough analysis of the prototype-based clustering method to demonstrate that the learned prototype vectors are able to implicitly capture various relations between events.

Event representation model. In early works (Weber et al., 2018; Ding et al., 2019), Neural Tensor Networks (NTNs) (Socher et al., 2013b,a) are widely adopted to compose the representation of event constituents, i.e., (subject, predicate, object). However, such methods introduce a strong compositional inductive bias and cannot extend to events with additional arguments, such as time, location, etc. Several recent works (Zheng et al., 2020; Vijayaraghavan and Roy, 2021) replaced static word vector compositions with powerful pretrained language models, such as BERT (Devlin et al., 2019), for flexible event representations, achieving better performance. Following them, we also take BERT as the backbone model. The input to the model is the event text, which contains a sequence of tokens; the input format can be represented as: [CLS], pred, subj, obj, [SEP]. Define x = [x_0, x_1, ..., x_L] to be the input sequence of length L, where x_0 and x_L are the [CLS] token and the [SEP] token, respectively. Given x, BERT returns a sequence of contextualized vectors:

[v_[CLS], v_{x_1}, ..., v_{x_L}] = BERT(x),    (2)

where v_[CLS] is the representation of the [CLS] token. In the default case, the final vector representation z of the event is the output representation of the [CLS] token: z = v_[CLS].
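As a concrete illustration of the backbone encoding, here is a minimal sketch using Hugging Face's transformers (the paper implements its model with Texar-PyTorch; this is an illustrative equivalent):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_event(pred: str, subj: str, obj: str) -> torch.Tensor:
    """Encode an event (pred, subj, obj); the [CLS] vector is the
    event representation z = v_[CLS]."""
    text = f"{pred} {subj} {obj}"   # tokenizer adds [CLS] ... [SEP]
    inputs = tokenizer(text, return_tensors="pt")
    return encoder(**inputs).last_hidden_state[:, 0]   # v_[CLS]
```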
cluster, which is a representative embedding for a group of semantically related events.", "Then we cluster the data while enforce consistency between cluster assignments produced for different augmented representations of an event.", "Unlike the instance-wise contrastive learning, our clustering method focuses on the cluster-level semantic concepts by contrasting between representations of events and clusters.", "Overall, we make the following contributions: We propose a simple and effective framework ( SWCC ) that learns event representations by making better use of co-occurrence information of events.", "Experimental results show that our approach outperforms previous approaches on several event related tasks.", "We introduce a weakly supervised contrastive learning method that allows us to consider multiple positives and multiple negatives, and a prototype-based clustering method that avoids semantically related events being pulled apart.", "We provide a thorough analysis of the prototype-based clustering method to demonstrate that the learned prototype vectors are able to implicitly capture various relations between events.", "Event representation model.", "In the early works (Weber et al., 2018; Ding et al., 2019), Neural Tensor Networks (NTNs) (Socher et al., 2013b,a) are widely adopted to compose the representation of event constitutions, i.e., (subject, predicate, object) .", "However, such methods introduced strong compositional inductive bias and can not extend to events with more additional arguments, such as time, location etc.", "Several recent works (Zheng et al., 2020; Vijayaraghavan and Roy, 2021) replaced static word vector compositions with powerful pretrained language models, such as BERT (Devlin et al., 2019), for flexible event representations and achieved better performance.", "Following them, we also take the BERT as the backbone model.", "event text, which contains a sequence of tokens and the input format can be represented as follows: [CLS] , pred, subj, obj, [SEP] .", "Define x = [ x 0 , x 1 , , x L ] to be the input sequence of length L , where x 0 and x L are the [CLS] token and the [SEP] token respectively.", "Given x , the BERT returns a sequence of contextualized vectors: [ v [CLS] , v x 1 , , v x L ] = BERT( x ) , (2) where v [CLS] is the representation for the [CLS] token.", "In the default case, the final vector representation z of the event is the output representation of the [CLS] token: z = v [CLS] .", "Instance-wise contrastive learning.", "Event representation models learn representations with contrastive learning, which aims to pull related events together and push apart unrelated events.", "Margin loss (Schroff et al., 2015) is a widely used contrastive loss in most of the existing works on event representation learning (Weber et al., 2018; Ding et al., 2019; Zheng et al., 2020).", "Most recently, an alternative contrastive loss function, called InfoNCE (van den Oord et al., 2019), has been proposed and shown effective in various contrastive learning tasks (He et al., 2020; Hu et al., 2021; Gao et al., 2021).", "Chen et al. 
(2020a) further demonstrate that InfoNCE works better than the Margin loss.", "In this work, we explore the use of InfoNCE to train our event representation model.", "Formally, given a set of N paired events D = { x i , x + i } Ni =1 , where x + i is a positive sample for x i , the InfoNCE objective for ( x i , x + i ) is presented in a softmax form with in-batch negatives (Chen et al., 2020a; Gao et al., 2021): L = log g ( z i , z + i ) g ( z i , z + i ) + (cid:80) k N ( i ) g ( z i , z k ) , (3) where z i and z + i are the augmented representations of x i and x + i obtained through a representation model , k N ( i ) is the index of in-batch negatives.", "and g is a function: g ( z i , z k ) = exp( z (cid:62) i z k / ) , where R + is a positive value of temperature.", "Data augmentation.", "One critical question in contrastive learning is how to obtain z + i .", "In language representation, z + i are often obtained by first applying data augmentation in the form of word deletion, reordering, or substitution on x i and then feeding it into the event representation model.", "Several recent works (Gao et al., 2021; Liang et al., 2021) exploit dropout noise as data augmentation for NLP tasks and find that this data augmentation technique performs much better than common data augmentation techniques.", "Specifically, given an input event x i , we obtain z i and z + i by feeding the same input to the BERT encoder with the parametric weights twice, and each time we apply a different dropout mask: z i = f ( x i , 1 ) , z + i = f ( x i , 2 ) , (4) where 1 and 2 are two different random masks for dropout.", "As described in Sec.3.1, given an anchor event z i , we generate 3 positive samples z a 1 , z a 2 and z a 3 with different dropout masks.", "In this section, we will present technical details of our proposed approach and our goal is to learn", "event representations by making better use of co-occurrence information of events.", "Figure 2 presents an overview of our proposed approach, which contains two parts: the weakly-supervised contrastive learning method (left) and the prototype-based clustering method (right).", "In the following sections, we will introduce both methods separately.", "We build our approach on the contrastive framework with the InfoNCE objective (Eq.3) instead of the margin loss.", "To incorporate co-occurrence information into event representation learning, a straightforward way is to consider the co-occurring event of each input event as an additional positive sample, that is, the positive augmented representations of x i come not only from itself but also from its co-occurring event denoted as x p .", "However, The original InfoNCE objective cannot handle the case where there exists multiple positive samples.", "Inspired by Khosla et al. 
Inspired by Khosla et al. (2020), we take a similar formulation to tackle this problem. More than that, we also introduce a weighting mechanism that considers the co-occurrence frequency of two events, which indicates the strength of the connection between them.

Co-occurrence as weak supervision. Formally, for each input pair (x_i, x_p), where x_i and x_p refer to the input event and one of its co-occurring events, we first compute an augmented representation z_i of x_i as an anchor event through the event representation model described in Section 2. Where the method differs from InfoNCE is in the construction of the positive set A(i) for x_i. In InfoNCE, A(i) contains only one positive; in our method, we generalize Eq. 3 to support multiple positives:

L = −Σ_{a∈A(i)} log [ g(z_i, z_a) / ( g(z_i, z_a) + Σ_{k∈N(i)} g(z_i, z_k) ) ],    (5)

where A(i) and N(i) refer to the positive set and the negative set for the event x_i. Note that we support an arbitrary number of positives here. In our work, considering the limited GPU memory, we use A(i) = {z_{a_1}, z_{a_2}, z_{a_3}}, where z_{a_1} and z_{a_2} are two augmented representations of the same event x_i, obtained with different dropout masks, and z_{a_3} is an augmented representation of its co-occurring event. z_{a_1} and z_{a_2} will also be used in the prototype-based clustering method (see Figure 2), as detailed later in Section 3.2.

Incorporating co-occurrence frequency. The co-occurrence frequency indicates the strength of the connection between two events. To make better use of the data, we introduce a weighting mechanism that exploits the co-occurrence frequency between events as instance weights, and rewrite Eq. 5 as:

L_cl = −Σ_{a∈A(i)} log [ α_a g(z_i, z_a) / ( g(z_i, z_a) + Σ_{k∈N(i)} g(z_i, z_k) ) ],    (6)

where α_a is a weight for the positive sample z_a. In our work, the two weights α_{a_1} and α_{a_2} of the positive samples z_{a_1} and z_{a_2} obtained from the input event are set as α_{a_1} = α_{a_2} = 1/(|A(i)| − 1), where |A(i)| is the cardinality of the positive set. To obtain the weight α_{a_3} for the augmented representation z_{a_3} of the co-occurring event, we create a co-occurrence matrix V, with each entry corresponding to the co-occurrence frequency of two distinct events. V is then normalized to V̄ with min-max normalization, and we take the corresponding entry of V̄ as the weight α_{a_3}. In this way, the model draws input events closer to events with higher co-occurrence frequency, as each entry in V̄ indicates the strength of the connection between two events.
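A sketch of Eq. 6 for a single anchor; the weights follow the scheme above (two dropout views weighted 1/(|A(i)|−1), the co-occurring event weighted by its normalized frequency):

```python
import torch

def weighted_multi_positive_loss(z_i, positives, weights, negatives, tau=0.3):
    """Eq. 6 for one anchor z_i.
    positives: [z_a1, z_a2, z_a3] (two dropout views of x_i, one co-occurring event)
    weights:   [0.5, 0.5, normalized co-occurrence frequency of (x_i, x_p)]
    negatives: (K, d) in-batch negative representations."""
    g = lambda a, b: torch.exp(a @ b / tau)
    neg_mass = sum(g(z_i, z_k) for z_k in negatives)
    loss = 0.0
    for alpha, z_a in zip(weights, positives):
        loss = loss - torch.log(alpha * g(z_i, z_a) / (g(z_i, z_a) + neg_mass))
    return loss
```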
To avoid semantically related events being pulled apart, we draw inspiration from a recent approach (Caron et al., 2020) in the computer vision domain and introduce a prototype-based clustering method, where we impose a prototype for each cluster, i.e., a representative embedding for a group of semantically related events. We then cluster the data while enforcing consistency between the cluster assignments produced for different augmented representations of an event. These prototypes essentially serve as the centers of the data representation clusters for groups of semantically related events (see Figure 1 for an example). Unlike instance-wise contrastive learning, our clustering method focuses on cluster-level semantic concepts by contrasting between representations of events and clusters.

Cluster prediction. This method works by comparing two different augmented representations of the same event through their intermediate cluster assignments. The motivation is that if these two representations capture the same information, it should be possible to predict the cluster assignment of one augmented representation from the other. In detail, we consider a set of M prototypes, each associated with a learnable vector c_i, where i ∈ {1, ..., M}. Given an input event, we first transform it into two augmented representations with two different dropout masks; here we use the two augmented representations z_{a_1} and z_{a_2} of the event x_i. We compute their cluster assignments q_{a_1} and q_{a_2} by matching the two augmented representations to the set of M prototypes. The cluster assignments are then swapped between the two augmented representations: the cluster assignment q_{a_1} of the augmented representation z_{a_1} should be predicted from the augmented representation z_{a_2}, and vice versa. Formally, the cluster prediction loss is defined as:

L_cp = ℓ(z_{a_1}, q_{a_2}) + ℓ(z_{a_2}, q_{a_1}),    (7)

where the function ℓ(z, q) measures the fit between the representation z and the cluster assignment q, defined as the cross-entropy ℓ(z, q) = −Σ_j q^(j) log p^(j). Here p is a probability vector over the M prototypes, whose components are:

p^(j) = exp(zᵀ c_j / τ) / Σ_{k=1}^M exp(zᵀ c_k / τ),    (8)

where τ is a temperature hyperparameter. Intuitively, this cluster prediction method links the representations z_{a_1} and z_{a_2} through the intermediate cluster assignments q_{a_1} and q_{a_2}.

Computing cluster assignments. We compute the cluster assignments using an optimal transport solver. This solver ensures an equal partitioning of the prototypes, or clusters, across all augmented representations, avoiding trivial solutions where all representations are mapped to a unique prototype. In particular, we employ the Sinkhorn-Knopp algorithm (Cuturi, 2013). The algorithm begins with a matrix in R^{M×N}, with each element initialized to z_bᵀ c_m, where b ∈ {1, ..., N} is the index of each column; it then iteratively produces a doubly normalized matrix, whose columns comprise the q for the minibatch.

Our approach learns event representations by simultaneously performing weakly supervised contrastive learning and prototype-based clustering. The overall training objective has three terms:

L_overall = L_cl + λ_1 L_cp + λ_2 L_mlm,    (9)

where λ_1 and λ_2 are hyperparameters. The first term is the weakly supervised contrastive learning loss, which allows us to effectively incorporate co-occurrence information into event representation learning. The second term is the prototype-based clustering loss, whose goal is to cluster the events while enforcing consistency between the cluster assignments produced for different augmented representations of the input event. Lastly, we introduce the masked language modeling (MLM) objective (Devlin et al., 2019) as an auxiliary loss to avoid forgetting token-level knowledge.
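A sketch of the swapped cluster prediction (Eq. 7-8) with a simplified Sinkhorn-Knopp normalization; following Caron et al. (2020), no gradient flows through the assignment computation, and the number of iterations here is an assumption.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, n_iters=3):
    """Turn prototype scores (M, N) into doubly normalized assignments,
    enforcing an (approximately) equal partition over the M prototypes."""
    q = torch.exp(scores)
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= q.size(0)   # rows: prototypes
        q /= q.sum(dim=0, keepdim=True); q /= q.size(1)   # cols: samples
    return (q * q.size(1)).T            # (N, M); each row sums to 1

def cluster_prediction_loss(z1, z2, prototypes, tau=0.3):
    """Eq. 7: predict each view's assignment from the other view.
    z1, z2: (N, d) dropout views; prototypes: (M, d) learnable vectors."""
    s1, s2 = z1 @ prototypes.T, z2 @ prototypes.T          # (N, M) scores
    q1, q2 = sinkhorn(s1.T), sinkhorn(s2.T)                # assignments
    p1, p2 = F.log_softmax(s1 / tau, 1), F.log_softmax(s2 / tau, 1)  # Eq. 8
    return -(q2 * p1).sum(1).mean() - (q1 * p2).sum(1).mean()
```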
Following common practice in event representation learning (Weber et al., 2018; Ding et al., 2019; Zheng et al., 2020), we analyze the event representations learned by our approach on two event similarity tasks (Section 4.2) and one transfer task (Section 4.4). The event triples we use as training data are extracted from the New York Times Gigaword Corpus using the Open Information Extraction system Ollie (Mausam et al., 2012). We filtered out events with frequency less than 3 and ended up with 4,029,877 distinct events. We use the MCNC dataset adopted in Lee and Goldwasser (2019) for the transfer task.

Our event representation model is implemented using the Texar-PyTorch package (Hu et al., 2019). The model starts from the pretrained checkpoint of BERT-base-uncased (Devlin et al., 2019), and we use the [CLS] token representation as the event representation. We train our model with a batch size of 256 using an Adam optimizer. The learning rate is set to 2e-7 for the event representation model and 2e-5 for the prototype memory. We adopt the temperature τ = 0.3, and the number of prototypes used in our experiments is 10.

Similarity tasks are a common way to measure the quality of vector representations. Weber et al. (2018) introduce two event-related similarity tasks: (1) the Hard Similarity Task and (2) Transitive Sentence Similarity.

Hard Similarity Task. A good event representation model should push away the representations of dissimilar events while pulling together those of similar events. Weber et al. (2018) created a dataset (denoted Original), where each sample has two types of event pairs: one with events that should be close to each other but have very little lexical overlap, and another with events that should be farther apart but have high overlap. This dataset contains 230 event pairs. Ding et al. (2019) later extended it to 1,000 event pairs (denoted Extended). For this task, we use accuracy as the evaluation metric, measuring the percentage of cases where the similar pair receives a higher cosine score than the dissimilar pair.

Transitive Sentence Similarity. This dataset (Kartsaklis and Sadrzadeh, 2014) contains 108 pairs of transitive sentences that contain a single subject, object, and verb (e.g., "agent sell property"), and each pair is manually annotated with a similarity score from 1 to 7; a larger score indicates that the two events are more similar. Following previous work (Weber et al., 2018; Ding et al., 2019; Zheng et al., 2020), we evaluate using the Spearman's correlation between the cosine similarity predicted by each method and the annotated similarity score.
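A sketch of the two evaluation metrics; scipy is assumed available, and the inputs are learned event representations:

```python
import torch.nn.functional as F
from scipy.stats import spearmanr

def cosine(u, v):
    return F.cosine_similarity(u, v, dim=-1).item()

def hard_similarity_accuracy(samples):
    """samples: iterable of ((a, b), (c, d)) where (a, b) should be similar
    and (c, d) dissimilar; accuracy = fraction where cos(a,b) > cos(c,d)."""
    hits = [cosine(*sim) > cosine(*dis) for sim, dis in samples]
    return sum(hits) / len(hits)

def transitive_similarity(pairs, gold_scores):
    """Spearman's rho between predicted cosine similarities and the
    1-7 human annotations."""
    preds = [cosine(a, b) for a, b in pairs]
    return spearmanr(preds, gold_scores).correlation
```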
NTN-IntSent and UniFA-S.", "This implies that the co-occurrence information of events is effective but underutilized by previous work, and that the proposed SWCC makes better use of it.", "Ablation study.", "To investigate the effect of each component in our approach, we conduct an ablation study as reported in Table 2.", "We remove one component of SWCC at a time and examine the performance of the ablated model on the similarity tasks.", "We first explore the impact of our prototype-based clustering method by removing the loss term $\mathcal{L}_{cp}$ in Eq.", "(9).", "We find that this component has a significant impact on the transitive sentence similarity task.", "Removing this component causes a 0.05-point drop (the largest observed) in performance on the transitive sentence similarity task.", "Table 2 (ablation study for several methods evaluated on the similarity tasks), with columns hard similarity accuracy (%, Original / Extended) and transitive sentence similarity (Spearman's ρ): SWCC 80.9 / 72.1 / 0.82; w/o Prototype-based Clustering 77.4 (-3.5) / 67.4 (-4.7) / 0.77 (-0.05); w/o Weakly Supervised CL 75.7 (-5.2) / 65.1 (-7.0) / 0.78 (-0.04); w/o MLM 77.4 (-3.5) / 70.4 (-1.7) / 0.80 (-0.02); BERT (InfoNCE) 72.1 / 63.4 / 0.75; BERT (Margin) 43.5 / 51.4 / 0.67.", "As for the weakly supervised contrastive learning method, we find that it has a strong impact on both hard similarity tasks, especially the extended hard similarity task.", "Removing this component causes a 7.0-point drop in the model's performance.", "We also study the impact of the MLM auxiliary objective.", "As shown in Table 2, the token-level MLM objective modestly improves performance on the extended hard similarity task, but it does not help much on the transitive sentence similarity task.", "Next, we compare InfoNCE against the margin loss in Table 2.", "For a fair comparison, BERT (InfoNCE) is trained using the InfoNCE objective only, with co-occurring events as positives and other samples in the minibatch as negatives, and BERT (Margin) is trained using the margin loss, with co-occurring events as positives and randomly sampled events as negatives.", "Clearly, BERT (InfoNCE) achieves much more competitive results on all tasks, suggesting that InfoNCE with an adjustable temperature works better than the margin loss.", "This can be explained by the fact that InfoNCE weighs multiple different negatives, and an appropriate temperature can help the model learn from hard negatives, while the margin loss uses only one negative and cannot weigh the negatives by their relative hardness.", "We test the generalization of the event representations by transferring them to a downstream event-related task, the Multiple Choice Narrative Cloze (MCNC) task (Granroth-Wilding and Clark, 2016), which was proposed to evaluate script knowledge.", "In particular, given an event chain (a series of events), this task requires a reasoning system to distinguish the next event from a small set of randomly drawn events.", "We compare our method with several baselines based on unsupervised learning: (1) Random picks a candidate uniformly at random; (2) PPMI (Chambers and Jurafsky, 2008) uses co-occurrence information and calculates positive PMI for event pairs; (3) BiGram (Jans et al., 2012) calculates bigram conditional probabilities based on event term frequencies; (4) Word2Vec (Mikolov et al., 2013) uses the word embeddings trained by the skip-gram algorithm, with event representations obtained by summing the word embeddings of predicates and arguments.", "Note that we did not
compare with supervised methods (Bai et al., 2021; Zhou et al., 2021; Lv et al., 2020) since unsupervised ones are more suitable for purely evaluating event representations.", "Results.", "Table 3 reports the performance of different methods on the MCNC task.", "As shown in the table, SWCC achieves the best accuracy on the MCNC task under the zero-shot transfer setting, suggesting that the proposed SWCC generalizes better to downstream tasks than the other compared methods.", "Number of prototypes.", "Figure 3 displays the impact of the number of prototypes used in training.", "As shown in the figure, performance increases as the number $M$ increases, but it does not increase further beyond 10.", "We speculate that because the evaluation datasets are small and contain few relation types, a larger number of prototypes does not help much in improving performance.", "Visualization of learned representations.", "We randomly sample 3000 events and embed the event representations learned by BERT (InfoNCE) and SWCC in 2D using PCA.", "The cluster label of each event is determined by matching its representation to the set of $M$ prototypes.", "The resulting visualizations are given in Figure 4.", "It shows that the proposed SWCC yields significantly better clustering performance than BERT (InfoNCE), which suggests that, to a certain extent, the prototype-based clustering method helps the event representation model capture various relations between events.", "Overall, the class separation in the visualizations qualitatively agrees with the performance in Table 1.", "Case study.", "We also present sampled events from two different prototypes in Table 4 (see Appendix for more examples), to further demonstrate the ability of SWCC to capture various relations between events.", "We can see that the events belonging to Prototype 1 mainly describe financial matters, for example, earnings be reduced, while the events belonging to Prototype 2 are mainly related to politics.", "Clearly, the events in the same cluster share the same topic.", "We also find that there are causal and temporal relations between some of these events.", "For example, earnings be reduced led to company cut costs.", "Event representation learning.", "Effectively representing events and their relations (causal, temporal, entailment (Ning et al., 2018; Yu et al., 2020)) is important for various downstream tasks, such as event schema induction (Li et al., 2020), event narrative modeling (Chambers and Jurafsky, 2008; Li et al., 2018; Lee and Goldwasser, 2019), event knowledge graph construction (Sap et al., 2019; Zhang et al., 2020), etc.", "Many efforts have been devoted to learning distributed event representations.", "Though driven by various motivations, the main idea of these methods is to exploit explicit relations of events as supervision signals; these supervision signals can be roughly categorized into three types: (1) discourse relations (e.g., causal and temporal relations) obtained with automatic annotation tools (Zheng et al., 2020); (2) manually annotated external knowledge (e.g.,
sentiments and intents) (Lee and Goldwasser, 2018; Ding et al., 2019) and (3) co-occurrence information (Weber et al., 2018).", "Existing work has focused on the first two supervision signals, with less research on how to better utilize co-occurrence information.", "Though discourse relations and external knowledge are fine-grained relations that can provide more accurate knowledge, the currently available explicitly defined fine-grained relations cover only a small set of event relations.", "Co-occurrence information is easily accessible but underutilized.", "Our work focuses on exploiting document-level co-occurrence information of events to learn event representations, without any additional annotations.", "Instance-wise contrastive learning.", "Recently, a number of instance-wise contrastive learning methods have emerged to greatly improve the quality of unsupervised visual and text representations (He et al., 2020; Chen et al., 2020b,a; Chen and He, 2021; Grill et al., 2020; Zbontar et al., 2021; Hu et al., 2021; Gao et al., 2021; Yang et al., 2021).", "This line of work aims at learning an embedding space where samples from the same instance are pulled closer and samples from different instances are pushed apart, and usually adopts the InfoNCE objective (van den Oord et al., 2019) for training.", "Unlike the margin loss, which uses one positive example and one negative example, InfoNCE can handle the case where there exist multiple negative samples.", "In our work, we extend InfoNCE, a self-supervised contrastive learning objective, to a weakly supervised contrastive learning setting, allowing us to effectively leverage co-occurrence information.", "Deep unsupervised clustering.", "Clustering-based methods have been proposed for representation learning (Caron et al., 2018; Zhan et al., 2020; Caron et al., 2020; Li et al., 2021; Zhang et al., 2021).", "Caron et al. (2018) use k-means assignments as pseudo-labels to learn visual representations.", "Later, Asano et al. (2020) and Caron et al. (2020) cast the pseudo-label assignment problem as an instance of the optimal transport problem.", "Inspired by Caron et al. (2020), we leverage a similar formulation to map event representations to prototype vectors.", "Different from Caron et al. (2020), we simultaneously perform weakly supervised contrastive learning and prototype-based clustering.", "In this work, we propose a simple and effective framework (SWCC) that learns event representations by making better use of co-occurrence information of events, without any additional annotations.", "In particular, we introduce a weakly supervised contrastive learning method that allows us to consider multiple positives and multiple negatives, and a prototype-based clustering method that avoids semantically related events being pulled apart.", "Our experiments indicate that our approach not only outperforms other baselines on several event-related tasks, but also achieves good clustering performance on events.", "We also provide a thorough analysis of the prototype-based clustering method to demonstrate that the learned prototype vectors are able to implicitly capture various relations between events.", "This work was partially supported by the National Natural Science Foundation of China (61876053, 62006062, 62176076), the Shenzhen Foundational Research Funding (JCYJ20200109113441941,", "JCYJ20210324115614039), and the Joint Lab of HITSZ and China Merchants Securities." ]
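The swapped cluster-prediction objective in the excerpt above (Eqs. 7-8, with Sinkhorn-Knopp assignments) is compact enough to sketch in code. The following is a minimal illustration, not the SWCC authors' released implementation: the function names, the epsilon of 0.05, the 3 Sinkhorn iterations, and the L2 normalization of embeddings and prototypes are all assumptions; only the temperature of 0.3 comes from the text.

```python
import torch
import torch.nn.functional as F

def sinkhorn(scores, eps=0.05, n_iters=3):
    # scores: (M prototypes, N samples). Sinkhorn-Knopp (Cuturi, 2013)
    # produces a doubly-normalized matrix so prototypes receive equal mass
    # across the minibatch, avoiding the trivial one-prototype solution.
    Q = torch.exp(scores / eps)
    Q = Q / Q.sum()
    M, N = Q.shape
    for _ in range(n_iters):
        Q = Q / Q.sum(dim=1, keepdim=True) / M  # rows: equal prototype mass
        Q = Q / Q.sum(dim=0, keepdim=True) / N  # columns: one unit per sample
    return (Q * N).t()  # (N, M): per-sample cluster assignments q

def cluster_prediction_loss(z1, z2, prototypes, temperature=0.3):
    # Eq. (7): L_cp = l(z^{a1}, q^{a2}) + l(z^{a2}, q^{a1}),
    # with l(z, q) = -sum_j q_j log p_j and p given by Eq. (8).
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ c.t(), z2 @ c.t()  # (N, M) prototype scores z^T c
    with torch.no_grad():  # assignments are targets, not differentiated
        q1, q2 = sinkhorn(s1.t()), sinkhorn(s2.t())
    logp1 = F.log_softmax(s1 / temperature, dim=1)
    logp2 = F.log_softmax(s2 / temperature, dim=1)
    # swapped prediction: view 1 predicts view 2's assignment and vice versa
    return -(q2 * logp1).sum(dim=1).mean() - (q1 * logp2).sum(dim=1).mean()
```

In training, `z1` and `z2` would be the two dropout-augmented [CLS] encodings of the same batch of events, and this term would be combined with the weakly supervised contrastive loss and the MLM loss as in Eq. (9).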
[ "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "objective", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "method", "objective", "method", "result", "objective", "other", "other" ]
[ "We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent.", "This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much more efficient in terms of the performance obtained per annotation budget.", "In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour.", "Future work can use our annotation protocol to effectively develop coreference models for new domains.", "Our code is publicly available.", "1 1 Introduction Coreference resolution is the task of resolving anaphoric expressions to their antecedents (see Figure 1).", "It is often required in downstream applications such as question answering (Dasigi et al., 2019) or machine translation (Stanovsky et al., 2019).", "Exhaustively annotating coreference is an expensive process as it requires tracking coreference chains across long passages of text.", "In news stories, for example, important entities may be referenced many paragraphs after their introduction.", "Active learning is a technique which aims to reduce costs by annotating samples which will be most beneficial for the learning process, rather than fully labeling a large fixed training set.", "Active learning consists of two components: (1) a task-specific learning algorithm, and (2) an iterative sample selection algorithm, which examines the performance of the model trained at the previous iteration and selects samples to add to the annotated *Work done while at the University of Washington.", "A volcano in Mexico, known to locals as Po-po , just started spewing molten rock.", "Are the two mentions coreferent?", "No What is the first appearance of the entity that the yellowhighlighted text refers to?", "A volcano in Mexico Figure 1: Discrete annotation.", "training set.", "This method has proven successful for various tasks in low-resource domains (Garrette and Baldridge, 2013; Kholghi et al., 2015; Syed et al., 2016, 2017).", "Sachan et al. 
(2015) showed that active learning can be employed for the coreference resolution task.", "They used gold data to simulate pairwise human annotations, where two entity mentions are annotated as either coreferring or not (see the first question in Figure 1).", "In this paper, we propose two improvements to active learning for coreference resolution.", "First, we introduce the notion of discrete annotation (Section 3), which augments pairwise annotation by introducing a simple additional question: if the user deems the two mentions non-coreferring, they are asked to mark the first occurrence of one of the mentions (see the second question in Figure 1).", "We show that this simple addition has several positive implications.", "The feedback is relatively easy for annotators to give, and provides meaningful signal which dramatically reduces the number of annotations needed to fully label a document.", "Second, we introduce mention clustering (Section 4).", "When selecting the next mention to label, we take into account aggregate model predictions for all antecedents which belong to the same cluster.", "This avoids the repeated labeling that would come with separately verifying every mention pair within the same cluster, as done in previous methods.", "We conduct experiments across several sample selection algorithms using existing gold data for user labels and show that both of our contributions significantly improve performance on the CoNLL-2012 dataset (Pradhan et al., 2012).", "Overall, our active learning method presents a superior alternative to pairwise annotation for coreference resolution, achieving better-performing models for a given annotation budget.", "Our work relies on two main components: a coreference resolution model and a sample selection algorithm.", "Coreference resolution model We use the span ranking model introduced by Lee et al. (2017), later implemented in the AllenNLP framework (Gardner et al., 2018).", "This model computes span embeddings for all possible spans $i$ in a document, and uses them to compute a probability distribution $P(y = \text{ant}(i))$ over the set of all candidate antecedents $Y(i) = \{K \text{ previous mentions in the document}\} \cup \{\epsilon\}$, where $\epsilon$ is a dummy antecedent signifying that span $i$ has no antecedent.", "This model does not require additional resources, such as syntactic dependencies or named entity recognition, and is thus well-suited for active learning scenarios in low-resource domains.", "Sample selection algorithm Previous approaches for the annotation of coreference resolution have mostly used pairwise selection, where pairs of mentions are shown to a human annotator who marks whether they are co-referring (Gasperin, 2009; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015).", "To incorporate these binary annotations into their clustering coreference model, Sachan et al.
(2015) introduced the notion of must-link and cannot-link penalties, which we describe and extend in Section 4.", "In discrete annotation, as exemplified in Figure 1, we present the annotator with a document where the least certain span $i$ (Popo, in the example) and $i$'s model-predicted antecedent, $A(i)$ (locals), are", "highlighted.", "Similarly to pairwise annotation, annotators are first asked whether $i$ and $A(i)$ are coreferent.", "If they answer positively, we move on to the next sample.", "Otherwise, we deviate from pairwise sampling and ask the annotator to mark the antecedent for $i$ (A volcano in Mexico) as the follow-up question.", "The annotator can abstain from answering the follow-up question in case $i$ is not a valid mention or if it does not have an antecedent in the document.", "See Figure 5 in the Appendix for more example annotations.", "In Section 5, we show that discrete annotation is superior to the classic pairwise annotation in several aspects.", "First, it makes better use of human annotation time, as an annotator often needs to resolve the antecedent of the presented mention to answer the first question.", "For example, identifying that Popo refers to the volcano, and not the locals.", "Second, we find that discrete annotation is a better fit for mention ranking models (Lee et al., 2017), which assign the most likely antecedent to each mention, just as an annotator does in discrete annotation.", "We experiment with three selection techniques by applying popular active learning selectors like entropy or query-by-committee (Settles, 2010) to clusters of spans.", "Because our model outputs antecedent probabilities and predictions, we would like to aggregate these outputs, such that we have only one probability per mention cluster rather than one per antecedent.", "We motivate this with an example: suppose span $i$'s top two most likely antecedents are $y_1$ and $y_2$.", "In scenario 1, $y_1$ and $y_2$ are predicted to be clustered together, and in scenario 2, they are predicted to be clustered apart.", "Span $i$ should have a higher certainty in scenario 1 (and thus be less likely to be picked by active learning), because its two most likely antecedents both imply the same clustering, whereas in scenario 2, picking $y_1$ vs. $y_2$ results in a different downstream clustering.", "Thus, rather than simply using the raw probability that $i$ refers to a particular antecedent, we use the probability that $i$ belongs to a certain cluster.", "This implies modelling $y_1$ and $y_2$ jointly in scenario 1, and separately in scenario", "2.
Formally, we compute the probability that a span $i$ belongs to a cluster $C$ by summing $P(\text{ant}(i) = y)$ (Footnote 2: For consistency, we ask annotators to select the first antecedent of $i$ in the document.)", "for all $y$ that belong to the cluster $C$, since $i$ having an antecedent in a cluster necessarily implies that $i$ is also in that cluster.", "This allows us to convert the predicted antecedent probabilities to in-cluster probabilities: $P(i \in C) = \sum_{y \in C \cap Y(i)} P(\text{ant}(i) = y)$ (1)", "Similarly, for query-by-committee, we aggregate predictions such that we have one vote per cluster rather than one vote per antecedent: $V(i \in C) = \sum_{y \in C \cap Y(i)} V(A(i) = y)$ (2), where $V(A(i) = y) \in \{0, 1, \dots, M\}$ refers to the number of models that voted for $y$ to be the antecedent of $i$.", "The cluster information ($y \in C \cap Y(i)$) we use in Equations 1 and 2 is computed from a combination of model-predicted labels and labels queried through active learning.", "Antecedents which were not predicted to be in clusters are treated as singleton clusters.", "Additionally, to respect user annotations during the selection process, we must keep track of all prior annotations.", "To do this, we use the concept of must-link (ML; if two mentions are judged coreferent) and cannot-link (CL; if two mentions are judged non-coreferent) relations between mentions introduced by Sachan et al. (2015), and adapt it for our purposes.", "Specifically, in our discrete setting, we build the links as follows: if the user deems the pair coreferent, it is added to ML.", "Otherwise, it is added to CL, while the user-corrected pair (from the second question) is always added to ML.", "In addition, we use these links to guide how we select the next mention to query.", "For example, if a CL relation exists between spans $m_1$ and $m_2$, we will be less likely to query for $m_1$, since we are slightly more certain about what $m_1$'s antecedent should be (not $m_2$).", "Formally, we revise the probabilities and votes $P(i \in C)$ and $V(i \in C)$ in accordance with our link relations, which affects the selector uncertainty scores.", "Finally, following Sachan et al. (2015), we impose transitivity constraints, which allow us to model links beyond what has been explicitly annotated (Footnote 3: See Section A.2 in the appendix for more details.):", "$ML(m_i, m_j) \wedge ML(m_j, m_k) \Rightarrow ML(m_i, m_k)$ (3); $CL(m_i, m_j) \wedge ML(m_i, m_k) \Rightarrow CL(m_j, m_k)$ (4)", "However, recomputing these closures after each active learning iteration can be extremely inefficient.", "Instead, we build up the closure incrementally by adding only the minimum number of necessary links to maintain the closure every time a new link is added (see the union-find sketch after this excerpt).", "Clustered entropy: we compute entropy over cluster probabilities, $-\sum_{C} P(i \in C) \log P(i \in C)$, and select the mention with the highest clustered entropy.", "Least coreferent clustered mentions / Most coreferent unclustered mentions (LCC/MCU) We aim to select a subset of spans for which the model was least confident in its prediction.", "For each span $i$ which was assigned a cluster $C_i$, we compute a score $s_C(i) = P(i \in C_i)$, and choose the $n$ spans with the smallest $s_C(i)$.", "For each singleton $j$, we give an unclustered score $s_U(j) = \max_{C} P(j \in C)$ over all clusters $C$, and choose the $m$ spans with the largest $s_U(j)$.", "$P(i \in C_i)$ and $P(j \in C)$ are computed with Equation", "(1). 5 Evaluation. We compare discrete versus pairwise annotation using the English CoNLL-2012 coreference dataset (Pradhan et al., 2012).", "Following Sachan et al.
(2015), we conduct experiments where user judgments are simulated from gold labels.", "Annotation time estimation To compare annotation times between pairwise and discrete questions, we collected eight 30-minute sessions from 7 in-house annotators with a background in NLP.", "Annotators were asked to answer as many instances as they could during those 30 minutes.", "We additionally asked one annotator to annotate only discrete questions for 30 minutes.", "To be as representative as possible, the active learning queries for these experiments were sampled from various stages of active learning (see Table 1).", "On average, an annotator completed about 67 questions in a single session, half of which were answered negatively, requiring the additional discrete question.", "Overall, these estimates rely on 826 annotated answers.", "Our annotation interface is publicly available; see examples in Figure 5 in the Appendix.", "We found that answering the discrete question after the initial pairwise question takes about the same time as answering the first question (about 16 s).", "Furthermore, answering only discrete questions took longer:", "28.01 s per question, which confirmed that having an initial pairwise question indeed saves annotator time if answered positively.", "In the following experiments, we use these measurements to calibrate pairwise and discrete follow-up questions when computing total annotation times.", "Baselines We implement a baseline for pairwise annotation with an entropy selector.", "We also implement two discrete annotation baselines with random selection.", "The partially-labelled baseline follows the standard active learning training loop, but selects the next mention to label at random.", "The fully-labelled baseline creates a subset of the training data by taking as input an annotation time $t$ and selecting at random a set of documents that the user can fully label in $t$ hours using only discrete annotation.", "By comparing the fully-labelled baseline against our active learning results, we can determine whether active learning is more effective than labelling documents exhaustively.", "Hyperparameters We use the model hyperparameters from the AllenNLP implementation of Lee et al.
(2017).", "We train up to 20 epochs with a patience of 2 before adding labels.", "After all documents have been added, we retrain from scratch.", "We use a query-by-committee of M = 3 models, due to memory constraints.", "For LCC/MCU, given L annotations per document, we split the annotations equally between clusters and singletons.", "Results Figure 2 plots the performance of discrete annotation with the various selectors from Section 4, against the performance of pairwise annotation, calibrated according to our timing experiments.", "In all figures, we report MUC, B3, and CEAFe as an averaged F1 score.", "The three non-random active learning frameworks outperform the fully-labelled baseline, show-Figure 3: Mention detection accuracy (in document-micro F1) for pairwise versus discrete selection per human annotation time.", "ing that active learning is more effective for coreference resolution when annotation budget is limited.", "Most notably, Figure 2 shows that every nonrandom discrete selection protocol outperforms pairwise annotation.", "Where the gap in performance is the largest ( > 15 minutes per document), we consistently improve by 4% absolute F 1 over pairwise selection.", "A major reason discrete annotation outperforms the pairwise baseline is that the number of pairwise annotations needed to fully label a document is much larger than the number of discrete annotations.", "In an average development document with 201 candidates per mention, the number of pairwise queries needed to fully label a document is 15 , 050 , while the maximum number of discrete queries is only 201 (i.e., asking for the antecedent of every men-tion).", "Thus, the average document can be fully annotated via discrete annotation in only 2.6% of the time it takes to fully label it with pairwise annotation, suggesting that our framework is also a viable exhaustive annotation scheme.", "Further analysis shows that the improvement in discrete selection stems in part from better use of annotation time for mention detection accuracy (Figure 3) and pronoun resolution (Figure 4), in which we measure performance only on clusters with pronouns, as identified automatically by the spaCy tagger (Honnibal and Montani, 2017) .", "Finally, Table 3 shows ablations on our discrete annotation framework, showing the contribution of each component of our paradigm.", "We presented discrete annotation, an attractive alternative to pairwise annotation in active learning of coreference resolution in low-resource domains.", "By adding a simple question to the annotation interface, we obtained significantly better models per human-annotation hour.", "In addition, we introduced a clustering technique which further optimizes sample selection during the annotation process.", "More broadly, our work suggests that improvements in annotation interfaces can elicit responses which are more efficient in terms of the obtained performance versus the invested annotation time.", "We would like to thank Christopher Clark, Terra Blevins, and the anonymous reviewers for their helpful feedback, and Aaron Jaech, Mason Kamb, Madian Khabsa, Kaushal Mangipudi, Nayeon Lee, and Anisha Uppugonduri for their participation in our timing experiments." ]
[ "result", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "method", "result", "other" ]
[ "Large Transformers pretrained over clinical notes from Electronic Health Records (EHR) have afforded substantial gains in performance on predictive clinical tasks.", "The cost of training such models (and the necessity of data access to do so) coupled with their utility motivates parameter sharing, i.e., the release of pretrained models such as ClinicalBERT (Alsentzer et al., 2019).", "While most efforts have used deidentified EHR, many researchers have access to large sets of sensitive, non-deidentified EHR with which they might train a BERT model (or similar).", "Would it be safe to release the weights of such a model if they did?", "In this work, we design a battery of approaches intended to recover Personal Health Information (PHI) from a trained BERT.", "Specifically, we attempt to recover patient names and conditions with which they are associated.", "We find that simple probing methods are not able to meaningfully extract sensitive information from BERT trained over the MIMIC-III corpus of EHR.", "However, more sophisticated at-tacks may succeed in doing so: To facilitate such research, we make our experimental setup and baseline probing models available.", "1 1 Introduction Pretraining large (masked) language models such as BERT (Devlin et al., 2019) over domain specific corpora has yielded consistent performance gains across a broad range of tasks.", "In biomedical NLP, this has often meant pretraining models over collections of Electronic Health Records (EHRs) (Alsentzer et al., 2019).", "For example, Huang et al. (2019) showed that pretraining models over EHR data improves performance on clinical predictive tasks.", "Given their empirical utility, and the fact that pretraining large networks requires a nontrivial amount of compute, there is a natural desire to (cid:63) equal contribution.", "However, in the context of pretraining models over patient EHR, this poses unique potential privacy concerns: Might the parameters of trained models leak sensitive patient information?", "In the United States, the Health Insurance Portability and Accountability Act (HIPAA) prohibits the sharing of such text if it contains any reference to Protected Health Information (PHI).", "If one removes all reference to PHI, the data is considered dei-dentified, and is therefore legal to share.", "While researchers may not directly share non-deidentified text, 2 it is unclear to what extent models pretrained on non-deidentified data pose privacy risks.", "Further, recent work has shown that general purpose large language models are prone to memorizing sensitive information which can subsequently be extracted (Carlini et al., 2020).", "In the context of biomedical NLP, such concerns have been cited as reasons for withholding direct publication of trained model weights (McKinney et al., 2020).", "These uncertainties will continue to hamper dissemination of trained models among the broader biomedical NLP research community, motivating a need to investigate the susceptibility of such models to adversarial attacks.", "This work is a first step towards exploring the potential privacy implications of sharing model weights induced over non-deidentified EHR text.", "We propose and run a battery of experiments intended to evaluate the degree to which Transformers (here, BERT) pretrained via standard masked language modeling objectives over notes in EHR might reveal sensitive information (Figure 1).", "3 2 Even for deidentified data such as MIMIC (Johnson et al., 2016), one typically must complete a set of 
trainings before accessing the data, whereas model parameters are typically shared publicly, without any such requirement.)", "(Footnote 3: We consider BERT rather than an auto-regressive language model such as GPT-*, given the comparatively widespread adoption of the former in biomedical NLP.)", "We find that simple methods are able to recover associations between patients and conditions at rates better than chance, but not with performance beyond that achievable using baseline condition frequencies.", "This holds even when we enrich clinical notes by explicitly inserting patient names into every sentence.", "Our results using a recently proposed, more sophisticated attack based on generating text (Carlini et al., 2020) are mixed, and constitute a promising direction for future work.", "Unintended memorization by machine learning models has significant privacy implications, especially where models are trained over non-deidentified data.", "Carlini et al. (2020) were recently able to extract memorized content from GPT-2 with up to 67% precision.", "This raises questions about the risks of sharing parameters of models trained over non-deidentified data.", "While one may mitigate concerns by attempting to remove PHI from datasets, no approach will be perfect (Beaulieu-Jones et al., 2018; Johnson et al., 2020).", "Further, deidentifying EHR data is a laborious step that one may be inclined to skip for models intended for internal use.", "An important practical question arises in such situations: Is it safe to share the trained model parameters?", "While prior work has investigated issues at the intersection of neural networks and privacy (Song and Shmatikov, 2018; Salem et al., 2019; Fredrikson et al., 2015), we are unaware of work that specifically focuses on attacking the modern Transformer encoders widely used in NLP (e.g., BERT) trained on EHR notes, an increasingly popular approach in the biomedical NLP community.", "In a related effort, Abdalla et al. (2020) explored the risks of using imperfect deidentification algorithms together with static word embeddings, finding that such embeddings do reveal sensitive information to at least some degree.", "However, it is not clear to what extent this finding holds for the contextualized embeddings induced by large Transformer architectures.", "Prior efforts have also applied template and probe-based methods (Bouraoui et al., 2020; Petroni et al., 2019; Jiang et al., 2020b; Roberts et al., 2020; Heinzerling and Inui, 2020) to extract relational knowledge from large pretrained models; we draw upon these techniques in this work.", "However, these works focus on general-domain knowledge extraction, rather than clinical tasks, which pose unique privacy concerns.", "We use the Medical Information Mart for Intensive Care III (MIMIC-III) English dataset to conduct our experiments (Johnson et al., 2016).", "We follow prior work (Huang et al., 2019) and remove all notes except for those categorized as 'Physician', 'Nursing', 'Nursing/Others', or 'Discharge Summary' note types.", "The MIMIC-III database was deidentified using a combination of regular expressions and human oversight, successfully removing almost all forms of PHI (Neamatullah et al., 2008).", "All patient first and last names were replaced with [Known First Name ...] and [Known Last Name ...]
pseudo-tokens, respectively.", "We are interested in quantifying the risks of releasing contextualized embedding weights trained on non-deidentified text (to which one working at a hospital would readily have access).", "To simulate the existence of PHI in the MIMIC-III set, we randomly select new names for all patients (Stubbs et al., 2015).", "Specifically, we replaced [Known First Name] and [Known Last Name] with names sampled from US Census data, randomly sampling first names (that appear at least 10 times in census data) and last names (that appear at least 400 times).", "This procedure resulted in 11.5% and 100% of patients being assigned unique first and last names, respectively.", "While there are many forms of PHI, we are primarily interested in recovering name and condition pairs, as the ability to infer with some certainty the specific conditions that a patient has is a key privacy concern.", "This is also consistent with prior work on static word embeddings learned from EHR (Abdalla et al., 2020).", "Notes in MIMIC-III do not consistently explicitly reference patient names.", "First or last names are mentioned in at least one note for only 27,906 (out of 46,520) unique patients.", "Given that we cannot reasonably hope to recover information regarding tokens that the model has not observed, in this work we only consider records corresponding to these 27,906 patients.", "Despite comprising 61.3% of the total number of patients, these 27,906 patients are associated with the majority (82.6%) of all notes (1,247,291 in total).", "Further, only 10.2% of these notes contain at least one mention of a patient's first or last name.", "Of the 1,247,291 notes considered, 17,044 include first name mentions, and 220,782 feature last name mentions.", "Interestingly, for records corresponding to the 27,906 patients, there are an additional 18,345 false-positive last name mentions and 29,739 false-positive first name mentions. (Footnote 4: We could have used non-deidentified EHRs from a hospital, but this would preclude releasing the data, hindering reproducibility.)", "(Footnote 5: We sampled first and last names from https://www.ssa.gov/ and https://www.census.gov/topics/population/genealogy/data/2010_surnames.html, respectively.)", "(Footnote 6: In some sense this bodes well for privacy concerns, given that language models are unlikely to memorize names that they are not exposed to; however, it is unclear how particular this observation is to the MIMIC corpus.)", "In these cases, the name is also an English word (e.g., 'young').", "As the frequency with which patient names are mentioned explicitly in notes may vary by hospital conventions, we also present semi-synthetic results in which we insert names into notes such that they occur more frequently.", "As a first attempt to evaluate the risk of BERT leaking sensitive information, we define the following task: Given a patient name that appears in the set of EHR used for pretraining, query the model for the conditions associated with this patient.", "Operationally this requires defining a set of conditions against which we can test each patient.", "We consider two general ways of enumerating conditions: (1) Using International Classification of Diseases, revision 9 (ICD-9) codes attached to records, and (2) Extracting condition strings from the free text within records.", "Specifically, we experiment with the following variants.", "[ICD-9 Codes] We collect all ICD-9 codes associated with individual patients.", "ICD-9 is a standardized global diagnostic ontology
maintained by the World Health Organization.", "Each code is also associated with a description of the condition that it represents.", "In our set of 27,906 patients, we observe 6,841 unique ICD-9 codes.", "We additionally use the short ICD-9 code descriptions, which comprise an average of 7.03 wordpiece tokens per description (under the BERT-Base tokenizer).", "On average, patient records are associated with 13.6 unique ICD-9 codes.", "[MedCAT] ICD-9 codes may not accurately reflect patient status, and may not be the ideal means of representing conditions.", "Therefore, we also created lists of conditions to associate with patients by running the MedCAT concept annotation tool (Kraljevic et al., 2020) over all patient notes.", "We only keep those extracted entities that correspond to a Disease / Symptom, which we use to normalize condition mentions and map them to their UMLS (Bodenreider, 2004) CUI and description.", "This yields 2,672 unique conditions from the 27,906 patient set.", "On average, patients are associated with 29.5 unique conditions, and conditions comprise 5.37 wordpiece tokens.", "For an experiment, we assign binary labels to patients indicating whether or not they are associated with each condition.", "We then aim to recover the conditions associated with individual patients.", "We re-train BERT (Devlin et al., 2019) over the EHR data described in Section 3 following the process outlined by Huang et al. (2019), yielding our own version of ClinicalBERT.", "However, we use full-word (rather than wordpiece) masking, due to the performance benefits this provides.", "We adopt hyperparameters from Huang et al. (2019), most importantly using three duplicates of static masking.", "We list all model variants considered in Table 1 (including Base and Large BERT models).", "We verify that we can reproduce the results of Huang et al. (2019) for the task of predicting 30-day readmission from the discharge summary.", "We also consider two easier semi-synthetic variants, i.e., where we believe it should be more likely that an adversary could recover sensitive information.", "For the Name Insertion Model, we insert (prepend) patient names to every sentence within corresponding notes (ignoring grammar), and train a model over this data.", "Similarly, for the Template Only Model, for each patient and every MedCAT condition they have, we create a sentence of the form: [CLS] Mr./Mrs. [First Name] [Last Name] is a yo patient with [Condition] [SEP].", "This overrepresentation of names should make it easier to recover information about patients.", "We also explore whether PHI from the MIMIC database can be retrieved using static word embeddings derived via CBoW and skip-gram word2vec models (Mikolov et al., 2013).", "Here, we follow prior work (Abdalla et al.
2020; this was conducted on a private set of EHR, rather than MIMIC).", "We induce embeddings for (multi-word) patient names and conditions by averaging constituent word representations.", "We then calculate cosine similarities between these patient and condition embeddings (see Section 6.3).", "We first test the degree to which we are able to retrieve conditions associated with a patient, given their name.", "(We later also consider a simpler task: Querying the model as to whether or not it observed a particular patient name during training.)", "All results presented are derived over the set of 27,906 patients described in Section 4.", "The following methods output scalars indicating the likelihood of a condition, given a patient name and learned BERT weights.", "We compute metrics with these scores for each patient, measuring our ability to recover patient/condition associations.", "We aggregate metrics by averaging over all patients.", "We report AUCs and accuracy at 10 (A@10), i.e., the fraction of the top-10 scoring conditions that the patient indeed has (according to the reference set of conditions for said patient).", "We attempt to reveal information memorized during pretraining using masked template strings.", "The idea is to run such templates through BERT, and observe the rankings induced over conditions (or names).", "This requires specifying templates.", "Generic Templates We query the model to fill in the masked tokens in the following sequence: [CLS] Mr./Mrs. [First Name] [Last Name] is a yo patient with [MASK]+ [SEP].", "Here, Mr. and Mrs. are selected according to the gender of the patient as specified in the MIMIC corpus.", "The [MASK]+ above is actually a sequence of [MASK] tokens, where the length of this sequence depends on the length of the tokenized condition for which we are probing.", "Given a patient name and condition, we compute the perplexity (PPL) for condition tokens as candidates to fill the template mask.", "For example, if we wanted to know whether a patient (John Doe) was associated with a particular condition (MRSA), we would query the model with the following (populated) template: [CLS] Mr.
John Doe is a yo patient with [MASK] [SEP] and measure the perplexity of MRSA at the [MASK] input token position (a code sketch of this scoring appears after this excerpt).", "For multiword conditions, we first considered taking an average PPL over constituent words, but this led to (Footnote 10: This is similar to methods used in work on evaluating language models as knowledge bases (Petroni et al., 2019).)", "counterintuitive results: longer conditions tend to yield lower PPL.", "In general, multi-word targets are difficult to assess as PPL is not well-defined for masked language models like BERT (Jiang et al., 2020a; Salazar et al., 2020).", "Therefore, we bin conditions according to their wordpiece length and compute metrics for bins individually.", "This simplifies our analysis, but makes it difficult for an attacker to aggregate rankings of conditions with different lengths.", "Results We use the generic template method to score ICD-9 or MedCAT condition descriptions for each patient.", "We report the performance (averaged across length bins) achieved by this method in Table 2, with respect to AUC and A@10.", "This straightforward approach fares better than chance, but worse than a baseline approach of assigning scores equal to the empirical frequencies of conditions.", "Perhaps this is unsurprising for MIMIC-III. (Footnote 12: We note that these frequencies are derived from the MIMIC data, which affords an inherent advantage, although it seems likely that condition frequencies derived from other data sources would be similar.)", "We also note that some very common conditions are associated with many patients (see Appendix Figures A1 and A2), which may effectively 'inflate' the AUCs achieved by the frequency baseline.", "If patient names appeared more often in the notes, would this approach fare better?", "To test this, we present results for the Name Insertion and Template Only variants in Table 2.", "Recall that for these we have artificially increased the number of patient names that occur in the training data; this should make it easier to link conditions to names.", "The Template Only variant yields better performance for MedCAT labels, but still fares worse than ranking conditions according to empirical frequencies.", "However, it may be that the frequency baseline performs so well simply due to many patients sharing a few dominating conditions.", "To account for this, we additionally calculate performance using the Template Only model on MedCAT conditions that fewer than 50 patients have.", "We find that the AUC is 0.570, still far lower than the frequency baseline of 0.794 on this restricted condition set.", "Other templates, e.g., the most common phrases in the train set that start with a patient name and end with a condition, performed similarly.", "Masking the Condition (Only) Given the observed metrics achieved by the 'frequency' baseline, we wanted to establish whether models are effectively learning to (poorly) approximate condition frequencies, which might in turn allow for the better-than-chance AUCs in Table 2.", "To evaluate the degree to which the model encodes condition frequencies, we design a simple template that includes only a masked condition between the [CLS] and [SEP] tokens (e.g., [CLS] [MASK] ...
[MASK] [SEP]).", "We then calculate the PPL of individual conditions filling these slots.", "In Table 3, we report AUCs, A@10 scores, and Spearman correlations with frequency scores (again, averaged across length bins).", "The latter are low, suggesting that the model rankings differ from overall frequencies.", "The above token prediction infill setup attacks the model only via fixed templates.", "But the induced representations might implicitly encode sensitive information that happens to not be readily exposed by the template.", "We therefore also investigate a probing setup (Alain and Bengio, 2017; Bouraoui et al., 2020), in which a representation induced by a pretrained model is provided to a second probing model which is trained to predict attributes of interest.", "Unlike masked token prediction, probing requires that the adversary have access to a subset of training data to associate targets with representations.", "We train an MLP binary classifier on top of the encoded CLS token from the last layer of BERT.", "The probe is trained to differentiate positive instances (conditions the patient has) from negative examples (conditions the patient does not have) on a randomly sampled subset of 5000 patients (we downsample the negative class for balancing).", "We use the following template to encode the patient-condition pairs: [CLS] Mr./Mrs. [NAME] is a patient with [CONDITION] [SEP].", "For more information on the setup, see Section A.5.", "Results are reported in Table 4.", "For comparison, we also consider a simpler, condition-only template of [CLS] [CONDITION] [SEP], which does not include the patient name.", "We run experiments on the Base, Large, and Name Insertion models.", "These models achieve strong AUCs, nearly matching the frequency baseline performance in Table 2.", "However, it appears that removing the patient's name and simply encoding the condition to make a binary prediction yields similar (in fact, slightly better) performance. (Footnote 13: Though the AUCs for the probing are calculated over a randomly sampled test subset of the full data used in Table 2.)", "The standard probing setup encourages the model to use the frequency of target conditions to make predictions.", "To address this, we also consider a variant in which we probe for only individual conditions, rather than defining a single model probing for multiple conditions, as above.", "This means we train independent models per condition, which can then be used to score patients with respect to said conditions.", "To train such models we upsample positive examples such that we train on balanced sets of patients for each condition.", "This approach provides results for individual conditions, which vary in frequency.", "To assess the comparative performance of probes over conditions of different prevalence, we group conditions into mutually exclusive bins reflecting frequency (allowing us to analyze differences in performance, e.g., on rare conditions).", "We group conditions by frequencies, from rarest (associated with 2-5 patients) to most common (associated with >20 patients).", "We randomly sample 50 conditions from each of these groups, and train an MLP classifier on top of the encoded CLS token from the last layer in BERT (this results in 50 different models per group, i.e., 200 independent models).", "We measure, in terms of AUC and A@10, whether the probe for a condition returns comparatively higher scores for patients that have that condition.", "We report results in Table 5.", "Except for the rarest conditions
(associated with < 5 patients), these models achieve AUCs that are at best modestly better than chance, with all A@10 metrics remaining low. (Footnote 14: We upsample the minority examples, rather than undersampling as before, because the single-condition models are comparatively quick to train.)", "Prior work (Abdalla et al., 2020) has demonstrated that static word vectors can leak information: The cosine similarities between learned embeddings of patient names and the conditions they have are on average significantly larger than the similarities between patient names and conditions they do not have.", "We run a similar experiment to investigate whether contextualized embeddings similarly leak information (and also to assess the degree to which this holds on the MIMIC corpus as a point of comparison).", "We calculate the average cosine similarity between learned embeddings of patient names and those of positive conditions (conditions that the patient has), minus the average similarity with negative conditions (those that they do not have).", "Conditions and names span multiple tokens; we perform mean pooling over these to induce embeddings.", "Here again we evaluate on the aforementioned set of 27,906 patients.", "We report results for BERT and word2vec (CBoW and SkipGram; Mikolov et al. 2013) in Table 6. Values greater than zero here suggest leakage, as this implies that patient names end up closer to conditions that patients have, relative to those that they do not.", "Even when trained over the Name Insertion data (which we manipulated to frequently mention names), we do not observe leakage from the contextualized embeddings.", "Here we try something even more basic: We attempt to determine whether a pretrained model has seen a particular patient name in training.", "The ability to reliably recover individual patient names (even if not linked to specific conditions) from BERT models trained over EHR data would be concerning if such models were to be made public.", "We consider a number of approaches to this task.", "Probing We encode the patient's name ([CLS] [NAME] [SEP]) using BERT and train a Logistic Regression classifier that consumes the resultant CLS representations and predicts whether the corresponding patient has been observed in training (a sketch of this probe appears after this excerpt).", "As mentioned above, patient names are explicitly mentioned in notes for 27,906 patients; these constitute our positive examples, and the remaining patients (of the 46,520) are negative examples.", "We split the data into equally sized train and test sets.", "We report results in Table 7. To contextualize these results, we also run this experiment on the standard BERT base model (which is not trained on this EHR data).", "We observe that the AUCs are near chance, and that the performance of the standard BERT base model is relatively similar to that of the Regular and Large base models, despite the fact that the standard BERT base model has not seen any notes from MIMIC.", "Given a first name, can we predict whether we have seen a corresponding last name?", "More specifically, we mask out a patient's last name (but not their first) in the template [CLS] [First Name] [MASK]+ [SEP] and record the perplexity of the target last name.", "We take as the set of outputs all 46,520 patient names in the corpus.", "We can also flip this experiment, masking only first names.", "This is intuitively quite difficult, as only 10K / 77M sentences (0.013%) contain both the patient's first and last name.", "This number includes first and last name mentions that are also other English words (e.g.,
'young').", "Results are reported in Table 8. We do observe reasonable signal in the semi-synthetic Name Insertion and Template Only variants.", "Recent work by Carlini et al. (2020) showed that GPT-2 (Radford et al., 2019) memorizes training data, and proposed techniques to efficiently recover sensitive information from this model (e.g., email addresses).", "They experimented only with large, auto-regressive language models (i.e., GPT-2), but their techniques are sufficiently general for us to use here.", "More specifically, to apply their approaches to a BERT-based model we must be able to sample text from BERT, which is complicated by the fact that it is not a proper (auto-regressive) language model.", "To generate outputs from BERT we therefore followed a method proposed in prior work (Wang and Cho, 2019).", "This entails treating BERT as a Markov random field language model and using a Gibbs sampling procedure to generate outputs.", "We then analyze these outputs from", "(a) our regular BERT-based model trained on MIMIC;", "(b) the Name Insertion model; and", "(c) a standard BERT Base model (Devlin et al., 2019).", "We generate 500k samples from each, each sample consisting of 100 wordpiece tokens.", "Comparator Model Perplexity Following Carlini et al. (2020), we attempt to identify which pieces of generated text are most likely to contain memorized names (in this case, from EHR).", "To this end, we examine segments of the text in which the difference in likelihood of our trained BERT model versus the standard BERT-base model (Devlin et al., 2019) is high.", "For the samples generated from the standard BERT-base model (not trained on MIMIC), we use our ClinicalBERT model as the comparator.", "Using an off-the-shelf NER tagger (Honnibal et al., 2020), we identify samples containing name tokens.", "For each sample, we mask name tokens individually and calculate their perplexity under each of the respective models.", "We take the difference between these to yield a score (sequences with high likelihood under the trained model and low likelihood according to the general-domain BERT may contain vestiges of training data) and use it to rank our extracted names; we then use this to calculate A@100.", "As expected, the Name Insertion model produced more names than the Base model, with approximately 60% of all sentences containing a name (not necessarily in MIMIC).", "Additionally, the A@100 of the Name Insertion model substantially exceeds that of the Base model.", "However, when we use spaCy to examine sentences that contain both a condition and a patient's name (of the 27,906), we find that 23.5% of the time the", "(Footnote 16: Which, at least at present, remains the default encoder used in biomedical NLP.)", "(Footnote 17: Note that this means that even though samples are generated from a model that cannot have memorized anything in the EHR, using a comparator model that was trained on the EHR to re-rank these samples may effectively reveal information.)", "patient does indeed have a condition produced by the Base model.", "It is unclear to what extent this reflects memorization of concrete patient-condition pairs per se, as opposed to learning more diffuse, patient-agnostic distributions of conditions in the MIMIC dataset.", "The corresponding statistic for the Name Insertion variant (4.17%) may be low because this variant tends to produce poor-quality outputs with many names, but not many conditions.", "This is an intriguing result that warrants further research.", "However, we caution that these generation experiments are
"However, we caution that these generation experiments are affected by the accuracy of the NER taggers used.", "For example, many of the extracted names tend to also be generic words (e.g., 'young', 'date', 'yo', etc.), which may artificially inflate our scores.", "In addition, MedCAT sometimes uses abbreviations as conditions, which may also yield 'false positives' for conditions.", "This work has important limitations.", "We have considered only relatively simple attacks, based on token in-filling and probing.", "Our preliminary results using the more advanced generation approach (inspired by Carlini et al. 2020) suggest a promising future direction, although the quality of generation from BERT (which is not naturally a language model) may limit this.", "This highlights a second limitation: We have only considered BERT, as it is currently the most common choice of pretrained Transformer in the bioNLP community.", "Auto-regressive models such as GPT-2 may be more prone to memorization.", "Larger models (e.g., T5 (Raffel et al., 2020) or GPT-3 (Brown et al., 2020)) are also likely to heighten the risk of data leakage if trained over EHR.", "Another limitation is that we have only considered the MIMIC-III corpus here, and the style in which notes are written in this dataset (names appear very infrequently) likely renders it particularly difficult for BERT to recover implicit associations between patient names and conditions.", "We attempted to address this issue with the semi-synthetic Name Insertion variant, where we artificially inserted patient names into every sentence; this did not yield qualitatively different results for most experiments.", "Nonetheless, it is possible that experiments on EHR datasets from other hospitals (with different distributions over tokens and names) would change the degree to which one is able to recover PHI.", "Finally, these results for BERT may change under different masking strategies (for example, dynamic masking; Liu et al., 2019) or choice of tokenizer.", "Both of these may affect memorization and extraction method performance.", "We have performed an initial investigation into the degree to which large Transformers pretrained over EHR data might reveal sensitive personal health information (PHI).", "We ran a battery of experiments in which we attempted to recover such information from BERT model weights estimated over the MIMIC-III dataset (into which we artificially reintroduced patient names, as MIMIC is deidentified).", "Across these experiments, we found that we were mostly unable to meaningfully expose PHI using simple methods.", "Moreover, even when we constructed a variant of the data in which we prepended patient names to every sentence prior to pretraining BERT, we were still unable to recover sensitive information reliably.", "Our initial results using more advanced techniques based on generation (Carlini et al.
2020; Table 9) are intriguing but inconclusive at present.", "Our results certainly do not rule out the possibility that more advanced methods might reveal PHI.", "But these findings do at least suggest that doing so is not trivial.", "To facilitate further research, we make our experimental setup and baseline probing models available: https://github.com/elehman16/exposing_patient_data_release .", "This work has ethical implications relevant to patient privacy.", "HIPAA prohibits the distribution of PHI, for good reason.", "Without this type of privacy law, patient information, for example, could be passed on to a lender and be used to deny a patient's application for a mortgage or credit card.", "It is therefore essential that patient information remain private.", "This raises an important practical question concerning methods in NLP that we have sought to address: Does releasing models pretrained over sensitive data pose a privacy risk?", "While we were unable to reliably recover PHI in this work, we hope that this effort encourages the community to develop more advanced attacks to probe this potential vulnerability.", "We would still advise researchers to err on the side of caution and only consider releasing models trained over fully deidentified data (e.g. MIMIC).", "We thank Peter Szolovits for early feedback on a draft of this manuscript, and the anonymous NAACL reviewers for their comments.", "This material is based upon work supported in part by the National Science Foundation under Grant No. 1901117.", "This research was also supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "result", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "result", "result", "abstain", "method", "result", "abstain", "result", "method", "abstain", "abstain", "method", "result", "abstain", "result", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "result", "method", "result", "result", "abstain", "method", "method", "method", "method", "result", "result", "result", "abstain", "method", "result", "method", "method", "result", "result", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Distributed representations of words have been an indispensable component for natural language processing (NLP) tasks.", "However, the large memory footprint of word embeddings makes it challenging to deploy NLP models to memory-constrained devices (e.g., self-driving cars, mobile devices).", "In this paper, we propose a novel method to adaptively compress word embeddings.", "We fundamentally follow a code-book approach that represents words as discrete codes such as (8, 5, 2, 4).", "However, unlike prior works that assign the same length of codes to all words, we adaptively assign different lengths of codes to each word by learning downstream tasks.", "The proposed method works in two steps.", "First, each word directly learns to select its code length in an end-to-end manner by applying the Gumbel-softmax tricks.", "After selecting the code length, each word learns discrete codes through a neural network with a binary constraint.", "To showcase the general applicability of the proposed method, we evaluate the performance on four different downstream tasks.", "Comprehensive evaluation results clearly show that our method is effective and makes the highly compressed word embeddings without hurting the task accuracy.", "Moreover, we show that our model assigns word to each code-book by considering the significance of tasks.", "Deep neural networks have greatly improved the performance in various tasks, such as image classification (Huang et al., 2017), text classification (Liu and Lapata, 2018), and machine translation (Edunov et al., 2018).", "This break-through performance facilitates the demand to deploy such models to embedded systems (e.g., self-driving cars, mobile devices).", "However, the neural models typically require a large storage or memory footprint, which is a significant concern when deploying neural models to memory-constrained devices (Hinton et al., 2015).", "To alleviate this limitation, several works have proposed methods that compress the neural models while minimizing loss of accuracy as much as possible (Han et al., 2015, 2016; Liu and Zhu, 2018).", "However, deploying models for natural language processing (NLP) tasks is challenging.", "Unlike other domains, NLP models have an embedding layer which maps words and phrases to real-valued vectors.", "The problem is that these embeddings usually take more parameters than the remaining networks.", "In practice, for a neural translation model in OpenNMT (Klein et al., 2017), the word embedding parameters accout for 80% of the total parameters.", "Therefore, it is significant to reduce the parameters of the embedding layer for deploying NLP models to memory-constrained devices.", "To compress word embeddings, several works proposed code-book based approaches (Shu and Nakayama, 2018; Tissier et al., 2019), which represent each word as few discrete and shared codes.", "For example, the word dog and dogs could be represented as (3, 5, 2, 1) and (3, 5, 2, 7), respectively.", "This sharing scheme and discrete codes make the embeddings have smaller parameters and interpretability to some extent.", "However, these methods assign the same length of codes to each word without considering the significance of downstream tasks.", "It means that, for a sentiment analysis, excellent and the require the same amount of memory.", "This observation makes room for improvement in compressing word embeddings.", "In this paper, we attempt to further compress word embeddings by adaptively assigning different lengths of codes to each word in 
an end-to-end manner.", "We propose AdaComp that adaptively learns to compress word embeddings by considering downstream tasks.", "The proposed compression works in two steps.", "First, each word in pre-trained word embeddings learns to select its code length in an end-to-end manner by applying Gumbel-softmax tricks (Jang et al., 2016).", "After selecting its code length, each word learns discrete codes through a binary-constraint encoder and decoder network.", "To instill task-specific features into the selection process, we compress each word embedding by learning a downstream task.", "This allows us to learn the task-specific features naturally.", "Compared to prior works, AdaComp could give each word more options to represent its meaning since the proposed model utilizes a number of different code-books.", "To showcase the general applicability of AdaComp, we conduct four different NLP tasks, which are sentiment classification, chunking, natural language inference, and language modeling.", "Comprehensive evaluation results not only show that our method could compress the original word embeddings quite well without hurting task accuracy but also demonstrate that AdaComp assigns each word to different code-books by considering the significance of a task.", "AdaComp could be applied to most existing NLP systems with minor modifications since the proposed model is a network-agnostic, in-place architecture.", "We thus believe that existing NLP systems could benefit from our work.", "We organize the remainder of this paper as follows.", "In Section 2, we discuss related work.", "In Section 3, we describe the proposed method.", "We report our performance evaluation results and analyze our methodology in detail in Section 4 and 5, respectively.", "Finally, we conclude this paper in Section 6.", "In this section, we review several studies that attempt to compress neural models, including an embedding layer.", "The majority of compression work targets the neural networks themselves (e.g., convolutional neural networks, recurrent neural networks), and most of it focuses on compressing neural models in the field of computer vision.", "These approaches usually include pruning, quantization, and low precision representation methods.", "For pruning, several works (Han et al., 2015; Li et al., 2017; Lee et al., 2019) focus on how each connection (i.e., weights) affects the task, and they remove redundant or unimportant connections from the networks.", "Some works (Han et al., 2016; Chen et al., 2016; Louizos et al., 2019) quantize the connections into several bins to enforce weight sharing.", "These approaches represent each connection with representative values, selected by clustering (centroids) or hashing (hash buckets) techniques.", "Representing each connection with low precision (i.e., few bits or binary) is also appealing for compressing neural networks (Anwar et al., 2015; Courbariaux et al., 2015; Hubara et al., 2016).", "In particular, Courbariaux et al. (2015) and Hubara et al. (2016) show that a binary constraint is sufficiently effective in network learning without largely affecting the task accuracy.", "Several studies have proposed compression methods for word embeddings because the majority of parameters in NLP models lies in an embedding layer.", "For example, Ling et al.
(2016) reduces the memory requirement of word embeddings by quantizing each dimension of the embeddings into significantly fewer bits than the standard 64 bits.", "It shows that 4 or 8 bits are enough to represent each word embedding.", "Instead of reducing the parameters of each word embedding, Chen et al. (2016) reduces the number of words in the vocabulary by filtering out uncommon words.", "For the removed words, they reconstruct these embeddings by combining several frequent words.", "Recently, several methods (Shu and Nakayama, 2018; Shi and Yu, 2018; Tissier et al., 2019) decompose each word into a small number of codes and learn corresponding code vectors to represent the original embeddings.", "Shu and Nakayama (2018) uses a deep code-book approach to represent each word.", "To automatically learn discrete codes, they utilize reparameterization tricks in an encoder and decoder architecture.", "Similarly, Tissier et al. (2019) utilizes an auto-encoder with a binary constraint to represent words.", "Compared to the aforementioned methods, AdaComp is the first work that represents each word differently in terms of code length.", "Furthermore, we learn task-specific features directly by learning a downstream task at the same time.", "In this section, we describe the proposed method, which is denoted as AdaComp, in detail.", "[Figure 1: Main strategy of our compression model (AdaComp).] The primary strategy of AdaComp is straightforward and", "is shown in Figure 1.", "We start with the pre-trained word embeddings (e.g., GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013)), and the compression method works in two steps.", "Given an input embedding, AdaComp learns to adaptively select its code length in an end-to-end manner by applying Gumbel-softmax tricks (Jang et al., 2016) (Section 3.1).", "After selecting a code length, each word learns its discrete codes through an encoder and decoder, which has a binary latent space (Section 3.2).", "To represent each word as discrete codes, several code-book approaches build a single code-book C_k, where k is the length of codes.", "Instead of assigning the same length of codes, we adaptively assign different lengths of codes to each word.", "To this end, we have a set of different code-books C = { C_k1, C_k2, ..., C_kn }.", "The objective for the first phase is to select a single code-book from the set of code-books in an end-to-end manner.", "The selection scores are computed from the original embedding e_w as θ_w = σ1(Θ^T σ2(Θ′ e_w + b′) + b) (1), where Θ ∈ R^{d×|C|}, Θ′ ∈ R^{d×d} and b, b′ are trainable weight matrices and biases of the networks, respectively, where d is the dimension of the original embeddings.", "The functions σ1(·), σ2(·) are the softplus and tanh functions, respectively.", "Then, we could select a single code-book by applying an argmax or a sign function to the resultant encoding.", "However, deriving discrete values (i.e., the index of the code-books) in the neural networks is not trivial since the aforementioned functions are not differentiable.", "To handle this problem, several methods have been proposed to deal with discrete values in a neural network naturally.", "In our work, we use the Gumbel softmax trick since we need a one-hot vector to represent the discrete index of the set of code-books.", "The Gumbel softmax allows the neural networks to naturally have a k-dimensional one-hot vector in the middle of the network.",
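A sketch of this code-book selection step in PyTorch. The two-layer softplus/tanh scoring network follows Eq. 1 above, and `F.gumbel_softmax` supplies the Gumbel noise and temperature of Eq. 2 (given next); the dimensions are illustrative, and this is a reading of the description, not released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodebookSelector(nn.Module):
    """First phase of AdaComp: pick one of |C| code-books per word."""

    def __init__(self, dim: int, num_codebooks: int):
        super().__init__()
        self.inner = nn.Linear(dim, dim)             # Theta', b'
        self.outer = nn.Linear(dim, num_codebooks)   # Theta, b

    def forward(self, e_w: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # Eq. 1: theta_w = softplus(Theta^T tanh(Theta' e_w + b') + b) > 0,
        # so log(theta_w) is well defined for the Gumbel trick.
        theta = F.softplus(self.outer(torch.tanh(self.inner(e_w))))
        # hard=True emits a one-hot u_w in the forward pass while gradients
        # flow through the soft Gumbel-softmax relaxation (Eq. 2).
        return F.gumbel_softmax(torch.log(theta + 1e-9), tau=tau, hard=True)

selector = CodebookSelector(dim=300, num_codebooks=4)
u_w = selector(torch.randn(8, 300))  # one-hot selections for a batch of 8 words
```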
"Let u_w be the one-hot vector for a word w; its i-th element is computed as follows: u_w^i = softmax((log θ_w^i + g_i) / τ) = exp((log θ_w^i + g_i) / τ) / Σ_{j=1}^{|C|} exp((log θ_w^j + g_j) / τ) (2), where g_1, ..., g_|C| are i.i.d. noise samples drawn from the Gumbel distribution (Footnote 1: The Gumbel distribution can be sampled using inverse transform sampling by drawing u ~ Uniform[0, 1] and computing g = -log(-log(u)).) and τ is the relaxation factor of the Gumbel softmax.", "Similarly, Shu and Nakayama (2018) utilized the Gumbel softmax for compression.", "However, they used it to derive the discrete codes of each word, not the index of the set of code-books as in AdaComp.", "After selecting a specific code-book from the set C, AdaComp learns the discrete codes in the selected code-book.", "To this end, we use a binary-constraint encoder and decoder, which has a binary latent space.", "When the training converges, the binary latent vector of each word is used as the discrete code, and the decoder is used as the code vectors in each code-book.", "Again, we start from the original word embedding.", "To produce discrete codes, we feed the embeddings to the binary-constraint networks.", "Let w be the word in an input text and n be the code length of the selected code-book; the code learning works as follows: e′_w = W φ(W^T e_w + b) + b′ (3), where W ∈ R^{d×n} and b, b′ are trainable weight matrices and biases in the encoder and decoder, respectively.", "As can be seen from the equation, we use the same weights in the encoding and decoding phases.", "This is because such tied weights enable faster training and have a greater regularization effect than individual weights (Alain and Bengio, 2014; Gulordava et al., 2018).", "The function φ is the binary constraint function.", "We use the following threshold function (Footnote 2): φ(x_i) = ReLU(Sign(x_i)) = 1 if x_i ≥ 0, and 0 otherwise. This function produces the binary codes, which consist of 1 and 0.", "However, we face the same problem as in the previous section.", "The derivative of the sign function is zero almost everywhere, making it incompatible with back-propagation.", "To naturally learn the binary codes in an end-to-end manner, we apply the straight-through estimator (Hinton, 2013) to the threshold function.", "This estimator allows gradients to skip the threshold function.", "In our work, we use a different version of the straight-through estimator to take into account a saturation effect.", "Let the gradient above the threshold function be ∇L_N; we obtain the gradient of the threshold function as follows: ∇L_φ = ∇L_N · 1_{|g| ≤ 1} (4), where g is the value of the gradient above the threshold function.", "This function allows us to naturally learn binary codes by preserving the information of the gradients and canceling the gradient when g is too large, which could mess up the training.", "Thus far, we adaptively select a code-book from the set of code-books with different lengths, and produce the binary codes of each word.", "To jointly learn the above two phases in an end-to-end manner, we relate them as follows: o_w = E_w^T u_w (5), where E_w ∈ R^{|C|×d} stacks the reconstructed embeddings of w for all code lengths.", "By multiplying the selection vector (i.e., u_w) with the reconstructed embeddings, AdaComp learns the two phases in an end-to-end manner.", "We feed the reconstructed embedding o_w to task-specific networks for learning a downstream task.", "(Footnote 2: We also experimented with applying only the sign function, which results in -1 and +1; we empirically found that the two functions produced nearly identical results, so we use ReLU with Sign for decoding efficiency.)",
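The binary code learning of Eqs. 3-4 can be sketched as a custom autograd function with tied weights. This is one minimal reading of the gradient rule in Eq. 4 (gating on the magnitude of the incoming gradient), not the authors' released code:

```python
import torch
import torch.nn as nn

class BinaryGate(torch.autograd.Function):
    """phi(x) = ReLU(Sign(x)): 1 if x >= 0, else 0."""

    @staticmethod
    def forward(ctx, x):
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        # Eq. 4: pass the upstream gradient through unchanged, but cancel it
        # where its magnitude exceeds 1 (the saturation guard in the text).
        return grad_out * (grad_out.abs() <= 1).float()

class TiedBinaryAE(nn.Module):
    """Eq. 3 with tied encoder/decoder weights W in R^{d x n}."""

    def __init__(self, dim: int, code_len: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, code_len) * 0.01)
        self.b = nn.Parameter(torch.zeros(code_len))
        self.b_out = nn.Parameter(torch.zeros(dim))

    def forward(self, e_w: torch.Tensor) -> torch.Tensor:
        codes = BinaryGate.apply(e_w @ self.W + self.b)  # binary codes in {0, 1}
        return codes @ self.W.t() + self.b_out           # reconstructed e'_w
```

One such auto-encoder per code-book, combined with the selector's one-hot u_w as in Eq. 5, yields the reconstructed embedding o_w fed to the task network.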
"To cover a large number of words in the vocabulary, reducing the redundancy of the representations of each code vector is important.", "We thus put an orthogonality constraint on the code vectors, which penalizes redundant latent representations and encourages each code vector to encode different aspects of the original word embeddings: P = ||W^T W − I||_F (6),", "where W is the parameter matrix of the code vectors (i.e., the decoder) and I is an identity matrix.", "|| · ||_F stands for the Frobenius norm of a matrix.", "We add this term to our objective function.", "Since AdaComp learns compression by learning a downstream task, the objective function depends on each task.", "For example, if the task is sentiment classification, the objective function could be the negative log-likelihood over sentiments.", "Let the task objective function be L_task; the total objective function is as follows: L = L_task + λP (7), where λ is the control factor of the orthogonality penalty, and we set this to 0.01.", "We empirically found that pretraining AdaComp significantly increases the performance for several tasks (detailed in Section 5.1).", "We thus pretrain our model using an auto-encoder loss, which is as follows: L_pre = Σ_{w∈V} ||o_w − e_w||_2^2 (8). When the pretraining loss converges, we attach the pre-trained AdaComp to the embedding layer of task-specific networks and learn a downstream task using Eq. 7 (a code sketch of this two-stage schedule appears at the end of Section 4).", "In this section, we show the performance evaluation of the proposed model.", "To showcase the general applicability of AdaComp, we conduct four different tasks: sentence classification, chunking, natural language inference, and language modeling.", "Through the above tasks, we validate the efficacy of AdaComp in the settings of many-to-one (sentiment classification), many-to-many (chunking, language modeling), and multiple inputs (natural language inference).", "The proposed compression model starts from pre-trained word embeddings.", "In the experiments, we use the publicly available GloVe (Footnote 3) (Pennington et al., 2014) with 300 dimensions for 400k words.", "For the hyper-parameter settings, we use the Adam optimizer (Kingma and Ba, 2014) with a 0.001 learning rate and a batch size of 64.", "We choose the above parameters by validating on both the sentiment classification and natural language inference tasks.", "In this paper, we examine the following baselines, which use different kinds of compression methods:", "QWE (Ling et al., 2016): This model quantizes the weights from floating-point to a few bits of precision, less than the standard 64 bits.", "We evaluate two settings: 4- and 8-bit representations.", "Pruning (Han et al., 2015): This model prunes redundant weights from the networks.", "We prune the weights of the word embeddings until this technique removes 80% or 90% of the neurons from the embeddings.", "NC (Shu and Nakayama, 2018): This model compresses the pre-trained embeddings with a single code-book learned by a deep neural network.", "We compare two different settings: the moderate size (16x16 code-book) and the large size (32x16 code-book).", "Bin (Tissier et al., 2019): This model compresses word embeddings through an auto-encoder which has a binary constraint on its latent space.", "Among their two methods, we choose rec since it performs better with deep neural networks (i.e., LSTM, CNN). (Footnote 3: https://nlp.stanford.edu/projects/glove/)", "We compare two settings that have 64 and 128 binary codes.",
"AdaComp (Ours): This is the proposed model in this paper.", "We use four different codebooks since we found that using four codebooks leads to the most effective performance with a memory requirement (detailed in Section 5.2).", "We use three different settings on the four code-books which have ( 128 , 64, 32, 16), ( 64 , 32, 16, 8) and ( 32 , 16, 8, 4) length of code-books.", "On the tables and figures, we use the max length of codes to denote each model.", "The aforementioned methods do not learn task-specific features since they learn to compress embeddings using the auto-encoder loss.", "To fairly compare with our method, we apply the strategy in (Shu and Nakayama, 2018) to each model.", "In short, we first fine-tune the original embeddings to tasks and then compress the learned embeddings through the above methods.", "4 Evaluation metrics We report both a task performance and a total memory size.", "The total memory size is estimated from all parameters which are used to represent all words in tasks.", "Note that it does not contain the size of task-specific networks.", "For our method, we report memory size and performance when we deploy our model to other systems.", "It contains the parameters of multiple code-books and binary codes about each word.", "The memory size of the original embeddings about each task is listed in Table 1.", "Table 2 shows the overall results on four tasks.", "We describe each task and the task-specific networks as below.", "Sentence classification is the task of classifying a sentence into pre-defined classes.", "We use the stanford sentiment treebank (SST) dataset as a representative dataset.", "The SST has 5 classes about sentiment (very negative, negative, neutral, positive, very positive).", "The performance is measured 4 We have also applied an end-to-end compression learning to each model.", "However, we confirmed that this training was only significant in AdaComp and, for the other methods, produced nearly identical results with the strategy in (Shu and Nakayama, 2018).", "by the accuracy on test set.", "For text classification model, we reproduce the LSTM model used in (Zhang et al., 2015) as a baseline.", "It feeds word embeddings in sequence, and averages hidden states of the last layer to represent an input sentence for classification.", "In this model, we set the hidden states to 450 dimension and use two-stacked LSTMs.", "As can be seen from the table, code-book approaches (i.e., NC, Bin, AdaComp) basically show better results than others in both performance and memory size.", "Among them, AdaComp makes more highly compressed embeddings than others with better performance.", "For example, AdaComp (32 ) achieve as much as 11% improvement on test accuracy compared to other code-book approaches which use the same number of codes with the longest codes in ours.", "Furthermore, our model requires nearly 2x less memory sizes compared to others.", "Chunking is the task of dividing a sentence into syntactically correlated parts of words.", "The CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000) is a benchmark dataset for text chunking.", "It has 24 tags about each word with its start and end symbols.", "The performance is measured by F1 score.", "For the chunking model, we use an LSTM-based sequence tagger which was proposed by (Huang et al., 2015).", "We set the hidden states to 300 dimensions and use two-stacked LSTMs.", "The results are shown in the same table.", "Even though the quantization method (i.e., QWE 8-bit) achieves the best 
performance when the values are restricted to 8 bits, its compression ratio is much lower than that of the other methods, and the performance starts to degrade as fewer bits are used to represent words.", "Compared to the other code-book methods, AdaComp achieves strong performance with highly compressed embeddings.", "For example, AdaComp (128) does not hurt the accuracy of the original embeddings while compressing them approximately 44x.", "Textual entailment is the task of determining whether a hypothesis is true, given a premise.", "The Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) dataset is a benchmark for this task.", "This dataset contains approximately 550K hypothesis/premise pairs with entailment, contradiction, and neutral labels.", "For this task, we use the LSTM-based encoder model proposed by Bowman et al. (2016).", "It uses two different LSTMs with 300-dimensional hidden states to encode each input (i.e., the premise and the hypothesis).", "The concatenated vectors for the two sentences are classified into the three labels.", "Even though the performance of all methods, including ours, is lower than with the original embeddings, AdaComp yields strong performance with a high compression ratio on this task.", "Compared to other methods that use the largest memory, the proposed model (i.e., AdaComp (128)) requires the smallest memory. [Figure 2: Ratio of code-book assignment in each setting.]", "Language modeling is the task of scoring how natural a sentence is relative to the training data.", "This task has been widely used in several mobile applications, e.g., recommending the next word or sentence based on a user's text.", "In this task, we use the Penn Treebank (PTB) to evaluate the performance.", "We report test perplexity for each method.", "For this task, we use the word-based LSTM model used by Kim et al. (2016).", "We select a medium-size model with 650-dimensional hidden states to encode each word and apply dropout (Srivastava et al., 2014) to the top of the LSTMs.", "Similar to the previous task, the performance of the methods is lower than with the original embeddings.", "We conjecture that the lower performance comes from these tasks (i.e., language modeling, natural language inference) requiring more generalized features than the other tasks.", "This is why these tasks are used to pretrain neural models for various NLP tasks (Cer et al., 2018; Radford et al.).", "Compared to the others, again, AdaComp achieves the best results in terms of both metrics.", "Before AdaComp learns to compress word embeddings, we pretrain the model using the auto-encoder loss (Eq. 8).",
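A minimal sketch of this two-stage schedule: auto-encoder pretraining with Eq. 8, then task training with Eq. 7 and the orthogonality penalty of Eq. 6. Here `model` (bundling the selector and the tied auto-encoders sketched above, exposing its decoder matrices), `task_network`, and both loaders are hypothetical stand-ins; the optimizer settings follow Section 4.1:

```python
import torch

def orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    # Eq. 6: push the code vectors (columns of W) towards orthogonality.
    gram = W.t() @ W
    return torch.norm(gram - torch.eye(gram.shape[0], device=W.device), p="fro")

opt = torch.optim.Adam(model.parameters(), lr=0.001)

# Stage 1 (Eq. 8): reconstruct the original embeddings over the vocabulary.
for e_w in pretrain_loader:                 # batches of original GloVe vectors
    loss = ((model(e_w) - e_w) ** 2).sum(dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 (Eq. 7): plug the compressor into the task network and fine-tune.
lam = 0.01
for batch in task_loader:
    task_loss = task_network(model(batch.embeddings), batch.labels)
    loss = task_loss + lam * sum(orthogonality_penalty(W) for W in model.decoders)
    opt.zero_grad(); loss.backward(); opt.step()
```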
8).", "To show that pretraining step is indeed effective, we report accuracy and a ratio of code-book assignment.", "Here, we evaluate the performance of all tasks when we use different set of code-books (detailed in Section 5.3).", "Figure 3a shows the performance results.", "The result shows", "that the model with pretraining performs better than the model, which is not pretrained.", "This is clearly evident when we use smaller code-books to represent words.", "We believe that the pretraining step guides our model towards basins of attraction of minima that support a better generalization.", "This is the similar results with (Erhan et al., 2010).", "Figure 2 shows the comparison of the codebook assignment on each setting for the SNLI task.", "When we only pretrain the compressing model, the large portion of words, around 80%, is assigned to the largest code-book (i.e., 128).", "However, when we fine-tune the pre-trained models to the task, the ratio of the large one is significantly decreased.", "This means that fine-tuning could reduce the memory requirement by a large margin.", "Without the pretraining step, fine-tuning model achieves a smaller memory size than the pre-trained models.", "However, we have shown that pretraining leads to better Figure 4: Visualization of the reconstructed embeddings with their code-book assignment.", "We evaluate the performance of our method when we use different lengths or numbers of code-books.", "We first plot the results of different lengths of codebooks in Figure 3a.", "Here, we use four code-books as default and the length of codes is divided by two along with the next smaller code-book.", "For example, the value 64 in the axis means (64, 32, 16, 8) and 32 means (32, 16, 8, 4).", "As you can see in Figure 3a, utilizing the large size of code-books leads to improved performance than the models with smaller lengths.", "These results come from that larger code-books could represent more aspects of original embeddings.", "Figure 3b shows the performance variation of different number of code-books.", "Here, we use 128 code vectors and divide these vectors into several code-books.", "The x-axis means the number of code-books that correspond to (128), (64, 64), (64, 32, 32), (64, 32, 16, 16), (64, 32, 16, 8, 8).", "We observe that the performance does not depend on different code-books very much compared to lengths of code-books.", "To get better performance with high compression ratio, we have used four code-books in the experiments.", "To confirm how the model assigns each word to different code-books, we visualize the code-book assignment.", "To this end, we project the reconstructed embeddings into 2-dimensional space using t-SNE (Maaten and Hinton, 2008), and we use the embeddings when we perform the sentiment classification task using AdaComp (64).", "To show important words (i.e., sentiment words) to the task, we take the sentiment words (positive and negative) from (Hu and Liu, 2004) and denote these words if they existed in the embeddings of AdaComp.", "Figure 4 shows the 2-dimensional projection of the reconstructed embeddings with their assigned code-books.", "We observe that important sentiment words are assigned to the longest code-book, and the ratio of sentiment words are significantly decreased along with the smaller code-books.", "This result shows that AdaComp uses longer codes to represent task-sensitive words and shorter codes to represent less significant words to the task.", "In this paper, we have described AdaComp that adaptively 
compresses word embeddings by using different lengths of code-books.", "To this end, we have used the Gumbel-softmax trick and binary-constraint networks to learn the code-book selection and its discrete codes in an end-to-end manner.", "To showcase the general applicability of AdaComp, we conduct four different NLP tasks: sentence classification, chunking, natural language inference, and language modeling.", "Evaluation results have clearly shown that AdaComp obtains better results than other methods in terms of both accuracy and memory requirements.", "We also found that AdaComp assigns each word to different code-books by considering the significance of tasks.", "Although we have focused on compressing the embeddings by learning task-specific features, AdaComp could be used for NLP tasks without fine-tuning.", "We believe that our method can benefit simultaneously from other compression techniques, such as pruning (Han et al., 2016) and low-precision representation (Ling et al., 2016).", "We leave this as an avenue for future work.", "This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.", "2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)), and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1A2A1A05078380)." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "abstain", "objective", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "method", "method", "abstain", "other", "other" ]
[ "Abstract Spatial commonsense, the knowledge about spatial position and relationship between objects (like the relative size of a lion and a girl , and the position of a boy relative to a bicycle when cycling ), is an important part of commonsense knowledge.", "Although pretrained language models (PLMs) succeed in many NLP tasks, they are shown to be ineffective in spatial commonsense reasoning.", "Starting from the observation that images are more likely to exhibit spatial commonsense than texts, we explore whether models with visual signals learn more spatial commonsense than text-based PLMs.", "We propose a spatial commonsense benchmark that focuses on the relative scales of objects, and the positional relationship between people and objects under different actions.", "We probe PLMs and models with visual signals, including vision-language pretrained models and image synthesis models, on this benchmark, and find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than other models.", "The spatial knowledge from image synthesis models also helps in natural language understanding tasks that require spatial commonsense.", "Code and data are available at https://github.com/ xxxiaol/spatial-commonsense .", "Spatial perception, the ability to detect the spatial position and to infer the relationship between visual stimuli (Donnon et al., 2005; Saj and Baris-nikov, 2015), is basic but important for human beings (Pellegrino et al., 1984).", "It is of everyday use, from understanding the surrounding environment, like when seeing a woman sitting in a car with her hands on the steering wheel, we know she is probably driving , to processing spatial information and performing reasoning, like navigating Corresponding author.", "through a dense forest .", "We regard the knowledge needed in spatial perception as spatial commonsense.", "Humans start to develop spatial perception and acquire spatial commonsense from infancy, and apply the commonsense through lifetime (Kuipers et al., 1990; Poole et al., 2006).", "Although text-based Pretrained Language Models (PLMs) achieve great performance on various commonsense reasoning tasks (Davison et al., 2019; Zhou et al., 2020), they are shown to be ineffective when dealing with spatial commonsense.", "Zhang et al. (2020) and Aroca-Ouellette et al. (2021) show that current PLMs lack the ability to reason about object scales.", "Bhagavatula et al. 
(2020) find that BERT (Devlin et al., 2019) underperforms on instances involving spatial locations.", "The struggle of PLMs with spatial commonsense is partly because spatial commonsense is rarely expressed explicitly in texts.", "We may write sentences like lions are big animals, but we seldom explicitly mention how big lions are; we also rarely write about the spatial relationship between a boy and a bicycle when he is cycling.", "Spatial commonsense is exhibited in images more commonly (Cui et al., 2020).", "As shown in Figure 1, the two Wikipedia articles provide little spatial information, but a picture of a lion and a girl provides a reference for the size of a lion; and a painting of a boy riding a bicycle depicts that he sits on the bicycle.", "Hence, a natural idea is to elicit spatial knowledge from models with visual signals.", "We first study whether models with visual signals learn more spatial knowledge than text-only models.", "We select Vision-Language PreTrained Models (VL-PTMs) and Image Synthesis Models (ISMs) for investigation.", "VL-PTMs encode texts and images together, fusing their features to deal with downstream tasks.", "ISMs take texts as input, and generate images based on the texts.", "To evaluate the spatial commonsense in PLMs and models with visual signals, we design a benchmark that involves two subtasks: 1) comparing the sizes and heights of different objects (like a lion and a girl), and 2) determining the positional relationship between a person and an object when a certain action happens (like a boy's position when riding a bicycle).", "The subtasks are designed to examine the model's capability to master two kinds of spatial commonsense: understanding spatial scales, and the relationship between surrounding objects and ourselves.", "As shown in Figure 2, we probe models with text prompts on this benchmark.", "We feed text prompts with masks to PLMs and VL-PTMs, and take the possible word with the highest probability as their prediction (a code sketch of this scoring appears at the end of this section).", "We probe ISMs in a similar way: we first feed the text prompts to ISMs and then evaluate the generated images.", "We evaluate the images with two methods: automatically comparing bounding boxes of objects and conducting human evaluation.", "Results show that models with visual signals learn more accurate spatial commonsense than PLMs.", "Besides the performance comparison, we are also interested in the quality of the spatial commonsense learned by different models.", "We investigate how consistent the spatial knowledge learnt by a model is, like whether it can manifest a lion is larger than a girl and a girl is smaller than a lion simultaneously; and to what extent models can generalize the knowledge when uncommon scenarios like an enchantress lights the sparkler appear.", "We observe that ISMs are capable of generating consistent spatial knowledge and the performance is robust in uncommon scenarios.", "The following problem is how to benefit natural language understanding tasks with the spatial knowledge from ISMs.", "We investigate this in the question answering scenario.", "Take a question like A boy is riding a bicycle.", "Is he on the bicycle?", "We generate an image about the question context (a boy who is riding a bicycle) with a text prompt using ISMs, and feed both the question and the generated image into vision-language models to predict an answer.", "This framework outperforms strong question answering models pretrained on texts only.", "While this is a simplified scenario of spatial commonsense reasoning, it manifests a possible way to employ the spatial knowledge learned by ISMs in natural language understanding.",
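A minimal sketch of the mask-filling probe referenced above, using the HuggingFace transformers API; the checkpoint is illustrative, and the prompt is the benchmark's default form before the prompt-search step described later:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-large-uncased").eval()

def probe_scale(obj_a: str, obj_b: str, answers=("larger", "smaller")) -> str:
    """Fill 'A {obj_a} is [MASK] than a {obj_b}.' with the best answer word."""
    prompt = f"A {obj_a} is {tok.mask_token} than a {obj_b}."
    enc = tok(prompt, return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, pos]
    ids = [tok.convert_tokens_to_ids(a) for a in answers]
    return answers[int(torch.argmax(logits[ids]))]

print(probe_scale("lion", "girl"))                          # size: "larger" expected
print(probe_scale("girl", "lion", ("taller", "shorter")))   # height variant
```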
"Motivated by the observation that images contain more spatial commonsense than texts, we 1) design a framework, including the data and probing methods, to compare the spatial reasoning ability of models with different modalities; 2) propose methods to evaluate the quality of learned spatial commonsense, and find that models with visual signals, especially ISMs, learn more precise and robust spatial knowledge than PLMs; and 3) demonstrate the improvement in spatial commonsense question answering with the help of visual models.", "Object Scales.", "Bagherinezhad et al. (2016) build a dataset for objects' size comparison, and Elazar et al. (2019) provide distributional information about objects' lengths.", "Forbes and Choi (2017) also involve spatial comparison but are criticized for ill-defined comparison (Elazar et al., 2019).", "Aroca-Ouellette et al. (2021) design a physical reasoning dataset that requires not only spatial commonsense but also a complex reasoning process, which is extremely challenging for existing models.", "We choose the formulation of object comparison in pairs as this kind of knowledge is easy to probe from different models.", "Spatial Relationship.", "Collell et al. (2018) introduce a dataset of spatial templates for objects under different relations, but the spatial relations are represented as relative positions of bounding boxes, which are hard to express in language.", "Yatskar et al. (2016) extract statements of spatial relationship from object co-occurrences in MS-COCO (Lin et al., 2014).", "Mirzaee et al. (2021) design a textual spatial reasoning benchmark, and Johnson et al. (2017) and Hudson and Manning (2019) involve spatial reasoning in images, but they focus on logical reasoning rather than commonsense.", "[Figure 2 prompt example: A sofa is [MASK] than a mountain.] In contrast", "to them, we build a dataset to describe the spatial relationship between people and objects in certain actions with preposition words.", "Early attempts at probing PLMs (Liu et al., 2019a; Hewitt and Manning, 2019) mainly train a classifier on the task of interest with the encoded representations.", "However, the probing performance is highly influenced by the probe design (Pimentel et al., 2020), and thus hardly reflects the ability of PLMs.", "Recently, prompt-based methods (Petroni et al., 2019; Zhou et al., 2020) have become more prevalent for studying what knowledge PLMs already encode.", "PLMs take a prompt as input, and generate the continuation (for generative PLMs) or predict masked words (for discriminative PLMs).", "This does not need additional training, and only a small development set is used to choose optimal prompts and answers (Jiang et al., 2020).", "In this work, we probe PLMs and VL-PTMs with prompts.", "Prompt-based methods are also used in model training (Schick and Schütze, 2021; Zhou et al., 2021), while we focus on the knowledge already learned by models.", "Basaj et al. (2021); Oleszkiewicz et al.
(2021) try to apply probing methods to the computer vision domain, but they focus on probing representations of visual models.", "In contrast, we probe ISMs by evaluating the generated images.", "Size and Height.", "Inspired by the cognitive discovery (Hersh and Caramazza, 1976) that people tend to categorize objects' scales into fuzzy sets, we select 25 common objects in daily life, and categorize them into 5 groups as shown in Table 1a to construct the dataset for size comparison.", "[Table 1a, size groups: 1: ant, coin, nut, bullet, dice; 2: bird, cup, shell, bottle, wallet; 3: tyre, chair, microwave, dog, suitcase; 4: human, sofa, bookshelf, tiger, bed; 5: house, cinema, mountain, truck, plane.] Typical", "objects in the former group are smaller than those in the latter group.", "We form 250 pairs of objects from different groups, like (ant, bird), where the first object is smaller than the second in commonsense.", "Models are asked to compare the size of objects in pairs.", "To avoid an imbalance of the answer distribution, we also consider the reversed pairs like (bird, ant), so there are 500 instances in total.", "The dataset for comparing objects' heights is constructed similarly, as shown in Table 1b.", "We also form 500 instances with the objects.", "The comparison between objects is validated by 5 human annotators for both datasets.", "Positional Relationship.", "The positional relationship dataset consists of human actions regarding objects and the most likely positional relation between the person and the object.", "We consider four types of positional relations: above, below, inside, beside, as they do not overlap with each other.", "We select common objects, and write actions between people and the objects.", "The actions do not contain prepositions (i.e., we avoid phrasings like sit on the chair).", "[Figure 3 template: A man <verb> the car.] Each ob-", "ject is accompanied by two actions with different positional relations.", "Take Figure 3 as an example.", "The man is beside the car when washing the car, whereas he is inside the car when driving it.", "Therefore, the relation cannot be easily inferred from collocations between the person and the object.", "The dataset contains 224 instances, validated by 5 annotators.", "We probe PLMs and VL-PTMs through masked word prediction.", "Given a text prompt with masks and a set of possible words, a model calculates the probability of each possible word filling the masked position.", "The word with the highest probability is regarded as the prediction.", "We also probe ISMs through text prompts.", "The input is a piece of descriptive text, and the output is the image generated by an ISM.", "We assess the image with two methods as described in Section 3.3.", "PLMs are found to perform poorly in scenarios involving complex reasoning over spatial knowledge (Aroca-Ouellette et al., 2021), and we want to investigate whether they even fail in early stages, like whether they have learned spatial knowledge.", "So we probe models with simple tasks.", "In the subtask of size and height, the prompt for PLMs and VL-PTMs is in the form of O_a is [MASK] than O_b, where (O_a, O_b) is an object pair.", "The possible answer set is { larger, smaller } for size and { taller, shorter } for height.", "The prompt for ISMs is in the form of O_a and O_b, and the objects in the generated images are compared for size and height.", "In the subtask of positional relationship, the prompt for PLMs and VL-PTMs contains an event scenario and a masked token for the positional relationship, like A woman washes the car.", "She is [MASK] the car.", "The possible answer set is { above, below, inside, beside }.",
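The same scoring carries over to the positional-relationship subtask, restricted to the four prepositions. This sketch reuses `tok` and `mlm` from the previous block and hardcodes the car scenario of Figure 3 purely for illustration:

```python
import torch

RELATIONS = ("above", "below", "inside", "beside")

def probe_relation(action: str) -> str:
    """Score the four prepositions at the masked slot of the scenario prompt."""
    prompt = f"A woman {action} the car. She is {tok.mask_token} the car."
    enc = tok(prompt, return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, pos]
    ids = [tok.convert_tokens_to_ids(r) for r in RELATIONS]
    return RELATIONS[int(torch.argmax(logits[ids]))]

print(probe_relation("washes"))  # expected: "beside"
print(probe_relation("drives"))  # expected: "inside"
```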
car.", "The possible answer set is { above, below, inside, beside } .", "The prompt for ISMs describes the scenario only, like A woman washes the car.", "We assess the images generated by ISMs with two methods.", "We first use the spatial information of bounding boxes (referred to as ISM (Box)).", "For each object mentioned in the prompt, we select the classified bounding box with the highest confidence.", "To mitigate the effect of viewpoint (an object closer to the camera may appear larger in the image), we compute the average depth of the box as the object's depth.", "We use the object detector from Zhang et al. (2021), and the depth estimator from Godard et al. (2019).", "When probing the relative size, we compare area depth 2 of the two objects' boxes; and when probing the relative height, we compare height depth .", "When classifying positional relations, we use the mapping rules between spatial relations and image regions from Visual Dependency Grammar (VDG) (Elliott and Keller, 2013).", "We list the rules in Appendix A.1.", "Some generated images are vague while object detection models are trained to process clear pictures, so a number of objects are not recognized.", "To precisely assess the generated images, we conduct human evaluation on all images (referred to as ISM (Human)).", "Annotators are asked to compare the size/height of the objects in the images (for the first subtask) and classify the positional relationship between the person and the object (for the second subtask).", "Each image is evaluated by two annotators, and the average performance is reported.", "Specifically, we report the accuracy and macro F1 between models' predictions and correct answers.", "Besides the performance of ISMs on the subset of recognized instances, we also report the performance on the full dataset, giving the unrecognized instances a random guess.", "We take BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) as examples of text-only PLMs.", "For VL-PTMs, we choose VinVL (Zhang et al., 2021), which performs well in various vision-language tasks.", "It uses a transformer-based backbone and is pretrained on various vision-language datasets including image caption datasets, visual QA datasets, etc.", "As it preserves the masked word prediction objective like PLMs, it can also be probed with prompts.", "We choose VQGAN+CLIP 1 1 Originated by Ryan Murdoch, @advadnoun on Twitter.", "Implementation details are in Appendix A.2.", "(a) Comparing sizes of objects.", "Both objects are recognized by the object detection model in 15% images and are recognized by humans in 86% images.", "as a representative of ISMs.", "It uses CLIP (Rad-ford et al., 2021) to guide VQGAN (Esser et al., 2021) to generate images that best match the given text.", "To make a fair comparison regarding model size, we select BERT-large, RoBERTa-large, and VinVL-large.", "We use VQGAN with codebook size Z = 16384 and downsampling factor f = 16 , and CLIP with ViT-B/32 (Dosovitskiy et al., 2020) architecture.", "All four models are of similar sizes.", "As language models are sensitive to the expressions in probing (Liu et al., 2021) (like changing an answer choice from larger to bigger , the predictions of BERT may differ a lot), we generate new prompts and answers based on those originally designed in the benchmark, and search for the optimal ones for PLMs and VL-PTMs.", "Similar to Jiang et al. 
"To select prompts and answers, we split the dataset into 5 folds, where different folds do not share the same objects.", "For each run, one fold is used as the development set to choose the best candidate, and the model is probed on the other folds with the chosen prompt.", "We report the average performance over 5 runs.", "Size and Height.", "Table 2 reports the probing performance of comparing the scales of objects.", "We also demonstrate probing results on RelativeSize (Bagherinezhad et al., 2016) in Appendix B. We observe that PLMs perform similarly.", "[Figure 4: Images generated by ISM in scale comparison, for prompts such as A house and a bird, A bottle and a bookshelf, A plane and a bullet, A bird and a trash can, A trash can and a theatre, and An apartment and a horse; each image is assessed by Human and Box evaluation for size or height.] Even the", "best PLMs are slightly better than random guesses, indicating they are ineffective in predicting object scales.", "Although RoBERTa is trained on more texts and assumed to encode more knowledge, its performance is similar to BERT's.", "This shows that PLMs do not learn much spatial commonsense from texts even as the pretraining corpus grows substantially.", "With the help of visual features in pretraining, VinVL greatly outperforms PLMs.", "ISM (Box), which simply compares bounding boxes in images generated by the ISM, also outperforms PLMs.", "Since only a small portion of instances are recognized with bounding boxes, if we only consider the predictions on these instances, the gap between ISM (Box) and PLMs is more than 15%.", "These [Table 3: Probing performance on positional relationship (%). Acc (avg./std.) and F1 (avg./std.): BERT 26.1/4.15, 19.0/5.20; RoBERTa 31.0/15.4, 20.1/9.29; VinVL 56.1/7.09, 41.8/6.69. Acc and F1: Best PLM 31.0 (32.5), 20.1 (17.6); VinVL 56.1 (56.0), 41.8 (36.0); ISM (Box) 33.0 (42.5), 26.5 (26.1); Best PLM 31.0 (30.5), 20.1 (20.1); VinVL 56.1 (56.4), 41.8 (42.9); ISM (Human) 73.4 (75.4), 65.1 (68.0).]", "indicate that models with visual signals learn accurate spatial commonsense knowledge from images.", "ISM (Box) outperforms VinVL on those recognizable instances (81.6 vs.
53.8), but the recognition ratio is admittedly low.", "We conduct human evaluation on the generated images for a more precise assessment.", "More than 80% of images are recognized by humans, and these images reflect spatial commonsense accurately compared to PLMs and VinVL.", "The gap between VinVL and ISM (Human) may be due to different ways of using visual signals in pretraining.", "A training objective of VinVL, and other VL-PTMs, is aligning text with image regions.", "The discriminative features of objects are amplified, while other features may not receive as much attention.", "For instance, the shape and color are the discriminative features of an apple, and its size is not that important in recognition.", "In image synthesis, models need to learn comprehensive knowledge of objects for reconstruction, and spatial knowledge may be learned implicitly in this process.", "Figure 4 demonstrates images generated by the ISM given the prompts of object pairs.", "ISM grasps the main characteristics of the objects, including their scales.", "Some objects (like theatre at the bottom of the middle column) can be identified by humans but are difficult for the object detection model because they are obstructed by objects in the foreground.", "And some objects are generated in multiple fragments (like plane and horse in the right column), and therefore cannot be recognized by either the object detection model or humans.", "Positional Relationship.", "The probing performance on positional relationship is shown in Table 3. VinVL outperforms PLMs by more than 20%, and ISM (Human) outperforms PLMs by more than 35%, suggesting that models with visual signals learn more knowledge of the scenarios, especially the positions of objects relative to people.", "The gap between PLMs and ISM (Box) is smaller compared to the gap in the subtask of size and height.", "One reason is that the rules defined in VDG cannot perfectly reflect the true positional relationship in images.", "For example, the man is beside the car in the left image of Figure 3, but he will be regarded as inside the car by the rules, as the region of the car covers the region of the man.", "Text-based PLMs tend to lean towards certain positional relations between a person and an object, without referring to the action.", "In 64% of cases, RoBERTa chooses the same option for a (person, object) pair with different actions, while the proportion is 21% for VinVL and 28% for ISM (Human).", "Models that master better spatial knowledge should be able to infer the relative scale of two objects from intermediate references.", "For example, if a model knows a dog is larger than an ant and a sofa is larger than a dog, it may learn a sofa is larger than an ant, even if it has not seen sofa and ant together.", "We inspect models on how consistent their probing results are.", "The consistency is measured in two aspects: symmetry and transitivity.", "Symmetry implies that if a model predicts A > B, then it should also predict B < A, and vice versa: A < B ⇒ B > A.", "Here > and < are in terms of size or height.", "We enumerate the object pairs and count the percentage of predictions that meet the symmetry criterion.",
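Both criteria can be computed directly from a table of pairwise predictions; a small sketch (the transitivity criterion is defined in the next paragraph):

```python
from itertools import permutations

def symmetric_consistency(pred: dict) -> float:
    """pred maps an ordered pair (a, b) to '>' or '<'; a pair is consistent
    if the reversed pair carries the flipped relation."""
    flip = {">": "<", "<": ">"}
    both = [((a, b), r) for (a, b), r in pred.items() if (b, a) in pred]
    ok = sum(pred[(b, a)] == flip[r] for (a, b), r in both)
    return ok / len(both)

def transitive_consistency(pred: dict, objects) -> float:
    """Among triples with pred(A,B) == pred(B,C), fraction where pred(A,C) agrees."""
    hits = total = 0
    for a, b, c in permutations(objects, 3):
        r1, r2 = pred.get((a, b)), pred.get((b, c))
        if r1 is not None and r1 == r2:
            total += 1
            hits += int(pred.get((a, c)) == r1)
    return hits / total if total else 0.0

# Example: a fully consistent model on three objects.
pred = {("ant", "dog"): "<", ("dog", "ant"): ">",
        ("dog", "sofa"): "<", ("sofa", "dog"): ">",
        ("ant", "sofa"): "<", ("sofa", "ant"): ">"}
print(symmetric_consistency(pred), transitive_consistency(pred, ["ant", "dog", "sofa"]))
```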
, "[Figure 5: Predictions from RoBERTa and VinVL in the subtask of objects' sizes. X-axis: objects roughly ordered from smaller (ant, coin, nut, ...) to larger (truck, plane); Y-axis: ratio of predictions #(c>a)/|A|, #(a>c)/|A|, #(c<a)/|A|, #(a<c)/|A| for each current object c over all other objects a ∈ A.]", "Transitivity implies that if a model predicts A > B and B > C, then it should predict A > C.", "It also works for <: A < B ∧ B < C ⇒ A < C.", "We enumerate the triples A, B, C where the predicted relation between A, B is identical to the prediction between B, C, and count the percentage for which the prediction between A, C meets the transitivity criterion.", "Note that we only evaluate whether the predictions are consistent with each other, regardless of the gold answers.", "We evaluate the consistency of predictions from the PLMs that perform the best in the probing tasks (RoBERTa for size and BERT for height), VinVL, and ISM (Human).", "The results are in Table 4. VinVL outperforms the best PLM in both metrics, and their characteristics are similar: the transitive consistency is high, while the symmetric consistency is low.", "To further analyze this phenomenon, we exhibit each object's size predictions from RoBERTa and VinVL in Figure 5. The models exhibit different behaviors in recognizing object scales.", "As the objects (X-axis of Figure 5) are roughly listed from smaller to larger groups, the bottom blue bars are expected to follow a non-descending order from left to right, and the solid orange bars should be non-ascending.", "The predictions of VinVL are generally in line with this trend, while RoBERTa's predictions are disordered.", "For example, ant is predicted to be larger than other objects with high probability, while cinema is unlikely to be predicted as larger than others."
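To make the consistency computation concrete, the following is a minimal sketch of how symmetric and transitive consistency could be scored; the preds dictionary of pairwise comparisons is a hypothetical stand-in for a model's outputs, not the authors' evaluation code.

```python
from itertools import permutations

def symmetric_consistency(preds):
    """Fraction of ordered pairs (a, b) whose prediction is the
    mirror image of the prediction for (b, a)."""
    flip = {">": "<", "<": ">"}
    pairs = [(a, b) for a, b in preds if (b, a) in preds]
    ok = sum(1 for a, b in pairs if preds[(b, a)] == flip[preds[(a, b)]])
    return ok / len(pairs) if pairs else 0.0

def transitive_consistency(preds):
    """Among triples where a-vs-b and b-vs-c receive the same relation,
    the fraction where a-vs-c also receives that relation."""
    objects = {o for pair in preds for o in pair}
    hits = total = 0
    for a, b, c in permutations(objects, 3):
        if {(a, b), (b, c), (a, c)} <= preds.keys():
            if preds[(a, b)] == preds[(b, c)]:
                total += 1
                hits += preds[(a, c)] == preds[(a, b)]
    return hits / total if total else 0.0

# Hypothetical size predictions ('>' means "larger than").
preds = {("ant", "dog"): "<", ("dog", "ant"): ">",
         ("dog", "sofa"): "<", ("sofa", "dog"): ">",
         ("ant", "sofa"): "<", ("sofa", "ant"): ">"}
print(symmetric_consistency(preds), transitive_consistency(preds))  # 1.0 1.0
```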
"On the other hand, if the model predictions are consistent, the two solid bars should sum to 1.", "[Table (caption lost): Model | Acc (avg. / ) | F1 (avg. / ): BERT 27.4 / 3.17, 19.7 / 7.25; RoBERTa 29.5 / 16.0, 20.1 / 9.90; VinVL 58.1 / 1.97, 44.4 / 1.63]", "[Table 5: Probing models on the generalized dataset of positional relationship. Model | Acc | F1: Best PLM 29.5 (28.4), 20.1 (19.1); VinVL 58.1 (52.3), 44.4 (41.0); ISM (Human) 66.5 (74.8), 59.4 (69.2)]", "However, the sum is far above 1 for most objects in VinVL's predictions.", "This bias towards words indicating the choice of large may come from the pretraining corpus.", "For example, sofa occurs twice as many times with words indicating large as with words indicating small in COCO (Lin et al., 2014), one of VinVL's pretraining datasets.", "ISM's predictions comply with the symmetry criterion, outperforming other models by 40%, while also having good transitive consistency.", "The knowledge probed from ISM is more consistent.", "Figure 6 exhibits the symmetric and transitive consistency of images generated by ISM.", "The consistency of scale knowledge makes the predictions more convincing, and gives models a chance to learn new comparisons between objects.", "Riding a bicycle is a common scenario and may frequently appear in ISM's training set, so models can generate images more easily when fed with text prompts like a boy rides a bicycle.", "To further challenge ISM's capability, we build a generalized version of our original positional relationship dataset.", "It is designed to examine whether models are able to robustly reflect spatial commonsense knowledge when facing uncommon scenarios.", "A generalized scenario is built upon the original one by replacing the person and object in the text prompts.", "We select the new person and new object from the subterms of the original ones (those with an IsA relation in ConceptNet (Speer et al., 2017), like enchantress is a woman).", "To ensure these newly constructed scenarios are not likely to appear in the training data of models, we search for them in BookCorpus (Zhu et al., 2015) and remove the scenarios that have appeared.", "The newly generated scenarios are also validated by humans to ensure that they are reasonable.", "Results of probing PLMs, VinVL, and ISM on the generalized dataset are in Table 5."
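The subterm lookup can be illustrated against ConceptNet's public API; the endpoint below follows the documented conceptnet.io query interface, but the parameters and filtering are an assumption about, not a reproduction of, the authors' pipeline.

```python
import requests

def isa_subterms(concept, limit=50):
    """Return terms t for which ConceptNet holds an edge (t, IsA, concept),
    e.g. 'enchantress' IsA 'woman'."""
    url = "https://api.conceptnet.io/query"
    params = {"rel": "/r/IsA", "end": f"/c/en/{concept}", "limit": limit}
    edges = requests.get(url, params=params).json().get("edges", [])
    # Keep English start nodes only; ConceptNet is multilingual.
    return sorted({e["start"]["label"] for e in edges
                   if e["start"].get("language") == "en"})

print(isa_subterms("woman")[:10])
```

A real pipeline would additionally filter these candidates against BookCorpus, as described above, to keep only scenarios unseen during pretraining.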
"PLMs and VinVL achieve similar performance on both the generalized dataset and the original one, indicating that they behave robustly when facing uncommon scenarios.", "The performance gap between the other models and ISM (Human) narrows slightly, but ISM (Human) still outperforms VinVL by more than 8%.", "We do not consider ISM (Box) here because many of the new objects are unfamiliar to object detection models: only 17% of the objects are in the object detection classes.", "Figure 7 exhibits images generated by ISM with the generalized prompts.", "Although it is difficult for ISM to generate unfamiliar objects, it is still capable of capturing the positional relations.", "We investigate how to acquire spatial knowledge from ISMs and whether the knowledge is effective in natural language understanding scenarios.", "To the best of our knowledge, there is no appropriate task that focuses on spatial commonsense, so we create a toy task by transforming our probing benchmark into the form of question answering (QA).", "Dataset.", "We construct a QA dataset of yes/no questions.", "Questions about objects' sizes are of the form Is O_a larger/smaller than O_b?, and questions about objects' heights are like Is O_a taller/shorter than O_b?, where O_a and O_b are two objects.", "Questions about positional relationship are accompanied by the action: for instance, A man washes the car.", "Is the man inside the car?", "To avoid bias in the answer distribution, the numbers of yes and no answers are equal in the gold answers.", "There are 500 questions for size, 500 for height, and 448 for positional relationship.", "Models.", "We use VinVL-base together with our image synthesis model VQGAN+CLIP to answer spatial commonsense questions.", "The VinVL here is finetuned on the VQA task (Goyal et al., 2017).", "It takes images generated from ISM with textual prompts from the questions, and predicts the answer based on the question and image together.", "Note that the VQA training corpus does not contain commonsense reasoning questions.", "We choose UnifiedQA (Khashabi et al., 2020) as a text-based QA model for comparison.", "Based on the pretrained T5 model (Raffel et al., 2019), UnifiedQA is continually trained on various QA tasks, including three yes/no datasets.", "We use UnifiedQA-large, which is comparable with our synthesis and reasoning model (ISM w/ VinVL) in size.", "Results.", "As shown in Table 6, ISM w/ VinVL outperforms UnifiedQA on all subtasks, showing that spatial knowledge from ISMs can be directly used by vision-language models without additional training.", "Although some images cannot be precisely recognized by object detection models, vision-language models may find regions that are related to the objects mentioned in questions, and make decisions based on the features of these regions.", "The results on this simple natural language task show that it is beneficial to tackle natural language tasks with vision-language methods, and that ISMs can be a bridge between the two modalities.", "With the development of ISMs and object detection techniques, we believe the generated images will help even more.", "We propose a new spatial commonsense probing framework to investigate knowledge of object scales and positional relationships in text-based pretrained models and models with visual signals.", "Experimental results show that models with visual signals, especially ISMs, learn more accurate and consistent spatial commonsense than text-only models.", "Integrating ISMs with visual reasoning models outperforms PLMs in answering spatial questions."
spatial questions.", "This manifests the potential of using spatial knowledge from ISMs in natural language understanding tasks.", "This work is supported in part by National Key R&D Program of China (No. 2020AAA0106600) and NSFC (62161160339).", "We would like to thank the anonymous reviewers and action editor for the helpful discussions and suggestions.", "Also, we would thank Quzhe Huang, Chen Zhang, Chen Henry Wu, Yuxuan Lai and Nan Hu for their detailed comments.", "For any correspondence, please contact Yansong Feng." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "objective", "method", "abstain", "abstain", "objective", "result", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "other", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Mixed initiative in open-domain dialogue requires a system to pro-actively introduce new topics.", "The one-turn topic transition task explores how a system connects two topics in a cooperative and coherent manner.", "The goal of the task is to generate a bridging utterance connecting the new topic to the topic of the previous conversation turn.", "We are especially interested in commonsense explanations of how a new topic relates to what has been mentioned before.", "We first collect a new dataset of human one-turn topic transitions, which we call OTTers 1 .", "We then explore different strategies used by humans when asked to complete such a task, and notice that the use of a bridging utterance to connect the two topics is the approach used the most.", "We finally show how existing state-of-the-art text generation models can be adapted to this task and examine the performance of these baselines on different splits of the OTTers data.", "For a conversation to be truly engaging, we typically assume that both participants take initiative, e.g. by introducing a new topic.", "We call this a mixed-initiative dialogue.", "Open-domain systems trained on vast amounts of data (Jiang et al., 2020; Zhang et al., 2020; Gao et al., 2018; Li et al., 2017, 2016; Vinyals and Le, 2015), however, are often purely responsive, make abrupt transitions, or fail to take initiative (see examples in Table 1).", "In this paper, we consider the case where the system pro-actively introduces a new topic in a conversation by providing a commonsense link of how this new topic relates to what was mentioned previously (see Fig.1).", "We call this transition strategy bridging.", "Humans deploy a range of strategies 1 https://github.com/karinseve/OTTers User A Source Topic : I spend a lot of time outside .", "in addition to bridging, including disengagement, discourse markers or silence (Riou, 2015).", "We hypothesise that introducing a new topic by making a connection with the previous dialogue turn can be perceived as a less abrupt transition.", "More specifically, we investigate bridging transitions between two user utterances in the form of one or more sentences that contain at least one main linking concept.", "These inherently can allow for better grounding to external resources such as entities in large Knowledge Graphs (KG) (e.g., Wikidata), or named entities mentioned in documents (e.g., Wikipedia, or news articles), ultimately leading to more controlled and interpretable outputs.", "To this end, we crowdsource a corpus of human-written topic transitions focused on these bridging strategies, where humans introduce a missing link concept, given a source and target topic in the form of two short user utterances (Fig. 
1).", "By grounding the topics on a KG using automatically recognised entities associated with each topic, we can then identify commonsense connections which are similar to these missing links.", "By modelling such topic transitions in the form of Cause-Effect relationships in a KG, we can then perform abductive inference on commonsense knowledge for which we provide a language generation baseline.", "In particular, we fine-tune a multihop reasoning model (Ji et al., 2020) which was trained on a similar task called Abductive NLG ( NLG) to generate an explanatory hypothesis given two observations.", "We find that combining a reasoning module over a KG (ConceptNet) with a language model achieves the best performance on our topic transition task for both the predicted entity path as well as the generated utterance.", "In addition, we show that existing multi-topic dialogue datasets, such as PersonaChat (Zhang et al., 2018) and TopicalChat (Gopalakrishnan et al., 2019), cannot be easily adapted to this task, due to the different nature of the tasks they were designed for.", "Our contributions are as follows: We propose a new Natural Language Generation task based on one-turn topic transitions for open-domain dialogue based on a bridging strategy, which promotes grounding on KG entities.", "We collect a crowdsourced dataset, OTTers, and present a rigorous analysis in terms of transition strategies, linguistic properties and entity linking to a KG.", "We show that our KG-grounded dataset can effectively leverage the reasoning component of an existing Transformer-based model (Ji et al., 2020) to generate better output compared to a vanilla GPT-2 (Radford et al., 2019) decoder, both in in-domain and out-of-domain data splits.", "Topic Transitions in the Linguistic Literature.", "There is no common definition for the term topic (Goutsos, 1997; Purver et al., 2011); however, there are a number of definitions which are helpful for our purposes.", "Goutsos (1997) divide a topic into two main components:", "1) what constitutes a topic (the what) and", "2) how participants perceive and manage a topic (the how).", "An early work from Brown and Yule (1983) declares that topics should be described as the most frequently used, unexplained term in the analysis of discourse.", "In general, discourse topics can be explained as what a portion of the interaction is about, therefore the aboutness (Berthoud and Mondada, 1995; Porhiel, 2005).", "More specifically Chafe (1994) defines the notion of topic as the totality of information that is semiactive at one time.", "Prior work has shown that the introduction of a new topic usually co-occurs with cues such as wrapping things up about the current topic (May-nard, 1980), preceding silence, or the use of discourse markers (Riou, 2015).", "Also, backchannel signals, e.g., yeah , right , you know , indicate that both agents are involved in the interaction and show consent for the topic development (James, 1995).", "Beyond these overt cues, James (1995) and Geluykens (1993) describe semantic topic transitions: each topic has a tendency to lead to the next; to provide the opening for another (James, 1995), and topics are typically co-constructed, requiring each speaker to contribute to the conversation for further progression and development (Geluykens, 1993).", "The identification of topic transition is indeed not an easy task.", "It is not only about linguistic cues such as discourse markers and prosodic cues, as sometimes a topic switch can be identified with the 
, "Additionally, in a conversation, topics are created and introduced by the participants themselves in real time, making topics participant- and interaction-specific (Mondada, 2001, 2003).", "Moreover, the entities in focus at a given point in the discourse will be that partially-ordered subset of activated entities which are likely to be continued as topics of subsequent utterances (Gundel et al., 1993).", "These cooperative elements emphasise the importance of mixed-initiative topic management for open-domain dialogue systems.", "Current Multi-topic Open-domain Systems.", "Previous work in open-domain dialogue systems has largely avoided explicitly modelling topic transitions and instead focused on grounding system behaviour in a persona (a set of statements about hobbies, demographics, or preferences) (Zhang et al., 2018; Li et al., 2016) or by conditioning conversations on knowledge sources such as newspaper articles, fun facts or Wikipedia articles (Gopalakrishnan et al., 2019; Dinan et al., 2019) to generate engaging responses while avoiding generic replies, improving coherence, and raising new and interesting topics.", "These approaches often lead to poor topic transitions, as illustrated in Table 1.", "The PersonaChat example shows neither initiative nor common sense while transitioning to a new topic; it only displays passive acknowledgement from User B, whereas the TopicalChat example presents a very abrupt topic shift by User B.", "Our dataset is the first corpus focused specifically on one-turn topic transitions; however, there are several human-to-human dialogue corpora wherein participants discuss assigned topics.", "Two prominent such corpora are TopicalChat (Gopalakrishnan et al., 2019) and PersonaChat (Zhang et al., 2018).", "In TopicalChat, both participants used source documents from Wikipedia to discuss a shared topic.", "The dialogues in this corpus tend to flow less naturally than those in PersonaChat, with participants generally focusing on expressing the main facts, often by copying and pasting from their source documents rather than having a natural conversation.", "Therefore we focus on PersonaChat as a point of comparison.", "PersonaChat dialogues consist of chit-chat conversations based on a set of persona traits assigned to each participant.", "Because participants seek to express their persona to each other, the conversations require mentioning various topics (i.e. their persona traits) in a natural way.", "Indeed, Zhang et al. (2018, Sec. 3.3) adjusted their design to encourage users to engage with each other's topics and not simply state their own topics as quickly as possible to end the dialogue.", "PersonaChat does not contain annotations for the topic of each turn, and participants had the freedom to mention their topics (i.e. persona traits) in any order.", "We use PersonaChat in two different ways:", "1) using its persona traits as starting and goal topics for our own data collection, and", "2) as a point of comparison for our dataset.", "Commonsense-Aware Neural Text Generation.", "Large Language Models still suffer in cases where reasoning over underlying commonsense knowledge is required during generation, including dialogue generation (Zhou et al., 2018), story ending generation (Guan et al., 2019), and topic-to-essay generation (Yang et al., 2019).", "Recently, Guan et al. (2019); Bhagavatula et al.
(2020) attempted to integrate external commonsense knowledge into generative pretrained language models, which we will also attempt in Section 4 using the Abductive NLG (α-NLG) dataset (Bhagavatula et al., 2020).", "Our setup is similar in spirit to α-NLG, which is a conditional generation task for explanations given observations in natural language.", "In particular, the model has to generate an explanatory hypothesis given two observations: the cause (e.g. The Smith family went on a cruise for their summer vacation) and the consequence (e.g. From then on, the Smiths went to the beach each summer instead).", "Here, a possible explanation might be: The Smith family got seasick on the cruise.", "The α-NLG dataset contains 20k pairs of observations and 200k explicative hypotheses, which we will later use for fine-tuning our models (see Section 4).", "Task Description.", "We assume there are topics t_a and t_b for utterances u_a and u_b (with u = t for this paper).", "The goal of the task is to generate a one-turn transition utterance u_t to serve as a smooth link between t_a and t_b, so that its concatenation with utterance u_b is a sensible response to u_a.", "A bridging transition occurs when one or more of the entities e_t ∈ E_t mentioned in u_t lies on a path in the knowledge graph between entities e_a ∈ E_a and e_b ∈ E_b mentioned in u_a and u_b, respectively.", "Knowledge Graph Construction.", "We use PersonaChat persona traits as the starting point for our data collection.", "In order to model commonsense connections, we built a knowledge graph (KG) using the entities found in each persona trait through the Yahoo Entity Linker (Blanco et al., 2015; Pappu et al., 2017).", "Each entity is linked to its corresponding Wikidata identifier, while a SPARQL query retrieved the entity's super-classes and sub-classes, which were added to the KG.", "Furthermore, the KG has been augmented by retrieving the commonsense connections for each entity from ConceptNet (Speer et al., 2017) and by parsing Wikipedia abstract mentions.", "To select which traits to use for the data collection, we first selected all pairs of entities connected by k hops (1 < k < 20) in the KG.", "Then, we recovered the entity mentions in the persona traits and saved every pair (nearly 30k) as potential pairs for our data collection.", "Data Collection.", "We crowdsourced the data collection for OTTers on Amazon Mechanical Turk (AMT).", "Each user was provided with two topics A, B from the PersonaChat persona traits, along with instructions explaining the task.", "The instructions ask the user to imagine they are having a conversation where the first topic A from the pair represents the last turn of the other person, and the second topic B contains the final topic the user wants to talk about.", "The user then has to write a short utterance to transition to the new topic B in the least abrupt way possible.", "Additionally, in order to encourage crowd-workers to ground their utterances in actual topics, we asked them to report the topics mentioned in their sentence (see Figure 2).", "For each topic pair in the study we collected three different transition utterances to provide more insight into the different strategies users adopt when transitioning to a new topic.", "Basic Statistics.", "Table 2 provides summary statistics describing OTTers.", "Our corpus consists of 4,316 utterances for 1,421 unique topic pairs, with an average utterance length of 1.3 sentences and 16.4 words.", "The KG path statistics for OTTers are based on all of the paths found by the Yahoo Entity Linker between the 1,421 unique topic pairs in the corpus, a total of just over 12k paths."
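A minimal sketch of the k-hop pair selection, assuming the constructed KG is available as an undirected NetworkX graph; the graph contents and function names here are illustrative, not the authors' code.

```python
import networkx as nx

def khop_pairs(kg, entity_pairs, k_min=2, k_max=19):
    """Keep entity pairs whose shortest path in the KG has
    between k_min and k_max hops (i.e., 1 < k < 20)."""
    kept = []
    for a, b in entity_pairs:
        if a in kg and b in kg:
            try:
                k = nx.shortest_path_length(kg, a, b)
            except nx.NetworkXNoPath:
                continue  # no connection at all: discard the pair
            if k_min <= k <= k_max:
                kept.append((a, b, k))
    return kept

# Toy KG with Wikidata/ConceptNet-style edges.
kg = nx.Graph()
kg.add_edges_from([("dog", "animal"), ("animal", "tiger"), ("dog", "pet")])
print(khop_pairs(kg, [("dog", "tiger"), ("pet", "tiger")]))
# [('dog', 'tiger', 2), ('pet', 'tiger', 3)]
```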
"KG coverage.", "We calculated the distance between each pair of topics in the knowledge graph described in Sec. 3.1 to facilitate analyses of the role of topic distance in transition strategy and transition quality.", "To extract entities from the utterances in our corpus, we extended the tagger built into the Yahoo Entity Linker with the spaCy Named Entity Recognizer to include all nouns and adjectives as potential entities; this modified version allowed us to identify a wider range of topic-related entities.", "Using these extracted entities, we analyse the overlap between entities mentioned in the given topics A, B and those mentioned in the crowdsourced transition utterances.", "The Jaccard distance between these two sets is 1 for nearly a quarter of the topic-pairs and utterances in our dataset, with a mean of 0.842, meaning that the overlap between entities mentioned in the utterances and entities mentioned in the topics is fairly low.", "This indicates that users transition from Topic A to Topic B by mentioning new, unseen entities, following a path that can be grounded on a knowledge graph.", "In contrast, the overlap between the entities on the KG path between the topics and the entities mentioned in the transition utterances is higher: both the mean and the mode Jaccard distances drop below 0.8, suggesting that crowdworkers make connections similar to the ones we can find in our knowledge graph a substantial portion of the time.", "This suggests that our KG-grounded approach can find plausible entities to be mentioned to bridge between topics, similar to the commonsense connections made by humans shifting between topics.", "To examine the strategies humans applied while completing the OTTers task, we adapted the categories of Riou (2015) for a manual analysis of our data.", "Riou (2015) distinguishes between disjunctive and stepwise transitions between topics.", "Disjunctive transitions make no attempt to relate the new topic to the previous topic, switching abruptly to the new topic without acknowledging the previous topic, whereas stepwise transitions are akin to the previously described transition strategies.", "We distinguish between bridging and acknowledge & continue strategies: in the former, the speaker aims to produce an utterance which connects the previous and new topics directly; in the latter, the speaker acknowledges the previous topic before introducing their own topic, without explicitly relating the two to one another.", "In addition to these categories, we also annotated utterances as off-task (e.g. replying to or continuing the first topic without any attention paid to the second topic) or off-topic when the utterance had nothing to do with either of the two topics (e.g. random greetings or generic questions).", "Two of the authors annotated 10 utterances from 10 different users, resulting in 200 total annotations.", "The initial inter-annotator agreement was 71%, classified as substantial (Krippendorff's α = 0.34), after which the annotators collaborated to reach a consensus annotation for each of the examples that presented a disagreement."
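The entity-overlap analysis above reduces to a Jaccard distance between two entity sets; a minimal sketch follows, with hypothetical entity sets in place of the linker's output.

```python
def jaccard_distance(entities_a, entities_b):
    """1 - |A ∩ B| / |A ∪ B|; 1.0 means the two sets share no entities."""
    a, b = set(entities_a), set(entities_b)
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

topic_entities = {"time", "outside"}          # entities from the topic pair
utterance_entities = {"weather", "outside"}   # entities from the transition
print(jaccard_distance(topic_entities, utterance_entities))  # ~0.67
```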
"Table 5 contains a prototypical example for each of the annotated classes.", "[Table 5 excerpt: Acknowledge and continue, A: i like to eat the same thing as ninja turtles.]", "More than 80% of the data contains some form of transition to the second topic, with 79% containing a bridging utterance, 5% applying an acknowledge and continue strategy, and only 2% using the disjoint transition strategy.", "12% of the data is connected to one or more of the topics in some way but does not serve as a transition, and 2% of the data is completely off-topic.", "This analysis suggests that our corpus indeed represents the kind of knowledge-based transitions we are interested in.", "KG distance and discourse markers.", "We hypothesize that speakers are less likely to use explicit topic management strategies (e.g. topic wrap-ups, discourse markers) when topics are more closely related to each other, e.g. as measured by graph distance in a large knowledge graph.", "This would be in line with findings about the use of explicit discourse markers versus leaving discourse relations implicit.", "Torabi Asr and Demberg (2012, 2013) found that explicit markers are more likely to be omitted when the discourse relation is highly predictable based on the content of the arguments.", "Based on Riou (2015), we examined the frequency of discourse markers in utterances to test our hypothesis, examining both general conversational discourse markers and those associated with specific discourse relations.", "For conversational discourse markers we use the Cambridge Dictionary, which provides a list of spoken and written markers, including well, you know, etc., while for markers signalling particular discourse relations we use the list from the Penn Discourse Treebank (Webber et al., 2019; Prasad et al., 2008, PDTB); these include markers like because, indicating a causal relationship, or in addition, for an additive relationship.", "We find a small but significant correlation (0.04) between conversational discourse marker use and the KG distance between topics."
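A sketch of the marker-frequency count behind this analysis; the marker lists below are tiny illustrative subsets of the Cambridge Dictionary and PDTB inventories, not the full lists used in the paper.

```python
import re

# Hypothetical subsets; the paper uses the full published inventories.
CONVERSATIONAL = {"well", "you know", "i mean", "actually"}
PDTB_RELATIONAL = {"because", "in addition", "however", "so"}

def marker_count(utterance, markers):
    """Count whole-word (or whole-phrase) marker occurrences."""
    text = utterance.lower()
    return sum(len(re.findall(rf"\b{re.escape(m)}\b", text)) for m in markers)

u = "Well, I spend time outside because I love hiking."
print(marker_count(u, CONVERSATIONAL), marker_count(u, PDTB_RELATIONAL))  # 1 1
```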
"This suggests that users are somewhat more likely to use conversational discourse markers as the distance between topics in the knowledge graph increases, in line with our hypothesis.", "We evaluate whether the transition strategies in OTTers are less abrupt than those found in PersonaChat by constructing a comparable subset of PersonaChat and performing a human evaluation.", "Comparable Corpus Construction.", "We first extract a subset of PersonaChat where two consecutive turns contain different topics.", "In other words: turns where one speaker changed the topic from what the previous speaker had just said.", "Since PersonaChat turns do not incorporate topic annotations, we use a heuristic based on BERTScore to assign a topic to each turn.", "Given topics T and turns U for a dialogue in PersonaChat, we calculate the BERTScore similarity between each u ∈ U and each t ∈ T.", "For each turn u we then assign t* = argmax_t BERTScore(u, t), if and only if BERTScore(u, t*) - BERTScore(u, t′) > d, (1) where t′ is the topic achieving the second-highest BERTScore relative to u, and d is a threshold to ensure that we only assign a topic to a turn if it is a substantially better fit than the other topics.", "[Figure 3: Interface for crowdsourced validation.]", "While this means that not every turn is assigned a topic, this is necessary to ensure that we do not assign topics to, e.g., greetings like hi, how are you?.", "This way of assigning topics yields a subset consisting of 22,010 utterances which have a different topic from the preceding utterance.", "Most of these topic-pairs (20,491) are only expressed through one utterance in the dataset, while 1,188 are expressed by two utterances, 248 by three, and 83 by more than 3 utterances.", "Moreover, there are 445 topic-pairs which also occur in our corpus.", "Crowdsourced Validation.", "Using the comparable sub-corpus of PersonaChat, we asked crowdworkers to vote which of two potential transition utterances was less abrupt (Fig. 3) for 49 topic-pairs occurring in both datasets."
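The heuristic in Eq. (1) can be sketched with the bert-score package as follows; the margin d = 0.05 is only a placeholder, since the paper does not report a value at this point.

```python
from bert_score import score as bert_score

def assign_topic(turn, topics, d=0.05):
    """Assign the argmax-scoring topic t* iff it beats the runner-up
    by margin d (Eq. 1); requires at least two candidate topics."""
    cands = [turn] * len(topics)
    _, _, f1 = bert_score(cands, topics, lang="en", verbose=False)
    ranked = sorted(zip(f1.tolist(), topics), reverse=True)
    (best, t_star), (second, _) = ranked[0], ranked[1]
    return t_star if best - second > d else None

topics = ["i spend a lot of time outside", "i like italian food"]
print(assign_topic("i went hiking in the woods yesterday", topics))
```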
"We collected 3 votes for each utterance and only counted instances where 2/3 workers agreed on the same choice.", "The results confirm that OTTers has less abrupt transitions: the utterances in OTTers were judged less abrupt in 44/49 cases, with the comparable PersonaChat utterance judged less abrupt in one case, and both utterances rated bad in another.", "Only 3 cases did not present a majority class.", "Having confirmed the quality of our corpus, we now adapt two existing text generation models as baselines for this task.", "We also explore different train-dev-test splits and conduct an error analysis.", "The first baseline we consider is a vanilla GPT-2 language model (Radford et al., 2019) fine-tuned on OTTers (vGPT2).", "Next, we test the recent MultiGen (Ji et al., 2020) on this task, which extends GPT-2 with multi-hop reasoning on commonsense knowledge graphs.", "In particular, this model combines the vocabulary distribution generated by GPT-2 with a concept distribution in order to produce knowledge-grounded responses.", "The concept distribution is given by reasoning performed on the commonsense knowledge graph ConceptNet, using the context modelled through GPT-2.", "The first split is an out-of-domain split (ood), which ensures that none of the topics in the test set are present in any of the topic-pairs in the train set.", "For the second split, this restriction is relaxed to create an in-domain split (id), allowing one of the topics in each pair in the test set to appear in the train set, although with a different second topic.", "The ood split resembles a zero-shot scenario, where the model has to generate a shift between two topics it has never been fine-tuned on.", "Hence, we expect results to be lower than the ones from id.", "The number of unique and total topic pairs for each split is illustrated in Table 6.", "We evaluate two aspects of the transition task:", "1) whether the model can find a sensible path through intermediate topics and", "2) whether the model can generate a natural utterance which mentions such intermediate topics.", "To evaluate the former, we assess the entities mentioned in the transition utterance to determine how well they bridge the gap between Topic A and Topic B."
, "We use the hits@k ratio as an automatic approximation, which measures the number of relevant entities correctly predicted by the model, out of the k most important entities identified in the target references.", "This metric shows how well the models ground the concepts introduced in the two dialogue turns and how the reasoning compares to the human standard presented in OTTers.", "For (2) we adopt the same automated metrics used for evaluating MultiGen on the α-NLG dataset for comparability: ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedantam et al., 2015).", "However, we report the full BLEU score (Papineni et al., 2002) that accounts for the overlap across 1-4 n-grams, instead of just 4-grams (BLEU-4).", "As word-overlap based metrics have been widely criticised due to their lack of correlation with human judgements (Novikova et al., 2017; Reiter, 2018), we also provide an example-based error analysis in Section 4.4.", "For each aforementioned split we evaluated three different models to compare performance: the pretrained vGPT2 fine-tuned on each split of OTTers, the MultiGen model fine-tuned only on α-NLG, and the same model additionally fine-tuned on OTTers (called α-NLGft).", "Overview of Results.", "Table 7 shows the results of these experiments.", "vGPT2 performs poorly on the one-turn transition task, regardless of the train-dev-test split, which we attribute to the small size of OTTers: with only a few thousand utterances, vGPT2 is unable to learn the task.", "We notice, however, that the system tends to repeat the main entity in Topic A, therefore scoring surprisingly well on the hits@k metric, despite the fact that the utterances themselves are of low quality (see Table 8).", "The reasoning component added by MultiGen leads to substantial improvements in most of the evaluation metrics but not hits@k (α-NLG in the table).", "Therefore, the improvements in text quality metrics appear to be due primarily to the similarity between the structure of the abductive NLG task and ours, and the increased amount of data for fine-tuning (688k tokens) compared to fine-tuning vGPT2 on our 71k tokens alone.", "Further fine-tuning MultiGen on OTTers leads to substantial improvements on all metrics for both the in-domain and out-of-domain splits.", "The performance improvement is considerable especially because of the relatively small size of the training set (693 unique topic pairs in-domain, see Table 6), further justifying the compatibility between the original task MultiGen was trained on and OTTers.", "Nonetheless, the BLEU scores in Table 7 indicate there is still space for improvement.", "We hypothesise that METEOR scores are higher than BLEU scores because METEOR also considers paraphrases.", "These results confirm that our newly introduced one-turn topic transition task needs a reliable language model combined with an advanced reasoning component.", "Detailed Discussion and Model Limitations.", "We further analyse the results to understand model limitations.", "First, we observe that MultiGen's hits@k ratio is quite low, especially when compared to vGPT2.", "This is surprising considering vGPT2's generated sentences are mostly very short and repetitive, and the predicted concepts mostly match the ones contained in the Topic A sentence.", "One possible explanation is that MultiGen's reasoning module uses a gate loss, which determines whether to select a concept from the provided knowledge graph or a word from the GPT-2 dictionary.", "We observed that most of the time the model will use a word from the GPT-2 dictionary rather than selecting a concept from the knowledge graph."
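A minimal sketch of the hits@k ratio as we read its description above: the fraction of the k most important reference entities that are recovered among the entities mentioned in the generated transition; the ranking of reference entities is assumed to be given.

```python
def hits_at_k(predicted_entities, reference_entities, k=3):
    """Fraction of the top-k reference entities recovered in the prediction."""
    top_k = list(reference_entities)[:k]  # assumed pre-ranked by importance
    if not top_k:
        return 0.0
    pred = set(predicted_entities)
    return sum(e in pred for e in top_k) / len(top_k)

# Hypothetical example: 2 of the 3 most important entities are recovered.
print(hits_at_k({"hiking", "outdoors"}, ["hiking", "trail", "outdoors"], k=3))
# 0.666...
```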
"Moreover, we observe that only 65% of the concepts found in the target sentences are actually nodes in MultiGen's subgraphs.", "One possible explanation is that MultiGen's reasoning model has a limited input capacity of up to 100 nodes that are at most 2 hops away, in order to prune the very large knowledge graph from ConceptNet.", "The English vocabulary of ConceptNet contains approximately 1,500,000 nodes, which makes the process of determining the concept distributions computationally expensive and time-inefficient.", "Therefore, the pruning strategy adopted by Ji et al. (2020) overcomes these problems but cannot be applied to the OTTers task, as the selection of the concepts is just as important as the output sentence being fluent.", "Contrary to our expectations, expanding the size of the knowledge graphs from 100 nodes to 200 and 300 did not improve the hits@k ratio, most likely because the concepts added to the graphs are either not relevant or misleading for the model.", "This suggests that improving concept selection is a promising future direction to improve the performance of the reasoning module, leading to overall better topic transitions.", "Error Analysis.", "In addition, we perform an example-based error analysis to further understand the strengths and weaknesses of the individual models.", "Table 8 shows representative system outputs for each of the models on the in-domain data split.", "First, we observe that vGPT2 often generates very simple sentences (e.g., family., in Ex. 4), repeated non-content-bearing tokens (e.g., I love it., in Ex. 2), or incoherent and often not specific enough output to form a successful bridging transition (e.g., a lot of cooking., in Ex. 3, is not a well-formed sentence, and is only loosely connected to Topic A about agricultural experience), contributing to low BLEU scores.", "However, this also reinforces the idea that the hits@k scores are artificially inflated simply due to vGPT2 choosing to include one of the entities from the first topic.", "The outputs from MultiGen tested on OTTers show a better performance than vGPT2, given that the topic selection for the model is grounded on ConceptNet.", "However, since the Abductive NLG task is different from the Topic Transition task addressed in OTTers, there is a discrepancy in the use of the language.", "The model often outputs coherent sentences that use generic commonsense facts which may not be related to Topic B (e.g., I decided to give birth to a baby, in Ex. 1).", "The texts generated by MultiGen fine-tuned on OTTers, on the other hand, introduce interesting connections between Topic A and Topic B (e.g., I like to make babies laugh when I'm not working., in Ex. 1) and leverage commonsense (e.g., I like the look of Italian cars, in Ex. 2, where the look creates a connection with being in good shape from Topic B)."
, "Ethical Considerations.", "We recognise that any mixed-initiative dialogue system carries risks related to dual use: in addition to helpful systems which serve to help users explore a new topic or discover more about the world, a system which can effectively change the topic of conversation could also be used to manipulate user behaviour.", "For example, bridging strategies for topic transitions could be used by virtual assistants to encourage users to make a purchase or to express their opinion or preference regarding sensitive subjects.", "Conclusion.", "We have defined a new NLG task exploring one-turn topic transitions for mixed initiative in open-domain systems.", "Our OTTers corpus provides training data for modelling topic transitions based on missing link topics which connect the previous conversation subject to a new topic.", "Baseline models based on state-of-the-art approaches to text generation illustrate possible approaches to the task and show that there is room for improvement.", "In particular, we show that commonsense knowledge grounding is necessary for this task, outperforming fine-tuned large language models.", "In future work, we will explore model architectures specifically designed for topic transitions, as well as fine-tuning strategies to deal with small datasets.", "We also plan to evaluate the impact of bridging transitions on user (dis)engagement in an open-domain dialogue system.", "This research received funding from the EPSRC project MaDrIgAL (EP/N017536/1), as well as a Google Research Grant to support NLU and dialogue research at Heriot-Watt University." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "objective", "abstain", "other", "abstain", "objective", "objective", "abstain", "method", "method", "method", "method", "result", "result", "objective", "method", "result", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "method", "objective", "abstain", "other", "other", "method", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "method", "other" ]
[ "A dialogue response is malevolent if it is grounded in negative emotions, inappropriate behavior, or an unethical value basis in terms of content and dialogue acts.", "The detection of malevolent dialogue responses is attracting growing interest.", "Current research on detecting dialogue malevolence has limitations in terms of datasets and methods.", "First, available dialogue datasets related to malevolence are labeled with a single category, but in practice assigning a single category to each utterance may not be appropriate as some malevolent utterances belong to multiple labels.", "Second, current methods for detecting dialogue malevolence neglect label correlation.", "Therefore, we propose the task of multi-label dialogue malevolence detection and crowdsource a multi-label dataset, multi-label dialogue malevolence detection (MDMD) for evaluation.", "We also propose a multi-label malevolence detection model, multi-faceted label correlation enhanced CRF (MCRF), with two label correlation mechanisms, label correlation in taxonomy (LCT) and label correlation in context (LCC).", "Experiments on MDMD show that our method outperforms the best performing baseline by a large margin, i.e., 16.1%, 11.9%, 12.0% and 6.1% on precision, recall, F1, and Jaccard score, respectively.", "Safety is an increasingly important aspect of artifi-cial intelligence development (Amodei et al., 2016; Roegiest et al., 2019; Sun et al., 2021).", "When it comes to dialogue agents, taking measures to avoid risks of generating undesirable and harmful responses may have a profound positive impact on the adoption of conversational technology (Xu et al., 2020).", "Research on safe dialogue agents involves aspects such as inaccurate information (Gun-son et al., 2021), fairness (Liu et al., 2020), and Corresponding author.", "unauthorized expertise (Sun et al., 2021).", "Malevolence is another key aspect (Zhang et al., 2021b,a), e.g., whether the dialogue utterance contains malevolent content that is related to offensiveness (Dinan et al., 2019), toxicity (Gehman et al., 2020), ad hominem (Sheng et al., 2021), and toxicity agreement (Baheti et al., 2021), etc.", "There have been several studies targeting malevolence detection (Roussinov and Robles-Flores, 2007; Saral et al., 2018; Zhang et al., 2021a,b).", "We build on the work of Zhang et al. (2021b) who introduce the malevolent dialogue response detection and classification task, present a hierarchical malevolent dialogue taxonomy, create a labeled multi-turn dialogue data set, and apply state-of-the-art text classification methods to the task.", "One important limitation of their work is that they only explore single-label dialogue malevolence detection (SDMD), i.e., they assume that each dialogue utterance corresponds to a single malevolence or non-malevolence label.", "However, some utterances have more than one label, e.g., in Figure 1, the utterance f** people are disgusting 1 belongs to both disgust and negative intergroup attitude (NIA).", "This is because malevolence labels are correlated with one another, which we refer to as label correlation in taxonomy (LCT).", "Zhang et al. 
(2021b) propose a hierarchical malevolent dialogue taxonomy that classifies correlated malevolence labels into the same group by investigating three dimensions: negative emotions, negative psychological behavior, and unethical issues.", "However, the correlation of malevolence labels in different groups is not well captured.", "Another limitation is that the above studies neglect the impact of malevolence in dialogue contexts (i.e., previous turns) on the current utterance.", "Previous work concatenates the dialogue context as model input without explicitly modeling the malevolence transition."
categories (Waseem and Hovy, 2016; Kumar et al., 2018; Zampieri et al., 2019; Wang and Potts, 2019), which are lack of unified understanding of what constitutes malevolence.", "To address this gap, Sheng et al. (2021) introduce a two-level ad hominem taxonomy and Sun et al. (2021) introduce a safety taxonomy, both of which contain seven different aspects.", "Furthermore, Zhang et al. (2021b) define a three-level malevolence taxonomy that contains eighteen categories in total.", "In this work, we follow the taxonomy proposed by Zhang et al. (2021b).", "There are several datasets to support malevolence classification or detection research.", "Many of them investigate hate speech detection, e.g., Predictive Features for Hate Speech Detection (PFHSD) (Waseem and Hovy, 2016), Hate Speech Detection Dataset (HSDD) (Davidson et al., 2017), and Multilingual Detection of Hate Speech (MDHS) (Basile et al., 2019), which are all col-3544 Figure 2: Framework of the proposed multi-faceted label correlation enhanced CRF (MCRF) model.", "lected from Twitter.", "These datasets lack diversity, have a small data size, low inter-annotator agreement, and small lexicon size.", "The others are on aggressiveness, offensiveness, and condescending, e.g., Trolling, Aggression and Cyber-bullying (TRAC) (Kumar et al., 2018), Offensive Language Identification Dataset (OLID) (Zampieri et al., 2019), and TALKDOWN (Wang and Potts, 2019), which have been collected from Facebook, Reddit, and Twitter, respectively.", "These datasets have a larger size than those mentioned before, but problems such as low diversity and limited lexicon size affect them too.", "To sum up, none of these datasets is in the form of multi-turn dialogues.", "To address this, recent studies have released the TOXICHAT (Baheti et al., 2021), ADHOM-INTWEETS (Sheng et al., 2021), MDRDC (Zhang et al., 2021b), and DIASAFETY datasets (Sun et al., 2021), for research into offensiveness, ad hominem, safety detection, etc.", "However, the above datasets all fall into single-label dialogue malevolence detection.", "In contrast, we build a dataset for the evaluation of multi-label malevolence detection, considering an utterance may contain multiple labels.", "Methods for malevolence detection include rule based (Roussinov and Robles-Flores, 2007), traditional machine learning based (Waseem and Hovy, 2016; Davidson et al., 2017; Saral et al., 2018; Basile et al., 2019), and deep learning based (Ku-mar et al., 2018; Zampieri et al., 2019; Wang and Potts, 2019; Sheng et al., 2021; Zhang et al., 2021b) approaches.", "Roussinov and Robles-Flores (2007) define malevolence by filtering the keywords.", "Saral et al. (2018) survey the machine learning-based detection methods, including KNN and SVM-based methods.", "The performance of these methods is not strong enough as malevolence detection requires a deep understanding of semantics.", "Kumar et al. (2018) apply CNNs and LSTMs for aggressiveness detection.", "Zampieri et al. 
(2019) apply CNNs and Bi-LSTMs for offensiveness detection.", "More recently, pretrained models, e.g., BERT and RoBERTa, have been used for ad hominem, malevolence, and safety detection (Sheng et al., 2021; Zhang et al., 2021b; Sun et al., 2021), demonstrating better performance than LSTM-, CNN-, RCNN-, and GNN-based models (Zhang et al., 2021b).", "Compared with previous methods, we model malevolence detection as a multi-label dialogue malevolence detection task instead of a single-label dialogue malevolence detection task.", "Moreover, we propose two label correlation mechanisms, i.e., label correlation in taxonomy (LCT) and label correlation in context (LCC).", "Given a dialogue that contains m utterances, x = [x_1, x_2, ..., x_i, ..., x_m], where x_i is the i-th utterance in the dialogue.", "y = [y_1, y_2, ..., y_i, ..., y_m] denotes the label sequence of the dialogue, where y_i ∈ {0, 1}^n is the label for each utterance.", "l = {l_1, l_2, ..., l_j, ..., l_n} denotes the label set, where l_j is the j-th label and n is the total number of label categories.", "Multi-label dialogue malevolence detection (MDMD) aims to assign the most reliable labels to each x_i.", "Since there is no large-scale MDMD dataset, during training we observe one non-malevolent label or only one malevolent label per utterance, while the other malevolent labels are unknown.", "We build an MDMD dataset for evaluation only, the details of which can be found in the experiments.", "We propose a model, multi-faceted label correlation enhanced CRF (MCRF), for MDMD.", "As shown in Figure 2, MCRF consists of a PLCT-based encoder and a multi-faceted CRF layer, where the PLCT-based encoder is used to encode the utterances x and labels l and output the representations H and R; the representations are fed into the multi-faceted CRF layer to predict the multi-labels y.", "The PLCT-based encoder is enhanced by a taxonomy tree-based position embedding e_pos; the multi-faceted CRF layer is enhanced by learning-based label correlation in taxonomy (LLCT) (i.e., ȳ), LCC (i.e., T and T′), and the representation output of the PLCT-based encoder (i.e., H and R).", "In the following subsections, we detail each component.", "As shown in Figure 2, the utterance and label encoder takes the utterances and labels as input, and the output is the representations of utterances and labels.", "Following Liu and Lapata (2019), each utterance is encoded separately by inserting [CLS] at the start of each utterance and [SEP] at the end of each utterance.", "The labels are encoded by inserting [CLS] between the last utterance and the labels and [SEP] at the end of the labels.", "We utilize three kinds of embeddings, namely token embeddings, segment embeddings, and position embeddings.", "Token embeddings follow the original Transformer paper (Vaswani et al., 2017).", "Segment embeddings distinguish each utterance, as well as the labels, by e_A or e_B, assigned to odd- and even-numbered segments respectively.", "Position embeddings for utterances capture the position of the utterances (Wang and Chen, 2020).", "In order to improve the representation of labels, we change the position embeddings of labels into the PLCT-based position embedding (see 3.3).", "We feed the three embeddings into a pretrained model (i.e., BERT) to get the representations of utterances and labels: H, R = PTM([e(x_i), e(l_j)]), e = e_tok + e_seg + e_pos, (1) where PTM is the pretrained model; e_tok, e_seg, and e_pos are the token, segment, and position embeddings, respectively."
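A sketch of the encoder input layout described above ([CLS]/[SEP] around each utterance, alternating segment ids, labels appended after a final [CLS]); the tokenizer plumbing is simplified and the helper is hypothetical, not the authors' code.

```python
def build_mcrf_input(utterances, labels, tokenize):
    """Return (tokens, segment_ids): [CLS] u1 [SEP] [CLS] u2 [SEP] ...
    [CLS] l1 ... ln [SEP], with segments alternating e_A / e_B (0 / 1)."""
    tokens, segments = [], []
    for i, utt in enumerate(utterances):
        piece = ["[CLS]"] + tokenize(utt) + ["[SEP]"]
        tokens += piece
        segments += [i % 2] * len(piece)  # alternate segment id per turn
    # Labels follow the last utterance, behind their own [CLS].
    label_piece = ["[CLS]"] + [t for l in labels for t in tokenize(l)] + ["[SEP]"]
    tokens += label_piece
    segments += [len(utterances) % 2] * len(label_piece)
    return tokens, segments

toks, segs = build_mcrf_input(["you are stupid", "please be polite"],
                              ["insult", "non-malevolent"], str.split)
print(toks)
print(segs)
```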
3: Demonstration of taxonomy tree of labels.", "position embeddings, respectively.", "H = { h 1 , h 2 , . . . , h i , . . . , h m } denotes the repsenta-tions of the utterances with h i (corresponding to pooler output of [CLS]) representing the i -th utterance x i .", "R = { r 1 , r 2 , . . . , r j , . . . , r n } are the representations of the labels with r j (correspond-ing to sequence output of labels) representing the j -th label l j .", "Multi-faceted label correlation is the main component of MCRF, which is composed of two major modules: LCT and LCC.", "The former is meant to decrease the probability of over-fitting caused by single-label annotated data, while the latter is meant to leverage the influence of the previous label on the next label of the utterances from the same user and the other user.", "Label correlation in taxonomy.", "The LCT module contains two parts: PLCT and LLCT.", "First, the PLCT module captures label correlation in the taxonomy tree.", "The input of the module is the taxonomy tree, the output is the label position, and the module is used for improving the encoder.", "PLCT is defined by the taxonomy tree-based position of each label, which is formulated by its path from the root in the taxonomy tree (Wang et al., 2021).", "The taxonomy of malevolence consists of a root and three levels of labels.", "We use the 1st-level, 2nd-level, and 3rd-level of labels to get the coordinate for the 3rd-level labels.", "For instance, in Figure 3, the taxonomy tree-based positional label embedding for blame is (1 , 2 , 0) .", "We use label position output of PLCT to improve e pos in Eq.", "1, and the encoder is improved as PLCT-based encoder .", "Second, the LLCT module captures label correlation by learning a correlation matrix V n n .", "Each element of the matrix corresponds to the correlation of two labels accordingly as follows: V = 1 2 ( V j,j (cid:48) + V (cid:48) j,j (cid:48) ) , (2) 3546 where V is the learned LCT correlation matrix by representations of labels, V j,j (cid:48) = d ( r j , r j (cid:48) ) ; V (cid:48) is the fixed LCT correlation matrix, V (cid:48) j,j (cid:48) = d ( c j , c j (cid:48) ) ; d is the correlation function and we use the Cosine similarity; r j and r (cid:48) j are the representations of the j -th and j (cid:48) -th label by PLCT-based encoder with taxonomy tree position, i.e., R from Eq.", "1; c j and c (cid:48) j are the n-gram bag-of-words vectors of the utterances belong to the j -th and j (cid:48) -th label, respectively.", "The label correlation matrix V is used for hierarchical label distribution learning later in 3.4.", "Label correlation in context.", "The LCC module captures the label correlation between the labels of different utterance turns.", "We use two kinds of LCC correlation functions, i.e., label correlation functions between utterance turns from different users ( t ) and the same user ( t (cid:48) ), which are defined as follows: t ( y i 1 = l j , y i = l j (cid:48) ) = T ( l j ,l j (cid:48) ) , t (cid:48) ( y i 2 = l j , y i = l j (cid:48) ) = T (cid:48) ( l j ,l j (cid:48) ) , (3) where l j and l j (cid:48) denote the j -th and j (cid:48) -th labels.", "T and T (cid:48) are two n n matrices initialized randomly and trained by LCC-based label distribution learning, which is introduced next.", "Given a sequence of utterances, a linear chain CRF can be used to predict the label of an utterance:", "p ( y | x ) = 1 Z ( x ) exp (cid:32)(cid:88) i ( x i , y i ) (cid:33) , (4) where Z is a normalization function, and ( x, y ) 
= (cid:88) i s ( y i , x ) + (cid:88) i t ( y i 1 , y i ) , (5)", "where t is defined in Eq.", "3. s is the emission function.", "Next, we introduce the components of our multi-faceted CRF layer, including the LCC-based feature function and the LCT-based label distribution learning.", "LCC-based feature function.", "The LCC-based feature function contains two parts: the emission function and the LCC-based transition function.", "First, the emission function s is defined as follows: s ( y i , x ) = softmax ( h i ) , (6) where h i is the representation of each utterance x i .", "LCT-based label distribution learning.", "We get the estimated gold label distribution y for CRF label distribution learning.", "We calculate the estimated distribution y i from the original distribution y i of the i -th utterance as follows: y i = V y i + y i , (8) where denotes how much the original one-hot distribution is redefined and V is the matrix that estimates the LCT in Eq.", "2. Our training objective is the KL-divergence loss except that we replace gold label y with estimated gold label y : L = (cid:88) y q ( y | x ) log q ( y | x ) p ( y | x ) , (9) where q ( y | x ) is the target distribution to learn, we use the probability of y given x for q ( y | x ) ; p ( y | x ) is the predicted distribution.", "The KL loss can be transformed into the following function by expanding and marginalizing p ( y | x ) (Liu and Hockenmaier, 2020): L = (cid:88) i (cid:88) y i { q ( y i | x ) log q ( y i | x ) } (cid:88) y { q ( y | x ) (cid:48) ( y, x ) } + log Z ( x ) , (10) where q is the target distribution, (cid:48) is the feature function, Z is the normalization function.", "We conduct experiments to answer the following research questions: (RQ1) How does BERT-MCRF compare to baselines on the MDMD test set?", "(RQ2)", "What is the impact of the number of labels on the performance of BERT-MCRF?", "(RQ3)", "What is the influence of different LCT and LCC settings?", "(RQ4)", "What do the components of BERT-MCRF contribute to its overall performance?", "We conduct experiments on an extension of the MDRDC dataset released by Zhang et al. (2021b).", "The original MDRDC dataset is for single-label dialogue malevolence detection; it contains 6,000 dialogues (with 10,299 malevolent utterances and 21,081 non-malevolent utterances) annotated by Amazon MTurk workers.", "To conduct the evaluation for multi-label dialogue malevolence detection, we re-annotate the validation and test set of the MDRDC dataset using Amazon MTurk following the annotation protocols in (Zhang et al., 2021b).", "We select workers with a test score of at least 90, 500 approved human intelligence tasks (HITs) and 98% HIT approval rate and the location is limited to countries where English is one of the official languages.", "The workers are also asked to consider dialogue context and implicit words.", "Before the annotation, we warn the crowd workers that the task may contain malevolent content.", "The crowd workers are asked to annotate each utterance of the dialogue with 18 3rd-level labels in the taxonomy of Zhang et al. (2021b).", "We ask three workers to annotate the data.", "Cohen's multi-Kappa value of the three workers is 0.701 for the re-annotated data, which is considered substantial (McHugh, 2012).", "The MDMD dataset statistics are shown in Table 1. 
We have re-annotated 8,462 utterances in total, with 2,098 malevolent and 6,364 non-malevolent utterances.", "There are 7,510 (88.7%), 838 (9.9%), 107 (1.3%) and 7 (0.1%) utterances for the 1-label, 2-label, 3-label and 4-label groups, respectively.", "For all the collected data, 952 (11.3%) of 8,462 utterances have 2-4 labels.", "For the malevolent utterances, 952 (45.4%) of 2,098 utterances have 2-4 labels, which indicates the importance of the MDMD task given the percentage of multi-label utterances.", "We use the training, validation, and test splits provided in Zhang et al. (2021b), which have a ratio of 7:1:2.", "We compare BERT-MCRF against BERT and BERT-CRF.", "The two baselines are competitive, since BERT with a softmax classifier performs well on the previous single-label dialogue malevolence detection (SDMD) task (Zhang et al., 2021b), and BERT-CRF with a modified encoder for separate sentences is the state-of-the-art model for sequence labeling tasks (Cohan et al., 2019).", "We use the 'bert-base-uncased' version of BERT as the pretrained model, with a vocabulary size of 30,522.", "The max sequence length is set to 512.", "For BERT-MCRF, we first fine-tune BERT with a learning rate of 2e-5 for 2 epochs.", "Then, we train the multi-faceted CRF layer and fine-tune BERT together for 10 epochs, with a multi-faceted CRF layer learning rate of 7e-4 and a BERT-encoder learning rate of 5e-7.", "The batch size is 8 for training, validation, and testing.", "The dropout ratio is 0.1.", "More runtime and parameter details are provided in Appendix B. All the neural models are trained on GeForce GTX TitanX GPUs.", "We use precision, recall, F1 score, and Jaccard score as our evaluation metrics (Manning et al., 2008).", "We report the macro scores, since the data is imbalanced in terms of labels (Zhang et al., 2021b).", "To determine how MCRF compares to baseline models on the MDMD task, we report the results in terms of precision, recall, F1, and Jaccard score in Table 2. In terms of overall performance, adding LCT and LCC improves the performance of dialogue malevolence detection.", "In general, the performance of BERT-MCRF is better than that of BERT and BERT-CRF.", "The precision, recall, F1, and Jaccard score of BERT-MCRF outperform those of the second-best model (i.e., BERT-CRF) by 16.1%, 11.9%, 12.0%, and 6.1%, respectively.", "The results in terms of precision and recall indicate that incorporating LCT and LCC benefits both precision and recall, with more benefit to precision than to recall.", "We divide the samples in the MDMD test set into different groups according to the number of labels.", "We report the Jaccard scores of the different label groups in Table 3.", "Table 3 (Jaccard scores of different label groups, 1-label / 2-label / 3-label / 4-label): BERT: 40.16 / 11.84 / 11.48 / 8.00; BERT-CRF: 44.02 / 13.06 / 11.89 / 11.33; BERT-MCRF: 46.39 / 15.23 / 12.88 / 10.00.", "First, the results suggest that BERT-MCRF has better performance across the different label groups.", "BERT-MCRF's Jaccard scores for the 1-label, 2-label, and 3-label groups are 5.4%, 16.6%, and 8.3% higher than those of the second-best performing approach.", "An exception is that, for the 4-label group, the result of BERT-MCRF is lower than that of BERT-CRF.", "The reason is that the number of 4-label utterances in the test set is small, and the 4-label performance changes dramatically when we evaluate at different epochs.", "Second, the results show that the MDMD task becomes more challenging as the number of labels increases.", "The Jaccard scores of all the models in Table 3 decrease as the number of labels increases.", "First, we study the influence of the LCT hyperparameter $\lambda$ in Eq. (8), as shown in the upper part of Table 4. As $\lambda$ increases, the performance first increases and then decreases.", "The reason is that, with an overly large $\lambda$, the original one-hot distribution is redefined so much that the learning target deviates from the real target.", "We visualize the LCT confusion matrix $V$ (Eq. 8) in Figure 4(a).", "Yellow or blue suggests that the correlation is low or high, respectively.", "The variation of the correlation values suggests that our model can capture the label correlation in taxonomy, which contributes to the final results.", "Second, we study the influence of the different LCC transition function matrices, i.e., $T$ for LCC between different users and $T'$ for LCC between the same user, as shown in the bottom part of Table 4. Among the three LCC settings, $T$ has better recall, thus improving the final performance compared with $T'$; $T'$ has better precision than the other two settings, but its overall performance is the lowest; BERT-MCRF with both $T$ and $T'$ combines their advantages to achieve the best performance.", "We visualize the LCC confusion matrices $T$ in Figure 4(b) and $T'$ in Figure 4(c); yellow and blue suggest a negative and a positive correlation, respectively.", "First, the LCC captured by the transition matrices can be both positive and negative, e.g., for $T'$, non-malevolent is likely to transition to non-malevolent and unlikely to transition to immoral & illegal; second, the LCC captured by $T$ and $T'$ is different.", "We perform an ablation study on BERT-MCRF by removing LCT or LCC.", "The results are reported in Table 5.
The results suggest that both LCC and LCT are important for BERT-MCRF.", "First, removing LCC decreases the performance of BERT-MCRF by 2.9%, 1.3%, and 0.1% for recall, F1, and Jaccard, respectively, while the precision increases by 1.7%.", "LCC has a positive influence since it considers the LCC from both the same user and different users, while BERT-CRF only contains the label correlation from different users, as explained in Section 5.3.", "Second, removing LLCT decreases recall, F1, and Jaccard by 3.7%, 2.5%, and 1.6%, respectively; LLCT has a positive influence since it predicts estimated gold labels to improve model learning.", "An exception is that the precision increases by 0.7%, which is not significant; the reason might be that BERT-MCRF tends to predict more labels, which results in a much higher recall but decreases precision a bit.", "Third, removing PLCT decreases precision, recall, F1, and Jaccard by 16.4%, 11.5%, 12.1%, and 6.0%, respectively.", "These results suggest that PLCT has a positive influence on the results.", "The fixed correlation between 3rd-level labels under the same taxonomy tree node is captured well by the position embedding.", "Fourth, removing both LLCT and PLCT decreases precision, recall, F1, and Jaccard by 15.8%, 13.2%, 13.4%, and 6.1%, respectively.", "Compared with the results of the individual LLCT and PLCT ablations, both LLCT and PLCT have a positive influence on the BERT-CRF model.", "Previously, some methods have utilized label correlation in training data to improve multi-label classification, i.e., label co-occurrence (Zhang et al., 2018).", "However, for MDMD, there is no label co-occurrence information; our results suggest that LCT is able to improve MDMD performance, and the reason might be that LCT reduces overfitting to the single-label training data.", "We randomly select two examples from the test set to illustrate the performance of BERT, BERT-CRF, and BERT-MCRF (see Table 7 in Appendix A.2).", "First, in the first example, BERT-MCRF predicts the right labels, violence and self-hurt.", "The LCT correlation value between the labels violence and self-hurt is 0.1923, which suggests that LCT may help predict the two labels together.", "Second, in the second example, BERT-MCRF predicts a sequence of labels for different dialogue turns more accurately than BERT and BERT-CRF.", "We found that the LCC value between non-malevolent and non-malevolent is 0.2725, while the LCC value between non-malevolent and immoral & illegal is 0.1183, which implies that LCC helps BERT-MCRF predict the right label non-malevolent for the third utterance considering the label of the first utterance.", "In summary, LCC is able to boost the performance of BERT-MCRF.", "In addition, there are also cases where BERT-MCRF fails.", "An example is a label with implicit expression, i.e., deceit, which leaves room for further improvement by considering implicit meaning.", "We have studied multi-label dialogue malevolence detection and built the MDMD dataset.", "The dataset statistics suggest that the dataset quality is substantial and that it is essential to do multi-label dialogue malevolence detection, as almost 12% of the utterances have more than one label.", "We have proposed BERT-MCRF by considering label correlation in taxonomy (LCT) and label correlation in context (LCC).", "Experimental results suggest that BERT-MCRF outperforms competitive baselines.", "Further analyses have demonstrated the effectiveness of LCT and LCC.", "A limitation of BERT-MCRF is that it is not good at detecting implicitly malevolent utterances, e.g., deceit.", "As future work, we plan to address this type of utterance and investigate how to enhance BERT-MCRF for implicit multi-label dialogue malevolence detection via semi-supervised learning, as large-scale unlabeled datasets are available.", "The data collection process for the re-annotated MDMD dataset follows the regulations of Twitter.", "The data is anonymized so that it cannot be linked to a particular user.", "The crowd workers are fairly compensated with a minimum wage per hour (using the minimum wage of a Western European country).", "The data collection process has been approved by the ethics committee of the authors' university.", "The data will be made available to researchers who agree to the ethical regulations of the ethics committee.", "Characteristics and quality control of the re-annotated dataset are described in Section 5. The claims in the paper match the results, and the model can be generalized to multi-label dialogue safety detection tasks.", "This work can be used for the deployment of dialogue systems, with the hope of improving the ability of dialogue systems to detect malevolent human language.", "Multi-label classification has a positive impact on the application of dialogue systems.", "Detecting and filtering dialogue responses that are not actually malevolent may decrease the diversity of the dialogue.", "For the deployment of non-malevolent dialogue systems, it is better to consider the extent of malevolence according to the malevolence label counts of each utterance or the perception of the different labels.", "This research was partially supported by the Natural Science Foundation of China (62102234, 61972234, 61902219, 62072279), the Natural Science Foundation of Shandong Province (ZR2021QF129), the National Key R&D Program of China with grant No. 2020YFB1406704, the Key Scientific and Technological Innovation Program of Shandong Province (2019JZZY010129), the China Scholarship Council, and the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https://hybrid-intelligence-centre.nl.", "All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors." ]
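A minimal numpy sketch may help make the LCT mechanism of the record above concrete, i.e., Eqs. (2) and (8): the correlation matrix V averages a learned part (cosine similarity between label representations) and a fixed part (cosine similarity between per-label n-gram bag-of-words vectors), and Eq. (8) then redistributes part of each one-hot target to correlated labels. The function names, the toy inputs, and the final renormalization are illustrative assumptions here, not the authors' implementation.

import numpy as np

def cosine(a, b):
    # Correlation function d(., .) from Eq. (2): cosine similarity.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def lct_matrix(label_reprs, label_bows):
    # V = 0.5 * (learned V + fixed V')  (Eq. 2).
    # label_reprs: (n, d) label representations R from the PLCT-based encoder.
    # label_bows:  (n, v) n-gram bag-of-words vectors of the utterances per label.
    n = label_reprs.shape[0]
    V = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            V[j, k] = 0.5 * (cosine(label_reprs[j], label_reprs[k])
                             + cosine(label_bows[j], label_bows[k]))
    return V

def estimate_gold_distribution(y_onehot, V, lam=0.1):
    # bar_y = lam * V @ y + y  (Eq. 8); lam controls how much the one-hot
    # target is redefined. Renormalizing to a distribution is our assumption.
    y_bar = lam * (V @ y_onehot) + y_onehot
    return y_bar / y_bar.sum()

if __name__ == "__main__":
    # Toy usage: 3 labels, an utterance annotated with label 0 only (assumed data).
    rng = np.random.default_rng(0)
    R = rng.normal(size=(3, 8))    # hypothetical label representations
    C = rng.random(size=(3, 16))   # hypothetical n-gram BoW vectors
    V = lct_matrix(R, C)
    y = np.array([1.0, 0.0, 0.0])
    print(estimate_gold_distribution(y, V, lam=0.1))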
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "method", "method", "objective", "result", "objective", "objective", "objective", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Existing approaches to mapping-based cross-lingual word embeddings are based on the assumption that the source and target embedding spaces are structurally similar.", "The structures of embedding spaces largely depend on the co-occurrence statistics of each word, which the choice of context window determines.", "Despite this obvious connection between the context window and mapping-based cross-lingual embeddings, their relationship has been underex-plored in prior work.", "In this work, we provide a thorough evaluation, in various languages, domains, and tasks, of bilingual embeddings trained with different context windows.", "The highlight of our findings is that increasing the size of both the source and target window sizes improves the performance of bilingual lexicon induction, especially the performance on frequent nouns.", "Cross-lingual word embeddings can capture word semantics invariant among multiple languages, and facilitate cross-lingual transfer for low-resource languages ( Ruder et al., 2019).", "Recent research has focused on mapping-based methods, which find a linear transformation from the source to target embedding spaces (Mikolov et al., 2013b; Artetxe et al., 2016; Lample et al., 2018).", "Learning a linear transformation is based on a strong assumption that the two embedding spaces are structurally similar or isometric.", "The structure of word embeddings heavily depends on the co-occurrence information of words (Turney and Pantel, 2010; Baroni et al., 2014), i.e. , word embeddings are computed by counting other words that appear in a specific context window of each word.", "The choice of context window changes the co-occurrence statistics of words and thus is crucial to determine the structure of an embedding space.", "For example, it has been known that an embedding space trained with a smaller linear window captures functional similarities, while a larger window captures topical similarities (Levy and Goldberg, 2014a).", "Despite this important relationship between the choice of context window and the structure of embedding space, how the choice of context window affects the structural similarity of two embedding spaces has not been fully explored yet.", "In this paper, we attempt to deepen the understanding of cross-lingual word embeddings from the perspective of the choice of the context window through carefully designed experiments.", "We experiment with a variety of settings, with different domains and languages.", "We train monolingual word embeddings varying the context window sizes, align them with a mapping-based method, and then evaluate them with both intrinsic and downstream cross-lingual transfer tasks.", "Our research questions and the summary of the findings are as follows: RQ1: What kind of context windows produces a better alignment of two embedding spaces?", "Our result shows that increasing the window sizes of both the source and target embeddings improves the accuracy of bilingual dictionary induction consistently regardless of the domains of the source and target corpora.", "Our fine-grained analysis reveals that frequent nouns receive the most benefit from larger context sizes.", "RQ2.", "In downstream cross-lingual transfer, do the context windows that perform well on the source language also perform well on the target languages?", "No.", "We find that even when some context window performs well on the source language task, that is often not the best choice for the target language.", "The general tendency is that broader context windows 
produce better performance for the target languages.", "Word embeddings are computed from the co-occurrence information of words, i.e. , context words that appear around a given word.", "The embedding algorithm used in this work is the skip-gram with negative sampling (Mikolov et al., 2013c).", "In the skip-gram model, each word w in the vocabulary W is associated with a word vector v w and a context vector c w .", "1 The objective is to maximize the dot-product v w t (cid:1) c w c for the observed word-context pairs ( w t ; w c ) , and to minimize the dot-product for negative examples.", "The most common type of context is a linear window.", "When the window size is set to k , the context words of a target word w t in a sentence [ w 1 ; w 2 ; :::; w t ; :::w L ] are [ w t (cid:0) k ; :::; w t (cid:0) 1 ; w t +1 ; :::; w t + k ] .", "The choice of context is crucial to the resulting embeddings as it will change the co-occurrence statistics associated with each target word.", "Table 1 demonstrates the effect of the context window size on the nearest neighbor structure of embedding space; with a small window size, the resulting embeddings capture functional similarity, while with a larger window size, the embeddings capture topical similarities.", "Among the other types of context windows that have been explored by researchers are linear windows enriched with positional information ( Levy and Goldberg, 2014b; Ling et al., 2015a; Li et al., 2017), syntactically informed context windows based on dependency trees ( Levy and Goldberg, 2014a; Li et al., 2017), and one that dynamically weights the surrounding words with the attention mechanism (Ling et al., 2015b).", "In this paper, we mainly discuss the most common linear window and investigate how the choice of the window size affects the isomorphism of two embedding spaces and the performance of cross-lingual transfer.", "Cross-lingual word embeddings aim to learn a shared semantic space in multiple languages.", "One promising solution is to jointly train the source and target embedding, so-called joint methods , by exploiting cross-lingual supervision signals 1 Conceptually, the word and context vocabularies are regarded as separated, but for simplicity, we assume that they share the vocabulary.", "in the form of word dictionaries (Duong et al., 2016), parallel corpora (Gouws et al., 2015; Luong et al., 2015), document-aligned corpora ( Vulic and Moens, 2016).", "Another line of research is off-line mapping-based approaches (Ruder et al., 2019), where monolingual embeddings are independently trained in multiple languages, and a post-hoc alignment matrix is learned to align the embedding spaces with a seed word dictionary (Mikolov et al., 2013b; Xing et al., 2015; Artetxe et al., 2016), with only a little supervision such as identical strings or numerals ( Artetxe et al., 2017; Smith et al., 2017), or even in a completely unsupervised manner (Lample et al., 2018; Artetxe et al., 2018).", "Mapping-based approaches have recently been popularized by their cheaper computational cost compared to joint approaches, as they can make use of pre-trained monolingual word embeddings.", "The assumption behind the mapping-based methods is the isomorphism of monolingual embedding spaces, i.e. 
, the embedding spaces are structurally similar, or the nearest neighbor graphs from the different languages are approximately isomorphic (Sgaard et al., 2018).", "Considering that the structures of the monolingual embedding spaces are closely related to the choice of the context window, it is natural to expect that the context window has a considerable impact on the performance of mapping-based bilingual word embeddings.", "However, most existing work has not provided empirical results on the effect of the context window on cross-lingual embeddings, as their focus is 997 on how to learn a mapping between the two embedding spaces.", "In order to shed light on the effect of the context window on cross-lingual embeddings, we trained cross-lingual embeddings with different context windows, and carefully analyzed the implications of their varying performance on both intrinsic and extrinsic tasks.", "The experiment is designed to deal with multiple settings to fully understand the effect of the context window.", "Languages.", "As the target language, we choose English ( E n) because of its richness of resources, and as the source languages, we choose French ( F r), German ( D e), Russian ( R u), Japanese ( J a), taking into account the typological variety and availability of evaluation resource.", "Note that the language pairs analyzed in this paper are limited to those including English, and there is a possibility that some results may not generalize to other language pairs.", "Corpus for Training Word Embeddings.", "To train the monolingual embeddings, we use the Wikipedia Comparable Corpora 2 .", "We choose comparable corpora for the main analysis in order to accentuate the effect of context window by setting an ideal situation for training cross-lingual embeddings.", "We also experiment with different domain settings, where we use corpora from the news domain 3 for the source languages, because the isomorphism assumption is shown to be very sensitive to the domains of the source and target corpora (Sgaard et al., 2018).", "We refer to those results when we are interested in whether the same trend with respect to context window can be observed in the different domain settings.", "For the size of the data, to simulate the setting of transferring from a low-resource language to a high-resource language, we use 5M sentences for the target language (English), and 1M sentences for the source languages.", "4 2 https://linguatools.org/tools/ corpora/wikipedia-comparable-corpora/ 3 https://wortschatz.uni-leipzig.de/en/download 4 We also experimented with very low-resource settings, where the source corpus size is set to 100K, but the results showed similar trends to the 1M setting, and thus we only include the result of the 1M settings in this paper.", "Context Window.", "Since we want to measure the effect of the context window size, we vary the window size among 1, 2, 3, 4, 5, 7, 10, 15, and 20.", "Besides the linear window, we also experimented with the unbound dependency context (Li et al., 2017), where we extract context words that are the head, modifiers, and siblings in a dependency tree.", "Our initial motivation was that, while the linear context is directly affected by different word orders, the dependency context can mitigate the effect of language differences, and thus may produce better cross-lingual embeddings.", "However, the performance of the dependency context turned out to be always in the middle between smaller and larger linear windows, and we found nothing notable.", 
"Therefore, the following analysis only focuses on the results of the linear context window.", "Implementation of Word2Vec.", "Note that some common existing implementations of the skip-gram may obfuscate the effect of the window size.", "The original C implementation of word2vec and its python implementation Gensim 5 adopt a dynamic window mechanism where the window size is uniformly sampled between 1 and the specified window size for each target word ( Mikolov et al., 2013a).", "Also, those implementations remove frequent tokens by subsampling before extracting word-context pairs (so-called dirty subsampling) ( Levy et al., 2015), which enlarges the context size in effect.", "Our experiment is based on word2vecf , 6 which takes arbitrary word-context pairs as input.", "We extract word-context pairs from a fixed window size and afterward perform subsampling.", "We train 300-dimensional embeddings.", "For details on the hyperparameters, we refer the readers to Appendix A. 3.2 Aligning Monolingual Embeddings After training monolingual embeddings in the source and target languages, we align them with a mapping-based algorithm.", "To induce a alignment matrix W for the source and target embeddings x; y , we use a simple supervised method of solving the Procrustes problem arg min W mi =1 W x i (cid:0) y i 2 , with a training word dictionary ( x i ; y i ) mi =1 (Mikolov et al., 5 https://radimrehurek.com/gensim/ 6 https://bitbucket.org/yoavgo/ word2vecf/src/default/ 998 Figure 1: BLI performance in the comparable setting. The target window size is fixed and the source window size is varied. 2013b), with the orthogonality constraint on W , length normalization and mean-centering as preprocessing for the source and target embeddings (Artetxe et al., 2016).", "The word dictionaries are automatically created by using Google Translate.", "7 We translate all words in our English vocabulary into the source languages and filter out words that do not exist in the source vocabularies.", "We also perform this process in the opposite direction (translated from the source languages into English), and take the union of the two corresponding dictionaries.", "We then randomly select 5K tuples for training and 2K for testing.", "Although using word dictionaries automatically derived from a system is currently a common practice in this field, it should be acknowledged that this may sometimes pose problems: the generated dictionaries are noisy, and the definition of word translation is unclear ( e.g., how do we handle polysemy?).", "It can hinder valid comparisons between systems or detailed analysis of them, and should be addressed in future research.", "For each setting, we train three pairs of aligned embeddings with different random seeds in the monolingual embedding training, as training word embeddings is known to be unstable and different runs result in different nearest neighbors (Wendlandt et al., 2018).", "The following results are presented with their averages and standard deviations.", "We first evaluate the learned bilingual embeddings with bilingual lexicon induction (BLI).", "The task is to retrieve the target translations with source words by searching for nearest neighbors with co-sine similarity in the bilingual embedding space.", "The evaluation metric used in prior work is usually top-k precision, but here we use a more informative measure, mean reciprocal rank (MRR) as recommended by Glava s et al. 
(2019).", "Fixed Target Context Window Settings.", "First, we consider the settings where the target context size is fixed and the source context size is configurable.", "This setting assumes common situations where the embedding of the target language is available in the form of pre-trained embeddings.", "Figure 1 shows the results for the four languages.", "Firstly, we observe that too small windows (1 to 3) for the source embeddings do not yield good performance, probably because the model fails to train accurate word embeddings with the insufficient training word-context pairs that small windows capture.", "At first, this result may seem to contradict the result from Søgaard et al. (2018).", "They trained English and Spanish embeddings with fasttext (Bojanowski et al., 2017) and a window size of 2, and then aligned them with an unsupervised mapping algorithm (Lample et al., 2018).", "When they changed the window size of the Spanish embedding to 10, they only observed a very slight drop in top-1 precision (from 81.89 to 81.28).", "We suspect that the discrepancy with our result is due to the different settings.", "(Figure 2: BLI performance for each PoS in the comparable setting.)", "First of all, fasttext adopts a dynamic window mechanism, which may obfuscate the difference in the context window.", "Also, they trained embeddings with full Wikipedia articles, which is an order of magnitude larger than ours; the fasttext algorithm, which takes into account the character n-gram information of words, can exploit a nontrivial amount of subword overlap between the quite similar languages.", "Overall, we observe that the best context window size for the source embeddings increases as the target context size increases, and increasing the context sizes of both the source and target embeddings seems beneficial to the BLI performance.", "Configurable Source/Target Context Window Settings.", "Hereafter, we present the results where both the source and target sizes are configurable and set to the same value.", "Figure 3 summarizes the results of the same-domain setting.", "As we expected from the observation of the settings where the target window size is fixed, the performance consistently improves as the source and target context sizes increase.", "(Figure 4: BLI performance in the different-domain setting.)", "Given that larger context windows tend to capture topical similarities of words, we hypothesize that the more topical the embeddings are, the easier they are to align.", "Topics are invariant across different languages to some extent, as long as the corpora are comparable.", "It is natural to think that topic-oriented embeddings capture language-agnostic semantics of words and thus are easier to align across different languages.", "This hypothesis can be further supported by looking at the metrics for each part-of-speech (PoS).", "Intuitively, nouns tend to be more representative of topics than other PoS, and thus are expected to show a high correlation with the window size.", "Figure 2 shows the scores for each PoS.", "In all languages, nouns and adjectives show stronger (almost perfect) correlation than verbs and adverbs.", "(Footnote 8: We assigned to each word its most frequent PoS tag in the Brown Corpus (Kučera and Francis, 1967), following Wada et al. (2019).)", "Different-domain Settings.", "The results so far are obtained in the settings where the source and target corpora are comparable.", "When the corpora are comparable, it is natural that topical embeddings are easier to align, as comparable corpora share their topics.", "In order to see if the observations from the comparable settings hold true for different-domain settings, we also present the results from the different-domain (news) source corpora in Figure 4.", "Firstly, compared to the same-domain settings (Figure 3), the scores are lower by around 0.1 to 0.2 points across the languages and context windows, even with the same amount of training data.", "This result confirms previous findings showing that domain consistency is important to the isomorphism assumption (Søgaard et al., 2018).", "As to the relation between the BLI performance and the context window, we observe a similar trend to the comparable settings: increasing the context window size basically improves the performance.", "(Figure 7: BLI performance on the top 500 frequent and rare words in the different-domain setting.)", "Figure 5 summarizes the results for each PoS.", "The performance on nouns and adjectives still accounts for much of the correlation with the window size.", "This suggests that even when the source and target domains are different, some domain-invariant topics are captured by larger-context embeddings for nouns and adjectives.", "Frequency Analysis.", "To further gain insight into what kind of words receive the benefit of larger context windows, we analyze the effect of word frequency.", "We extract the top and bottom 500 frequent words from the test vocabularies and evaluate the performance on them respectively.", "The results of the comparable setting in each language are shown in Figure 6.", "(Footnote 9: The frequencies were calculated from our subset of the English Wikipedia corpus.)", "The scores for the frequent words (top500) are notably higher than those for the rare words (bottom500).", "This confirms previous empirical results that existing mapping-based methods perform significantly worse for rare words (Braune et al., 2018; Czarnowska et al., 2019).", "With respect to the relation with the context size, both frequent and rare words benefit from larger window sizes, although the gain for the rare words is less obvious in some languages (Ja and Ru).", "In the different-domain settings, as shown in Figure 7, the rare words, in turn, suffer from larger window sizes, especially for Fr and Ru, but the performance on frequent words still improves as the context window increases.", "We conjecture that when training a skip-gram model, frequent words observe many context words, which mitigates the effect of irrelevant words (noise) caused by a larger window size and results in high-quality topical embeddings; however, rare words have to rely on a limited number of context words, and larger windows just amplify the noise and domain differences, resulting in an inaccurate alignment of them.", "Although BLI is a common evaluation method for bilingual embeddings, good performance on BLI does not necessarily generalize to downstream tasks (Glavaš et al., 2019).", "To further gain insight into the effect of the context size on bilingual embeddings, we evaluate the embeddings with three downstream tasks:", "1) sentiment analysis;", "2) document classification;", "3) dependency parsing.", "Here, we briefly describe the dataset and model used for each task.", "Sentiment Analysis (SA).", "We use the Webis-CLS-10 corpus (Prettenhofer and Stein, 2010), which is comprised of Amazon product reviews in four languages: English, German, French, and Japanese (no Russian data is available).", "We cast sentiment analysis as a binary classification task, where we label reviews with scores of 1 or 2 as negative and reviews with 4 or 5 as positive.", "For the model, we employ a simple CNN encoder followed by a multi-layer perceptron classifier.", "Document Classification (DC).", "MLDoc (https://github.com/facebookresearch/MLDoc) (Schwenk and Li, 2018) is compiled from the Reuters corpus for eight languages, including all the languages used in this paper.", "The task is a four-way classification of news article topics: Corporate/Industrial, Economics, Government/Social, and Markets.", "We use the same model architecture as for sentiment analysis.", "Dependency Parsing (DP).", "We train deep biaffine parsers (Dozat and Manning, 2017) on the UD English EWT dataset (https://universaldependencies.org/treebanks/en_ewt/index.html) (Silveira et al., 2014).", "We use the PUD treebanks (https://universaldependencies.org/conll17/) as test data.", "Further details are shown in Appendix B. Evaluation Setup.", "We evaluate, in a cross-lingual transfer setup, how well the bilingual embeddings trained with different context windows transfer lexical knowledge across languages.", "Here, we focus on the settings where both the source and target context sizes are varied.", "For each task, we train models with our pre-trained English embeddings.", "We do not update the embedding parameters during training.", "Then, we evaluate the model with the test data in the other languages available in the dataset.", "At test time, we feed the model the word embeddings of the test language aligned to the training English embeddings.", "We train nine models in total for each setting, with different random seeds and English embeddings, and we present their average scores and standard deviations.", "Result and Discussion.", "The results from all three tasks are presented in Figure 8.", "For sentiment analysis and document classification, we observe a similar trend where the best window size is around 3 to 5 for the source English task, but for the test languages, larger context windows achieve better results.", "The only deviation is Japanese document classification, where the score does not show a significant correlation.", "We attribute this to low-quality alignments due to the large typological difference between English and Japanese.", "For dependency parsing, embeddings with smaller context windows perform better on the source English task, which is consistent with the observation that smaller context windows tend to produce syntax-oriented embeddings (Levy and Goldberg, 2014a).", "However, the performance of the small-window embeddings does not transfer to the test languages.", "The best context window for the English development data (size 1) performs the worst for all the test languages, and the transferred accuracy seems to benefit from larger context sizes, although it does not always correlate with the window size.", "This observation highlights the difficulty of transferring syntactic knowledge across languages.", "Word embeddings trained with small windows capture more grammatical aspects of words in each language, which, as different languages have different grammars, makes the source and target embedding spaces so different that it is difficult to align them.", "In summary, a general trend we observe here is that good context windows for the source language task do not necessarily produce well-transferable bilingual embeddings.", "In practice, it seems better to choose a context window that aligns the source and target well, rather than using the window size that just performs the best for the source language.", "Despite their obvious connection, the relation between the choice of context window and the structural similarity of two embedding spaces has not been fully investigated in prior work.", "In this study, we have offered the first thorough empirical results on the relation between the context window size and bilingual embeddings, and shed new light on the properties of bilingual embeddings.", "In summary, we have shown that: (i) larger context windows for both the source and target facilitate the alignment of words, especially nouns;", "(ii) for cross-lingual transfer, the best context window for the source task is often not the best for the test languages.", "Especially for dependency parsing, the smallest context size produces the best result for the source task, but performs the worst for the test languages.", "We hope that our study will provide insights into ways to improve cross-lingual embeddings through not only mapping methods but also the properties of monolingual embedding spaces.", "We thank the anonymous reviewers for their valuable comments and suggestions.", "This work was supported by JST CREST Grant Number JPMJCR1513, Japan." ]
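Because the preceding record's comparison hinges on the difference between a fixed linear window and the word2vec-style dynamic window, here is a small Python sketch of both ways of extracting word-context pairs; the dynamic variant samples an effective size uniformly from 1 to k per target word, as the record describes for word2vec/Gensim. Function names and the toy sentence are our own.

import random

def fixed_window_pairs(tokens, k):
    # Fixed linear window: contexts of w_t are w_{t-k}..w_{t-1}, w_{t+1}..w_{t+k}.
    pairs = []
    for t, target in enumerate(tokens):
        for c in range(max(0, t - k), min(len(tokens), t + k + 1)):
            if c != t:
                pairs.append((target, tokens[c]))
    return pairs

def dynamic_window_pairs(tokens, k, seed=0):
    # word2vec/Gensim-style dynamic window: sample k' ~ Uniform(1, k) per target,
    # which shrinks the effective context size on average.
    rng = random.Random(seed)
    pairs = []
    for t, target in enumerate(tokens):
        k_eff = rng.randint(1, k)
        for c in range(max(0, t - k_eff), min(len(tokens), t + k_eff + 1)):
            if c != t:
                pairs.append((target, tokens[c]))
    return pairs

sentence = "the choice of context window changes cooccurrence statistics".split()
print(len(fixed_window_pairs(sentence, 5)), len(dynamic_window_pairs(sentence, 5)))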
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "other", "other" ]
[ "Simultaneous machine translation (SiMT) starts translating while receiving the streaming source inputs, and hence the source sentence is always incomplete during translating.", "Different from the full-sentence MT using the conventional seq-to-seq architecture, SiMT often applies prefix-to-prefix architecture, which forces each target word to only align with a partial source prefix to adapt to the incomplete source in streaming inputs.", "However, the source words in the front positions are always illusoryly considered more important since they appear in more prefixes, resulting in position bias , which makes the model pay more attention on the front source positions in testing.", "In this paper, we first analyze the phenomenon of position bias in SiMT, and develop a Length-Aware Framework to reduce the position bias by bridging the structural gap between SiMT and full-sentence MT. Specifically, given the streaming inputs, we first predict the full-sentence length and then fill the future source position with positional encoding, thereby turning the streaming inputs into a pseudo full-sentence.", "The proposed framework can be integrated into most existing SiMT methods to further improve performance.", "Experiments on two representative SiMT methods, including the state-of-the-art adaptive policy, show that our method successfully reduces the position bias and thereby achieves better SiMT performance.", "Simultaneous machine translation (SiMT) (Cho and Esipova, 2016; Gu et al., 2017; Ma et al., 2019; Arivazhagan et al., 2019) starts translating while receiving the streaming source inputs, which is crucial to many live scenarios, such as simultaneous interpretation, live broadcast and synchronized subtitles.", "Compared with full-sentence machine translation (MT) waiting for the complete source senCorresponding author: Yang Feng.", "tence, SiMT is more challenging since the source sentence is always incomplete during translating.", "To process the incomplete source, SiMT has a different architecture from full-sentence MT, as shown in Figure 1.", "Full-sentence MT applies the seq-to-seq architecture (Sutskever et al., 2014), where each target word can be translated based on a complete source sentence.", "SiMT always applies prefix-to-prefix architecture (Ma et al., 2019) to force each target word to only align with a source prefix rather than the complete source sentence, where the source prefix consists of partial source words in the front position and is monotonically non-decreasing at each step.", "Although the prefix-to-prefix architecture effectively adapts to the streaming inputs by removing the subsequent source words, it intensifies the structural gap between SiMT and full-sentence MT, resulting in the following issues.", "First, since each target word is forced to align with a monotonically non-decreasing source prefix, the source words in different positions become no longer fair.", "Specifically, the source words in the front position participate in more target words' translation due to earlier appearance, and hence are always illusoryly 6775 considered more important, resulting in position bias (Ko et al., 2020; Yan et al., 2021).", "Due to the position bias, SiMT model prefers to pay more attention to the source words in front position during testing, which not only robs the attention of the words that are supposed to be aligned (increase mis-translation error) (Zhang and Feng, 2021b), but also results in great overlap on attention distribution (aggravate the duplication 
translation error) (Elbayad et al., 2020).", "We will analyze the detailed causes and disadvantages of position bias in Sec.3.", "Second, prefix-to-prefix architecture directly removes the subsequent source words, resulting in the lost of some potential full-sentence information (Zhang et al., 2021).", "Most importantly, the prefix-to-prefix training makes the model insensitive to the full-sentence length, which can provide a global planning for translation (Feng et al., 2020, 2021).", "Under these grounds, we propose a Length-Aware Framework ( LAF ) for SiMT to turn the incomplete source into a pseudo full-sentence, thereby reducing the position bias.", "We aim to extend the incomplete source sentence in SiMT to the full-sentence length and meanwhile guarantee that future source words would not be leaked to fulfill the streaming inputs during testing.", "To this end, LAF first predicts the full-sentence length based on the current incomplete source sentence.", "Then, LAF fills the future source positions (between the current source length and predicted full-sentence length) with the positional encoding (Vaswani et al., 2017) to construct the pseudo full-sentence.", "Accordingly, each target word is translated based on the pseudo full-sentence and no longer forced to align with the source prefix.", "LAF can be integrated into most of the existing SiMT methods to further improve performance by bridging the structural gap between SiMT and full-sentence MT. We apply LAF on two representative and strong SiMT methods, and experiments on IWSLT15 En Vi and WMT15 De En tasks show that our method achieves better performance in both cases.", "target length I .", "Transformer (Vaswani et al., 2017) is the currently most widely used model for full-sentence MT, which consists of encoder and decoder.", "The encoder maps x into the source hidden states h = { h 1 , , h J } , and the decoder generates the i th target word y i based on source hidden states h and previous target words y <i .", "Overall, the decoding probability of full-sentence MT is: p full ( y | x ) = I (cid:89) i =1 p ( y i | x , y <i ) (1) Attention Transformer calculates the attention weights with dot-product attention, and the encoder-decoder cross-attention ij is calculated based on target hidden state s i and source hidden state h j : ij = softmax (cid:32) s i WQ (cid:0) h j WK (cid:1) d k (cid:33) (2) where WQ and WK are input matrices, and d k is the input dimension.", "Positional encoding Transformer (Vaswani et al., 2017) adds positional encoding (PE) to the input embedding to capture the position information, which is fixed and only related to the absolute position.", "The d th dimension of the positional encoding in position pos is calculated as: PE ( pos, 2 d ) = sin (cid:16) pos/ 10000 2 d/d model (cid:17) (3) PE ( pos, 2 d +1) = cos (cid:16) pos/ 10000 2 d/d model (cid:17) (4) where d model is the dimension of input embedding.", "Different from full-sentence MT waiting for the complete sentence, SiMT translates concurrently with the streaming inputs and hence prefix-to-prefix architecture (Ma et al., 2019) is proposed to adapt to the incomplete source, where the target word y i is generated based on a partial source prefix.", "Prefix-to-prefix architecture Let g ( i ) be a monotonically non-decreasing function of i that denotes the length of received source sentence (i.e., source prefix) when translating the target word y i .", "Given g ( i ) , the probability of generating the target word y i is p (cid:0) y i | x g ( i ) , 
y <i (cid:1) , where x g ( i ) is first g ( i ) source words and y <i is previous target words.", "Overall, the decoding probability of SiMT is: p sim ( y | x ) = I (cid:89) i =1 p (cid:0) y i | x g ( i ) , y <i (cid:1) (5) 6776 To determine g ( i ) during translating process, SiMT requires a policy to determine translating' a target word or waiting' for the next source word, falling into fixed policy and adaptive policy.", "Fixed policy performs waiting' or translating' according to pre-defined rules.", "Wait-k policy (Ma et al., 2019) is the most widely used fixed policy, which first waits for k source words and then translates one target word and waits for one source word alternately.", "Besides, Ma et al. (2019) also proposed a test-time wait-k policy , using a full-sentence model to perform wait-k policy in testing.", "Adaptive policy can dynamically adjust waiting' or translating' according to the current state.", "Monotonic multi-head attention ( MMA ) (Ma et al., 2020) is the current state-of-the-art adaptive policy, which predicts a Bernoulli action READ/WRITE to decide to wait for the next source word (READ) or translate a target word (WRITE).", "To train the Bernoulli actions, MMA predicts the writing probability of y i when receiving x j , denoted as ij , and uses it to approximate the READ/WRITE actions during training (Arivazhagan et al., 2019).", "In this section, we analyze the phenomenon and cause of position bias in SiMT.", "In full-sentence MT, the source sentence is complete, so that each source word participates in the translation of all target words.", "While in prefix-to-prefix architecture for SiMT, each target word is forced to align with an increasing source prefix, which directly causes that the source words in the front position participate in the translation of more target words during training and hence are always illusoryly considered more important, resulting in position bias .", "A theoretical analysis of position bias refers to Appendix A. During testing, position bias is reflected in the preference of paying more attention to the source words in front positions.", "To explore the specific impact of position bias, we select the samples with the same source length (77 sentences) in WMT15 De En test set as a bucket, and then calculated the average attention weight obtained by each source position in the bucket.", "Since the times of each source position being paid attention to may be different in SiMT, the average attention weight is averaged on the times of being attended, so the evaluation is fair for each source position.", "Specifically, give the attention weight ij between target word y i and source word x j , the average attention weight 2 4 6 8 10 12 14 16 18 20 Source Position 0.00 0.05 0.10 0.15 A v e r a g e A tt e n ti on Full-sentence Wait-k MMA", "(b) Wait-k v.s. Test-time Wait-k Figure 2: Average attention A obtained by different source positions on the De En task, showing wait-5, test-time wait-k, MMA and full-sentence MT. A j at source position j is calculated as: A j = (cid:80) Ii =1 ij (cid:80) Ii =1 1 j g ( i ) (6) where (cid:80) Ii =1 ij is the sum of attention on the j th source position, and (cid:80) Ii =1 1 j g ( i ) counts the times of the j th source position being paid attention to.", "What is position bias?", "Figure", "3(a) shows the average attention obtained by different source positions 1 in two representative SiMT methods, compared with full-sentence MT. 
"SiMT differs significantly from full-sentence MT in the average attention over source positions.", "In full-sentence MT, the average attention on each position is similar, and the back positions get slightly more attention (Voita et al., 2021).", "However, in both the fixed and the adaptive policy in SiMT, the front source positions obviously get more attention due to position bias, especially the first source word.", "(Footnote 1: Note that we do not add bos in front of the source sentence, so the word in the first source position is $x_1$.)", "Compared with wait-k, MMA alleviates the position bias by dynamically adjusting 'waiting' or 'translating', but the first source position still gets abnormally more attention.", "Note that the average attention on the back positions in SiMT is higher because they are attended to fewer times (the denominator in Eq. (6) is smaller).", "Specific attention characteristics. Furthermore, we compare the characteristics of the attention distributions in full-sentence MT and SiMT, shown in Figure 3.", "In SiMT, more attention weight is concentrated on the front source positions (Arivazhagan et al., 2019; Zhang and Feng, 2022a), which is not conducive to translation.", "First, the biased attention on front positions robs attention from the aligned source word, resulting in mis-translation errors.", "Second, heavy overlap in the attention distributions aggravates duplication translation errors: a human evaluation by Elbayad et al. (2020) shows that the duplication error rate in SiMT is 500% of that of full-sentence MT. Besides, in some cases, even if the aligned source words have not been received, the prefix-to-prefix architecture still forces the target word to align with the irrelevant source prefix, resulting in confusion in the attention (Chen et al., 2021).", "Does position bias affect SiMT performance?", "To analyze whether position bias in SiMT results in poor translation quality, we use the ratio of the average attention on the first source position to that on all positions ($A_1 / \sum_{j} A_j$) to reflect the degree of position bias, and accordingly divide the WMT15 De-En test set evenly into 5 parts.", "We report the translation quality of these 5 parts in Figure 4, where the position bias grows heavier from 'Bottom' to 'Top'.", "[Figure 4: BLEU on the five parts, ordered by degree of position bias (weak to heavy); panel (a) is divided based on the position bias degree in wait-k, panel (b) in MMA. In panel (a), full-sentence MT scores 32.32 / 32.65 / 32.99 / 31.48 / 29.84 and wait-k scores 27.94 / 28.71 / 28.81 / 26.96 / 19.99 from 'Bottom' to 'Top', gaps of 4.38 / 3.94 / 4.18 / 4.52 / 9.85 BLEU.]", "The translation quality of both wait-k and MMA decreases significantly as the position bias becomes heavier, while full-sentence MT maintains high-quality translation on these parts.", "More importantly, as the position bias intensifies, the performance gap between SiMT and full-sentence MT is amplified: wait-k and MMA are 9.85 BLEU and 7.03 BLEU lower than full-sentence MT, respectively, on the 'Top' set.", "Therefore, position bias is an important cause of the performance gap between SiMT and full-sentence MT.",
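The bias-degree measure and the even five-way split used above can be sketched as follows. This is only an illustration under our own assumptions: the helper names are ours, and we assume the split is obtained by sorting sentences by their bias degree.

```python
import numpy as np

def position_bias_degree(A: np.ndarray) -> float:
    """Ratio of average attention on the first source position to all positions."""
    return A[0] / A.sum()

def split_by_bias(degrees: np.ndarray, n_parts: int = 5):
    """Sort sentences by bias degree and split evenly into n_parts
    ('Bottom' = weakest bias, 'Top' = heaviest)."""
    order = np.argsort(degrees)
    return np.array_split(order, n_parts)

degrees = np.random.rand(2169)               # one degree per test sentence
bottom, bottom_mid, mid, top_mid, top = split_by_bias(degrees)
```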
"What is the position bias caused by?", "To verify that the preference for front source positions is caused by the structural gap between SiMT and full-sentence MT rather than by streaming inputs during testing, we compare the average attention of wait-k and 'test-time wait-k' in Figure 3(b), where 'test-time wait-k' is trained with the full-sentence structure and tested with the wait-k policy.", "After replacing the prefix-to-prefix architecture with the seq-to-seq architecture during training, the position bias in 'test-time wait-k' is significantly weakened, which shows that prefix-to-prefix training is the main cause of position bias.", "However, directly training with the full-sentence structure leaks many future source words, and the obvious training-testing mismatch results in the inferior translation quality of 'test-time wait-k' (Ma et al., 2019).", "In practice, the prefix-to-prefix architecture forces the target word to assign attention to the prefix even if its corresponding source word has not been read in, which causes the attention to become chaotic and to drift toward the front positions.", "This also explains why the position bias is more serious in the fixed policy: since the read/write decisions cannot be adjusted, in more cases the prefix does not contain the corresponding source word but is nevertheless forced to receive attention.", "Besides, the prefix-to-prefix architecture increases the frequency of front source positions during training, and previous works (Zhou and Liu, 2006; Luong et al., 2015; Gu et al., 2020) show that NMT models tend to over-fit high-frequency items, which also contributes to the position bias.", "Based on these preliminary analyses of position bias, we hope that in SiMT target words can align with the same reasonable source positions as they do in full-sentence MT, including future positions, even though the words at those positions have not yet been received.", "Along this line, we develop a Length-Aware Framework (LAF) to turn the streaming inputs into a pseudo full-sentence and thereby allow the target words to align with the full-sentence positions rather than a prefix, as shown in Figure 5.", "The details are introduced below.", "Length prediction. To turn the incomplete source into a pseudo full-sentence, the full-sentence length is an essential factor.", "Therefore, at step i, LAF predicts the full-sentence length $L_i$ based on the received source sentence $x_{\leq g(i)}$, through a classification task.", "Note that the predicted length is dynamically updated as more source words are received.", "Formally, the probability of the full-sentence length $L_i$ is predicted through a multi-layer perceptron (MLP) over the received source words: $p_l(L_i \mid x_{\leq g(i)}) = \mathrm{softmax}\left(W \tanh\left(V \bar{h}_{g(i)}\right)\right)$ (7) where $\bar{h}_{g(i)} = \frac{1}{g(i)} \sum_{j=1}^{g(i)} h_j$ is the mean of the hidden states of the currently received source words.", "$V \in \mathbb{R}^{d_{model} \times d_{model}}$ and $W \in \mathbb{R}^{N_{max} \times d_{model}}$ are the parameters of the MLP, where $N_{max}$ is the maximum length of a source sentence in the corpus.", "Note that softmax(·) is normalized over all possible length values.", "In testing, the value with the highest probability is selected as the full-sentence length.", "If the source sentence is already complete (i.e., eos has been received) or the predicted length $L_i$ is not larger than the received source length ($L_i \leq g(i)$), we use the current length g(i) as the full-sentence length.",
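A minimal PyTorch sketch of the length-prediction head in Eq. (7); module and variable names are ours, and the mapping from class index to length value is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Eq. (7): p_l(L_i | x_{<=g(i)}) = softmax(W tanh(V h_bar)),
    where h_bar is the mean of the received source hidden states."""
    def __init__(self, d_model: int, n_max: int):
        super().__init__()
        self.V = nn.Linear(d_model, d_model, bias=False)
        self.W = nn.Linear(d_model, n_max, bias=False)

    def forward(self, h_prefix: torch.Tensor) -> torch.Tensor:
        # h_prefix: (g_i, d_model) hidden states of the received source words
        h_bar = h_prefix.mean(dim=0)            # mean over received words
        return torch.softmax(self.W(torch.tanh(self.V(h_bar))), dim=-1)

predictor = LengthPredictor(d_model=512, n_max=256)
probs = predictor(torch.randn(7, 512))   # distribution over possible lengths
L_i = int(probs.argmax()) + 1            # assume class k stands for length k+1
```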
"Pseudo full-sentence. Given the predicted full-sentence length, we fill the future source positions $(g(i), L_i]$ with positional encoding to construct the pseudo full-sentence.", "Formally, given the hidden states of the received source words and the predicted full-sentence length $L_i$, the pseudo full-sentence hidden states $\tilde{h}(i)$ at step i are: $\tilde{h}(i) = \left(h_1, \ldots, h_{g(i)}, PE_{g(i)+1}, \ldots, PE_{L_i}\right)$ (8)", "Note that the pseudo full-sentence is constructed at the hidden-states level, so there is no need to recompute the source hidden states.", "Then, the target word $y_i$ is generated based on the pseudo full-sentence hidden states $\tilde{h}(i)$, and hence the cross-attention $\alpha_{ij}$ in Eq. (2) can be assigned to future positions, rewritten as: $\alpha_{ij} = \mathrm{softmax}\left(\frac{(s_i W^Q)(\tilde{h}(i)_j W^K)^{\top}}{\sqrt{d_k}}\right)$ (9)", "Overall, the decoding probability of the length-aware framework is: $p_{laf}(\mathbf{y} \mid \mathbf{x}) = \prod_{i=1}^{I} p_l(L_i \mid x_{\leq g(i)})\, p(y_i \mid x_{\leq g(i)}, y_{<i}, L_i)$ (10)", "4.2 Training Objective. The length-aware framework consists of a length prediction module and a translation module.", "For the length prediction module, we take the complete source length J as the ground-truth length label and train the model with the cross-entropy loss: $\mathcal{L}_{len} = -\sum_{i=1}^{I} \log p_l(J \mid x_{\leq g(i)})$ (11)", "For the translation module, we complement the source prefix up to the ground-truth source length J with positional encoding and train the translation module by minimizing the cross-entropy loss: $\mathcal{L}_{ce} = -\sum_{i=1}^{I} \log p(y_i \mid x_{\leq g(i)}, y_{<i}, J)$ (12) where y is the ground-truth target sentence.", "During testing, we apply the predicted full-sentence length to complement the source prefix.", "We will compare the performance of training with the ground-truth versus the predicted full-sentence length in Sec. 7.1.", "Finally, the total loss of LAF is calculated as: $\mathcal{L}_{laf} = \mathcal{L}_{ce} + \mathcal{L}_{len}$ (13)", "4.3 Integrated into SiMT Policy. The length-aware framework can be integrated into most existing SiMT methods.", "We take wait-k and MMA as representatives to introduce the slight differences when integrating into fixed and adaptive policies, respectively.", "LAF predicts the full-sentence length based on the currently received source words $x_{\leq g(i)}$, so the key is to calculate g(i), which may differ between fixed and adaptive policies.", "Fixed policy. Since wait-k is a pre-defined fixed policy, $g_{wait\text{-}k}(i)$ during both training and testing is invariably calculated as: $g_{wait\text{-}k}(i) = \min\{k + i - 1, J\}$ (14)", "Adaptive policy. Since MMA dynamically predicts READ/WRITE actions, the calculation of g(i) differs between training and testing.", "During testing, we take the number of source words the model has received when it starts to translate $y_i$ as g(i).", "During training, MMA does not take explicit READ/WRITE actions, but predicts the writing probability $p_{ij}$, where $p_{ij}$ represents the probability of translating $y_i$ after receiving source word $x_j$.", "Therefore, we select the position of the $x_j$ with the highest writing probability as $g_{mma}(i)$: $g_{mma}(i) = \mathrm{argmax}_{j}\, p_{ij}$ (15)", "5 Related Work. The main architectures of SiMT models are divided into two categories: the seq-to-seq architecture and the prefix-to-prefix architecture.", "Early SiMT methods typically used a full-sentence MT model trained with the seq-to-seq architecture to translate each segment produced by the SiMT policy (Bangalore et al., 2012; Cho and Esipova, 2016; Siahbani et al., 2018).",
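A sketch of the pseudo full-sentence construction in Eq. (8), together with the wait-k read schedule of Eq. (14). Names are ours, the positional encodings are assumed precomputed, and row p of pe is assumed to correspond to source position p+1.

```python
import torch

def wait_k_g(i: int, k: int, J: int) -> int:
    """Eq. (14): number of source words read before emitting target word i."""
    return min(k + i - 1, J)

def pseudo_full_sentence(h_prefix: torch.Tensor, pe: torch.Tensor, L_i: int) -> torch.Tensor:
    """Eq. (8): keep the g(i) received hidden states and fill positions
    g(i)+1 .. L_i with positional encodings (no encoder recomputation)."""
    g_i = h_prefix.size(0)
    if L_i <= g_i:                       # predicted length already reached
        return h_prefix
    future = pe[g_i:L_i]                 # PE_{g(i)+1}, ..., PE_{L_i}
    return torch.cat([h_prefix, future], dim=0)

pe = torch.randn(256, 512)               # stands in for sinusoidal encodings
g_i = wait_k_g(i=3, k=3, J=20)           # wait-3: 5 source words read so far
h_tilde = pseudo_full_sentence(torch.randn(g_i, 512), pe, L_i=12)  # (12, 512)
```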
"Gu et al. (2017) used reinforcement learning to train an agent that decides whether to start translating.", "Alinejad et al. (2018) added a predict operation on top of Gu et al. (2017).", "Zhang et al. (2020b) proposed an adaptive segmentation policy based on meaning units.", "However, the mismatch between training and testing usually leads to inferior translation quality.", "Recent SiMT methods, including fixed and adaptive policies, mainly use the prefix-to-prefix architecture.", "For the fixed policy, Ma et al. (2019) proposed the wait-k policy, which always translates k words behind the source.", "Zhang and Feng (2021a) proposed a char-level wait-k policy.", "Zhang and Feng (2021c) proposed a universal SiMT with a mixture-of-experts wait-k policy.", "For the adaptive policy, Zheng et al. (2019a) trained an agent with golden read/write action sequences.", "Zheng et al. (2019b) added a delay token and introduced limited dynamic prediction.", "Arivazhagan et al. (2019) proposed MILk, using a Bernoulli variable to determine whether to write.", "Ma et al. (2020) proposed MMA to implement MILk on the Transformer.", "Wilken et al. (2020) and Zhang and Feng (2022b) proposed alignment-based SiMT policies.", "Liu et al. (2021a) proposed a cross-attention augmented transducer for SiMT.", "Zhang et al. (2021) and Alinejad et al. (2021) introduced a full-sentence model to guide the SiMT policy.", "Miao et al. (2021) proposed a generative SiMT policy.", "Although the prefix-to-prefix architecture simulates the streaming inputs, it brings the position bias described in Sec. 3.", "Therefore, we propose a length-aware framework to reduce the position bias while still fulfilling the streaming inputs.", "IWSLT15 English-Vietnamese (En-Vi) (133K pairs) (Cettolo et al., 2015): we use TED tst2012 as the validation set (1553 pairs) and TED tst2013 as the test set (1268 pairs).", "(Footnote 2: nlp.stanford.edu/projects/nmt/)", "Following the previous setting (Raffel et al., 2017; Ma et al., 2020), we replace words whose frequency is less than 5 with unk; the vocabulary sizes are 17K and 7.7K for English and Vietnamese, respectively.", "WMT15 German-English (De-En) (4.5M pairs): following Ma et al. (2019), Arivazhagan et al. (2019) and Ma et al. (2020), we use newstest2013 as the validation set (3000 pairs) and newstest2015 as the test set (2169 pairs).", "(Footnote 3: www.statmt.org/wmt15/translation-task)", "BPE (Sennrich et al., 2016) was applied with 32K merge operations, and the vocabulary is shared across the two languages.", "Full-sentence: a conventional full-sentence MT model based on Transformer (Vaswani et al., 2017).", "Wait-k: the wait-k policy proposed by Ma et al. (2019), the most widely used fixed policy, which first waits for k source words and then alternately translates a target word and waits for a source word.", "MMA: monotonic multi-head attention proposed by Ma et al. (2020), the SOTA adaptive policy.", "(Footnote 4: github.com/pytorch/fairseq/tree/master/examples/simultaneous_translation)", "At each step, MMA predicts a Bernoulli variable to decide whether to start translating.", "* + LAF: applying the proposed length-aware framework to Wait-k or MMA.", "[Figure 6: translation quality (BLEU) against latency (Average Lagging, AL) for Full-sentence, Wait-k + LAF, Wait-k, MMA + LAF, and MMA.]", "The implementations of all systems are adapted from the Fairseq library (Ott et al., 2019) based on Transformer (Vaswani et al., 2017), with the same settings as Ma et al. (2020).",
(2020).", "For En Vi, we apply Transformer-small (4 heads).", "For De En, we apply Transformer-Base (8 heads) and Transformer-Big (16 heads).", "We evaluate these systems with BLEU (Papineni et al., 2002) for translation quality and Average Lagging (AL) (Ma et al., 2019) for latency.", "AL is calculated based on g ( i ) : AL = 1 (cid:88) i =1 g ( i ) i 1 I/J (16) 4 github.com/pytorch/fairseq/tree/ master/examples/simultaneous_translation Train Test AL BLEU LAF GT Pred 4.11 28.34 Pred LAF Pred Pred 4.07 28.21 Oracle LAF GT GT 3.93 28.37 Table 1: An ablation study of using predicted full-sentence length (Pred) or ground-truth source length (GT) in training and testing respectively, where the results are based on the wait-5 policy.", "Figure 6 shows the performance improvement that LAF brings to Wait-k and MMA, where our method achieves higher translation quality under all latency.", "LAF has a more significant improvement on the fixed policy Wait-k, improving about 0.28 BLEU on En Vi, 1.94 BLEU on De En(Base), 1.50 BLEU on De En(Big), which is because the position bias in original wait-k is more serious.", "Compared with the SOTA adaptive policy MMA, our method also performs better and is much closer to full-sentence MT performance.", "We conduct extensive analyses to understand the specific improvements of our method.", "Unless otherwise specified, all the results are reported on De En(Base) and tested with wait-5 (AL=4.10) and MMA (AL=4.57) under similar latency.", "We use ground-truth full-sentence length to train the translation module, and use the predicted full-sentence length in testing.", "We conduct the ablation 6781 0 2 4 6 8 10 12 Average Lagging(AL) 15 20 25 30 35 40 A cc u r ac y ( % ) Wait-k + LAF MMA + LAF", "study of using predicted full-sentence length (Pred) or ground-truth length (GT) for translation in training and testing respectively, reported in Table 1.", "LAF has a better performance than Pred LAF', indicating that using ground-truth length during training is more helpful for learning translation.", "Compared with Oracle LAF' that uses ground-truth full-sentence length in testing, LAF achieves comparable performance, which shows that the length prediction module in LAF performs well.", "Figure", "7(a) shows the prediction accuracy of the full-sentence length in LAF, indicating that our method achieves good prediction performance.", "As the latency increases, the prediction accuracy of both Wait-k+LAF' and MMA+LAF' gradually increases.", "Specifically, Wait-k+LAF' predicts more accurately at low latency, which shows that the regular form of fixed policy is more conducive for LAF to learn the full-sentence length.", "Besides, in Figure", "7(b), with the continuous increase of received source words, the predicted full-sentence length is updated in real time and the prediction accuracy gradually improves, which is in line with our expectations.", "We show the change of average attention 5 after applying LAF in Figure 8.", "With LAF, the position bias in SiMT is significantly reduced, where the front positions are no longer illusoryly considered more important.", "By constructing the pseudo full-sentence, LAF bridges the structural gap between SiMT and full-sentence MT, so that the importance of source positions are more similar to that in full-sentence MT, thereby reducing the position bias.", "Position bias makes the target word tend to focus on the front source word, which leads to much overlap in the attention distribution, resulting in duplicate translation errors 
(Elbayad et al., 2020).", "Following See et al. (2017), we count the n-grams duplication proportion in translation in Figure 10.", "There are few duplicate n-grams in reference and full-sentence MT, especially when n> 2 .", "However, position bias in SiMT makes the model always focus on some particular source words in the front position, thereby exacerbating duplicate translation errors, especially in the fixed policy.", "In 3-grams, the duplicate translation of Wait-k is about 6 times that of full-sentence MT, which is in line with the previous conclusion (Elbayad et al., 2020).", "After applying LAF, the duplicate translation in SiMT is significantly reduced, similar to full-sentence MT. 7.5 Improvement on Various Difficulty Levels The word order difference is a major challenge of SiMT, where many word order inversions may force the model to start translating before reading the aligned source words (Chen et al., 2021).", "Following Zhang and Feng (2021c), We evenly divide the test set into three sets: Easy, Mid and Hard based on the number of reversed word orders 6782 e i n e ( _ a ) n e u e ( _ n e w ) R un d e ( _ r o un d ) i n d i @@ r e k @@ t e r ( _ i n d i r e c t ) G e s p r c h e ( _ t a l k s ) w i r d ( _ w ill ) v o r a u ss i c h t li c h ( _ li k e l y ) n o c h ( _ s t ill ) i n d i e s e m ( _ t h i s ) M o n a t ( _ m o n t h ) i n g y p t e n ( _ E g y p t ) b e g i nn e n ( _ b e g i n ) .", "in alignments using fast-align 6 (Dyer et al., 2013), and report the results on each set in Table 2.", "For full-sentence MT, word order reversal will not cause too much challenge, so that the performance gap between different sets is small.", "In SiMT, word order reversal often causes the model to translate before reading the aligned source words, which forces the target word to focus on some unrelated source words, resulting in poor performance in Hard set.", "LAF complements the incomplete source to the full-sentence length, which allows the target word to focus on the subsequent position instead of must focusing on the current irrelevant source word when the aligned word is not received, thereby obviously improving the performance on Hard set.", "LAF constructs the pseudo full-sentence by predicting the full-sentence length and filling the future position with positional encoding.", "To verify the importance of the future position, we count the attention weights on the future position (i.e., filled with positional encoding) at each decoding step in Figure 11.", "In the beginning, the future posi-6 https://github.com/clab/fast_align 5 10 15 20 25 30 35 40 Decoding Step 0.0 0.1 0.2 0.3 0.4 0.5 A tt e n ti on on F u t u r e S ou r ce P o s iti on Wait-k + LAF MMA + LAF Figure 11: The attention on future source position (filled with positional encoding) in different decoding steps.", "tion gets much attention weight, especially getting about 30% attention in the first decoding step.", "As the received source words increase, the attention received by future positions gradually decreases.", "Furthermore, we visualize the attention distribution of an example in Figure 9.", "In Wait-k and MMA, attention is more concentrated on the front position, especially Wait-k extremely focuses on the first source word, which leads to duplicate translation expected to to hold .", "With LAF, when the aligned source word has not been received, the future positions tend to get more attention, e.g. 
"Besides, the predicted length in LAF changes dynamically and gradually approaches the full-sentence length.", "Overall, LAF reduces the position bias, so the attention in SiMT becomes more similar to the attention in full-sentence MT, resulting in better translation quality.", "In this paper, we develop a length-aware framework for SiMT to reduce the position bias brought by the incomplete source.", "Experiments show that our method achieves promising results by bridging the structural gap between SiMT and full-sentence MT.", "Acknowledgements. We thank all the anonymous reviewers for their insightful and valuable comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE0192900)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain" ]
[ "We propose ConVEx ( Con versational V alue Ex tractor), an efficient pretraining and fine-tuning neural approach for slot-labeling dialog tasks.", "Instead of relying on more general pretraining objectives from prior work (e.g., language modeling, response selection), ConVEx's pretraining objective, a novel pairwise cloze task using Reddit data, is well aligned with its intended usage on sequence labeling tasks.", "This enables learning domain-specific slot labelers by simply fine-tuning decoding layers of the pretrained general-purpose sequence labeling model, while the majority of the pretrained model's parameters are kept frozen.", "We report state-of-the-art performance of ConVEx across a range of diverse domains and data sets for dialog slot-labeling, with the largest gains in the most challenging, few-shot setups.", "We believe that ConVEx's reduced pretraining times (i.e., only 18 hours on 12 GPUs) and cost, along with its efficient fine-tuning and strong performance, promise wider portability and scalability for data-efficient sequence-labeling tasks in general.", "Slot labeling or slot filling is a critical natural language understanding (NLU) component of any task-oriented dialog system (Young, 2002, 2010; Tr and De Mori, 2011, inter alia ).", "Its goal is to fill the correct values associated with predefined slots : e.g., a dialog system for restaurant bookings is expected to fill slots such as date , time , and the number of guests with the values extracted from a user utterance (e.g., next Thursday , 7pm , 4 people ).", "Setting up task-oriented dialog systems, as well as slot labeling methods in particular, to support new tasks and domains is highly challenging due to inherent scarcity of expensive expert-annotated data for a plethora of intended use scenarios (Williams, 2014; Henderson et al., 2014; Budzianowski et al., 2018; Zhao et al., 2019).", "One plausible and promising solution is the creation of data-efficient models that learn from only a handful annotated examples in few-shot scenarios .", "This approach has been shown promising for learning intent detectors (Casanueva et al., 2020; Krone et al., 2020; Bunk et al., 2020) as well as for slot-filling methods (Hou et al., 2020; Coope et al., 2020).", "The dominant paradigm followed by the existing models of few-shot slot labeling is transfer learning (Ruder et al., 2019): 1) they rely on representations from models pretrained on large data collections in a self-supervised manner on some general NLP tasks such as (masked) language modeling (Devlin et al., 2019; Conneau et al., 2020; Brown et al., 2020) or response selection (Henderson et al., 2019b, 2020; Cer et al., 2018); and then 2) add additional task-specific layers for modeling the input sequences.", "However, we detect several gaps with the existing setup, and set to address them in this work.", "First, recent work in NLP has validated that a stronger alignment between a pretraining task and an end task can yield performance gains for tasks such as extractive question answering (Glass et al., 2020) and paraphrase and translation (Lewis et al., 2020).", "We ask whether it is possible to design a pretraining task which is more suitable for slot labeling in conversational applications.", "Second, is it possible to bypass learning sequence-level layers from scratch, and simply fine-tune them after pretraining instead?", "Third, is it possible to build a generally applicable model which fine-tunes pretrained general sequence-level layers instead of requiring 
"Inspired by these challenges, we propose ConVEx (Conversational Value Extractor), a novel Transformer-based neural model which can be pretrained on large quantities of natural language data (e.g., Reddit) and then directly fine-tuned for a variety of slot-labeling tasks.", "Similar to prior work (Rastogi et al., 2019; Coope et al., 2020), ConVEx casts slot labeling as a span-based extraction task.", "For ConVEx, we introduce a new pretraining objective, termed pairwise cloze.", "This objective aligns well with the target downstream task, slot labeling for dialog, and emulates slot labeling using unlabeled sentence pairs from natural language data that share a keyphrase (i.e., a value for a specific slot).", "Instead of being learned from scratch as in prior work (Coope et al., 2020), ConVEx's pretrained Conditional Random Field (CRF) layers for sequence modeling are fine-tuned using a small number of labeled in-domain examples.", "We evaluate ConVEx on a range of diverse dialog slot-labeling data sets spanning different domains: the DSTC 8 data sets (Rastogi et al., 2019), RESTAURANTS-8K (Coope et al., 2020), and SNIPS (Coucke et al., 2018).", "ConVEx yields state-of-the-art performance across all evaluation data sets, but its true usefulness and robustness come to the fore in the few-shot scenarios.", "For instance, it increases average F1 scores on RESTAURANTS-8K over the previous state-of-the-art model (Coope et al., 2020) from 40.5 to 71.7 with only 64 labeled examples.", "Similar findings are observed with DSTC 8, and we also report state-of-the-art performance in the 5-shot slot labeling task on SNIPS.", "In summary, our results validate the benefits of task-aligned pretraining from raw natural language data, with particular gains for data-efficient slot labeling given a limited number of annotated examples, a scenario typically met in production.", "They also clearly demonstrate that competitive performance can be achieved via quick fine-tuning, without the heavily engineered specialized methods from prior work (Hou et al., 2020).", "Further, we validate that learning sequence-level layers from scratch is inferior to fine-tuning them from pretrained layers.", "From a broader perspective, we hope that this research will inspire further work on task-aligned pretraining objectives for other NLP tasks beyond slot labeling.", "From a more focused perspective, we hope that it will guide new approaches to data-efficient slot labeling for dialog.", "Before we delve deeper into the description of ConVEx in §2.3, in §2.1 we first describe a novel sentence-pair value extraction pretraining task used by ConVEx, called pairwise cloze, and then in §2.2 a procedure that converts raw unlabeled natural language data into training examples.", "Why Pairwise Cloze?", "Top performing natural language understanding models typically make use of neural nets pretrained on large-scale data sets with unsupervised objectives such as language modeling (Devlin et al., 2019; Liu et al., 2019) or response selection (Henderson et al., 2020; Humeau et al., 2020).", "For sequential tasks such as slot labeling, this involves adding new layers and training them from scratch, as the pretraining procedure does not involve any sequential decoding; current unsupervised pretraining objectives are therefore suboptimal for sequence-labeling tasks.", "With ConVEx, we introduce a new pretraining task with the following properties: 1) it is more closely related to the target slot-labeling task, and 2) it facilitates training all the necessary layers for slot-labeling, so these can be fine-tuned rather than learned from scratch.",
"What is Pairwise Cloze?", "In a nutshell, given a pair of sentences that have a keyphrase in common, the task treats one sentence as a template sentence and the other as its corresponding input sentence.", "For the template sentence, the keyphrase is masked out and replaced with a special BLANK token.", "The model must then read the tokens of both sentences, and predict which tokens in the input sentence constitute the masked phrase.", "Some examples of such pairs extracted from Reddit are provided in Table 1.", "The main idea is to teach the model an implicit space of slots and values, where during self-supervised pretraining, slots are represented as the contexts in which a value might occur.", "The model then gets fine-tuned to fit domain-specific slot labeling data.", "(Footnote 1: The pairwise cloze task has been inspired by the recent span selection objective applied to extractive QA by Glass et al. (2020): they create examples emulating extractive QA pairs with long passages and short question sentences. Another similar approach to extractive QA has been proposed by Ram et al. (2021). In contrast, our work seeks to emulate slot labeling in a dialog system by creating examples from short conversational utterances.)", "2.2 Pairwise Cloze Data Preparation. Input Data.", "We assume working with the English language throughout the paper.", "Reddit has been shown to provide natural conversational English data for learning semantic representations that work well in downstream tasks related to dialog and conversation (Al-Rfou et al., 2016; Cer et al., 2018; Henderson et al., 2019b,a, 2020; Casanueva et al., 2020; Coope et al., 2020).", "Therefore, following recent work, we start with the 3.7B comments in the large Reddit corpus from 2015-2018 (inclusive) (Henderson et al., 2019a), filtering it to comments between 9 and 127 characters in length.", "This yields a total of almost 2B filtered comments.", "Keyphrase Identification.", "Training sentence pairs are extracted from unlabeled text based on their shared keyphrases.", "Therefore, we must first identify plausible candidate keyphrases.", "To this end, the filtered Reddit sentences are tokenized with a simple word tokenizer, and word frequencies are counted.",
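A minimal sketch of turning a keyphrase-sharing sentence pair into a pairwise cloze example, as described above. The function name, the span convention, and the example sentences are ours; only the BLANK masking follows the paper's description.

```python
def make_pairwise_cloze(template: str, input_sentence: str, keyphrase: str):
    """Mask the shared keyphrase in the template with BLANK; the model must
    then locate the keyphrase span in the (unmasked) input sentence."""
    assert keyphrase in template and keyphrase in input_sentence
    masked_template = template.replace(keyphrase, "BLANK", 1)
    tokens = input_sentence.split()
    span = keyphrase.split()
    start = next(i for i in range(len(tokens))
                 if tokens[i:i + len(span)] == span)
    return masked_template, tokens, (start, start + len(span))  # [start, end)

tmpl, toks, span = make_pairwise_cloze(
    template="I was in Cardiff for a conference last year",
    input_sentence="my sister lives near Cardiff",
    keyphrase="Cardiff",
)
```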
"The score of a candidate keyphrase $kp = (w_1, w_2, \ldots, w_n)$ is computed as a function of the individual word counts: $\mathrm{score}(kp) = \frac{1}{n^{\alpha}} \sum_{i=1}^{n} \log \frac{|D|}{\mathrm{count}(w_i)}$ where |D| is the number of sentences used to calculate the word frequencies.", "This simple scoring function selects phrases that contain informative low-frequency words.", "The factor $\alpha$ controls the length of the identified keyphrases: e.g., setting it to $\alpha = 0.8$, which is the default in our experiments, encourages selecting longer phrases.", "Given a sentence, the keyphrases are selected as those unigrams, bigrams and trigrams whose score exceeds a predefined threshold.", "The keyphrase identification procedure is run over all of the filtered Reddit sentences.", "At most two keyphrases are extracted per sentence, and keyphrases spanning more than 50% of the sentence text are ignored.", "Keyphrases that occur more than once in the sentence are also ignored.", "Sentence-Pair Data Extraction.", "In the next step, sentences from the same subreddit are paired by keyphrase to create paired data, 1.2 billion examples in total, where one sentence acts as the input sentence and another as the template sentence (see Table 1 again).", "(Footnote 2: We also expand keyphrases inside paired sentences if there is additional text on either side of the keyphrase that is the same in both sentences. For instance, the original keyphrase Star Wars will be expanded to the keyphrase Star Wars movie within this pair: I really enjoyed the latest Star Wars movie. / We could not stand any Star Wars movie.)", "Table 2 (statistics of the pairwise cloze training data): total Reddit comments: 3,680,746,776; comments filtered by length: 1,993,294,538; extracted keyphrases: 3,296,519,827; training set size: 1,172,174,919; test set size: 61,696,649; mean number of words per keyphrase: 1.3.", "Table 2 summarizes statistics from the entire pretraining data preparation procedure.", "We now present ConVEx, a pretraining and fine-tuning framework that can be applied to a wide spectrum of slot-labeling tasks.", "ConVEx is pretrained on the pairwise cloze task (§2.1), relying on sentence-pair data extracted from Reddit (§2.2).", "Similar to prior work (Coope et al., 2020), we frame slot labeling as a span extraction task: spans are represented using a sequence of tags.", "These tags indicate which members of the sequence are in the span.", "We use the same tag representation as Coope et al. (2020), which is similar to the standard IOB format: the span is annotated with a sequence of BEFORE, BEGIN, INSIDE and AFTER tags.", "The ConVEx pretraining and fine-tuning architectures are illustrated in Figures 1a and 1b respectively, and we describe them in what follows.", "ConVEx: Pretraining.", "The ConVEx model encodes the template and input sentences using exactly the same Transformer layer architecture (Vaswani et al., 2017) as the lightweight and highly optimized ConveRT sentence encoder (Henderson et al., 2020): we refer the reader to the original work for all architectural and technical details.",
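The keyphrase scoring function defined above can be sketched directly from word counts; a toy counter stands in for the filtered Reddit corpus, and the names are ours.

```python
import math
from collections import Counter

def keyphrase_score(kp: tuple, counts: Counter,
                    num_sentences: int, alpha: float = 0.8) -> float:
    """score(kp) = (1 / n^alpha) * sum_i log(|D| / count(w_i)):
    higher for phrases built from informative low-frequency words."""
    n = len(kp)
    return sum(math.log(num_sentences / counts[w]) for w in kp) / (n ** alpha)

counts = Counter({"the": 900, "star": 12, "wars": 9, "movie": 40})
print(keyphrase_score(("star", "wars"), counts, num_sentences=1000))
```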
"[Figure 1: the ConVEx architectures; template and input sentences (e.g., a template 'meet you at [BLANK], 2 people' and an input containing the value '6 pm') are encoded by shared ConveRT Transformer layers, projected by separate template/input FFNs, combined through self-attention, attention over the template, add + norm, and FFN blocks, and decoded with a CRF over the BEFORE/BEGIN/INSIDE/AFTER tags.]", "This model structure is very compact and resource-efficient (i.e., it is 59MB in size and can be trained in 18 hours on 12 GPUs) while achieving state-of-the-art performance on a range of conversational tasks (Casanueva et al., 2020; Coope et al., 2020; Bunk et al., 2020).", "The weights in the ConveRT Transformer layers are shared between both sentences.", "(Footnote 3: The ConVEx pretraining also closely follows ConveRT's tokenization process: the final subword vocabulary contains 31,476 subword tokens plus 1,000 buckets reserved for out-of-vocabulary tokens. Input text is split into subwords following a simple left-to-right greedy prefix matching (Vaswani et al., 2018; Henderson et al., 2020), and we tokenize both input sentences and template sentences the same way.)", "The 512-dimensional output representations from the ConveRT layers are projected down to 128-dimensional representations using two separate feed-forward networks (FFNs), one for the template and one for the input sentence.", "The projected contextual subword representations of the input sentence are then enriched using two blocks of self-attention, attention over the projected template sentence representations, and FFN layers.", "This provides features for every token in the input sentence that take into account the context of both the input sentence and the template sentence.", "A final linear layer computes Conditional Random Field (CRF) parameters for tagging the value span using the 4 labels BEFORE, BEGIN, INSIDE, and AFTER.", "More formally, for each step t, corresponding to a subword token in the input sentence, the network outputs a 4×4 matrix of transition scores $W_t$ and a 4-dimensional vector of unary potentials $u_t$.", "Under the CRF model, the probability of a predicted tag sequence y given the CRF parameters is: $p(\mathbf{y} \mid \mathbf{v}) \propto \prod_{t=1}^{T-1} \exp\left(W_t|_{y_{t+1}, y_t}\right) \prod_{t=1}^{T} \exp\left(u_t|_{y_t}\right)$ (1)", "The loss is the negative log-likelihood, which is equal to the negative sum of the transition scores and unary potentials that correspond to the true tag labels, up to a normalization term.", "The top scoring tag sequences are computed efficiently using the Viterbi algorithm (Sutton and McCallum, 2012).", "In addition to the CRF loss, an auxiliary dot-product loss can be added.", "This loss encourages the model to pair template sentences with the corresponding (semantically similar) input sentences.", "Let $f_i^T$ be the d-dimensional encoding of the beginning-of-sentence (BOS) token for the i-th template sentence, and $f_i^I$ the encoding of the BOS token for the i-th (corresponding) input sentence.", "As the encodings are contextual, the BOS representations can encapsulate the entire sequence.", "The auxiliary dot-product loss is then computed as: $-\sum_{i=1}^{N} C \langle f_i^T, f_i^I \rangle + \sum_{i=1}^{N} \log \sum_{j=1}^{N} e^{C \langle f_i^T, f_j^I \rangle}$ (2) where $\langle \cdot, \cdot \rangle$ is cosine similarity and C is an annealing factor that linearly increases from 0 to $\sqrt{d}$ over the first 10K training batches, as in previous work (Henderson et al., 2020).", "The auxiliary loss is inspired by the dot-product loss typically used in retrieval tasks such as response selection (Henderson et al., 2017).", "[Table 3 (pretraining hyper-parameters, excerpt): activation: fast GELU approximation (Hendrycks and Gimpel, 2016); total batch size: 256; negatives per batch: 64; learning rate: 0.3; optimizer: Adadelta.]", "Note that this loss does not necessitate any additional model parameters, and does not significantly increase the computational complexity of the pretraining procedure.", "Later in §4 we evaluate the efficacy of pretraining with and without the auxiliary loss.", "ConVEx: Fine-tuning.", "The majority of the computation and parameters of ConVEx are in the shared ConveRT Transformer encoder layers: they comprise 30M parameters, while the decoder layers comprise only 800K parameters.", "At ConVEx fine-tuning, the shared ConveRT Transformer layers are frozen: these expensive operations are shared across slots, while the fine-tuned slot-specific models are small in memory and fast to run.", "To apply the ConVEx model to slot-labeling for a specific slot, the user utterance is treated both as the input sentence and as the template sentence (note that at fine-tuning and inference the user input does not contain any BLANK token); see Figure 1b.", "This effectively makes the attention layers in the decoder act like additional self-attention layers.", "For some domains, additional context features such as the binary is_requested feature need to be incorporated (Coope et al., 2020): this is modeled through a residual layer that computes a term to add to the ConveRT output encoding, given the encoding itself and the additional features (see Figure 1b).", "We again note that, except for the residual layer, no new layers are added between pretraining and fine-tuning; this implies that the model bypasses learning from scratch any potentially complicated dynamics related to the application task, and is directly applicable to various slot-labeling scenarios.", "Pretraining: Technical Details.", "The ConVEx parameters at pretraining are randomly initialized, including the ConveRT layers, and the model is pretrained on the pairwise cloze Reddit data.", "Pretraining proceeds in batches of 256 examples, 64 of which are randomly paired sentences where no value should be extracted, with the remainder being pairs from the training data.", "This teaches the model that sometimes no value should be predicted, a scenario frequently encountered with slot labeling.",
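Two sketches of the pieces above, under our own naming. First, the unnormalized CRF score from Eq. (1): the log-probability of a tag sequence, up to the normalization term, is the sum of the per-step unary potentials and transition scores (tags indexed 0..3 for BEFORE/BEGIN/INSIDE/AFTER).

```python
import torch

def crf_sequence_score(unary: torch.Tensor, trans: torch.Tensor,
                       tags: torch.Tensor) -> torch.Tensor:
    """Eq. (1) up to normalization: sum_t u_t[y_t] + sum_t W_t[y_{t+1}, y_t],
    with per-step parameters u_t of shape (4,) and W_t of shape (4, 4)."""
    T = tags.size(0)
    score = unary[torch.arange(T), tags].sum()
    score = score + trans[torch.arange(T - 1), tags[1:], tags[:-1]].sum()
    return score

T = 6                                     # subword tokens in the input sentence
unary = torch.randn(T, 4)                 # u_t for each step
trans = torch.randn(T - 1, 4, 4)          # W_t for each transition
tags = torch.tensor([0, 0, 1, 2, 3, 3])   # BEFORE BEFORE BEGIN INSIDE AFTER AFTER
print(crf_sequence_score(unary, trans, tags))
```

Second, the auxiliary dot-product loss of Eq. (2), which amounts to a softmax cross-entropy over scaled cosine similarities between template and input BOS encodings within a batch.

```python
import torch
import torch.nn.functional as F

def aux_dot_product_loss(f_T: torch.Tensor, f_I: torch.Tensor,
                         C: float) -> torch.Tensor:
    """Eq. (2): -sum_i C<f_i^T, f_i^I> + sum_i log sum_j exp(C<f_i^T, f_j^I>),
    where <.,.> is cosine similarity and C is the annealing factor."""
    sims = C * F.normalize(f_T, dim=-1) @ F.normalize(f_I, dim=-1).t()  # (N, N)
    targets = torch.arange(f_T.size(0))   # matching pairs lie on the diagonal
    return F.cross_entropy(sims, targets, reduction="sum")

loss = aux_dot_product_loss(torch.randn(8, 512), torch.randn(8, 512), C=12.0)
```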
"Table 3 provides a concise summary of these and other pretraining hyper-parameters.", "Computational Efficiency and Tractability.", "ConVEx is pretrained for 18 hours on 12 Tesla K80 GPUs; this is typically sufficient to reach convergence.", "The total pretraining cost is roughly $85 on Google Cloud Platform.", "This pretraining regime is orders of magnitude cheaper and more efficient than the prevalent pretrained NLP models such as BERT (Devlin et al., 2019), the GPT models (Brown et al., 2020), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019).", "The reduced pretraining cost allows for wider experimentation, and aligns with recent ongoing initiatives on improving fairness and inclusion in NLP/ML research and practice (Strubell et al., 2019; Schwartz et al., 2019).", "Fine-tuning: Technical Details.", "We use the same fine-tuning procedure for all fine-tuning experiments on all evaluation data sets.", "It proceeds for 4,000 steps with batches of size 64, stopping early if the loss drops below 0.001.", "(Footnote 4: We enforce that exactly 20% of the examples in each batch contain a value, and 80% contain no value. Further, the batch size is smaller than 64 in few-shot scenarios if the training set is too small to meet this ratio without introducing duplicates.)", "The ConveRT layers are frozen, while the other layers are initialized to their pretrained values and optimized with Adam (Kingma and Ba, 2015), with a learning rate of 0.001 that decays to $10^{-6}$ over the first 3,500 steps using cosine decay (Loshchilov and Hutter, 2017).", "Dropout is applied to the output of the ConveRT layers with a rate of 0.5; it decays to 0 over 4,000 steps, also using cosine decay.", "The residual layer for additional features (e.g., is_requested, token_is_numeric) consists of a single 1024-dimensional hidden layer.", "As we demonstrate later in §4, this procedure works well across a variety of data settings.", "The early stopping and dropout are intended to prevent overfitting on very small data sets.", "Fine-tuning and Evaluation: Data and Setup.", "We rely on several diverse slot-labeling data sets, used as established benchmarks in previous work.", "First, we evaluate on a recent data set from Coope et al. (2020): RESTAURANTS-8K, which comprises conversations from a commercial restaurant booking system.", "It covers 5 slots required for the booking task: date, time, people, first name, and last name.", "Second, we use the Schema-Guided Dialog Dataset (SGDD) (Rastogi et al., 2019), originally released for DSTC 8, in the same way as prior work (Coope et al., 2020), extracting span-annotated data sets from SGDD in four different domains.", "The particulars of the RESTAURANTS-8K and DSTC 8 evaluation data are provided in the appendix.", "Similar to Coope et al. (2020), we simulate few-shot scenarios and measure performance on smaller sets sampled from the full data.", "We (randomly) subsample the training sets at various sizes while maintaining the same test set.", "Furthermore, we also evaluate ConVEx in the 5-shot evaluation task on the SNIPS data (Coucke et al., 2018), following the exact setup of Hou et al. (2020), which covers 7 diverse domains, ranging from Weather to Creative Work (see Table 4 later for the list of domains).",
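The fine-tuning schedule described above (learning rate 0.001 decaying to 1e-6 over 3,500 steps, dropout 0.5 decaying to 0 over 4,000 steps) can be sketched with the standard cosine-annealing form of Loshchilov and Hutter (2017); whether the paper uses exactly this parameterization is our assumption.

```python
import math

def cosine_decay(start: float, end: float, step: int, total_steps: int) -> float:
    """Cosine decay from `start` to `end` over `total_steps` steps."""
    t = min(step, total_steps) / total_steps
    return end + 0.5 * (start - end) * (1.0 + math.cos(math.pi * t))

lr = [cosine_decay(1e-3, 1e-6, s, 3500) for s in range(4000)]
dropout = [cosine_decay(0.5, 0.0, s, 4000) for s in range(4000)]
```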
"The statistics of the SNIPS evaluation are also provided in the appendix.", "The SNIPS evaluation task differs slightly from RESTAURANTS-8K and DSTC 8: we thus provide additional details on the fine-tuning and evaluation procedure for SNIPS, replicating the setup of Hou et al. (2020).", "Each of the 7 domains in turn acts as a held-out test domain, and the other 6 can be used for training.", "From the held-out test domain, episodes are generated that contain around 5 examples, covering all the slots in the domain.", "For each domain, we first further pretrain the ConVEx decoder layers (the ones that get fine-tuned) on the other 6 domains: we append the slot name to the template sentence, which allows training on all the slots.", "This gives a single updated fine-tuned ConVEx decoder model, trained on all slots of all other domains.", "For each episode, for each slot in the target domain, we fine-tune 3 ConVEx decoders.", "The predictions are ensembled by averaging probabilities to give the final predictions.", "This helps reduce variability and improves prediction quality.", "Baseline Models.", "For RESTAURANTS-8K and DSTC 8, we compare ConVEx to the current best-performing approaches from Coope et al. (2020): Span-BERT and Span-ConveRT.", "Both models rely on the same CNN+CRF architecture applied on top of the subword representations transferred from a pretrained BERT(-Base/Large) model (Devlin et al., 2019) (Span-BERT), or from a pretrained ConveRT model (Henderson et al., 2020) (Span-ConveRT).", "(Footnote 5: See Coope et al. (2020) for further technical details.)", "Similar to Coope et al. (2020), for each baseline we run hyper-parameter optimization via grid search, evaluating on the dev set of RESTAURANTS-8K.", "For SNIPS, we compare ConVEx to a wide spectrum of different few-shot learning models proposed and compared by Hou et al. (2020).", "One crucial difference between our approach and the methods evaluated by Hou et al. (2020) is as follows: we treat each slot independently, using separate ConVEx decoders for each, while their methods train a single CRF decoder that models all slots jointly.", "One model per slot is simpler, easier for practical use (e.g., it is possible to keep and manage data sets for each slot independently), and makes pretraining conceptually easier.", "Evaluation Measure.", "Following previous work (Coucke et al., 2018; Rastogi et al., 2019; Coope et al., 2020), we report the average F1 scores for extracting the correct span per user utterance.", "If the models extract only part of the span or a longer span, this is treated as an incorrect span prediction.", "Intrinsic (Reddit) Evaluation.", "ConVEx reaches a precision of 84.8% and a recall of 85.3% on the held-out Reddit test set (see Table 2 again), using 25% random negatives as during pretraining.", "The ConVEx variant without the auxiliary loss (termed no-aux henceforth) reaches a precision of 82.7% and a recall of 83.9%, already indicating the usefulness of the auxiliary loss.", "These preliminary results serve mostly as a sanity check, suggesting ConVEx's ability to generalize over unseen Reddit data; we now evaluate its downstream task efficacy.",
"(Footnote 6: Coope et al. (2020) also evaluated an approach based on the same CNN+CRF architecture as Span-{BERT, ConveRT} which does not rely on any pretrained sentence encoder, and learns task-specific subword representations from scratch. However, that approach is consistently outperformed by Span-ConveRT, and we therefore do not report it for brevity.)", "(Footnote 7: A full description of each baseline model is beyond the scope of this work; we refer to Hou et al. (2020) for further details. For completeness, short summaries of each baseline model on SNIPS are provided in the appendix.)", "(Footnote 8: Moreover, the methods of Hou et al. (2020) are arguably more computationally complex: at inference, their strongest models (i.e., TapNet and WPZ; see the appendix) run BERT for every sentence in the fine-tuning set (TapNet), or run classification for every pair of test words and words from the fine-tuning set (WPZ). The computational complexity of the ConVEx approach does not scale with the fine-tuning set, only with the number of words in the query sequence.)", "(Footnote 9: While we evaluate the two ConVEx variants also in the slot-labeling tasks later, unless noted otherwise, in all experiments we assume the use of the variant with the aux loss.)", "Evaluation on RESTAURANTS-8K and DSTC 8.", "The main respective results are summarized in Figure 2a and Figure 2b, with additional results available in the appendix.", "In full-data scenarios, all models in our comparison, including the baselines from Coope et al. (2020), yield strong performance, reaching 90% or even 95% average F1 across the board.", "However, it is encouraging that ConVEx is able to surpass the baseline models on average even in the full-data regimes.", "Figure 2a and Figure 2b also suggest the true benefits of the proposed ConVEx approach: the ability of ConVEx to handle few-shot scenarios well.", "The gap between ConVEx and the baseline models becomes more and more pronounced as we continue to reduce the number of annotated examples for the labeling task.", "On RESTAURANTS-8K the gain is still small when dealing with 1,024 annotated examples (+2.1 F1 points over the strongest baseline), but it increases to +18.4 F1 points when 128 annotated examples are available, and further to +31.2 F1 points when only 64 annotated examples are available.", "We trace a similar behavior on DSTC 8, with gains reported for all the DSTC 8 single-domain subsets in few-shot setups.", "These results point to the following key conclusion.", "While pretrained representations are clearly useful for slot-labeling dialog tasks, and pretraining becomes increasingly important when we deal with few-shot scenarios, the chosen pretraining paradigm has a profound impact on the final performance.", "(Footnote 10: As revalidated here, conversational pretraining based on response selection (ConveRT) seems more useful for conversational applications than regular LM-based pretraining (BERT).)", "The pairwise cloze pretraining task, tailored for slot-labeling tasks in particular, is more robust and better adapted to few-shot slot-labeling tasks.", "This also verifies our hypothesis that it is possible to learn effective domain-specific slot-labeling systems by simply fine-tuning a pretrained general-purpose slot labeler on only a handful of domain-specific examples.", "SNIPS Evaluation (5-Shot).", "The versatility of ConVEx is further verified in the 5-shot labeling task on SNIPS following Hou et al. (2020)'s setup.", "The results are provided in Table 4.",
"We report the highest average F1 scores with ConVEx; ConVEx also surpasses all the baselines in 4 of the 7 domains, while the highest scores in the remaining three domains are achieved by three different models from Hou et al. (2020).", "This again hints at the robustness of ConVEx, especially in few-shot setups, and shows that a single pretrained model can be adapted to a spectrum of slot-labeling tasks and domains.", "These results also stand in contrast to the previous findings of Hou et al. (2020), where they claimed 'that fine-tuning on extremely limited examples leads to poor generalization ability'.", "On the contrary, our results validate that it is possible to fine-tune a pretrained slot-labeling model directly with a limited number of annotated examples for various domains, without hurting the generalization ability of ConVEx.", "In other words, we demonstrate that the mainstream pretrain-then-fine-tune paradigm is a viable solution to sequence-labeling tasks in few-shot scenarios, with the condition that the pretraining task must be structurally well-aligned with the intended downstream tasks.", "Next, we analyze the benefits of model ensembling, as done in the 5-shot SNIPS task, also on RESTAURANTS-8K.", "The results across different training data sizes are shown in Table 5.", "While there is no performance difference when a sufficient number of annotated examples is available, the scores suggest that the model ensembling strategy does yield small but consistent improvements in few-shot scenarios, as it mitigates the increased variance typically met in these setups.", "Pretraining on CC100.", "We also test the robustness of ConVEx by pretraining it on another large Web-scale dataset: CC100 (Wenzek et al., 2020; Conneau et al., 2020) is a large CommonCrawl corpus available for English and more than 100 other languages.", "[Figure 3: F1 scores on RESTAURANTS-8K (averaged over all slots) with varying training data sizes (1 down to 1/128 of the data set) when ConVEx is pretrained on Reddit versus CC100.]", "We use the English CC100 portion to pretrain ConVEx with exactly the same procedure described in §2, and then fine-tune it as before.", "First, its intrinsic evaluation on the held-out test set already hints that the CC100-based ConVEx is also a powerful slot labeller: we reach a precision of 85.9% and a recall of 86.3%.", "More importantly, the results on RESTAURANTS-8K, provided in Figure 3, confirm that another general-purpose corpus can be successfully used to pretrain the ConVEx model.", "We even observe slight gains on average over the Reddit-based model.", "Inductive Bias of ConVEx.", "In sum, ConVEx outperforms current state-of-the-art slot-labeling models such as Span-ConveRT, especially in low-data settings, where the performance difference is particularly large.", "The model architectures of Span-{BERT, ConveRT} and ConVEx are very similar: the difference in performance thus arises mainly from the pretraining task, and from the fact that ConVEx's sequence-decoding layers are pretrained rather than learned from scratch.", "[Figure 5: precision and recall of the ConVEx decoder for each slot (first name, last name, date, time, people) on RESTAURANTS-8K without any fine-tuning.]", "We now analyse the inductive biases of ConVEx, that is, how the pretraining regime and the main assumptions affect its behavior before and after fine-tuning.",
"First, we analyze per-slot performance on RESTAURANTS-8K, comparing ConVEx (with aux) to Span-BERT and Span-ConveRT.", "The scores in a few-shot scenario with 64 examples are provided in Figure 4, and we observe similar patterns in other few-shot scenarios.", "The results indicate the largest performance gaps for the slots first name and last name.", "This is expected, given that by the ConVEx design the keyphrases extracted from Reddit consist of rare words, and are thus likely to cover plenty of names that lack sufficient coverage in small domain-specific data sets.", "Nonetheless, we also note prominent gains over the baselines for the other slots with narrower semantic fields, where less lexical variability is expected (date and people).", "We can also expose ConVEx's built-in biases by applying it with no fine-tuning.", "Figure 5 shows the results with no slot-specific fine-tuning on RESTAURANTS-8K, feeding the user input as both the template and the input sentence.", "We extract at most one value from each sentence; the model predicted a value for 96% of all the test examples, 16% of which corresponded to an actual labeled slot, and 86% did not.", "The highest recalls were for the name slots and the time slot, which correlates with the slot-level breakdown results from Figure 4.", "(Footnote 11: The most frequent predictions from non-finetuned ConVEx that do not correspond to a labeled slot on RESTAURANTS-8K give further insight into its inductive biases. The top 10 extracted non-labeled values are, in descending order: booking, book, reservation, a reservation, a table, indoors, restaurant, cuisine, outside table, and outdoors. Some of these could be modeled as slot values with an extended ontology, such as indoors or outdoors/outside table.)", "We have introduced ConVEx (Conversational Value Extractor), a light-weight pretraining and fine-tuning neural approach to slot-labeling dialog tasks.", "We have demonstrated that it is possible to learn domain-specific slot labelers even in low-data regimes by simply fine-tuning the decoder layers of the pretrained general-purpose ConVEx model.", "The ConVEx framework achieves a new leap in performance on standard dialog slot-labeling tasks, most notably in few-shot setups, by aligning the pretraining phase with the downstream fine-tuning phase for slot-labeling tasks.", "In future work, we plan to investigate the limits of data-efficient slot labeling, focusing on one-shot and zero-shot setups.", "We will also apply ConVEx to related tasks such as named entity recognition and conversational question answering.", "To the best of our knowledge, the conducted work does not imply any undesirable ethical ramifications.", "By design and its uncontrollable nature, the Reddit data does encode a variety of societal, gender, and other biases; however, the models pretrained on the Reddit data are always fine-tuned for specific tasks using controlled data, and the Reddit-pretrained models are not used for any text generation nor full-fledged dialogue applications directly.", "The evaluation data used in this work have been collected in previous work following standard crowdsourcing and data annotation practices.", "We would like to thank Yutai Hou for sharing the data and evaluation episodes for the SNIPS evaluation.", "Thanks to our colleagues at PolyAI for fruitful discussions and critical examinations of this work.", "We would also like to thank Sam
Coope and Tyler Farghly for their help with rerunning and validating Span-BERT and Span-ConveRT." ]
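The zero-fine-tuning probe described in the excerpt above feeds the same user utterance as both template and input sentence, keeping at most one extracted value per sentence. A minimal sketch of that setup follows; `ConVExModel` and its `extract` method are hypothetical placeholders, since no exact API for the pretrained model is given in the text.

```python
# Hypothetical sketch of the no-fine-tuning probe described above.
# `model.extract(...)` is an illustrative interface, not a released API.
def probe_builtin_biases(model, sentences):
    """Return sentence -> extracted span text (or None), no fine-tuning."""
    predictions = {}
    for sent in sentences:
        # Feed the utterance as both template and input sentence.
        spans = model.extract(template=sent, input_sentence=sent)
        # Keep at most one value per sentence (highest-scoring span).
        predictions[sent] = max(spans, key=lambda s: s.score).text if spans else None
    return predictions
```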
[ "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Recognizing named entities in a document is a key task in many NLP applications.", "Although current state-of-the-art approaches to this task reach a high performance on clean text (e.g. newswire genres), those algorithms dramatically degrade when they are moved to noisy environments such as social media domains.", "We present two systems that address the challenges of processing social media data using character-level phonetics and phonology, word embeddings, and Part-of-Speech tags as features.", "The first model is a multitask end-to-end Bidirectional Long Short-Term Memory (BLSTM)-Conditional Random Field (CRF) network whose output layer contains two CRF classifiers.", "The second model uses a multitask BLSTM network as feature extractor that transfers the learning to a CRF classifier for the final prediction.", "Our systems outperform the current F1 scores of the state of the art on the Workshop on Noisy User-generated Text 2017 dataset by 2.45% and 3.69%, establishing a more suitable approach for social media environments.", "One of the core tasks in Natural Language Processing (NLP) is Named Entity Recognition (NER).", "NER is a sequence tagging task that consists in selecting the words that describe entities and recognizing their types (e.g., a person, location, company, etc.).", "Figure 1 shows examples of sentences from different domains that contain named entities.", "Recognizing entities in running text is typically one of the first tasks in the pipeline of many NLP applications, including machine translation, summarization, sentiment analysis, and question answering.", "Traditional machine learning systems have proven to be effective in formal text, where grammatical errors are minimal and writers stick to CoNLL 2003 [ Spanish ] MISC Farm Minister [ Loyola de Palacio ] PER had earlier accused [ Fischler ] PER at an [ EU ] ORG farm ministers ' meeting of causing unjustified alarm through dangerous generalisation .", "WNUT 2017, Twitter domain been listenin to [ trey ] PER alllll week ... can u luv someone u never met ??", "bcuz i think im in luv yeeuuuuppp", "!!!", "the rules of the written language (Florian et al., 2003a; Chieu and Ng, 2003a).", "However, those traditional systems dramatically fail on informal text, where improper grammatical structures, spelling inconsistencies, and slang vocabulary prevail (Rit-ter et al., 2011).", "For instance, Table 1 shows a snapshot of NER systems' performance during the last years, where the results drop from 96.49% to 41.86% on the F1 metric as we move from formal to informal text.", "Although the results are not directly comparable because they consider different conditions and challenges, they serve as strong evidence that the NER task in social media is far from being solved.", "Recently, researchers have approached NER using different neural network architectures.", "For instance, Chiu and Nichols (2016) proposed a neural model using Convolutional Neural Networks (CNN) for characters and a bidirectional Long Short Term Memory (LSTM) for words.", "Their model learned from word embeddings, capitalization, and lexicon features.", "On a slightly different approach, Lample et al. (2016) used a BLSTM with a CRF at the output layer, re-1401 Organizer Competition Domain F1 Classes Grishman and Sundheim (1996a) MUC-6 Newswire 96.49% 2 Tjong Kim Sang and De Meulder (2003) CoNLL Newswire 88.76% 4 Strauss et al. (2016) WNUT Twitter 52.41% 10 Derczynski et al. 
(2017) WNUT SM domains 41.86% 6 Table 1: Results on different NER shared tasks.", "moving the dependencies on external resources.", "Moreover, Ma and Hovy (2016) proposed an end-to-end BLSTM-CNN-CRF network, whose loss function is based on the maximum log-likelihood estimation of the CRF.", "These architectures were benchmarked on the standard CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003).", "Although most of the work has focused on formal datasets, similar approaches have been evaluated on SM domains (Strauss et al., 2016; Derczynski et al., 2017).", "In the Workshop on Noisy User-generated Text (WNUT) 2016, Limsopatham and Collier (2016), the winners of the NER shared task, used a BLSTM-CRF model that induced features from an orthographic representation of the text.", "Later, in the WNUT 2017 shared task, the best performing system used a multitask network that transferred the learning to a CRF classifier for the final prediction (Aguilar et al., 2017).", "In this work we focus on addressing the challenges of the NER task found in social media environments.", "We propose that what is traditionally categorized as noise (i.e., misspellings, inconsistent orthography, emerging abbreviations, and slang) should be modeled as is since it is an inherent characteristic of SM text.", "Specifically, the proposed models attempt to address", "i) misspellings using subword level representations,", "ii) grammatical mistakes with SM-oriented Part-of-Speech tags (Owoputi et al., 2013),", "iii) sound-driven text with phonetic and phonological features (Bharadwaj et al., 2016), and", "iv) the intrinsic skewness of NER datasets by applying class weights.", "It is worth noting that our models do not rely on capitalization or any external resources such as gazetteers.", "The reasons are that capitalization is arbitrarily used on SM environments, and gazetteers are expensive resources to develop for a scenario where novel entities constantly and rapidly emerge (Derczynski et al., 2017; Augen-stein et al., 2017).", "Based on our experiments, we have seen that a multitask variation of the proposed networks improves the results over a single-task network.", "Additionally, this multitask version, paired with phonetic and phonological features, outperforms previous state-of-the-art results on the WNUT 2017 dataset, and the same models obtain reasonable results with respect to the state of the art on the CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003).", "The rest of the paper is organized as follows: 2 presents the proposed features, the formal description of the models, and the implementation details.", "3 describes the datasets and their challenges.", "On 4, we show the evaluation process of our models and the results.", "We explain the performance of the models on 5.", "6 describes related work and, finally, we draw conclusions on 7.", "Our methods are based on two main strategies:", "i) a representation of the input text using complementary features that are more suitable to social media environments, and", "ii) a fusion of these features by using a multitask neural network model whose main goal is to learn how entities are contextual-ized with and without the entity type information.", "Semantic features .", "Semantic features play a crucial role in our pipeline as they provide contextual information to the model.", "This information allows the model to infer the presence of entities as well as the entity types.", "We use the pretrained word embedding model provided by Godin et al. 
(2015).", "This model has been trained on 1 million tweets (roughly 1% of the tweets in a year) with the skipgram algorithm.", "We take advantage of this resource as it easily adapts to other SM environments besides Twitter (Aguilar et al., 2017).", "Syntactic features .", "Syntactic features help the models deal with word disambiguation based on 1402 Sentence IPA u hav to b KIDDDDING me / ju hv t@ bi kIdIN mi / you have to be kidding me / ju hv t@ bi kIdIN mi / Table 2: Examples of both noisy and normalized text.", "the grammatical role that the words play on a sentence.", "That is, a word that can be a verb or a noun in different scenarios may conflict with the interpretations of the models; however, by providing syntactical information the models can improve their decisions.", "We capture grammatical patterns using the Part-of-Speech (POS) tagger provided by Owoputi et al. (2013).", "This POS tagger has custom labels that are suitable to SM data (i.e., the tagger considers emojis, hashtags, URLs and oth-ers).", "Phonetic and phonological features .", "We also consider the phonetic and phonological aspects of the data at the character level.", "In Table 2 we show an example of two phrases: the first sentence is taken from SM, and the second one is its normalized representation.", "Even though the spellings of both phrases are significantly different, by using the phonological (articulatory) aspects of those phrases it is possible to map them to the same phonetic representation.", "In other words, our assump-tion is that social media writers heavily rely on the way that words sound while they write.", "We use the Epitran 1 library (Bharadwaj et al., 2016), which transliterates graphemes to phonemes with the International Phonetic Alphabet (IPA).", "In addition to the IPA phonemes, we also use the phonological (articulatory) features generated by the PanPhon 2 library (Mortensen et al., 2016).", "These features provide articulatory information such as the way the mouth and nasal areas are involved in the elaboration of sounds while people speak.", "We have experimented with two models.", "In the first one, we use an end-to-end BLSTM-CRF network with a multitask output layer comprised of one CRF per task, similar to Yang et al. 
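The Epitran and PanPhon libraries cited above are publicly available, so the grapheme-to-phoneme step and the articulatory-feature lookup can be sketched directly; the feature handling below is illustrative rather than the paper's exact pipeline, and English G2P in Epitran additionally requires the optional Flite backend.

```python
# Sketch of the phonetic/phonological feature extraction described above,
# using the public Epitran and PanPhon libraries. The post-processing is
# illustrative, not the paper's exact implementation.
import epitran
import panphon

epi = epitran.Epitran("eng-Latn")   # English G2P (needs the Flite backend)
ft = panphon.FeatureTable()         # articulatory feature table

def phonetic_features(word: str):
    ipa = epi.transliterate(word)   # grapheme string -> IPA string
    # One vector of articulatory features (+1/0/-1) per IPA segment.
    segment_feats = ft.word_to_vector_list(ipa, numeric=True)
    return ipa, segment_feats

ipa, feats = phonetic_features("kidding")
print(ipa, len(feats))              # IPA transcription and segment count
```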
(2016).", "In the second one, we define a stacked model that is based on two phases:", "i) a multitask neural network and", "ii) a CRF classifier.", "In the first phase, the network acts as a feature extractor, and then, 1 https://github.com/dmort27/epitran 2 https://github.com/dmort27/panphon for the second phase, it transfers the learning to a CRF classifier for the final predictions (see Figure 3).", "In both cases, the multitask layer is defined with the following two tasks: Segmentation .", "This task focuses on the Begin-Inside-Outside (BIO scheme) level of the tokens.", "That is, for a given NE, the model has to predict whether a word is B, I, or O regardless of the entity type.", "The idea is to let the models learn how entities are treated in general, rather than associating the types to certain contexts.", "This task acts as a regularizer of the primary task to prevent overfitting.", "Categorization .", "In this case, the models have to predict the types of the entities along with the BIO scheme (e.g., B-person, I-person, etc.), which represent the final labels.", "We formalize the definitions of our models as follows: let X = [ x 1 , x 2 , ..., x n ] be a sample sentence where x i is the i th word in the sequence.", "Then, let : V x R dim x be a word embedding, and let x = [ ( x 1 ) , . . . , ( x n )] be the word embedding matrix for the sample sentence such that V x is the vocabulary and dim x is the dimension of the embedding space.", "Similarly, let : V p R dim p be the POS tag embedding, and let p = [ ( p 1 ) , . . . , ( p n )] be the POS tag embedding matrix for the sample sentence such that V p is the set of Part-of-Speech tags and dim p is the dimension of the embedding space.", "Notice that the POS tag embedding matrix p is learned during training.", "Also, let Q = [ q 1 , q 2 , ..., q m ] be the phonetic letters of a word; let : V q R | V q | + dim PanPhon be an embedding that maps each phonetic character to a one-hot vector of the International Phonetic Alphabet ( V q ) concatenated with the 21 ( dim PanPhon ) phonological features of the PanPhon library (tongue position, movement of lips, etc.) (Bharadwaj et al., 2016); and let q = [ ( q 1 ) , ..., ( q m )] be the matrix representation of the word-level phonetics and phonology.", "We first apply an LSTM (Hochreiter and Schmidhuber, 1997) to the q matrix on forward and backward directions.", "Then we concatenate the output from both directions: h = LSTM ( { q 1 , q 2 , ..., q m } ) h = LSTM ( { q m , q m 1 , ..., q 1 } ) h = [ h ; h ] 1403 Figure 2: This is an end-to-end system that uses the CRF loss function as the objective function of the network.", "This vector not only encodes the phonetic and phonological features, but it also captures some morphological patterns at the character level based on the IPA representations.", "Then, we concatenate this vector with the word and POS tag representations: a = [ x t ; p t ; h t ] .", "We feed this representation to another bidirectional LSTM network (Dyer et al., 2015), similar to the BLSTM described for the character level.", "The bidirectional LSTM generates a word-level representation that accounts for the context in the sentence using semantics, syntax, phonetics and phonological aspects.", "We feed this representation to a fully-connected layer: r i = BLSTM ( { a 1 , a 2 , ... 
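A minimal PyTorch sketch of the feature-fusion encoder formalized in Equations (1) and (2) above: a character-level BiLSTM over IPA segments, concatenation with word and POS embeddings, a word-level BiLSTM, and a ReLU projection. The per-direction sizes (64/100) and the 100-unit fully-connected layer follow the hyperparameters reported later in this excerpt; the embedding dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Sketch of Eq. (1)-(2): BiLSTM over fused word/POS/phonetic features."""
    def __init__(self, word_dim=400, pos_dim=100, phon_dim=126):  # dims assumed
        super().__init__()
        # Character-level BiLSTM over IPA segments (64 units per direction).
        self.char_lstm = nn.LSTM(phon_dim, 64, bidirectional=True, batch_first=True)
        # Word-level BiLSTM over concatenated features (100 units per direction).
        self.word_lstm = nn.LSTM(word_dim + pos_dim + 128, 100,
                                 bidirectional=True, batch_first=True)
        self.fc = nn.Linear(200, 100)  # z_i = ReLU(W_a r_i + b)

    def forward(self, words, pos, phon_chars):
        # phon_chars: (batch*seq_len, n_segments, phon_dim) per-word IPA features.
        _, (h_n, _) = self.char_lstm(phon_chars)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)        # h = [h_fwd; h_bwd]
        h = h.view(words.size(0), words.size(1), -1)   # (batch, seq_len, 128)
        a = torch.cat([words, pos, h], dim=-1)         # a = [x_t; p_t; h_t]
        r, _ = self.word_lstm(a)                       # Eq. (1)
        return torch.relu(self.fc(r))                  # Eq. (2)
```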
", "From here, we describe the multitask learning characteristics for each model separately.", "End-to-end model. For the end-to-end network (see Figure 2), we define an output layer based on two Conditional Random Fields (Lafferty et al., 2001), each assigned to one of the tasks.", "[Figure 2: The end-to-end system, which uses the CRF loss function as the objective function of the network.]", "The idea of adding a CRF to the model is to capture the relation of the output probabilities of the network with respect to the whole sequence.", "This means that the CRFs will maximize the log-likelihood of the entire sequence, which allows the model to learn very specific constraints from the data (e.g., a label I-location cannot be followed by I-person).", "Following Ma and Hovy (2016), we formalize the definition of the CRF as follows: let $y = [y_1, y_2, \ldots, y_n]$ be the labels for a sequence $x$, where $y_i$ represents the $i$-th label of the $x_i$ token in the sentence.", "Next, we calculate the conditional probability of seeing $y$ given the extracted features $z$ from the network and the weights $W$ associated to the labels: $p(y \mid z; W) = \frac{\exp(W_y \Phi(z, y))}{\sum_{y' \in \mathcal{Y}} \exp(W_{y'} \Phi(z, y'))}$, where $\Phi$ is a feature function that codifies the interactions between consecutive labels, $y_t$ and $y_{t+1}$, as well as the interactions between labels and words, represented by $z_t$.", "Then, the objective function for one CRF is defined by the maximum log-likelihood of this probability.", "However, we are running two CRFs, so the objective function is: $\mathcal{L}_1(z, W) = -\log p(y_{seg} \mid z; W)$, $\mathcal{L}_2(z, W) = -\log p(y_{cat} \mid z; W)$, $\mathcal{L}(z, W) = \gamma \mathcal{L}_1(z, W) + \mathcal{L}_2(z, W)$, where $\mathcal{L}_1$ is the loss function of the segmentation task with labels $y_{seg}$.", "Similarly, $\mathcal{L}_2$ is the loss function of the categorization task with labels $y_{cat}$.", "$\mathcal{L}$ is the loss function that accounts for both tasks, where the segmentation task is weighted by a scalar $\gamma$.", "Stacked model. For this model, we use a multitask network as a feature extractor whose loss function is defined as a categorical cross-entropy (see Figure 3).", "[Figure 3: The stacked model, which uses a network as a feature extractor and then transfers the learning to a CRF classifier.]", "We apply a softmax activation function to produce the probability distribution over the labels, and then we calculate the loss as follows: $\mathcal{H}_1(y, z) = -\sum_{z_i} y \log(\mathrm{softmax}(W_{seg} z_i + b))$, $\mathcal{H}_2(y, z) = -\sum_{z_i} y \log(\mathrm{softmax}(W_{cat} z_i + b))$, $\mathcal{L}(y, z) = \mathcal{H}_1(z, W_{seg}) + \mathcal{H}_2(z, W_{cat})$.", "After training the multitask network, we take the activation outputs from Equation (2).", "These vectors are used as features to train a Conditional Random Field classifier.", "The definition of the CRF is the same as the one described for the end-to-end network.", "We have performed a very simple preprocessing of the data, which consists of replacing URLs, emojis, tags, and numbers with predefined tokens.", "Additionally, the vocabulary of the pretrained word embeddings was not sufficient to cover all the words in the WNUT dataset (i.e., the training, validation, and testing sets have OOV words).", "We handled this situation using the Facebook library FastText (Bojanowski et al., 2016).", "This library can produce an embedding vector from the subword level of the word (i.e., n-grams).", "The advantage of FastText over other embedding learning algorithms is that we can still extract useful embeddings for OOV words from their subword embeddings.", "For instance, if there is a missing letter in one word, the subword-level vector will be reasonably close to the vector of the correct spelling.
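The subword fallback just described can be reproduced with the public fastText Python bindings; a small sketch follows, where "embeddings.bin" is a placeholder path for whichever pretrained subword-aware fastText model is used.

```python
# Sketch of the OOV handling described above using the public fastText
# bindings. "embeddings.bin" is a placeholder for a pretrained model file.
import fasttext
import numpy as np

model = fasttext.load_model("embeddings.bin")

# An OOV misspelling gets a vector composed from its character n-grams,
# so it lands near the vector of the correct spelling.
v_correct = model.get_word_vector("kidding")
v_noisy = model.get_word_vector("kiddding")   # missing/extra-letter OOV

cos = np.dot(v_correct, v_noisy) / (np.linalg.norm(v_correct) * np.linalg.norm(v_noisy))
print(f"cosine similarity: {cos:.3f}")        # typically close to 1.0
```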
", "The models have been trained using weighted classes, which forces the models to pay more attention to the labels that are less frequent.", "This is a very important step since NE datasets usually show a skewed distribution, where the NE tokens represent approximately 10% of the entire corpus.", "Although weighting classes improves the recall of the model, we tried to be sensitive to this aspect as the model can be forced to predict entities even in cases where there are none.", "The weights were experimentally defined, keeping the same distribution but decreasing the loss on non-entity tokens.", "Additionally, we defined our models using the following hyperparameters: the phonetic and phonological BLSTM at the character level uses 64 units per direction, which adds up to 128 units.", "Similarly, the word-level BLSTM uses 100 units per direction, which accounts for a total of 200 units.", "The fully-connected layer has 100 neurons, and it uses a Rectified Linear Unit (ReLU) activation function.", "We also use a dropout operation before and after each BLSTM component.", "This forces the networks to find different paths to predict the data, which ultimately improves the generalization capabilities (i.e., they do not rely on a single path for certain inputs).", "The dropout value is 0.5.", "For the stacked model we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001.", "Social media (SM) captures the fast-evolving behavior of the language, and, as its influence in society grows, SM platforms play an important role in language understanding.", "We focus this work on the WNUT 2017 dataset for NER (Derczynski et al., 2017).", "This dataset covers multiple SM platforms and suits the purpose of this work perfectly.", "Table 5 shows the distribution of the dataset and its classes.", "The training set uses tweets, whereas the development set is based on YouTube comments.", "The testing set combines content from Reddit and StackExchange.", "The cross-domain nature of the dataset establishes an additional challenge for the task.", "For instance, besides the particularities of the domains (e.g., length of the sentences, domain-specific expressions such as hashtags, emojis and others), the users tend to address different topics on each of the SM domains with different levels of relaxed language and style (Ritter et al., 2011; Strauss et al., 2016; Derczynski et al., 2017).", "Moreover, the predominant factors in those SM environments are the emerging and rare entities.", "As stated by Derczynski et al. (2017), emerging describes the entity instances that started to appear in context recently (e.g., a movie title released a year ago), whereas rare depicts the entities that appear less than a certain number of times.
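The class-weighting scheme described above (keeping the label distribution but decreasing the loss on the dominant non-entity label) maps directly onto a weighted cross-entropy; a minimal sketch with illustrative weight values rather than the experimentally tuned ones:

```python
import torch
import torch.nn as nn

# Sketch of the class weighting described above: shrink the loss on the
# dominant "O" (non-entity) label so rare entity labels get more attention.
# The weight values are illustrative, not the tuned ones.
num_labels = 13                        # e.g. 6 entity types x B/I + O
weights = torch.ones(num_labels)
weights[0] = 0.1                       # index 0 = "O": down-weight its loss

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(32, num_labels)             # (tokens, labels)
gold = torch.randint(0, num_labels, (32,))       # gold label ids
loss = criterion(logits, gold)
```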
", "[Table 4: General statistics of the WNUT 2017 dataset (Train / Dev / Test). Posts: 3,395 / 1,009 / 1,287. Tokens: 62,729 / 15,733 / 23,394. NE tokens: 3,160 / 1,250 / 1,589. NE tokens (%): 5.04 / 7.95 / 6.79.]", "It is worth noting that this dataset presents a great challenge to systems that rely on external resources due to the rare and emerging properties.", "We also consider the CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003) as it has been used as the standard dataset for NER benchmarks.", "However, we emphasize that both datasets present significantly different challenges and, thus, some relevant aspects in CoNLL 2003 may not be that relevant in the WNUT 2017 dataset.", "For example, capitalization is a crucial feature in newswire text, but it is less important in SM data since users tend to arbitrarily alter the character casing.", "Moreover, the target classes in the WNUT 2017 dataset cover the CoNLL 2003 classes plus fine-grained classes such as creative-work (e.g., movie titles, T.V. shows, etc.), group (e.g., sports teams, music bands, etc.), and product.", "The additional classes are more heterogeneous, which makes the task more difficult to generalize.", "Furthermore, Table 3 shows the percentage of unique tokens in the WNUT 2017 dataset, which certainly shows a great diversity compared to the CoNLL 2003 dataset.", "[Table 3: Percentage of unique NEs in two benchmark datasets. CoNLL 2003: Train 4 classes, 26% unique; Dev 4 classes, 40%; Test 4 classes, 41%. WNUT 2017: Train 6 classes, 75%; Dev 6 classes, 85%; Test 6 classes, 80%.]", "We mainly focus our experiments on the WNUT 2017 dataset.", "However, we consider it relevant to compare our approach to the standard CoNLL 2003 dataset where current state-of-the-art systems are benchmarked.", "This section addresses the experiments and results on both datasets.", "In this section we discuss the experiments of the proposed approaches.", "We compare our models and describe the contribution of each component of the stacked system.", "Additionally, we compare our results against the state of the art on the WNUT 2017 dataset.
", "Stacked vs. end-to-end model. Table 6 shows that the stacked system has a lower precision than the end-to-end model, but its recall is the highest.", "[Table 6: The class-level and overall results of our systems on the WNUT 2017 dataset, reported as Stacked / E2E / WNUT. corporation: P 33.33/30.77/31.91, R 19.70/12.12/22.73, F1 24.76/17.39/26.55. creative-work: P 50.00/55.56/36.67, R 14.79/10.56/7.75, F1 22.83/17.75/12.79. group: P 47.76/63.16/41.79, R 19.39/14.55/16.97, F1 27.59/23.65/24.14. location: P 62.20/78.12/56.92, R 52.67/50.00/49.33, F1 57.04/60.98/52.86. person: P 73.49/71.15/70.72, R 51.05/51.75/50.12, F1 60.25/59.92/58.66. product: P 40.58/34.29/30.77, R 22.05/9.45/9.45, F1 28.57/14.81/14.46. Overall: P 61.06/66.67/57.54, R 36.33/32.99/32.90, F1 45.55/44.14/41.86.]", "This means that the stacked model is slightly better at generalizing than the other models since it can detect a more diverse set of entities.", "The surface form F1 metric (Derczynski et al., 2017) supports that intuition as well.", "It assigns a better F1 score to the stacked system (43.90%) than to the end-to-end model (42.79%) because the former finds more rare and emerging entities than the latter.", "Moreover, Table 6 also shows that the precision of the end-to-end model is higher than that of the rest of the systems.", "Such a model tends to capture the most frequent entities and leave behind the rare ones, which explains the different behaviors of the precision and recall of both models.", "Stacked model. The feature extractor contains a category task that can produce predictions on the test set.", "We explored predicting the final labels with the feature extractor and compared the results against the predictions of the CRF classifier.", "We noticed that the CRF always outperformed the network.", "For the best scores, the feature extractor achieved 40.64% whereas the CRF reached 45.55%.", "This is consistent with previous research (Lample et al., 2016; Aguilar et al., 2017) in that the individual output probabilities of the network do not consider the whole sequence, and thus a sequential algorithm such as a CRF can improve the results by learning global constraints (i.e., B-person cannot be followed by I-corporation).", "Ablation experiment. We explored the contribution of the features and different aspects of our models.", "For instance, we tried a BLSTM network using pretrained word embeddings only.", "The results of this model set our baseline at a 39.78% F1-score (see Table 7).", "This score is considerably close to the state-of-the-art performance, but improvements beyond that are small.", "For instance, Table 7 shows an ablation experiment using the stacked model.", "The ablation reveals that weighting the classes is the most influential factor, which accounts for a 2.58% F1 score improvement.", "This aligns with the fact that the data is highly skewed, and thus the model should pay more attention to the less frequent classes.", "The second most important aspect is the POS tags, which enhance the results by 1.10%.", "This improvement suggests that POS tags are important whether the dataset is from a noisy environment or not, since other researchers have found positive effects by using this feature on formal text (Huang et al., 2015).", "Almost equally influential are the phonetic and phonological features, which push the F1 score up by 0.93%.", "According to the ablation experiment, using phonetics and phonology along with the pretrained word embeddings and POS tags can reach an F1 measure of 41.81%, which is a very similar result to the state-of-the-art score, but with a simpler and more suitable model for SM environments (i.e., without gazetteers or capitalization).", "We explored the multitask learning aspect by empirically trying multiple combinations of auxiliary tasks.", "The best combination is the standard NER categorization along with the segmentation task.", "The segmentation slightly improves over the binary task proposed by Aguilar et al. (2017) by around 0.3%.", "Additionally, trying the binarization, segmentation, and categorization tasks together drops the results by around 0.2% with respect to the categorization paired with the binary task.", "Moreover, the ablation experiment shows that the multitask layer boosts the performance of the stacked model by 0.79% F1 score.", "For the OOV problem, we use FastText to provide vectors for 2,333 words (around 13% of the vocabulary).", "However, the ablation experiment shows a small improvement, which suggests that those words did not substantially contribute to the meaning of the context.", "Another aspect that we explored was adding all the letters of the dataset to the character level of the stacked model without modifying the casing.", "Surprisingly, the models produced a slightly worse result (around -0.5%).", "Our intuition is that the character aspects are already captured by the model with the phonetic (IPA) representation, and the arbitrary use of capitalization renders this information useless.", "It is also worth noting that having phonetics instead of a language-dependent alphabet allows the adaptability of this approach to other languages.", "State-of-the-art comparison. Table 6 shows that our end-to-end and stacked models significantly outperform the state-of-the-art score by 2.28% and 3.69% F1 points, respectively.", "In the case of the stacked system, the precision and recall outperform the winning system of the shared task (UH-RiTUAL) across all the classes.", "Moreover, even though the UH-RiTUAL system uses gazetteers, it only outperforms the recall of the end-to-end model on the corporation class.", "These results can be explained by the entity diversity of the dataset, where the emerging and rare properties are difficult to capture with external resources.", "We also benchmarked our approach on the standard CoNLL 2003 dataset for the NER task.", "The stacked model reached 89.01% while the end-to-end model achieved 88.98% on the F1 metric.", "Although the state-of-the-art performance is 91.21% (Ma and Hovy, 2016), our approach targets SM domains and, consequently, our models disregard some of the important aspects of formal text while still getting reasonable results.", "For instance, Ma and Hovy (2016) input the text to their model as is, which indirectly introduces capitalization into the morphological analysis at the character level.", "This aspect becomes relevant in this dataset because entities are usually capitalized in formal text.", "As explained before, our models do not rely on capitalization because the characters are represented by the International Phonetic Alphabet, which does not differentiate between lower and upper cases.", "Table 8 shows some predictions of our stacked model on the WNUT 2017 test set.", "[Table 8: Sample predictions of our stacked model on the WNUT 2017 test set. 1: 'Road and airport closure isolate Srinagar as avalanche risk remains high'. 2: 'The Defence Research Development Organisation (DRDO) is working on four projects to develop new technologies for more accurate ...'. 3: 'Her name is Scout.']", "In example number 1, the model is able to correctly label Srinagar as person, even though the model does not rely on gazetteers or capitalization.", "It is also important to mention that the word was not in the training or development set, which means that the network had to infer the entity purely from the context.", "Moreover, the second example shows that the model has problems determining whether the article the belongs to an NE or not.", "This is an ambiguous problem that even humans struggle with.", "This example also has a variation in spelling for the words Defence and Organisation.", "We suspect that the mitigation of OOV words using the FastText library helped in this case.", "Also, from the phonetic perspective, the model treated the word Defence as if it were the word Defense because both words map to the same IPA sequence, /dIfEns/.", "In the third case, the model is not able to identify the NE Scout, even though the context makes it fairly easy.", "In its early years, NER systems focused on newswire text, where the goal was to identify mainly three types of entities: person, corporation, and location.", "These entity types were originally proposed in the 6th Message Understanding Conference (MUC-6) (Grishman and Sundheim, 1996b).", "In MUC-7, the majority of the systems were based on heavily hand-crafted features and manually elaborated rules (Borthwick et al., 1998).", "Some years later, many researchers incorporated machine learning algorithms into their systems, but there was still a strong dependency on external resources and domain-specific features and rules (Tjong Kim Sang and De Meulder, 2003).", "In addition, the majority of the systems used Maximum Entropy (Bender et al., 2003; Chieu and Ng, 2003b; Curran and Clark, 2003; Florian et al., 2003b; Klein et al., 2003) and Hidden Markov Models (Florian et al., 2003b; Klein et al., 2003; Mayfield et al., 2003; Whitelaw and Patrick, 2003).", "Furthermore, McCallum and Li (2003) used a CRF combined with web-augmented lexicons.", "The features were selected by hand-crafted rules and refined based on their relevance to the domain of the entities.",
"Moreover, Nothman et al. (2013) used Wikipedia resources to take advantage of structured data and reduce the human-annotated labels.", "In general, the results of the systems were reasonable for formal text, yet the scalability and the expensive detailed rules were not; their systems were difficult to maintain and adapt to other domains where different rules were needed.", "Recently, NER has been focused on noisy data as a result of the growth in social media users.", "However, the limits of the previous systems dramatically affected the results on noisy domains.", "For instance, Derczynski et al. (2014) evaluated multiple NER tools in noisy environments: Stanford NER (Finkel et al., 2005), ANNIE (Cunning-ham et al., 2002), among others.", "They reported that the majority of the tools were not capable of adapting to the noisy conditions showing a drop in performance of around 40% on a F1-score metric.", "This motivated many researchers to solve the problem using different techniques.", "In 2015, Baldwin et al. (2015) organized a NER shared task at the 1st Workshop on Noisy User-generated Text (WNUT), where three of the participants used word embedding as features to train their traditional machine learning algorithms (Godin et al., 2015; Toh et al., 2015; Cherry et al., 2015).", "The shared task introduced noisy data as well as more difficult entity types to identify (e.g., tv show, product, sports team, movie, music artist, etc.).", "Notably, the WNUT 2016 and 2017 were predominated by neural network systems (Limsopatham and Collier, 2016; Aguilar et al., 2017).", "Deep neural networks have proven to be effective for NER.", "The state-of-the-art and the most competitive architectures can be characterized by the use of recurrent neural networks (Chiu and Nichols, 2016) combined with CRF (Lample et al., 2016; Ma and Hovy, 2016; Peng and Dredze, 2016; Bharadwaj et al., 2016; Aguilar et al., 2017).", "Our work primarily focuses on social media data and explores more suitable variations and combinations of those models.", "The most important differences of our approach and previous works are", "i) the use of phonetics and phonology (articulatory) features at the character level to model SM noise,", "ii) consistent BLSTMs for character and word levels,", "iii) the segmentation and categorization tasks,", "iv) a multitask neural network that transfers the learning without using lexicons or gazetteers, and", "v) weighted classes to handle the inherent skewness of the datasets.", "This paper proposed two models for NER on social media environments.", "The first one is a stacked model that uses a multitask BLSTM network as a feature extractor to transfer the learning to a CRF classifier.", "The second one is an end-to-end multitask BLSTM-CRF model whose output layer has a CRF per task.", "Both models improve the state-of-the-art results on the WNUT 2017 dataset, where the data comes from multiple SM domains (i.e., Twitter, YouTube, Reddit, and StackExchange).", "Instead of working on normalizing text, we designed representations that are robust to inherent properties of SM data: inconsistent spellings, diverse vocabulary, and flexible grammar.", "Considering that SM is a prevalent communication chan-nel that constantly generates massive amounts of data, it is practical to design NLP tools to process this domain as is .", "In this sense, we showed that the phonetic and phonological features are useful to capture sound-driven writing.", "This approach avoids the standard normalization process and 
boosts prediction performance.", "Furthermore, the use of multitask learning with segmentation and categorization is important to improve the results of the models.", "Finally, the weighted classes force the model to pay more attention on skewed datasets.", "We showed that these components can point to more suitable approaches for NER on social media data." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result" ]
[ "Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field.", "Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality.", "Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper.", "Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph.", "Features having the strongest causal connection with the results provide interpretive support for the diagnosis.", "Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results.", "More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field 1 .", "Electronic Medical Record (EMR) based diagnosis has attracted extensive attention due to its comprehensive historical information and clinical descriptions with the development of natural language processing and medical informatics (Yang et al., 2018; Choi et al., 2018; Liu et al., 2019; Dong et al., 2020; Ma et al., 2020b).", "The application of deep learning in medicine requires adequate medical explanations for the result.", "Specific to the diagnosis of EMR, the model needs to provide the text description supporting the diagnosis results.", "As shown in Figure 1, an irregular EMR is a document of disease-related information, including symptoms, history of the disease, preliminary examination results, and so on, which is disordered and sparse with meaningless noisy text.", "Existing methods provide explanation through medical entities (Yuan et al., 2020), text spans (Mullenbach et al., 2018) and the weights of external knowledge (Ma et al., 2018).", "The entity is critical to the diagnosis (Sha and Wang, 2017; Girardi et al., 2018), but for the medical explanation, it cannot provide specific information of symptoms (such as positive or negative).", "And the form of the span is too fragmented and lacks readability.", "Therefore, the clause as a more informative and readable representation is needed to be combined above the level of entities.", "Most of the previous methods provide reliable explanations for diagnosis by calculating the similarity with an external medical knowledge base (ICD 2 and CCS 3 ) (Xu et al., 2019, 2020).", "KAME 2 https://www.cdc.gov/nchs/icd/icd10cm.htm 3 https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp (Ma et al., 2018) uses the weights of the nodes in the introduced knowledge graph to provide explanations.", "Depending on the hierarchical relations in the database, GMAN (Yuan et al., 2020) builds a disease hierarchy graph and a causal graph to find critical entities.", "However, a trusted medical knowledge base requires a mass of expertise in different fields to build, and it may be incomplete or erroneous in practical clinical applications.", "So far, how to extract supporting facts from the EMR itself without an external medical knowledge base is still a 
problem.", "Counterfactual reasoning provides a link between what could have happened when inputs had been changed (Verma et al., 2020).", "Doctors usually make a judgment based on several related symptoms during diagnosing a disease.", "In this regard we can consider a question: will a doctor make a misdiagnosis without one of the critical symptoms?", "The result is clear.", "In a counterfactual way, if we gradually weaken the features until the diagnosis changes dramatically, then this feature can be considered as a supporting fact.", "Based on this consensus, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method for the irregular EMR in this paper.", "First, we model the EMR as a hierarchical graph structure, which contains sentences, clauses, and entities.", "Specifically, sentences are used to model the temporal relationship, clauses provide a complete descriptive explanation, and entities provide symptom support as others.", "On this basis, we use a graph attention network to aggregate all information from different granularities.", "Then, we can do a counterfactual intervention to obtain the causal relation between feature and diagnosis.", "Specifically, we train a learnable soft-mask matrix to mask the feature of nodes or edges in the graph while keeping the diagnosis unchanged, and the remaining features are the supporting facts of the diagnosis.", "Counterfactual reasoning on the graph requires enhancing the medical features contained in the text of different granularity, so we use clustering labels 4 to cluster clauses and entities.", "The experimental results demonstrate the effectiveness of our method.", "The contributions of this paper are summarized as follows: We propose a multi-granularity structured 4 Notice that this label is disease-free and can be initially labeled without expert knowledge by crowdsourcing annotation.", "modeling method based on the hierarchical graph network that decomposes the EMR into sentences, clauses, and entities, and use clustering labels to enhance the expression of medical features.", "We adapt counterfactual intervention to extract critical supporting facts from the EMR during diagnosis.", "Importantly, our method is disease-independent and does not require a precise external medical knowledge base, so that it is suitable for a wide range of applications.", "The evaluation conducted on the real EMR dataset shows that our method can correctly diagnose the types of lymphedema.", "Keyword coverage and human evaluation show that the counterfactual reasoning method has better extraction accuracy and robustness compared to two existing methods reimplemented by ourselves.", "Given an irregular EMR in the form of free text X = [ x 1 , x 2 , , x L ] with L words, the task for us is to extract supporting facts that can be used to explain the diagnosis result without relying on external knowledge while performing diagnosis.", "The supporting facts can be entities or clauses of text.", "The medical features in the EMR are sparse and medical entities are insufficient to provide suffi-cient explanation for diagnosis.", "Therefore, we do multi-granularity segmentation for EMRs, which Figure 3: An overview of counterfactual multi-granularity graph supporting facts extraction network.", "enhances the symptom features of entities and explanation of diagnosis, while maintaining the integrity of the text.", "An EMR can be divided by periods into sentences, which can be further divided into clauses by commas or semicolons 
", "In order to keep the symptom features of entities, we do Named Entity Recognition and number extraction for each clause.", "In addition, we add two general nodes representing the gender and age of the patient, respectively.", "After segmentation, as shown in Figure 2, we can build a hierarchical tree structure.", "The nodes at each level represent the text of sentences, clauses, and entities, respectively.", "Specifically, for each EMR, we connect the two general nodes and the sentence nodes sequentially.", "Then, we connect each clause node to the sentence node to which it belongs and to the entity nodes disassembled from it.", "In particular, a fully-connected relationship is established between all the clause nodes, which overcomes the defect that a Graph Attention Network (GAT) can only aggregate information from adjacent nodes when the network is shallow, and expands the receptive field of each clause node to the whole EMR.", "Then, all clause nodes are connected to an aggregate node which is used to do the diagnosis.", "All the edges in the graph are bidirectional to make the information between nodes flow better.", "In the original EMR, all tokens have the same weight, so noisy text will degrade the performance of diagnosis and explanation.", "To improve the accuracy of symptom representation, clustering labels are used to cluster clauses and entities into corresponding medical classifications.", "Specifically, the clauses are divided into 33 classes and the entities into 10 classes, which is a scientific classification method in medicine derived from the textbook \"Diagnostics\" (Xuehong Wan, 2013).", "These labels are disease-free and can be labeled without expert knowledge by crowdsourcing annotation.", "We manually annotated the corresponding labels for the entire dataset on our own platform.", "And we have trained a BERT (Devlin et al., 2019) based text classifier on 30% of the data, which can achieve an annotation accuracy of 80.76% on clauses and 97.13% on entities on the remaining data.", "This shows that our method can easily annotate large-scale data.", "With these labels, we can gather the same types of features together in the feature space, thereby enhancing the model's overall attention to important types of features.", "Please refer to Appendix B.2 for more details.", "After building the multi-granularity graph for a medical record, each node in the graph contains a sequence $X_{node} = [x_1, x_2, \ldots, x_n]$ with $n$ words, which is tokenized by the tokenizer of BERT (Devlin et al., 2019).", "In order to maintain the consistency of the different granularity encodings, we use one bi-directional RNN (Schuster and Paliwal, 1997) with GRU (Cho et al., 2014) to convert the sequences of sentences, clauses, entities and general information into hidden state sequences $H_m = (h_1, h_2, \ldots, h_n)$: $h_t = \mathrm{BiGRU}(h_{t-1}, e(x_t))$ (1), where $h_t$ is the hidden state of the $t$-th token and $e(x_t)$ is the randomly initialized embedding vector of $x_t$.", "Finally, we use the last hidden state of the $i$-th text sequence as the feature $H_i$ of node $i$.", "Once we get the feature of each node, we use the Graph Attention Network (GAT) (Velickovic et al., 2018) to aggregate the information between different granularities.", "GAT can obtain the correlation score between nodes based on the attention mechanism, which is the key to the interpretability of our model.
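The hierarchical graph construction described above (general nodes, a sequential sentence chain, clause nodes fully connected among themselves and linked to their sentence, entities, and an aggregate node) can be sketched with networkx; the segmentation helpers are assumed to exist, and the node naming is illustrative.

```python
# Sketch of the hierarchical EMR graph described above. `split_sentences`,
# `split_clauses`, and `extract_entities` are assumed helpers.
import itertools
import networkx as nx

def build_emr_graph(emr_text, gender, age, split_sentences, split_clauses, extract_entities):
    g = nx.Graph()  # undirected edges ~ bidirectional information flow
    g.add_nodes_from([("gender", {"text": gender}), ("age", {"text": age}), "aggregate"])
    g.add_edge("gender", "age")                       # general-node chain start
    prev, clause_ids = "age", []
    for i, sent in enumerate(split_sentences(emr_text)):      # split by periods
        s = f"sent_{i}"
        g.add_node(s, text=sent)
        g.add_edge(prev, s)                           # sequential sentence chain
        prev = s
        for j, clause in enumerate(split_clauses(sent)):      # commas/semicolons
            c = f"clause_{i}_{j}"
            g.add_node(c, text=clause)
            g.add_edge(s, c)                          # clause -> its sentence
            g.add_edge(c, "aggregate")                # clause -> aggregate node
            clause_ids.append(c)
            for k, ent in enumerate(extract_entities(clause)):
                e = f"ent_{i}_{j}_{k}"
                g.add_node(e, text=ent)
                g.add_edge(c, e)                      # clause -> its entities
    g.add_edges_from(itertools.combinations(clause_ids, 2))  # fully connect clauses
    return g
```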
", "Specifically, GAT takes all the node features as input and calculates the attention coefficients $\alpha_{ij}$ as follows: $e_{ij} = \mathrm{LeakyReLU}(a^T [W H_i ; W H_j])$ (2), $\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in N_i} \exp(e_{ik})}$ (3), where $H_i$ is the feature of node $i$, $W \in \mathbb{R}^{d \times d}$ is a learnable weight matrix for the linear projection, and $a \in \mathbb{R}^{2d}$ is a learnable weight vector used to transform the concatenated adjacent node representations into the edge score $e_{ij}$ between the $i$-th and $j$-th nodes.", "Equation (3) performs a softmax normalization over the attention scores of all the edges connected to node $i$.", "Then, we update the feature of each node by $H'_i = \mathrm{LeakyReLU}(\sum_{j \in N_i} \alpha_{ij} W H_j)$ (4).", "After graph reasoning, the representation $H$ of each node has been updated with the granular information aggregated from adjacent nodes and can be used for subsequent tasks.", "After obtaining the updated node features, we use them in three subtasks:", "(i) graph classification for automatic diagnosis;", "(ii) sub-sentence classification for clustering; and", "(iii) entity classification for clustering.", "Taking entity node classification as an example, for each entity node, we use a two-layer MLP with the ReLU activation function to calculate the probability.", "For an entity node $i$, we get $P_{entity,i} = \mathrm{MLP}_{entity}(E_i)$ (5).", "In the same way, we can obtain the probabilities $P_{graph}$, $P_{clause}$, $P_{entity}$.", "As in common multi-task learning, we combine all the losses together as: $\mathcal{L}_{joint} = \lambda_1 \mathcal{L}_{graph} + \lambda_2 \mathcal{L}_{clause} + \lambda_3 \mathcal{L}_{entity}$ (6), where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyper-parameters, and all the losses are calculated by cross-entropy loss.", "Providing supporting information while making the diagnosis is the key to applying Artificial Intelligence in the medical field.", "Inspired by (Ying et al., 2019), we add a node-mask or edge-mask into the GAT to obtain the counterfactual result after training and eliminate the noise nodes while keeping the diagnostic results unchanged.", "For the edge-mask, we introduce a learnable matrix $M$ with the same form as the adjacency matrix of the medical record graph.", "Each element $m_{ij}$ in the matrix represents the degree of masking for message aggregation from node $i$ to node $j$ in the graph.", "With this method, the calculation of the attention coefficients in the GAT becomes $\alpha_{ij} = \frac{\exp(e_{ij} \cdot m_{ij})}{\sum_{k \in N_i} \exp(e_{ik} \cdot m_{ik})}$ (7).", "And for the node-mask, similarly, we introduce a learnable parameter $\beta_i$ for each node $i$ in the graph.", "The parameter represents the degree of masking of the feature in the node.", "After node-masking, the calculations of $e_{ij}$ and $H'_i$ become $e_{ij} = \mathrm{LeakyReLU}(a^T [\beta_i W H_i ; \beta_j W H_j])$ (8) and $H'_i = \mathrm{LeakyReLU}(\sum_{j \in N_i} \alpha_{ij} \beta_j W H_j)$ (9).", "In the training of counterfactual reasoning, we jointly optimize three loss functions to obtain accurate counterfactual results.", "To ensure that the model can make a correct diagnosis after the counterfactual intervention, we use the original model to obtain the fact result $D_i$ and maximize the probability of selecting the correct diagnosis in counterfactual reasoning.
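A minimal PyTorch sketch of the edge-masked attention in Equations (2)-(4) and (7) above: a single-head GAT layer with a learnable sigmoid-squashed edge mask. This is illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGATLayer(nn.Module):
    """Single-head GAT layer with a learnable edge mask (Eq. 2-4, 7)."""
    def __init__(self, dim, num_nodes):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * dim))
        # Soft edge mask M, squashed to [0, 1] with a sigmoid.
        self.mask_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, H, adj):
        # H: (N, dim) node features; adj: (N, N) 0/1 adjacency matrix.
        Wh = self.W(H)
        n = H.size(0)
        # e_ij = LeakyReLU(a^T [W h_i ; W h_j]) for all node pairs.
        pairs = torch.cat([Wh.unsqueeze(1).expand(-1, n, -1),
                           Wh.unsqueeze(0).expand(n, -1, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a)             # (N, N) edge scores
        m = torch.sigmoid(self.mask_logits)          # edge-mask values m_ij
        e = e * m                                    # Eq. (7): scale the scores
        e = e.masked_fill(adj == 0, float("-inf"))   # restrict to neighbors N_i
        alpha = torch.softmax(e, dim=-1)             # Eq. (3) normalization
        return F.leaky_relu(alpha @ Wh)              # Eq. (4) node update
```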
regarding which node to select to reduce the uncertainty of the result.", "Finally, the loss of counterfactual reasoning is as follows: L c = 4 logP ( D = D i ) + 5 sum ( M ) 6 1 N (cid:88) m i M m i log ( m i ) 6 1 N (cid:88) m i M (1 m i ) log (1 m i ) (10) where 4 , 5 and 6 are hyper-parameters, N is the number of elements in the mask matrix M , and all the elements in M are mapped to the [0 , 1] by sigmoid function.", "For node-mask, the training is similar.", "After counterfactual reasoning, we extract the nodes or edges (each edge represents the two nodes connected) represented by the top-k elements in the mask matrix as supporting facts.", "Based on the cooperation with the hospitals, we conducted experiments with real EMR data.", "We selected the EMRs from the department of lymphedema and diagnose the disease of primary lymphedema ( ), secondary lymphedema ( ), chylous reflux lymphedema ( ) and others ( ).", "The reasons for us to choose this department are as follows: (I) Lymphedema is a sub-discipline in medicine, so the researches on it, whether in Medicine or Artificial Intelligence, is still limited.", "For example, ICD10 can not provide full medical supporting.", "(II)", "The pathogenesis and treatment methods of different types of lymphedema vary greatly, but their outward manifestations are similar.", "Therefore, there is an urgent need for a simple method of earlier diagnosis system of lymphedema.", "(III)", "Specialist doctors pay more attention to the diagnosis in sub-discipline disease and do not concern with the large-scale rough diagnosis.", "Formally, there are 1000 EMRs used in our experiment, of which 900 are used for training and 100 are used for testing.", "The statistics of four types of diseases are shown in Table", "1. 
The average length of all EMRs is 345 words in Chinese.", "And our model is implemented based on PyTorch (Paszke et al., 2019), and use Adam (Kingma and Ba, 2015) optimizer for training.", "Please refer to Appendix B.1 for datasets details and Appendix A.1 for implementation details.", "We designed two representative models to compare the ability to extract medical support facts under similar task conditions based on attention and variational inference:", "Self-Attention This method represents most of the existing approaches and provides explanations through attention similarity.", "We use BiGRU to encode the EMR.", "With the sequence embedding, following (Choi et al., 2016), we use average pooling to obtain the overall representation for automatic diagnosis.", "For supporting fact extraction, following (Mullenbach et al., 2018), we calculate the self-attention weight of each token, and design a sliding window method to obtain the average attention scores of fixed length spans, among which having high scores are taken as the supporting facts.", "PostKS This is another method based on variational inference we've designed in addition to attention.", "Inspired by the dialogue knowledge selection model PostKS (Lian et al., 2019), we convert the pivotal information extraction into a clause selection problem.", "This method uses the text result of the diagnosis(as shown in Figure 1) to calculate the correlation with the clause as posterior distribution through the attention mechanism, and then uses self-attention and average pooling between clauses to obtain correlation score as the prior distribution.", "During training, based on variational inference, the model uses posterior information to guide prior selection, so that makes the prior distribution and the Model Diagnosis Clause Entity P/% R/% F1/% P/% R/% F1/% P/% R/% F1/% Self-Attention 94.95 95.00 94.97 ---PostKS 95.13 97.00 96.06 ---CMGE c e 96.17 96.00 96.08 1.91 3.72 2.52 14.36 9.05 11.10 CMGE e 97.40 96.00 96.69 81.26 81.80 81.53 15.22 32.18 20.67 CMGE c 97.19 97.11 97.15 25.75 1.55 2.92 95.33 95.12 95.22 CMGE 99.04 99.00 99.02 82.49 82.53 82.51 96.43 96.38 96.40 Table 2: The first two lines are the diagnostic performance of the compared model and the last line is ours.", "posterior distribution consistent.", "Finally, during inference, we select the clauses with high prior attention scores among clauses as supporting facts.", "Please refer to Appendix A.2 for more details.", "To measure the performance of our pivotal information extraction module, we built a simple diagnostic criterion from (Levine, 2017), which is a complete diagnosis and treatment guide for lymphedema written by medical experts.", "Based on this diagnosis criteria, we used a combination of automatic evaluation and human evaluation.", "Automatic Evaluation The precision, recall, and F1 are used as the metrics to measure the diagnostic accuracy of the model, which is the basis for the practical application.", "Specifically, several key-phrases for the three types of lymphedema are manually identified respectively to represent diagnostic features, and they are the re-descriptions of diagnostic criteria in the guide using phrases from EMRs.", "We use hit@1/3/5 (Bordes et al., 2013) to measure the coverage rate of the extracted results to the key-phrases.", "These metrics represent whether one of the diagnostic features is included in the top-1/3/5 extracted results.", "Please refer to Appendix B.3 for more details.", "Human Evaluation Since some of the implicit medical 
"Human Evaluation Since some of the implicit medical features cannot be covered by key-phrases, human evaluation is necessary.", "We used each model to extract the top 3 supporting facts for all 100 EMR samples in the test set, and randomly shuffled the order of the results.", "Then we invited 3 evaluators with medical backgrounds who had read the guide to determine whether the results conform to medical knowledge.", "We focus on the comprehensiveness and trustworthiness of each model.", "Comprehensiveness measures whether the model can provide more medical features, and trustworthiness measures whether the extraction results are helpful for diagnosis.", "For each item, the evaluator is asked to give a score from 0 to 2.", "The final indicator is the average of the three evaluators.", "4.1 Diagnostic Result The diagnostic results are shown in Table 2.", "From the results, we can see that our model performs better than all the compared models and achieves about 99% accuracy in the diagnosis of lymphedema, exceeding the comparison models by 3%-5% in precision, recall, and F1.", "Based on our model, the categories of clauses and entities can be distinguished correctly, which demonstrates that the clustering information contained in the pseudo-labels is correctly learned by our multi-granularity model.", "This result indicates that the accuracy of our method in the diagnosis of lymphedema is in line with clinical requirements.", "Since our goal is to make the model genuinely help doctors in clinical practice with reliable medical explanations, we focus next on the performance of the counterfactual extraction of the supporting facts for the diagnosis.", "Please refer to Appendix A.4 for the effectiveness of our model in diagnosis on the benchmark data.", "Automatic Result Table 3 shows the automatic evaluation results of the supporting fact extraction.", "Since the identified keywords can hardly cover all the features for diagnosis and models have different adaptability to various diseases, the performance differs across diseases.", "Table 3: The automatic evaluation for the extraction of diagnostic supporting facts for three types of lymphedema.
Model | Secondary Lymphedema hit@1/3/5 | Primary Lymphedema hit@1/3/5 | Chylous Reflux Lymphedema hit@1/3/5
Self-Attention | 5.45% / 36.36% / 50.91% | 13.64% / 45.45% / 54.55% | 4.76% / 9.52% / 33.33%
PostKS | 9.09% / 43.64% / 60.00% | 9.09% / 40.91% / 54.55% | 0.00% / 14.29% / 19.05%
Node-Mask | 25.45% / 52.73% / 69.09% | 22.73% / 31.82% / 54.55% | 9.52% / 19.05% / 23.81%
Edge-Mask | 36.36% / 61.82% / 70.91% | 22.73% / 40.91% / 50.00% | 61.90% / 66.67% / 76.19%", "Compared with other models, the counterfactual-based methods, especially the Edge-Mask method, have an advantage in accuracy and robustness on the whole.", "Hit@1 shows that Edge-Mask can locate key facts more quickly than the comparison methods, and hit@5 shows that it achieves over 70% accuracy on secondary lymphedema and chylous reflux lymphedema.", "In the comparison across different types of lymphedema, the other methods show greater performance degradation, and only Edge-Mask maintains high accuracy on all diseases, indicating that the Edge-Mask method is highly robust to different diseases.", "Human Result Table 4 shows the results of the human evaluation of the four categories of diagnosis.", "Compared with other methods, the counterfactual-based methods have great advantages in comprehensiveness, which indicates that our method can focus more on useful medical information and eliminate invalid noise in the EMR.",
"The fourth category deserves particular attention.", "This category includes all non-lymphedema medical records, and its diseases are diverse and complex.", "It can be seen that the counterfactual reasoning method performs strongly on this category in terms of both comprehensiveness and credibility, indicating that our method is truly independent of the type of disease and suitable for large-scale deployment.", "Table 2 shows the ablation experiment results for the clustering labels.", "For the experiments without the corresponding labels, we used a classifier with randomly initialized parameters, which reflects the model's expected ability to encode medical features.", "The results show that both the clause label and the entity label can improve the accuracy of diagnosis by about 1% on top of over 96% accuracy.", "Since we use the same encoder to encode the three granularities of text, i.e., sentence, clause and entity, adding clause labels also improves the accuracy of entity classification, and vice versa.", "The result indicates that the introduction of cluster tags enhances the expression of medical information in the model and enables the model to better extract and utilize relevant medical knowledge from irregular text.", "Results on Primary Lymphedema Since primary lymphedema is mainly diagnosed by excluding other types of lymphedema, the keywords we established are not standardized in the EMRs; the performance of all models in Table 3 therefore declines significantly and should only be used for comparison.", "The performance in human evaluation is consistent with the other diseases in Table 4.", "Results on Chylous Reflux Lymphedema Except for Edge-Mask, the performance of the other methods on chylous reflux lymphedema drops significantly.", "Since this type of EMR only accounts for 4% of the dataset, models based on frequency statistics struggle to capture its key features.", "Edge-Mask, which uses counterfactual intervention to obtain causal relations, is disease-independent and can adapt to scarce data.", "Node-Mask and Edge-Mask The effect of Edge-Mask is included in that of Node-Mask: masking the features of a node inevitably reduces the information flow on all of its connected edges.", "So compared to Node-Mask, Edge-Mask is a fine-grained counterfactual intervention.", "For Node-Mask, the flow of multi-granularity information between nodes is truncated.", "For example, when a clause node is masked, the entity features belonging to it are truncated together.", "Therefore, Node-Mask performs worse than Edge-Mask.", "Figure 4 is an example randomly obtained from the test set.", "In this graph, each node represents a clause that contains the entities used to describe the symptoms of the disease, and the edges represent the connections between them.", "All the aforementioned features constitute a hierarchical supporting graph that provides effective help for doctors' diagnosis.", "As we can see, our model successfully extracted the patient's history of cancer, surgery and chemotherapy, which clearly indicates that the patient is suffering from secondary lymphedema.", "This shows that the supporting facts we extracted are effective.", "We provide a comparison of the extraction results of different models in Appendix A.3.", "Figure 5 shows an example of the visualization of the Edge-Mask matrix.", "It can be seen that most of the edges have been masked, and only the edges from two key feature nodes have been preserved.", "This proves that our method can effectively filter noisy features and extract supporting facts.",
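To make the Node-Mask/Edge-Mask distinction concrete, below is a minimal PyTorch sketch of an edge-level counterfactual intervention: each edge of the graph is masked in turn and edges are ranked by how much the diagnosis probability drops. The two-matrix propagation and the pooling classifier are illustrative stand-ins for CMGE's actual multi-granularity graph network; all names and values are assumptions.

import torch

# Hedged sketch: counterfactually remove one edge, re-run a toy graph
# encoder, and measure the change in the probability of the gold diagnosis.
def diagnose(adj, feats, w1, w2):
    h = torch.relu(adj @ feats @ w1)        # one message-passing step
    logits = (adj @ h @ w2).mean(dim=0)     # pool node states -> class logits
    return torch.softmax(logits, dim=-1)

def edge_effects(adj, feats, w1, w2, label):
    base = diagnose(adj, feats, w1, w2)[label]
    effects = {}
    for i, j in torch.nonzero(adj, as_tuple=False).tolist():
        masked = adj.clone()
        masked[i, j] = 0.0                  # Edge-Mask: cut this edge only
        effects[(i, j)] = (base - diagnose(masked, feats, w1, w2)[label]).item()
    return effects                           # large drop => supporting edge

torch.manual_seed(0)
n_nodes, dim, n_classes = 5, 8, 3
adj = (torch.rand(n_nodes, n_nodes) > 0.6).float()
feats = torch.randn(n_nodes, dim)
w1, w2 = torch.randn(dim, dim), torch.randn(dim, n_classes)
top = sorted(edge_effects(adj, feats, w1, w2, label=0).items(),
             key=lambda kv: -kv[1])[:3]
print("edges with largest counterfactual effect:", top)

A Node-Mask variant would instead zero an entire row of feats, which, as the text notes, also cuts every edge of that node at once — the coarser intervention.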
"Explainable Diagnosis with EMR It is necessary to provide explainability for automatic diagnosis systems.", "CAML (Mullenbach et al., 2018) provides explanations with the spans having the highest attention weights in the text sequence, and (Feng et al., 2020) calculates a threshold for attention selection.", "AdaCare (Ma et al., 2020a) calculates the average importance weights over the whole dataset to obtain symptoms strongly associated with the diseases.", "These works focus on attention-based correlations and ignore the causality between features and diagnosis.", "Document Modeling with Graph Network Document modeling with graph networks has been widely used in text classification (Yao et al., 2019), multi-hop reading comprehension (Cao et al., 2019) and extractive summarization (Wang et al., 2020).", "An EMR can also be considered as a document.", "There are two main ways to structure a document into a graph: based on the entities (Qiu et al., 2019) or based on the structure of the document (Zheng et al., 2020).", "(Tu et al., 2019) considers the integration of documents and entities as heterogeneous nodes in the graph network, and (Fang et al., 2019) propose a hierarchical model that combines document structure and entity structure.", "We used a multi-granularity hierarchical graph network to model the EMR documents.", "Counterfactual Reasoning Counterfactual reasoning has a long history (Lewis, 1973; Woodward, 2005).", "In recent years, (Oberst and Sontag, 2019) introduces a kind of structural causal model to generate counterfactual trajectories in a synthetic environment of sepsis management.", "(Lin et al., 2020) presents a patient simulator to generate informative counterfactual responses in disease diagnosis.", "(Lenis et al., 2020) identifies salient regions of a medical image by measuring the effect of local counterfactual image perturbations.", "We use counterfactual reasoning in EMRs to provide explanations for diagnosis.", "In this paper, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method for irregular EMRs without an external medical knowledge base.", "Based on this model, we can correctly diagnose lymphedema.", "The proposed counterfactual-based approach can discover the causal relationship between symptoms and diagnosis.", "The results of supporting fact extraction show that our method is highly robust and maintains accuracy across various diseases, even in categories with few data resources.", "In the future, we will introduce multi-modal data, such as radiology images, into the model to discover more medical knowledge from EMRs.", "This work is supported by the National Key Research and Development Program of China (No.2018YFB1005104) and the Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006)." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "objective", "abstain", "method", "result", "method", "method", "objective", "objective", "objective", "method", "abstain", "result", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "other" ]
[ "Emotion recognition in conversation (ERC) is a crucial component in affective dialogue systems, which helps the system understand users' emotions and generate empathetic responses.", "However, most works focus on modeling speaker and contextual information primarily on the textual modality or simply leveraging multimodal information through feature concatenation.", "In order to explore a more effective way of utilizing both multimodal and long-distance contextual information, we propose a new model based on multimodal fused graph convolutional network, MMGCN, in this work.", "MMGCN can not only make use of multimodal dependencies effectively, but also leverage speaker information to model inter-speaker and intra-speaker dependency.", "We evaluate our proposed model on two public benchmark datasets, IEMOCAP and MELD, and the results prove the effectiveness of MMGCN, which outperforms other SOTA methods by a significant margin under the multimodal conversation setting.", "Emotion is an important part of human daily communication.", "Emotion Recognition in Conversation (ERC) aims to automatically identify and track the emotional status of speakers during a dialogue.", "It has attracted increasing attention from researchers in the field of natural language processing and multimodal processing.", "ERC has a wide range of potential applications such as assisting conversation analysis for legal trials and e-health services etc.", "It is also a key component for building natural human-computer interactions that can produce emotional responses in a dialogue.", "Different from traditional emotion recognition on isolated utterances, emotion recognition in conversation requires context modeling of individual utterances.", "The context can be attributed to the preceding utterances, temporality in conversation turns, or speaker related information etc.", "Different models have been proposed to capture the contextual information in previous works, including the LSTM-based model (Poria et al., 2017), the conversational memory network (CMN) model (Hazarika et al., 2018b), interactive conversational memory network (ICON) model (Hazarika et al., 2018a), and DialogueRNN model (Majumder et al., 2019) etc.", "In the example conversation as shown in Figure 1, the two speakers are chatting in the context of the male speaker being admitted to USC.", "In this chatting scene, they change topics a few times, such as the female speaker inviting the male speaker out to play and so on.", "But they keep coming back to the topic of USC, and then both of them express an excitement emotional status.", "It shows that long-distance contextual information is of great help to the prediction of speakers' emotions.", "However, previous models can not effectively capture both speaker and long-distance dialogue contextual information simultaneously in multi-speaker conversation scenarios.", "Ghosal et", "al.(Ghosal et al., 2019), therefore, first propose the DialogueGCN model which applies graph convolutional network (GCN) to capture long-distance contextual information in a conversation.", "DialogueGCN takes each utterance as a node and connects any nodes that are in the same window within a conversation.", "It can well model both the dialogue context and speaker information which leads to the state-of-the-art ERC performance.", "However, like most previous models, DialogGCN only focuses on the textual modality of the conversation, ignoring effective combination of other modalities such as visual and acoustic modalities.", 
"Works that consider multimodal contextual information often conduct the simple feature concatenation type of multimodal fusion.", "In order to effectively explore the multimodal information and at the same time capture long-distance contextual information, we propose a new multimodal fused graph convolutional network (MMGCN) model in this work.", "MMGCN constructs the fully connected graph in each modality, and builds edge connections between nodes corresponding to the same utterance across different modalities, so that contextual information across different modalities can interact.", "In addition, the speaker information is injected into MMGCN via speaker embedding.", "Furthermore, different from DialogueGCN, which is a non-spectral domain GCN and its many optimized matrices occupy too much computing resource, we encode the multimodal graph using spectral domain GCN and extend the GCN from a single layer to deep layers.", "To verify the effectiveness of the proposed model, we carry out experiments on two benchmark multimodal conversation datasets, IEMOCAP and MELD.", "MMGCN significantly outperforms other models on both datasets.", "The rest of the paper is organized as follows: Section 2 discusses some related works; Section 3 introduces the proposed MMGCN model in details; Section 4 and 5 present the experiment setups on two public benchmark datasets and the analysis of experiment results and ablation study; Finally, Section 6 draws some conclusions.", "With the fast development of social media, much more interaction data become available, including several open-sourced conversation datasets such as IEMOCAP(Busso et al., 2008), AVEC(Schuller et al., 2012), MELD(Poria et al., 2018), etc.", "ERC has attracted much research attention recently.", "Many previous works focus on modeling contextual information due to its importance in ERC.", "Poria et al. (Poria et al., 2017) leverage a LSTM-based model to capture interaction history context.", "Hazarika et al. 
"Hazarika et al. (2018b,a) first pay attention to the importance of speaker information and exploit different memory networks to model different speakers.", "DialogueRNN (Majumder et al., 2019) leverages distinct GRUs to capture speakers' contextual information.", "DialogueGCN (Ghosal et al., 2019) constructs a graph considering both speaker and conversation sequential information and achieves the state-of-the-art performance.", "Most recent studies on ERC focus primarily on the textual modality.", "(Poria et al., 2017; Hazarika et al., 2018b,a) leverage multimodal information by concatenating features from three modalities without modeling the interaction between modalities.", "(Chen et al., 2017) conduct multimodal fusion at the word level for emotion recognition of isolated utterances.", "(Sahay et al., 2018) consider contextual information and use relations in the emotion labels across utterances to predict the emotion.", "(Zadeh et al., 2018) propose MFN to fuse information from multiple views, which aligns features from different modalities well.", "However, MFN neglects to model speaker information, which is significant for ERC as well.", "The state-of-the-art DialogueGCN model only considers the textual modality.", "In order to explore a more effective way of fusing multiple modalities and at the same time capturing contextual conversation information, we propose MMGCN, which constructs a graph based on all three modalities.", "Graph convolutional networks have been widely used in the past few years for their ability to cope with non-Euclidean data.", "Mainstream GCN methods can be divided into spectral-domain methods and non-spectral-domain methods (Velickovic et al., 2017).", "Spectral-domain GCN methods (Zhang et al., 2019) are based on Laplace spectral decomposition theory.", "They can only deal with undirected graphs.", "Non-spectral-domain GCN methods (Velickovic et al., 2017; Schlichtkrull et al., 2018; Li et al., 2015) can be applied to both directed and undirected graphs, but consume more computing resources.", "Recently, researchers have proposed methods to make spectral-domain GCNs deeper without over-smoothing (Li et al., 2019; Chen et al., 2020).", "In order to further improve MMGCN on ERC, we encode the multimodal graph using a spectral-domain GCN with deep layers.", "A dialogue can be defined as a sequence of utterances $\{u_1, u_2, \dots, u_N\}$, where $N$ is the number of utterances.", "Each utterance involves three sources of utterance-aligned data corresponding to three modalities, i.e., the acoustic (a), visual (v) and textual (t) modalities, which can be represented as $u_i = \{u_i^a, u_i^v, u_i^t\}$ (1), where $u_i^a$, $u_i^v$, $u_i^t$ denote the raw feature representations of $u_i$ from the acoustic, visual and textual modality, respectively.", "The emotion recognition in conversation task aims to predict the emotional status label for each utterance $u_i$ in the conversation based on the available information from all three modalities.", "Figure 2 illustrates the overall framework of our proposed emotion recognition in conversation system, which consists of three key modules: the Modality Encoder, the Multimodal Fused Graph Convolutional Network (MMGCN), and the Emotion Classifier.", "As mentioned above, the dialogue context information is important for predicting the emotion label of each utterance.", "Therefore, it is beneficial to encode the contextual information into the utterance feature representation.", "We generate the context-aware utterance feature encoding for each modality through the corresponding modality encoder.",
"To be specific, we apply a bidirectional Long Short-Term Memory (LSTM) network to encode the sequential textual context information for the textual modality.", "For the acoustic and visual modalities, we apply a fully connected network.", "The context-aware feature encoding for each utterance can be formulated as follows: $h_i^t = [\overrightarrow{\mathrm{LSTM}}(u_i^t, h_{i-1}^t), \overleftarrow{\mathrm{LSTM}}(u_i^t, h_{i+1}^t)]$, $h_i^a = W_e^a u_i^a + b_i^a$, $h_i^v = W_e^v u_i^v + b_i^v$ (2), where $u_i^a$, $u_i^v$, $u_i^t$ are the context-independent raw feature representations of utterance $i$ from the acoustic, visual and textual modalities, respectively.", "The modality encoder outputs the context-aware feature encodings $h_i^a$, $h_i^v$, and $h_i^t$ accordingly.", "In order to capture the utterance-level contextual dependencies across multiple modalities, we propose a Multimodal fused Graph Convolutional Network (MMGCN).", "We construct a spectral-domain graph convolutional network to encode the multimodal contextual information, inspired by (Li et al., 2019; Chen et al., 2020).", "We also stack more layers to construct a deep GCN.", "Furthermore, we add learned speaker embeddings to encode the speaker-level contextual information.", "As mentioned above, speaker information is important for ERC.", "In order to encode the speaker identity information, we add speaker embeddings to the features before constructing the graph.", "Assuming there are $M$ parties in a dialogue, the size of the speaker embedding is $M$.", "We show a two-speaker conversation case in Figure 2.", "The original speaker identity can be denoted by a one-hot vector $s_i$, and the speaker embedding $S_i$ is calculated as $S_i = W_s s_i + b_s$ (3).", "The speaker embedding can then be leveraged to attach speaker information in the graph construction.",
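Below is a minimal PyTorch sketch of the modality encoders in Eq. (2) and the speaker embedding in Eq. (3): a bidirectional LSTM for the textual stream, fully connected layers for the acoustic and visual streams, and a linear map over one-hot speaker vectors. All dimensions and names are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, d_text, d_audio, d_visual, d_model, n_speakers):
        super().__init__()
        self.text_lstm = nn.LSTM(d_text, d_model // 2, bidirectional=True,
                                 batch_first=True)
        self.audio_fc = nn.Linear(d_audio, d_model)
        self.visual_fc = nn.Linear(d_visual, d_model)
        self.speaker_emb = nn.Linear(n_speakers, d_model)  # W_s s_i + b_s

    def forward(self, u_t, u_a, u_v, speaker_onehot):
        h_t, _ = self.text_lstm(u_t)          # context-aware textual encoding
        h_a = self.audio_fc(u_a)              # per-utterance acoustic encoding
        h_v = self.visual_fc(u_v)             # per-utterance visual encoding
        s = self.speaker_emb(speaker_onehot)  # speaker embedding S_i
        # node initialisations: each modality feature concatenated with S_i
        return [torch.cat([h, s], dim=-1) for h in (h_t, h_a, h_v)]

enc = ModalityEncoder(d_text=100, d_audio=80, d_visual=64, d_model=32,
                      n_speakers=2)
n_utts = 6
u_t = torch.randn(1, n_utts, 100)
u_a = torch.randn(1, n_utts, 80)
u_v = torch.randn(1, n_utts, 64)
speakers = torch.eye(2)[torch.randint(0, 2, (1, n_utts))]
nodes = enc(u_t, u_a, u_v, speakers)
print([x.shape for x in nodes])  # three (1, 6, 64) tensors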
"A dialogue with $N$ utterances can be represented as an undirected graph $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ ($|\mathcal{V}| = 3N$) denotes the utterance nodes in three modalities and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is a set of relationships containing context, speaker and modality dependencies.", "We construct the graph as follows: Nodes: Each utterance is represented by three nodes $v_i^a$, $v_i^v$, $v_i^t$ in the graph, initialized with $h_i'^a$, $h_i'^v$, $h_i'^t$, which represent $[h_i^a, S_i]$, $[h_i^v, S_i]$, $[h_i^t, S_i]$ respectively, corresponding to the three modalities.", "Thus, given a dialogue with $N$ utterances, we construct a graph with $3N$ nodes.", "Edges: We assume that each utterance has a certain connection to every other utterance in the same dialogue.", "Therefore, any two nodes of the same modality in the same dialogue are connected in the graph.", "Furthermore, each node is connected with the nodes that correspond to the same utterance but come from different modalities; for example, $v_i^a$ is connected with $v_i^v$ and $v_i^t$ in the graph.", "Edge Weighting: We assume that if two nodes have higher similarity, the information interaction between them is more important, and the edge weight between them should be higher.", "In order to capture the similarities between node representations, following (Skianis et al., 2018), we use the angular similarity to represent the edge weight between two nodes.", "There are two types of edges in the graph: 1) edges connecting nodes from the same modality, and 2) edges connecting nodes from different modalities.", "To differentiate them, we use different edge weighting strategies.", "For the first type of edges, the edge weight is computed as $A_{ij} = 1 - \frac{\arccos(\mathrm{sim}(n_i, n_j))}{\pi}$ (4), where $n_i$ and $n_j$ denote the feature representations of the $i$-th and $j$-th node in the graph.", "For the second type of edges, the edge weight is computed as $A_{ij} = \gamma \left(1 - \frac{\arccos(\mathrm{sim}(n_i, n_j))}{\pi}\right)$ (5), where $\gamma$ is a hyper-parameter.", "Graph Learning: Inspired by (Chen et al., 2020), we build a deep graph convolutional network based on the undirected graph formed following the above construction steps to further encode the contextual dependencies.", "To be specific, given the undirected graph $G = (\mathcal{V}, \mathcal{E})$, let $P$ be the renormalized graph Laplacian matrix (Kipf and Welling, 2016) of $G$: $P = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} = (D + I)^{-1/2} (A + I) (D + I)^{-1/2}$ (6), where $A$ denotes the adjacency matrix, $D$ denotes the diagonal degree matrix of graph $G$, and $I$ denotes the identity matrix.", "The iteration of the GCN across layers can be formulated as $H^{(l+1)} = \sigma\big(((1-\alpha) P H^{(l)} + \alpha H^{(0)})((1-\beta^{(l)}) I + \beta^{(l)} W^{(l)})\big)$ (7), where $\alpha$ and $\beta^{(l)}$ are two hyper-parameters, $\sigma$ denotes the activation function and $W^{(l)}$ is a learnable weight matrix.", "To ensure that the decay of the weight matrix adaptively increases when stacking more layers, we set $\beta^{(l)} = \log(\frac{\lambda}{l} + 1)$, where $\lambda$ is also a hyper-parameter.", "A residual connection to the first layer $H^{(0)}$ is added to the representation $P H^{(l)}$, and an identity mapping $I$ is added to the weight matrix $W^{(l)}$.", "With such residual connections, we can make MMGCN deeper to further improve performance.",
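A minimal PyTorch sketch of the propagation in Eqs. (6)-(7) follows: the renormalised Laplacian P, then GCNII-style layers with an initial-residual connection to H^(0) and an identity-mapped weight matrix. The binary adjacency, alpha, lam and the layer count are illustrative assumptions (the real model weights edges by angular similarity as in Eqs. (4)-(5)).

import torch

def renormalized_laplacian(adj):
    a_tilde = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5).diag()
    return d_inv_sqrt @ a_tilde @ d_inv_sqrt           # Eq. (6)

def deep_gcn(h0, adj, weights, alpha=0.1, lam=0.5):
    p = renormalized_laplacian(adj)
    h = h0
    for l, w in enumerate(weights, start=1):
        beta = torch.log(torch.tensor(lam / l + 1.0))  # beta^(l) = log(lam/l + 1)
        support = (1 - alpha) * p @ h + alpha * h0     # initial residual to H^(0)
        h = torch.relu(support @ ((1 - beta) * torch.eye(w.size(0)) + beta * w))
    return h

torch.manual_seed(0)
n_nodes, dim, n_layers = 18, 16, 4                     # 3N nodes for N=6 utterances
adj = (torch.rand(n_nodes, n_nodes) > 0.5).float()
adj = ((adj + adj.T) > 0).float()                      # keep the graph undirected
h0 = torch.randn(n_nodes, dim)
weights = [torch.randn(dim, dim) * 0.1 for _ in range(n_layers)]
print(deep_gcn(h0, adj, weights).shape)                # (18, 16)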
"As described in Sec. 3.2.2, we initialize the nodes with the combination of utterance features and speaker embeddings, $h_i' = [h_i'^a, h_i'^v, h_i'^t]$ (8).", "Let $g_i^a$, $g_i^v$ and $g_i^t$ be the features of the different modalities encoded by the GCN.", "The features corresponding to the same utterance are concatenated: $g_i = [g_i^a, g_i^v, g_i^t]$ (9).", "We then concatenate $g_i$ and $h_i'$ to generate the final feature representation for each utterance: $e_i = [h_i', g_i]$ (10).", "$e_i$ is then fed into an MLP with fully connected layers to predict the emotion label $\hat{y}_i$ for the utterance: $l_i = \mathrm{ReLU}(W_l e_i + b_l)$, $\mathcal{P}_i = \mathrm{Softmax}(W_{smax} l_i + b_{smax})$, $\hat{y}_i = \arg\max_k(\mathcal{P}_i[k])$ (11).", "3.4 Training Objectives We use categorical cross-entropy along with L2 regularization as the loss function during training: $L = -\frac{1}{\sum_{s=1}^{N} c(s)} \sum_{i=1}^{N} \sum_{j=1}^{c(i)} \log \mathcal{P}_{i,j}[y_{i,j}] + \eta \|\theta\|_2$ (12), where $N$ is the number of dialogues, $c(i)$ is the number of utterances in dialogue $i$, $\mathcal{P}_{i,j}$ is the probability distribution of predicted emotion labels of utterance $j$ in dialogue $i$, $y_{i,j}$ is the expected class label of utterance $j$ in dialogue $i$, $\eta$ is the L2-regularization weight, and $\theta$ is the set of all trainable parameters.", "We use the stochastic-gradient-descent-based Adam (Kingma and Ba, 2014) optimizer to train our network.", "Hyper-parameters are optimized using grid search.",
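As a concrete illustration of the classifier in Eq. (11) and the objective in Eq. (12), here is a minimal PyTorch sketch; sizes and the regularisation weight are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    def __init__(self, d_feat, d_hidden, n_classes):
        super().__init__()
        self.fc = nn.Linear(d_feat, d_hidden)     # l_i = ReLU(W_l e_i + b_l)
        self.out = nn.Linear(d_hidden, n_classes)

    def forward(self, e):                         # e: (n_utterances, d_feat)
        l = torch.relu(self.fc(e))
        return torch.log_softmax(self.out(l), dim=-1)

def objective(model, dialogues, labels, l2_weight=3e-5):
    # cross-entropy averaged over all utterances of all dialogues, plus L2
    total_utts = sum(d.size(0) for d in dialogues)
    nll = 0.0
    for e, y in zip(dialogues, labels):
        log_p = model(e)
        nll = nll - log_p[torch.arange(e.size(0)), y].sum()
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return nll / total_utts + l2_weight * l2

model = EmotionClassifier(d_feat=64, d_hidden=32, n_classes=6)
dialogues = [torch.randn(5, 64), torch.randn(7, 64)]
labels = [torch.randint(0, 6, (5,)), torch.randint(0, 6, (7,))]
loss = objective(model, dialogues, labels)
loss.backward()                                   # trainable end to end
print(float(loss))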
"We evaluate our proposed MMGCN model on two benchmark datasets, IEMOCAP (Busso et al., 2008) and MELD (Poria et al., 2018).", "Both are multimodal datasets with aligned acoustic, visual and textual information for each utterance in a conversation.", "Following (Ghosal et al., 2019), we partition both datasets into train and test sets with a roughly 8:2 ratio.", "Table 1: Data distribution of IEMOCAP and MELD.
Dataset | dialogues (train+val / test) | utterances (train+val / test)
IEMOCAP | 120 / 31 | 5810 / 1623
MELD | 1153 / 280 | 11098 / 2610", "Table 1 shows the distribution of train and test samples for both datasets.", "IEMOCAP: The dataset contains 12 hours of videos of two-way conversations from ten unique speakers, where only the first eight speakers from sessions one to four are used in the training set.", "Each video contains a single dyadic dialogue, segmented into utterances.", "There are in total 7433 utterances and 151 dialogues.", "Each utterance in a dialogue is annotated with an emotion label from six classes: happy, sad, neutral, angry, excited and frustrated.", "MELD: The Multimodal EmotionLines Dataset (MELD) is a multimodal and multi-speaker conversation dataset.", "Compared to the EmotionLines dataset (Chen et al., 2018), MELD provides conversation data aligned across three modalities, with higher quality.", "There are in total 13708 utterances, 1433 conversations and 304 different speakers.", "Specifically, different from dyadic conversation datasets such as IEMOCAP, MELD has three or more speakers in a conversation.", "Each utterance in a dialogue is annotated with an emotion label from seven classes: anger, disgust, fear, joy, neutral, sadness and surprise.", "The textual raw features are extracted using TextCNN, following (Hazarika et al., 2018a).", "The acoustic raw features are extracted using the OpenSmile toolkit with the IS10 configuration (Schuller et al., 2011).", "The visual facial expression features are extracted using a DenseNet (Huang et al., 2015) pre-trained on the Facial Expression Recognition Plus (FER+) corpus (Barsoum et al., 2016).", "The hyper-parameters are set as follows: the number of GCN layers is 4 for both IEMOCAP and MELD.", "The dropout is 0.4.", "The learning rate is 0.0003.", "The L2 regularization parameter is 0.00003.", "$\alpha$, $\lambda$ and $\gamma$ are set to 0.1, 0.5 and 0.7, respectively.", "Considering the class imbalance in MELD, we use focal loss when training MMGCN on MELD.", "In addition, we add layer normalization after the speaker embedding.", "Following previous works (Hazarika et al., 2018a; Majumder et al., 2019; Ghosal et al., 2019), we use the weighted-average F1-score as the evaluation metric.", "A paired t-test is performed to test the significance of performance improvements, with a default significance level of 0.05.", "In order to verify the effectiveness of our model, we implement and compare the following models on emotion recognition in conversation.", "BC-LSTM (Poria et al., 2017): it encodes contextual information through a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) network.", "The context-aware features are then used for emotion classification.", "BC-LSTM ignores speaker information, as it doesn't attach any speaker-related information to the model.", "CMN (Hazarika et al., 2018b): it leverages speaker-dependent GRUs to model utterance context, combining dialogue history information.", "The utterance features with contextual information are subject to two distinct memory networks for the two speakers.", "Due to the fixed number of memory network blocks, CMN can only serve in dyadic conversation scenarios.", "ICON (Hazarika et al., 2018a): it extends CMN to model distinct speakers respectively.", "As with CMN, two speaker-dependent GRUs are leveraged.", "Besides, a global GRU is used to track the change of emotional status in the entire conversation, and multi-layer memory networks are leveraged to model the global emotional status.", "Though ICON improves the result of ERC, it still cannot adapt to a multi-speaker scenario.", "DialogueRNN (Majumder et al., 2019): it models speakers and sequential information in dialogues through three different GRUs: a Global GRU, a Speaker GRU and an Emotion GRU.", "Specifically, the Global GRU models context information, while the speaker-dependent GRU models the status of a certain speaker.", "The two modules update interactively.", "The Emotion GRU detects the emotion of utterances in the conversation.", "Furthermore, in the multimodal setting, the concatenation of acoustic, visual, and textual features is used when the speaker talks, but only the visual features are used otherwise.", "However, DialogueRNN doesn't improve much in multimodal settings.", "DialogueGCN (Ghosal et al., 2019): it applies a GCN to ERC, in which the generated features can integrate rich information.", "Specifically, utterance-level features encoded by a Bi-LSTM are used to initialize the nodes of the graph, and edges are constructed within a certain window.", "Utterances in the same dialogue but at long distance can be connected directly.", "Relational GCN (Schlichtkrull et al., 2018) and GNN (Morris et al., 2019), which are both non-spectral-domain GCN models, are leveraged to encode the graph.", "However, DialogueGCN only focuses on the textual modality.", "In order to compare with our MMGCN under the multimodal setting, we extend DialogueGCN by simply concatenating the features of the three modalities.",
"We compare our proposed MMGCN with all the baseline models presented in Section 4.5 on the IEMOCAP and MELD datasets under the multimodal setting.", "In order to compare the results under the same experiment settings, we reimplement the models under the multimodal setting.", "Table 2 shows the performance comparison of MMGCN with the other models on the two benchmark datasets under the multimodal setting.", "Table 2: ERC performance (F1-score) of different approaches on both IEMOCAP and MELD under the multimodal setting, which means the input includes all of the acoustic, visual, and textual modalities; the last row denotes the best performance.
Model | IEMOCAP: Happy | Sad | Neutral | Angry | Excited | Frustrated | Average(w) | MELD: Average(w)
BC-LSTM | 34.43 | 60.87 | 51.81 | 56.73 | 57.95 | 58.92 | 54.95 | 56.80
CMN | 30.38 | 62.41 | 52.39 | 59.83 | 60.25 | 60.69 | 56.13 | -
ICON | 29.91 | 64.57 | 57.38 | 63.04 | 63.42 | 60.81 | 58.54 | -
DialogueRNN | 39.16 | 81.69 | 59.77 | 67.36 | 72.91 | 60.27 | 64.58 | 57.11
DialogueGCN | 47.10 | 80.88 | 58.71 | 66.08 | 70.97 | 61.21 | 65.04 | 58.23
MMGCN | 42.34 | 78.67 | 61.73 | 69.00 | 74.33 | 62.32 | 66.22 | 58.65", "DialogueGCN was the best-performing model when using only the textual modality.", "Under the multimodal setting, DialogueGCN, which is fed with the concatenation of acoustic, visual and textual features, achieves some slight improvement over the single textual modality.", "Our proposed MMGCN improves the F1-score over DialogueGCN under the multimodal setting by an absolute 1.18% on IEMOCAP and 0.42% on MELD on average, and the improvement is significant with p-value < 0.05.", "Table 3 shows the performance of MMGCN under different multimodal settings on both benchmark datasets.", "Table 3: ERC performance of MMGCN under different multimodal settings, which means the input contains different combinations of the three modalities.
Modality | IEMOCAP | MELD
a | 54.66 | 42.63
v | 33.86 | 33.27
t | 62.35 | 57.72
at | 65.70 | 58.02
vt | 62.89 | 57.92
avt | 66.22 | 58.65", "From Table 3 we can see that the best single-modality performance is achieved on the textual modality and the worst on the visual modality, which is consistent with previously reported findings.", "Adding the acoustic and visual modalities brings additional performance improvement over the textual modality.", "To verify the effectiveness of MMGCN in multimodal fusion, we compare it with other multimodal fusion methods, including early fusion, late fusion, fusion through gated attention, and other representative fusion methods such as MFN (Zadeh et al., 2018) and MulT (Tsai et al., 2019).", "The first three fusion methods are illustrated in Figure 3.", "For early fusion, the multimodal features are concatenated and fed into the GCN directly.", "For late fusion, the features of different modalities are fed into different GCNs respectively and concatenated afterwards.", "For fusion through gated attention, the features are fed into different GCNs in the same way as in late fusion, and then into a gated attention module.", "Specifically, the gated attention module can be formulated as follows: $r_i^{m_j} = \tanh(W^{m_j} h_i^{m_j})$ (13), $r_i^{m_k} = \tanh(W^{m_k} h_i^{m_k})$ (14), $z = \sigma(W_z h_i^{m_j})$ (15), $r_i^{(m_j, m_k)} = z \cdot r_i^{m_j} + (1 - z) \cdot r_i^{m_k}$ (16), $e_i = [r_i^{(a,v)}, r_i^{(a,t)}, r_i^{(v,t)}]$ (17), where $m_j$ and $m_k$ can be any modalities among $\{a, v, t\}$, $h_i^{m_j}$ and $h_i^{m_k}$ represent the features encoded by the corresponding modality encoder, and $e_i$ represents the final feature representation for the $i$-th utterance.", "Since MFN and MulT are designed to fuse multimodal information sequentially, they are used to replace the Modality Encoder.", "The fused multimodal features are subsequently fed to the GCN module.", "Table 4 shows that MMGCN with graph-based multimodal fusion outperforms all the other compared fusion methods.",
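For reference, a minimal PyTorch sketch of the gated-attention fusion baseline in Eqs. (13)-(17) follows: each pair of modalities is fused through a learned gate z, and the three pairwise fusions are concatenated. Dimensions and class names are illustrative assumptions.

import torch
import torch.nn as nn

class GatedPairFusion(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_j, self.w_k, self.w_z = (nn.Linear(d, d) for _ in range(3))

    def forward(self, h_j, h_k):
        r_j = torch.tanh(self.w_j(h_j))      # Eq. (13)
        r_k = torch.tanh(self.w_k(h_k))      # Eq. (14)
        z = torch.sigmoid(self.w_z(h_j))     # Eq. (15)
        return z * r_j + (1 - z) * r_k       # Eq. (16)

class GatedAttentionFusion(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.av, self.at, self.vt = (GatedPairFusion(d) for _ in range(3))

    def forward(self, h_a, h_v, h_t):        # Eq. (17)
        return torch.cat([self.av(h_a, h_v), self.at(h_a, h_t),
                          self.vt(h_v, h_t)], dim=-1)

fusion = GatedAttentionFusion(d=32)
h_a, h_v, h_t = (torch.randn(6, 32) for _ in range(3))
print(fusion(h_a, h_v, h_t).shape)           # (6, 96)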
"We investigate the impact of the number of layers in MMGCN on the ERC performance in Table 5.", "The experiment results show that the number of layers does affect the ERC performance.", "Specifically, MMGCN achieves the best performance with 4 layers on both IEMOCAP and MELD.", "Speaker embedding can differentiate input features from different speakers.", "Previous works have reported that speaker information can help improve emotion recognition performance.", "We conduct an ablation study to verify the contribution of the speaker embedding in MMGCN, as shown in Table 6.", "As expected, dropping the speaker embedding in MMGCN leads to performance degradation, which is significant by t-test with p < 0.05.", "Figure 4 depicts a scene in which a man and a woman quarrel with each other over a female friend of the man who came to meet him from 700 miles away.", "They are frustrated or angry in most cases.", "At the beginning of the conversation, their emotional states are both neutral.", "Over time, they become emotional.", "They are both angry at the end of the conversation.", "The heatmaps of the adjacency matrix for the 20th utterance in the conversation from the three modalities demonstrate that, different from simple sequential models, MMGCN pays attention not only to the close context but also to long-distance context.", "For example, as shown in the textual heatmap, MMGCN can successfully aggregate information from the most relevant utterances, even long-distance ones, such as the 3rd utterance.", "In this paper, we propose a multimodal fused graph convolutional network (MMGCN) for multimodal emotion recognition in conversation (ERC).", "MMGCN provides a more effective way of utilizing both multimodal and long-distance contextual information.", "It constructs a graph that captures not only intra-speaker context dependency but also inter-modality dependency.", "With the residual connection, MMGCN can have deep layers to further improve recognition performance.", "We carry out experiments on two public benchmark datasets, IEMOCAP and MELD, and the experiment results prove the effectiveness of MMGCN, which outperforms other state-of-the-art methods by a significant margin under the multimodal conversation setting.", "This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0108600, the National Natural Science Foundation of China (No. 62072462), the National Natural Science Foundation of China (No. 61772535), and the Beijing Natural Science Foundation (No. 4192028)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "other", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other" ]
[ "Existing multilingual machine translation approaches mainly focus on English-centric directions, while the non-English directions still lag behind.", "In this work, we aim to build a many-to-many translation system with an emphasis on the quality of non-English language directions.", "Our intuition is based on the hypothesis that a universal cross-language representation leads to better multilingual translation performance.", "To this end, we propose mRASP2, a training method to obtain a single unified multilingual translation model.", "mRASP2 is empowered by two techniques:", "a) a contrastive learning scheme to close the gap among representations of different languages, and", "b) data augmentation on both multiple parallel and monolingual data to further align token representations.", "For English-centric directions, mRASP2 outperforms existing best unified model and achieves competitive or even better performance than the pre-trained and fine-tuned model mBART on tens of WMT's translation directions.", "For non-English directions, mRASP2 achieves an improvement of average 10+ BLEU compared with the multilingual Transformer baseline.", "Code, data and trained models are available at https://github.", "com/PANXiao1994/mRASP2 .", "Transformer (Vaswani et al., 2017) has achieved decent performance for machine translation with rich bilingual parallel corpora.", "Recent work on multilingual machine translation aims to create a single unified model to translate many languages (Johnson et al., 2017; Aharoni et al., 2019; Zhang et al., 2020; Fan et al., 2020; Siddhant et al., 2020).", "Multilingual translation models are appealing for two reasons.", "First, they are model effi-cient, enabling easier deployment (Johnson et al., Encoder Decoder <Fr> Je t'aime.", "2017).", "Further, parameter sharing across different languages encourages knowledge transfer, which benefits low-resource translation directions and potentially enables zero-shot translation (i.e. direct translation between a language pair not seen during training) (Ha et al., 2017; Gu et al., 2019; Ji et al., 2020).", "Despite these benefits, challenges still remain in multilingual NMT.", "First, previous work on multilingual NMT does not always perform well as their corresponding bilingual baseline especially on rich resource language pairs (Tan et al., 2019; Zhang et al., 2020; Fan et al., 2020).", "Such performance gap becomes larger with the increasing number of accommodated languages for multilingual NMT, as model capacity necessarily must be split between many languages (Arivazhagan et al., 2019).", "In addition, an optimal setting for multilingual NMT should be effective for any language pairs, while most previous work focus on improving English-centric 1 directions (Johnson et al., 2017; Aharoni et al., 2019; Zhang et al., 2020).", "A few recent exceptions are Zhang et al. (2020) and Fan et al. 
"In this work, we take a step towards unified many-to-many multilingual NMT with only English-centric parallel corpora and additional monolingual corpora.", "Our key insight is to close the representation gap between different languages to encourage transfer learning as much as possible.", "As such, many-to-many translation can make the most of the knowledge from all supervised directions, and the model can perform well in both English-centric and non-English settings.", "In this paper, we propose a multilingual COntrastive Learning framework for Translation (mCOLT, or mRASP2) to reduce the representation gap between different languages, as shown in Figure 1.", "The objective of mRASP2 ensures that the model represents similar sentences across languages in a shared space, by training the encoder to minimize the representation distance of similar sentences.", "In addition, we also boost mRASP2 by leveraging monolingual data to further improve multilingual translation quality.", "We introduce an effective aligned augmentation technique by extending RAS (Lin et al., 2020) to both parallel and monolingual corpora to create pseudo-pairs.", "These pseudo-pairs are combined with multilingual parallel corpora in a unified training framework.", "Simple yet effective, mRASP2 achieves consistent translation performance improvements in both English-centric and non-English directions on a wide range of benchmarks.", "For English-centric directions, mRASP2 outperforms a strong multilingual baseline in 20 translation directions on WMT test sets.", "On 10 WMT translation benchmarks, mRASP2 even obtains better results than the strong bilingual mBART model.", "For zero-shot and unsupervised directions, mRASP2 obtains surprisingly strong results on 36 translation directions, with 10+ BLEU improvements on average.", "mRASP2 unifies both parallel corpora and monolingual corpora with contrastive learning.", "This section explains our proposed mRASP2; the overall framework is illustrated in Figure 1.", "2.1 Multilingual Transformer A multilingual neural machine translation model learns a many-to-many mapping function $f$ to translate from one language to another.", "To distinguish different languages, we add an additional language identification token preceding each sentence, on both the source side and the target side.", "The base architecture of mRASP2 is the state-of-the-art Transformer (Vaswani et al., 2017).", "A little different from previous work, we choose a larger setting with a 12-layer encoder and a 12-layer decoder to increase the model capacity.", "The model dimension is 1024 with 16 heads.", "To ease the training of the deep model, we apply layer normalization to the word embeddings and pre-norm residual connections, following Wang et al. (2019a), for both the encoder and the decoder.", "Therefore, our multilingual NMT baseline is much stronger than the Transformer-big model.", "More formally, we define $\mathcal{L} = \{L_1, \dots, L_M\}$, where $\mathcal{L}$ is a collection of $M$ languages involved in the training phase.", "$\mathcal{D}_{i,j}$ denotes a parallel dataset of $(L_i, L_j)$, and $\mathcal{D}$ denotes all parallel datasets.", "The training loss is the cross-entropy, defined as $\mathcal{L}_{ce} = -\sum_{(x^i, x^j) \in \mathcal{D}} \log P_{\theta}(x^i \mid x^j)$ (1), where $x^i$ represents a sentence in language $L_i$ and $\theta$ is the parameter of the multilingual Transformer model.",
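As a concrete illustration of the language-identification tokens described above, the snippet below sketches how source and target sentences could be tagged before being fed to the shared many-to-many model; the tag format and example sentences are illustrative assumptions, not the released preprocessing code.

# Hedged sketch: prefix both sides of a training pair with language tags.
def add_lang_tokens(src, tgt, src_lang, tgt_lang):
    return f"<{src_lang}> {src}", f"<{tgt_lang}> {tgt}"

pairs = [
    ("How are you?", "Comment allez-vous ?", "en", "fr"),
    ("Guten Morgen.", "Good morning.", "de", "en"),
]
for src, tgt, sl, tl in pairs:
    print(add_lang_tokens(src, tgt, sl, tl))
# ('<en> How are you?', '<fr> Comment allez-vous ?')
# ('<de> Guten Morgen.', '<en> Good morning.')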
"The multilingual Transformer enables implicitly learning a shared representation of different languages.", "mRASP2 introduces a contrastive loss to explicitly map different languages into a shared semantic space.", "The key idea of contrastive learning is to minimize the representation gap of similar sentences and maximize that of irrelevant sentences.", "Formally, given a bilingual translation pair $(x^i, x^j) \in \mathcal{D}$, $(x^i, x^j)$ is the positive example, and we randomly choose a sentence $y^j$ from language $L_j$ to form a negative example $(x^i, y^j)$ (it is possible that $L_j = L_i$).", "[Figure 2: the encoder takes the code-switched source $\mathcal{C}(x^{EN})$, e.g. 'I like <chanter> and <danser>' built from 'I like singing and dancing', and the decoder, given the <FR id> token, produces the French target $x^{FR}$ 'J'adore chanter et danser'.]", "The contrastive loss can be formulated as $\mathcal{L}_{ctr} = -\sum_{(x^i, x^j) \in \mathcal{D}} \log \frac{e^{\,\mathrm{sim}^{+}(\mathcal{R}(x^i), \mathcal{R}(x^j))/\tau}}{\sum_{y^j} e^{\,\mathrm{sim}^{-}(\mathcal{R}(x^i), \mathcal{R}(y^j))/\tau}}$ (2), where $\mathrm{sim}(\cdot)$ calculates the similarity of different sentences.", "$+$ and $-$ denote positive and negative examples, respectively.", "$\mathcal{R}(s)$ denotes the average-pooled encoder output of an arbitrary sentence $s$.", "$\tau$ is the temperature, which controls the difficulty of distinguishing between positive and negative examples (a higher temperature increases the difficulty of distinguishing positive samples from negative ones).", "In our experiments, it is set to 0.1.", "The similarity of two sentences is calculated as the cosine similarity of their average-pooled encoder outputs.", "To simplify the implementation, the negative samples are drawn from the same training batch.", "Intuitively, by maximizing the softmax term $\mathrm{sim}^{+}(\mathcal{R}(x^i), \mathcal{R}(x^j))$, the contrastive loss forces their semantic representations to be projected close to each other.", "In the meantime, the softmax function also minimizes the non-matched pairs $\mathrm{sim}^{-}(\mathcal{R}(x^i), \mathcal{R}(y^j))$.", "During the training of mRASP2, the model is optimized by jointly minimizing the contrastive loss and the translation loss: $\mathcal{L} = \mathcal{L}_{ce} + \lambda |s| \mathcal{L}_{ctr}$ (3), where $\lambda$ is the coefficient that balances the two training losses.", "Since $\mathcal{L}_{ctr}$ is calculated on the sentence level and $\mathcal{L}_{ce}$ on the token level, $\mathcal{L}_{ctr}$ is multiplied by the average sequence length $|s|$.", "We then introduce how to improve mRASP2 with data augmentation methods, including the introduction of noised bilingual and noised monolingual data for multilingual NMT.", "The above two types of training samples are illustrated in Figure 2.",
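The following is a minimal PyTorch sketch of the contrastive objective in Eq. (2), using in-batch negatives as the text describes; the random vectors stand in for real average-pooled encoder outputs, and all names and sizes are illustrative assumptions.

import torch
import torch.nn.functional as F

def contrastive_loss(src_repr, tgt_repr, temperature=0.1):
    # cosine similarity between every source and every target representation;
    # the i-th source matches the i-th target, the rest act as negatives
    sim = F.cosine_similarity(src_repr.unsqueeze(1), tgt_repr.unsqueeze(0), dim=-1)
    logits = sim / temperature
    labels = torch.arange(src_repr.size(0))
    return F.cross_entropy(logits, labels)

torch.manual_seed(0)
batch, d_model = 8, 16
src_repr = torch.randn(batch, d_model)                    # R(x^i), average-pooled
tgt_repr = src_repr + 0.1 * torch.randn(batch, d_model)   # R(x^j) of the translations
print(float(contrastive_loss(src_repr, tgt_repr)))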
"Lin et al. (2020) propose the Random Aligned Substitution technique (RAS), which builds code-switched sentence pairs $(\mathcal{C}(x^i), x^j)$ for multilingual pre-training; they apply RAS only on parallel data.", "In this paper, we extend it to Aligned Augmentation (AA), which can also be applied to monolingual data.", "For a bilingual or monolingual sentence pair $(x^i, x^j)$, where $x^i$ is in language $L_i$ and $x^j$ in language $L_j$ with $L_i, L_j \in \{L_1, \dots, L_M\}$, AA creates a perturbed sentence $\mathcal{C}(x^i)$ by replacing aligned words from a synonym dictionary (we will release our synonym dictionary).", "For every word contained in the synonym dictionary, we randomly replace it with one of its synonyms with a probability of 90%.", "For a bilingual sentence pair $(x^i, x^j)$, AA creates a pseudo-parallel training example $(\mathcal{C}(x^i), x^j)$.", "For monolingual data, AA takes a sentence $x^i$ and generates its perturbed version $\mathcal{C}(x^i)$ to form a pseudo self-parallel example $(\mathcal{C}(x^i), x^i)$.", "$(\mathcal{C}(x^i), x^j)$ and $(\mathcal{C}(x^i), x^i)$ are then used in training by calculating both the translation loss and the contrastive loss.", "For a pseudo self-parallel example $(\mathcal{C}(x^i), x^i)$, the contrastive loss is essentially the reconstruction loss from the perturbed sentence to the original one.",
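Below is a minimal Python sketch of Aligned Augmentation as described above: dictionary words are replaced by a random (possibly cross-lingual) synonym with probability 0.9, producing pseudo-parallel and pseudo self-parallel pairs. The tiny dictionary and sentences are illustrative assumptions, not the released resources.

import random

SYNONYMS = {"like": ["aimer", "mag"], "singing": ["chanter"], "dancing": ["danser"]}

def aligned_augment(sentence, p=0.9, rng=random.Random(0)):
    out = []
    for w in sentence.split():
        if w in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[w]))   # code-switched substitution C(x)
        else:
            out.append(w)
    return " ".join(out)

src, tgt = "I like singing and dancing", "J'adore chanter et danser"
pseudo_parallel = (aligned_augment(src), tgt)     # (C(x^i), x^j) from bitext
pseudo_self = (aligned_augment(src), src)         # (C(x^i), x^i) from monolingual data
print(pseudo_parallel)
print(pseudo_self)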
detail of data volume is listed in the Appendix.", "We apply AA on MC24 by randomly replacing words in the source side sentences with synonyms from a multilingual dictionary.", "Therefore the source side might contain multiple language tokens (preserving the semantics of the original sentence), and the target is just the original sentence.", "The replace probability is also set to 90%.", "We apply this augmentation in the pre-processing step before training.", "We will release the multilingual dictionary and the script for producing the noised monolingual dataset.", "Evaluation Datasets For supervised directions, most of our evaluation datasets are from WMT and IWSLT benchmarks, for pairs that are not available in WMT or IWSLT, we use OPUS-100 instead.", "For zero-shot directions, we follow (Zhang et al., 2020) and use their proposed OPUS-100 zero-shot testset.", "The testset is comprised of 6 languages (Ru, De, Fr, Nl, Ar, Zh), resulting in 15 language pairs and 30 translation directions.", "We report de-tokenized BLEU with Sacre-En-Nl iwslt2014 En-Pt opus-100 En-Pl wmt20 Nl-Pt Avg m-Transformer 1.3 7.0 3.7 10.7 0.6 3.2 -4.42 mRASP 0.7 10.6 3.7 11.6 0.5 5.3 -5.40 +0.98 mRASP2 10.1 28.5 18.4 30.5 6.7 17.1 9.3 8.3 18.55 +14.13 Table 2: mRASP2 outperforms m-Transformer in unsupervised translation directions by a large margin.", "BLEU (Post, 2018).", "For tokenized BLEU, we tokenize both reference and hypothesis using Sacremoses 11 toolkit then report BLEU using the multi-bleu.pl script 12 .", "For Chinese (Zh), BLEU score is calculated on character-level.", "Experiment Details We use the Transformer model in our experiments, with 12 encoder layers and 12 decoder layers.", "The embedding size and FFN dimension are set to 1024.", "We use dropout = 0.1, as well as a learning rate of 3e-4 with polynomial decay scheduling and a warm-up step of 10000.", "For optimization, we use Adam optimizer (Kingma and Ba, 2015) with (cid:15) = 1e-6 and 2 = 0.98.", "To stabilize training, we set the threshold of gradient norm to be 5.0 and clip all gradients with a larger norm.", "We set the hyper-parameter = 1 .", "0 in Eq.3 during training.", "For multilingual vocabulary, we follow the shared BPE (Sennrich et al., 2016) vocabulary of Lin et al. (2020), which includes 59 languages.", "The vocabulary contains 64808 tokens.", "After adding 59 language tokens, the total size of vocabulary is 64867.", "This section shows that mRASP2 provides consistent performance gains for supervised and unsupervised English-centric translation directions as well as for non-English directions.", "Supervised Directions As shown in Table 1, mRASP2 clearly improves multilingual baselines by a large margin in 10 translation directions.", "Previously, multilingual machine translation under-performs bilingual translation in rich-resource scenarios.", "It is worth noting that our multilingual machine translation baseline is already very competitive.", "It is even on par with the strong mBART bilingual model, which is fine-tuned on a large scale unlabeled monolingual dataset.", "mRASP2 further improves the performance.", "We summarize the key factors for the success training of our baseline 13 m-Transformer:", "a) The batch size plays a crucial role in the suc-13 many-to-many Transformer trained on PC32 as in Johnson et al. (2017) except that we apply language indicator the same way as Fan et al. 
"This section shows that mRASP2 provides consistent performance gains for supervised and unsupervised English-centric translation directions as well as for non-English directions.", "Supervised Directions As shown in Table 1, mRASP2 clearly improves over the multilingual baselines by a large margin in 10 translation directions.", "Previously, multilingual machine translation under-performed bilingual translation in rich-resource scenarios.", "It is worth noting that our multilingual machine translation baseline is already very competitive.", "It is even on par with the strong mBART bilingual model, which is fine-tuned on a large-scale unlabeled monolingual dataset.", "mRASP2 further improves the performance.", "We summarize the key factors for the successful training of our baseline m-Transformer (a many-to-many Transformer trained on PC32 as in Johnson et al. (2017), except that we apply the language indicator the same way as Fan et al. (2020)):", "a) The batch size plays a crucial role in the success of training multilingual NMT.", "We use 8 × 4 NVIDIA V100 GPUs with an update frequency of 50 to train the models, and each batch contains about 3 million tokens.", "b) We enlarge the number of layers from 6 to 12 and observe significant improvements for multilingual NMT.", "By contrast, the gains from increasing the bilingual model size are not that large.", "mBART also uses 12 encoder and decoder layers.", "c) We use gradient-norm clipping to stabilize training.", "Without this regularization, large-scale training sometimes collapses.", "Unsupervised Directions In Table 2, we observe that mRASP2 achieves reasonable results on unsupervised translation directions.", "The language pairs En-Nl, En-Pt, and En-Pl are never observed by m-Transformer.", "m-Transformer sometimes achieves reasonable BLEU for X→En, e.g., 10.7 for Pt→En, since there are many similar languages in PC32, such as Es and Fr.", "Not surprisingly, it totally fails on En→X directions.", "By contrast, mRASP2 obtains a +14.13 BLEU improvement on average without explicitly introducing supervision signals for these directions.", "Furthermore, mRASP2 achieves reasonable BLEU scores on the Nl↔Pt directions even though it has only been trained on monolingual data for both sides.", "This indicates that by simply incorporating monolingual data with parallel data in the unified framework, mRASP2 successfully enables unsupervised translation through its unified multilingual representation.", "Zero-shot Translation has been an intriguing topic in multilingual neural machine translation.", "Previous work shows that a multilingual NMT model can do zero-shot translation directly.", "However, the translation quality is quite poor compared with pivot-based models.", "We evaluate mRASP2 on the OPUS-100 (Zhang et al., 2020) zero-shot test set, which contains 6 languages (Arabic, Chinese, Dutch, French, German, Russian) and 30 translation directions in total.", "To make the comparison clear, we also report the results of several different baselines.", "mRASP2 w/o AA only adds contrastive learning on top of m-Transformer.", "mRASP2 w/o MC24 excludes monolingual data from mRASP2.", "The evaluation results are listed in the Appendix, and we summarize them in Table 3.", "We find that mRASP2 significantly outperforms m-Transformer and substantially narrows the gap with the pivot-based model.", "This is in line with our intuition that bridging the representation gap between different languages can improve zero-shot translation.", "The main reason is that the contrastive loss, aligned augmentation, and additional monolingual data enable a better language-agnostic sentence representation.", "It is worth noting that Zhang et al.
(2020) achieve their BLEU improvements on zero-shot translations at the sacrifice of about 0.5 BLEU on English-centric directions.", "By contrast, mRASP2 improves zero-shot translation by a large margin without losing performance on English-centric directions.", "Therefore, mRASP2 has great potential to serve many-to-many translation, including both English-centric and non-English directions.", "To understand what contributes to the performance gain, we conduct analytical experiments in this section.", "First, we summarize and analyze the performance of mRASP2 in different scenarios.", "Second, we use the sentence representations of mRASP2 to retrieve similar sentences across languages.", "This is to verify our argument that the improvements come from the universal language representation learned by mRASP2.", "Finally, we visualize the sentence representations, showing that mRASP2 indeed draws them closer together.", "To better understand the effectiveness of mRASP2, we evaluate models under different settings.", "[Table 4: Ablations over CTL, AA, and MC24, reporting average BLEU for the Supervised, Unsupervised, and Zero-shot scenarios; row 1 m-Transformer: 28.65 / 4.42 / 5.05; the remaining rows are not recoverable from the source.]", "We summarize the experiment results in Table 4. 1 vs. 3: 3 performs comparably with m-Transformer in the supervised and unsupervised scenarios, whereas it achieves a substantial BLEU improvement for zero-shot translation.", "This indicates that by introducing the contrastive loss, we can improve zero-shot translation quality without harming other directions.", "2 vs. 4: 2 performs poorly for zero-shot directions.", "This means the contrastive loss is crucial for the performance in zero-shot directions.", "5: mRASP2 further improves BLEU in all three scenarios, especially in unsupervised directions.", "Therefore it is safe to conjecture that by incorporating monolingual data, mRASP2 learns a better representation space.", "In order to verify whether mRASP2 learns a better representation space, we conduct a set of similarity-search experiments.", "Similarity search is the task of finding the nearest neighbor of each sentence in another language according to cosine similarity.", "We argue that mRASP2 benefits this task in the sense that it bridges the representation gap across languages.", "Therefore we use the accuracy of similarity-search tasks as a quantitative indicator of cross-lingual representation alignment.", "We conduct comprehensive experiments on mRASP2 and mRASP2 w/o AA to support our argument.", "We divide the experiments into two scenarios: first, we evaluate our method on the Tatoeba dataset (Artetxe and Schwenk, 2019), which is English-centric.", "Then we conduct a similar similarity-search task on non-English language pairs.", "Following Tran et al. (2020), we construct a multi-way parallel test set (Ted-M) of 2,284 samples by filtering the test split of the TED corpus (http://phontron.com/data/ted_talks.tar.gz) for sentences that have translations in all 15 languages (Arabic, Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Romanian, Russian, Turkish, Vietnamese, Chinese).", "Under both settings, we follow the same strategy: we use the average-pooled encoder output as the sentence representation.", "For each sentence from the source language, we search for the closest sentence in the target set according to cosine similarity.",
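The retrieval strategy just described reduces to a cosine-similarity nearest-neighbor search over average-pooled encoder outputs; below is a minimal NumPy sketch, in which the array shapes and the use of index-aligned parallel sets are our assumptions.

```python
import numpy as np

def top1_retrieval(src_reps, tgt_reps):
    """For each source-sentence representation, return the index of the
    closest target sentence by cosine similarity."""
    src = src_reps / np.linalg.norm(src_reps, axis=1, keepdims=True)
    tgt = tgt_reps / np.linalg.norm(tgt_reps, axis=1, keepdims=True)
    return (src @ tgt.T).argmax(axis=1)

def retrieval_accuracy(src_reps, tgt_reps):
    # In a multi-way parallel set such as Ted-M, sentence i in the source
    # language should retrieve sentence i in the target language.
    pred = top1_retrieval(src_reps, tgt_reps)
    return float((pred == np.arange(len(src_reps))).mean())
```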
"English-Centric: Tatoeba We display the evaluation results in Table 5.", "We detect two trends:", "(i) the overall accuracy follows the rule m-Transformer < mRASP2 w/o AA < mRASP2;", "(ii) mRASP2 brings more significant improvements for languages with less data volume in PC32.", "The two trends suggest that mRASP2 increases translation BLEU in the sense that it bridges the representation gap across languages.", "[Table 5: English-centric sentence-retrieval top-1 accuracy on the Tatoeba evaluation set (Fr / De / Zh / Ro / Cs / Tr / Ru / Nl / Pl / Pt): m-Transformer 91.7 / 96.8 / 87.0 / 90.6 / 84.8 / 91.1 / 89.1 / 25.6 / 6.3 / 37.3; mRASP2 w/o AA 91.7 / 97.3 / 89.9 / 91.4 / 86.1 / 92.4 / 90.4 / 35.7 / 14.3 / 46.5; mRASP2 93.0 / 98.0 / 90.7 / 91.9 / 89.3 / 92.4 / 92.3 / 60.3 / 28.1 / 58.6.]", "Non-English: Ted-M It would be more convincing to argue that mRASP2 indeed bridges the representation gap if similarity-search accuracy also increased on zero-shot directions.", "We list the averaged top-1 accuracy of the 210 non-English directions (15 languages yield 210 ordered pairs) in Table 6.", "The results show that mRASP2 increases the similarity-search accuracy in the zero-shot scenario.", "The results support our argument.", "To better understand the specifics beyond the averaged accuracy, we plot the accuracy improvements in the heat map in Figure 3.", "mRASP2 w/o AA brings general improvements over m-Transformer.", "mRASP2 especially improves on Dutch (Nl).", "This is because mRASP2 introduces monolingual data for Dutch, while mRASP2 w/o AA includes no Dutch data.", "In order to visualize the sentence representations across languages, we retrieve the sentence representation $R(s)$ for each sentence in Ted-M, resulting in 34,260 samples in the high-dimensional space.", "To facilitate visualization, we apply T-SNE dimension reduction to reduce the 1024-dimensional representations to 2 dimensions.", "Then we select 3 representative languages (English, German, Japanese) and depict the bivariate kernel density estimate of the 2-dimensional representations.", "It is clear in Figure 4 that m-Transformer cannot align the 3 languages.", "By contrast, mRASP2 draws the representations of the 3 languages much closer together.",
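A minimal sketch of this visualization pipeline with scikit-learn and seaborn; the random stand-in data (the paper uses 34,260 representations of dimension 1024) and all plotting parameters are our assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
reps = rng.normal(size=(300, 64))            # stand-in encoder outputs
langs = np.repeat(["en", "de", "ja"], 100)   # language of each sentence

xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(reps)
for lang in ["en", "de", "ja"]:
    m = langs == lang
    sns.kdeplot(x=xy[m, 0], y=xy[m, 1], label=lang)  # bivariate KDE per language
plt.legend()
plt.show()
```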
"Multilingual Neural Machine Translation While initial research on NMT started with building translation systems between two languages, Dong et al. (2015) extend bilingual NMT to one-to-many translation by sharing encoders across 4 language pairs.", "Hence, there has been a massive increase in work on MT systems that involve more than two languages (Chen et al., 2018; Choi et al., 2018; Chu and Dabre, 2019; Dabre et al., 2017).", "Recent efforts mainly focus on designing language-specific components for multilingual NMT to enhance model performance on rich-resource languages (Bapna and Firat, 2019; Kim et al., 2019; Wang et al., 2019b; Escolano et al., 2020).", "Another promising line of work is to enlarge the model size and training data to improve model capability (Arivazhagan et al., 2019; Aharoni et al., 2019; Fan et al., 2020).", "Different from these approaches, mRASP2 proposes to explicitly close the semantic representations of different languages and make the most of cross-lingual transfer.", "Zero-shot Machine Translation Typical zero-shot machine translation models rely on a pivot language (e.g., English) to combine source-pivot and pivot-target translation models (Chen et al., 2017; Ha et al., 2017; Gu et al., 2019; Currey and Heafield, 2019).", "Johnson et al. (2017) show that a multilingual NMT system enables zero-shot translation without explicitly introducing pivot methods.", "This is promising, but the performance still lags behind pivot-based competitors.", "Most follow-up studies focused on data augmentation methods.", "Zhang et al. (2020) improve zero-shot translation with online back-translation.", "Ji et al. (2020) and Liu et al. (2020) show that large-scale monolingual data can improve zero-shot translation with unsupervised pre-training.", "Fan et al. (2020) propose a simple and effective data mining method to enlarge the training corpus for zero-shot directions.", "Some work also attempts to explicitly learn shared semantic representations of different languages to improve zero-shot translation.", "Lu et al. (2018) suggest that by learning an explicit interlingua across languages, a multilingual NMT model can significantly improve zero-shot translation quality.", "Al-Shedivat and Parikh (2019) introduce an agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages.", "Different from these efforts, mRASP2 attempts to learn a universal many-to-many model and bridge the cross-lingual representations with contrastive learning and m-RAS.", "The performance is very competitive on both zero-shot and supervised directions in large-scale experiments.", "Contrastive Learning Contrastive learning has become a rising area and has achieved significant success in various computer vision tasks (Zhuang et al., 2019; Tian et al., 2020; He et al., 2020; Chen et al., 2020; Misra and van der Maaten, 2020).", "Researchers in the NLP domain have also explored contrastive learning for sentence representations.", "Wu et al.
(2020) employ multiple sentence-level augmentation strategies to learn noise-invariant sentence representations.", "Fang and Xie (2020) apply back-translation to create augmentations of the original sentences.", "Inspired by these studies, we apply contrastive learning to multilingual NMT.", "Cross-lingual Representation Cross-lingual representation learning has been intensively studied in order to improve cross-lingual understanding (XLU) tasks.", "Multilingual masked language models (MLM), such as mBERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), train large Transformer models on multiple languages jointly and have set strong benchmarks on XLU tasks.", "Most previous work on cross-lingual representation learning focuses on unsupervised training.", "For supervised learning, Conneau and Lample (2019) propose the TLM objective, which simply concatenates parallel sentences as input.", "By contrast, mRASP2 leverages the supervision signal by pulling the representations of parallel sentences closer together.", "We demonstrate that contrastive learning can significantly improve zero-shot machine translation directions.", "Combined with additional unsupervised monolingual data, our method achieves substantial improvements on all translation directions of multilingual NMT.", "We analyze and visualize our method, and find that contrastive learning tends to close the representation gap between different languages.", "Our results also show the possibility of training a true many-to-many multilingual NMT model that works well on any translation direction.", "In future work, we will scale up the current training to more languages, e.g., PC150.", "As such, a single model could handle more than 100 languages and outperform the corresponding bilingual baselines." ]
[ "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "result", "result", "result", "abstain", "abstain", "abstain", "abstain" ]
[ "We show that unsupervised sequence-segmentation performance can be transferred to extremely low-resource languages by pretraining a Masked Segmental Language Model (Downey et al., 2021) multilingually.", "Further, we show that this transfer can be achieved by training over a collection of low-resource languages that are typologically similar (but phylogenetically unrelated) to the target language.", "In our experiments, we transfer from a collection of 10 Indigenous American languages (AmericasNLP, Mager et al., 2021) to K'iche', a Mayan language.", "We compare our multilingual model to a monolingual (from-scratch) baseline, as well as a model pre-trained on Quechua only.", "We show that the multilingual pre-trained approach yields consistent segmentation quality across target dataset sizes, exceeding the monolingual baseline in 6/10 experimental settings.", "Our model yields especially strong results at small target sizes, including a zero-shot performance of 20.6 F1.", "These results have promising implications for low-resource NLP pipelines involving human-like linguistic units, such as the sparse transcription framework proposed by Bird (2020).", "Unsupervised sequence segmentation (at the word, morpheme, and phone level) has long been an area of interest in languages without whitespace-delimited orthography (e.g. Chinese, Uchiumi et al., 2015; Sun and Deng, 2018), morphologically complex languages without rule-based morphological analyzers (Creutz and Lagus, 2002), and automatically phone-transcribed speech data (Goldwater et al., 2009; Lane et al., 2021), respec-", "respecEqual contribution from starred authors, sorted by last name.", "Sincere thanks to: Gina-Anne Levow, Shane Steinert Threlkeld, and Sara Ng for helpful comments and discussion; Francis Tyers for access to the K'iche' data; Manuel Mager for access to the morphologically-segmented validation data tively.", "It has been particularly important for lower-resource languages in which there is little or no gold-standard data on which to train supervised models (Joshi et al., 2020).", "In modern neural end-to-end systems, unsupervised segmentation is usually performed via information-theoretic algorithms such as BPE (Sen-nrich et al., 2016) and SentencePiece (Kudo and Richardson, 2018).", "However, the segmentations they produce are largely non-sensical to humans (Park et al., 2021).", "The motivating tasks listed above instead require unsupervised approaches that correlate more closely with human judgements of the boundaries of linguistic units.", "For example, in a human-in-the-loop framework such as the sparse transcription proposed by Bird (2020), lexical items are automatically proposed to native speakers for confirmation, and it is important that these candidates be (close to) sensical, recognizable pieces of language.", "In this paper, we investigate the utility of recent models that have been developed to conduct unsupervised surface morpheme segmentation as a byproduct of a language modeling objective (e.g. 
Kawakami et al., 2019; Downey et al., 2021, see Section 2).", "The key idea is that recent breakthroughs in crosslingual language modeling and transfer learning (Conneau and Lample, 2019; Artetxe et al., 2020, inter alia) can be leveraged to transfer unsupervised segmentation performance to a new target language using these types of language models.", "Specifically, we investigate the effectiveness of multilingual pre-training in a Masked Segmental Language Model (Downey et al., 2021) when applied to a low-resource target.", "We pre-train our model on the ten Indigenous languages of the 2021 AmericasNLP shared task dataset (Mager et al., 2021), and apply it to another low-resource, Indigenous, and morphologically complex language of Central America: K'iche' (quc), which, at least phylogenetically, is unrelated to the pre-training languages (Campbell et al., 1986).", "We hypothesize that multilingual pre-training on similar, possibly contact-related languages will outperform both a monolingual baseline trained from scratch and a model pre-trained on a single language (Quechua) with the same amount of pretraining data.", "We also expect that the pre-trained models will perform increasingly better than the monolingual baseline the smaller the target corpus is.", "Indeed, our experiments show that a pre-trained multilingual model provides stable performance across all dataset sizes and far exceeds the monolingual baseline at low-to-medium target sizes.", "We additionally show that the multilingual model achieves a zero-shot segmentation performance of 20.6 F1 on the K'iche' data, where the monolingual baseline yields a score of zero.", "These results suggest that transferring from a multilingual model can greatly assist unsupervised segmentation in very low-resource languages, even those that are morphologically rich.", "The results also provide evidence for the idea that transfer from multilingual models works at a more moderate scale than is typical for recent crosslingual models (3.15 million parameters for our models).", "In the following section, we overview work relating to unsupervised segmentation, crosslingual pre-training, and transfer learning (Section 2).", "We then introduce the multilingual data used in our experiments, and the additional pre-processing we performed to prepare the data for pre-training (Section 3).", "Next we provide a brief overview of the type of Segmental Language Model used in our experiments, as well as our multilingual pre-training process (Section 4).", "After this, we describe our experimental process applying the pre-trained and from-scratch models to varying target data sizes (Section 5).", "Finally, we discuss the results of our experiments and their significance for low-resource pipelines, both within unsupervised segmentation and for other NLP tasks more generally (Sections 6 and 7).", "Work related to the present study largely falls either into the field of (unsupervised) word segmentation or the fields of crosslingual language modeling and transfer learning.", "To our knowledge, we are the first to propose a multilingual model for unsupervised word/morpheme segmentation.", "Unsupervised Segmentation Current state-of-the-art unsupervised segmentation has largely been achieved with Bayesian models such as Hierarchical Dirichlet Processes (Teh et al., 2006; Goldwater et al., 2009) and Nested Pitman-Yor (Mochihashi et al., 2009; Uchiumi et al., 2015).", "Adaptor Grammars (Johnson and Goldwater, 2009) have been successful as
well.", "Models such as Morfessor (Creutz and Lagus, 2002), which are based on Minimal Description Length (Rissanen, 1989) are also widely used for unsupervised morphology.", "As Kawakami et al. (2019) note, most of these models have weak language modeling ability, being unable to take into account much other than the immediate local context of the sequence.", "Another line of techniques has focused on models that are both strong language models and good for sequence segmentation.", "Many are in some way based on Connectionist Temporal Classification (Graves et al., 2006), and include Sleep-WAke Networks (Wang et al., 2017), Segmental RNNs (Kong et al., 2016), and Segmental Language Models (Sun and Deng, 2018; Kawakami et al., 2019; Wang et al., 2021; Downey et al., 2021).", "In this work, we conduct experiments using the Masked Segmental Language Model of Downey et al. (2021), due to its good performance and scalability, the latter usually regarded as an obligatory feature of multilingual models (Conneau et al., 2020a; Xue et al., 2021, inter alia ).", "Crosslingual and Transfer Learning Crosslingual modeling and training has been an especially active area of research following the introduction of language-general encoder-decoders in Neural Machine Translation, offering the possibility of zero-shot translation (i.e. translation for language pairs not seen during training; Ha et al., 2016; Johnson et al., 2017).", "The arrival of crosslingual language model pretraining (XLM, Conneau and Lample, 2019) further demonstrates that large models pre-trained on multiple languages yield state-of-the-art performance across an abundance of multilingual tasks including zero-shot text classification (e.g. XNLI, Conneau et al., 2018), and that pre-trained transformer encoders provide great initializations for MT systems and language models in very low-resource languages.", "single out which components of crosslingual training contribute to transferability from one language to another (e.g. Conneau et al., 2020b).", "Others have questioned the importance of multilingual training, and have instead proposed that even monolingual pre-training can provide effective transfer to new languages (Artetxe et al., 2020).", "Though some like Lin et al. (2019) have tried to systematically study which aspects of pre-training languages/corpora enable effective transfer, in practice the choice is often driven by availability of data and other ad-hoc factors.", "Currently, large crosslingual successors to XLM such as XLM-R (Conneau et al., 2020a), MASS (Song et al., 2019), mBART (Liu et al., 2020), and mT5 (Xue et al., 2021) have achieved major success, and are the starting point for a large portion of multilingual NLP systems.", "These models all rely on an enormous amount of parameters and pre-training data, the bulk of which comes from very high-resource languages.", "In contrast, in this paper we assess whether multilingual pre-training on a suite of very low-resource languages, which combine to yield a moderate amount of unlabeled data, can provide good transfer to similar languages which are also very low-resource.", "We draw data from three main datasets.", "We use the AmericasNLP 2021 open task dataset (Mager et al., 2021) to pre-train our multilingual models.", "The multilingual dataset from Kann et al. 
(2018) serves as segmentation validation data for our pre-training process in these languages.", "Finally, data from Tyers and Henderson (2021) is used as the training set for our experiments transferring to K'iche', and Richardson and Tyers (2021) provides the validation and test data for these experiments.", "AmericasNLP 2021 The AmericasNLP data consists of train and validation files for ten low-resource Indigenous languages of Central and South America: Asháninka (cni), Aymara (aym), Bribri (bzd), Guaraní (gug), Hñähñu (oto), Nahuatl (nah), Quechua (quy), Rarámuri (tar), Shipibo-Konibo (shp), and Wixarika (hch).", "For each language, AmericasNLP also includes parallel Spanish sets, which we do not use.", "The data was originally curated for the AmericasNLP 2021 shared task on low-resource Machine Translation (Mager et al., 2021; https://github.com/AmericasNLP/americasnlp2021).", "We augment the Asháninka and Shipibo-Konibo training sets with additional available monolingual data from Bustamante et al. (2020) (https://github.com/iapucp/multilingual-data-peru), which is linked in the official AmericasNLP repository.", "We add both the training and validation data from this corpus to the training set of our splits.", "To pre-process for a multilingual language modeling setting, we first remove lines that contain URLs or copyright boilerplate, or that contain no alphabetic characters.", "We also split lines that are longer than 2000 characters into sentences/clauses where evident.", "Because we use the Nahuatl and Wixarika data from Kann et al. (2018) as validation data, we remove any overlapping lines from the AmericasNLP set.", "We create a combined train file as the concatenation of the training data from each of the ten languages, and a combined validation file likewise.", "Because the original proportion of Quechua training data is so high compared to all other languages (Figure 1), we downsample it to $2^{15}$ examples, the closest order of magnitude to the next-largest training set.", "A plot of the balanced (final) composition of our AmericasNLP train and validation sets is shown in Figure 2.", "To compare the effect of multilingual and monolingual pre-training, we also pre-train a model on Quechua alone, since it has by far the most data (Figure 1).", "However, the full Quechua training set has about 50k fewer lines than our balanced AmericasNLP set (Figure 2).", "To create a fair comparison between multilingual and monolingual pre-training, we additionally create a downsampled version of the AmericasNLP set of equal size to the Quechua data (120,145 lines).", "The detailed composition of our data is available in Appendix A.", "Kann et al. (2018) The data from Kann et al.
(2018), originally curated for a segmentation task on polysynthetic low-resource languages, contains morphologically segmented sentences for Nahuatl and Wixarika.", "We use these examples as validation data for segmentation quality during the pretraining process.", "We clean this data in the same manner as the AmericasNLP sets.", "K'iche' data The K'iche' data used in our study was curated for Tyers and Henderson (2021).", "[Figure 1: Original (imbalanced) language composition of the AmericasNLP training set. Figure 2: Final language composition of our AmericasNLP splits after downsampling Quechua.]", "The raw (non-gold-segmented) data, used as the training set in our transfer experiments, comes from a section of this data web-scraped by the Crúbadán project (Scannell, 2007).", "This data is relatively noisy, so we clean it by removing lines with URLs or lines where more than half of the characters are non-alphabetic.", "We also remove duplicate lines.", "The final data consists of 47,729 examples and is used as our full-size training set for K'iche'.", "Our experiments involve testing transfer at different resource levels, so we also create smaller training sets by downsampling the original to lower orders of magnitude.", "For evaluating segmentation performance on K'iche', we use the segmented sentences from Richardson and Tyers (2021) (https://github.com/ftyers/global-classroom), which were created for a shared task on morphological segmentation.", "These segmentations were created by a hand-crafted FST, then manually disambiguated.", "Because gold-segmented sentences are so rare, we concatenate the original train/validation/test splits and then split them in half into final validation and test sets.",
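A minimal sketch of the cleaning filters described above (dropping lines with URLs, lines that are mostly non-alphabetic, and duplicates); the exact patterns and threshold used by the authors may differ.

```python
import re

URL_PATTERN = re.compile(r"https?://|www\.")

def clean_corpus(lines):
    seen, kept = set(), []
    for line in lines:
        line = line.strip()
        if not line or line in seen:          # drop empty and duplicate lines
            continue
        if URL_PATTERN.search(line):          # drop lines containing URLs
            continue
        alpha = sum(ch.isalpha() for ch in line)
        if alpha <= len(line) / 2:            # drop mostly non-alphabetic lines
            continue
        seen.add(line)
        kept.append(line)
    return kept
```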
"This section gives an overview of the Masked Segmental Language Model (MSLM), introduced in Downey et al. (2021), along with a description of our pre-training procedure.", "MSLMs An MSLM is a variant of a Segmental Language Model (SLM) (Sun and Deng, 2018; Kawakami et al., 2019; Wang et al., 2021), which takes as input a sequence of characters $x$ and outputs a probability distribution for a sequence of segments $y$ such that the concatenation of $y$ is equivalent to $x$: $\pi(y) = x$.", "An MSLM is composed of a Segmental Transformer Encoder and an LSTM-based Segment Decoder (Downey et al., 2021).", "See Figure 3.", "The MSLM training objective is based on the prediction of masked-out spans.", "During a forward pass, the encoder generates an encoding for every position in $x$, for a segment up to $k$ symbols long; the encoding at position $i-1$ corresponds to every possible segment that starts at position $i$.", "Therefore, the encoding approximates $p(x_{i:i+1}, x_{i:i+2}, \ldots, x_{i:i+k} \mid x_{<i}, x_{\geq i+k})$.", "To ensure that the encodings are generated based only on the portions of $x$ that are outside of the predicted span, the encoder uses a Segmental Attention Mask (Downey et al., 2021) to mask out tokens inside the segment.", "Figure 3 shows an example of such a mask with $k = 2$.", "Finally, the Segment Decoder of an SLM determines the probability of the $j$-th character of the segment of $y$ that begins at index $i$, $y^i_j$, using the encoded context: $p(y^i_j \mid y^i_{0:j}, x_{<i}, x_{\geq i+k}) = \mathrm{Decoder}(h_i, y^i_{j-1})$.", "The output of the decoder is not conditioned on the determination of other segment boundaries.", "The probability of $y$ is modeled as the marginal probability over all possible segmentations of $x$.", "Because directly enumerating all segmentations is computationally intractable, the marginal is computed using dynamic programming over a forward-pass lattice.", "The maximum-probability segmentation is determined by Viterbi decoding.", "The training objective optimizes language-modeling performance, which is measured in Bits Per Character (bpc).",
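To make the lattice computation concrete, here is a minimal sketch of the forward (log-marginal) recursion over precomputed segment log-probabilities; the data layout is our own assumption, and replacing the log-sum with a max recovers Viterbi decoding.

```python
import math

def log_add(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log1p(math.exp(-abs(a - b)))

def forward_log_marginal(seg_logp, max_len):
    """alpha[t] = log-probability of x[:t] marginalized over segmentations,
    where seg_logp[i][l-1] = log p(segment x[i:i+l] | outside context)."""
    T = len(seg_logp)
    alpha = [-math.inf] * (T + 1)
    alpha[0] = 0.0
    for i in range(T):
        for l in range(1, min(max_len, T - i) + 1):
            alpha[i + l] = log_add(alpha[i + l], alpha[i] + seg_logp[i][l - 1])
    return alpha[T]  # log p(x); swap log_add for max to get the Viterbi score
```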
"Pre-training Procedure In our experiments, we test the transferability of multilingual and monolingual pre-trained MSLMs.", "The multilingual models are trained on the AmericasNLP 2021 data (see Section 3).", "Since SLMs operate on plain text, we can train the model directly on the multilingual concatenation of this data, and evaluate it by its language modeling performance on the concatenated validation data.", "As mentioned in Section 3, we create two versions of the multilingual pre-trained model: one trained on the full AmericasNLP set (approximately 172k lines) and the other trained on the downsampled set, which is the same size as the Quechua training set (approximately 120k lines).", "We designate these models MULTI-PT full and MULTI-PT down, respectively.", "Our pre-trained monolingual model is trained on the full Quechua set (QUECHUA-PT).", "Each model is an MSLM with four encoder layers, hidden size 256, feedforward size 512, and four attention heads.", "Character embeddings are initialized using Word2Vec (Mikolov et al., 2013) over the training data.", "The maximum segment size is set to 10.", "The best model is chosen as the one that minimizes the Bits Per Character (bpc) loss on the validation set.", "For further pre-training details, see Appendix B.", "To evaluate the effect of pre-training on the segmentation quality for languages within the pretraining set, we also log MCC between the model output and the gold-segmented secondary validation sets available in Nahuatl and Wixarika (Kann et al., 2018, see Section 3).", "Figure 4 shows that the unsupervised segmentation quality for Nahuatl and Wixarika increases almost monotonically during pre-training (MULTI-PT full).", "In our transfer experiments, we pre-train SLMs on one or all of the AmericasNLP 2021 languages (Mager et al., 2021) and transfer them to a new target language: K'iche' (Tyers and Henderson, 2021).", "K'iche' is a morphologically rich Mayan language with several classes of inflectional prefixes and suffixes (Txchajchal Batz et al., 1996).", "An example sentence can be found in Table 1, which also shows our model's input and target output format.", "[Table 1: Example K'iche' sentence from Tyers and Henderson (2021). Orthography: kinch'aw ruk' le nunan; linguistic segmentation: k-in-ch'aw r-uk' le nu-nan; translation: 'I speak with my mother'; model input: kinch'awruk'lenunan; target output: k in ch'aw r uk' le nu nan.]", "As a baseline, we train a monolingual K'iche' model from scratch.", "We evaluate performance with respect to the size of the target training set, simulating varying degrees of low-resource settings.", "To do this, we downsample the K'iche' training set to 8 smaller sizes, for 9 total: {256, 512, ..., $2^{15}$, 47.7k (full)}.", "For each size, we both train a monolingual baseline and fine-tune the pre-trained models we describe in Section 4.", "(All of the data and software required to run these experiments can be found at https://github.com/cmdowney88/XLSLM.)", "The only difference is that the baseline model is initialized with a character vocabulary covering only the particular K'iche' training set (size-specific).", "The character vocabulary of the K'iche' data is a subset of the AmericasNLP vocabulary, so we are able to transfer the multilingual models without changing the embedding and output layers.", "The Quechua vocabulary is not a superset of the K'iche' one, so we add the missing characters to the Quechua model's embedding block before pre-training (these are randomly initialized).", "The character embeddings for the baseline are initialized using Word2Vec (Mikolov et al., 2013) on the training set (again, size-specific).", "Evaluation Metrics SLMs can be trained in either a fully unsupervised or lightly supervised manner (Downey et al., 2021).", "In the former case, only the language modeling loss (Bits Per Character, bpc) is used to pick parameters and checkpoints.", "In the latter, the segmentation quality on gold-segmented validation data can be considered.", "Though our validation set is gold-segmented, we pick the best parameters and checkpoints based on bpc only, simulating the unsupervised case.", "However, to monitor the change in segmentation quality during training, we also use the Matthews Correlation Coefficient (MCC).", "This measure frames segmentation as a character-wise binary classification task (i.e., boundary vs. no boundary), and measures correlation with the gold segmentation.",
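This boundary-level MCC can be computed directly with scikit-learn; below is a minimal sketch in which the label encoding (1 where a boundary follows a character) is our own convention.

```python
from sklearn.metrics import matthews_corrcoef

def boundary_labels(segments):
    """Character-wise labels: 1 if a segment boundary follows the character,
    0 otherwise; the trivial final boundary is dropped."""
    labels = []
    for seg in segments:
        labels.extend([0] * (len(seg) - 1) + [1])
    return labels[:-1]

gold = boundary_labels(["k", "in", "ch'aw"])   # gold segmentation k-in-ch'aw
pred = boundary_labels(["kin", "ch'aw"])       # a hypothetical model output
print(matthews_corrcoef(gold, pred))
```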
"To make our results comparable with the wider word-segmentation literature, we use the scoring script from the SIGHAN Segmentation Bakeoff (Emerson, 2005) for our final segmentation F1.", "For each model and target size, we choose the best checkpoint (by bpc), apply the model to the combined validation and test set, and use the SIGHAN script to score the output.", "For comparison to the Chinese Word-Segmentation and speech literature, any whitespace segmentation in the validation/test data is discarded before it is fed to the model.", "However, SLMs can also be trained to treat spaces like any other character, and thus could be able to take advantage of existing segmentation in the input.", "We leave this for future work.", "Parameters and Trials For our training procedure (both training the baseline from scratch and fine-tuning the pre-trained models), we tune hyperparameters on three of the nine dataset sizes (256, 2048, and full) and choose the optimal parameters by bpc.", "For each of the other sizes, we directly apply the chosen parameters from the tuned dataset of the closest size (on a log scale).", "We tune over five learning rates and three encoder dropout values.", "As in pre-training, we set the maximum segment length to 10.", "For more details on our training procedure, see Appendix B.", "Results The results of our K'iche' transfer experiments at various target sizes can be found in Table 2.", "In general, the (full) pre-trained multilingual model (MULTI-PT full) demonstrates good performance across dataset sizes, with the lowest segmentation performance (20.6 F1) being in the zero-shot case and the highest (40.7) achieved on $2^{14}$ examples.", "The monolingual baseline outperforms MULTI-PT full at the two largest target sizes, as well as at size 4096 (achieving the best overall F1 of 44.8), but performs very poorly under 2048 examples, and has no zero-shot ability (unsurprisingly, since it is a random initialization).", "Interestingly, other than in the zero-shot case, QUECHUA-PT and the comparable MULTI-PT down perform very similarly to each other.", "However, the zero-shot transferability of MULTI-PT down is almost twice that of the model trained on Quechua only.", "MULTI-PT full exceeds both MULTI-PT down and QUECHUA-PT by a wide margin in every setting.", "Finally, all models show increasing performance until about size 4096, after which more target examples do not provide a large increase in segmentation quality.", "Interpretation These results show that MULTI-PT full provides consistent performance across target sizes as small as 512 examples.", "Even for size 256, there is only a 9% (relative) drop in quality from the next-largest size.", "Further, the pre-trained model's zero-shot performance is impressive given that the baseline is effectively 0 F1.", "On the other hand, the performance of the monolingual baseline at larger sizes seems to suggest that given enough target data, it is better to train a model devoted to the target language only.", "This is consistent with previous results (Wu and Dredze, 2020; Conneau et al., 2020a).", "However, it should also be noted that MULTI-PT full never trails the baseline by more than 5.2 F1.", "One less-intuitive result is the dip in the baseline's performance at sizes 8192 and $2^{14}$.", "We believe this discrepancy may be partly explainable by sensitivity to hyperparameters in the baseline.", "Though the best baseline trial at size 2048 exceeds MULTI-PT full by a
small margin, the baseline shows large variation in performance across the top-four hyperparameter settings at this size, where MULTI-PT full actually performs better on average and much more consistently (Table 3).", "We thus believe the dip in performance for the baseline at sizes 8192 and $2^{14}$ may be due to an inability to extrapolate hyperparameters from other experimental settings.", "Standing of Hypotheses Within the framework of unsupervised segmentation, these results provide strong evidence that relevant linguistic patterns can be learned over a collection of low-resource languages, and then transferred to a new language without much (or any) target training data.", "Further, it is shown that the target language need not be (phylogenetically) related to any of the pre-training languages, even though details of morphological structure are ultimately language-specific.", "Our hypothesis that the pre-trained models outperform the monolingual baseline at smaller target sizes is also strongly supported.", "This result is consistent with related work showing this to be a key advantage of the multilingual approach (Wu and Dredze, 2020).", "The hypothesis that multilingual pre-training also yields better performance than monolingual pre-training given the same amount of data seems to receive mixed support from our experiments.", "On one hand, the comparable multilingual model has a clear advantage over the Quechua model in the zero-shot setting, and outperforms the latter in 5/10 settings more generally.", "However, because the Quechua data lacks several frequent K'iche' characters (and these embeddings remain randomly initialized), it is unclear how much of this advantage comes from the multilingual training per se.", "Instead, the advantage may be due to the multilingual model's full coverage of the target vocabulary, an advantage which may disappear at larger target sizes.", "This hypothesis will require additional investigation.", "Significance The above results, especially the strong zero-shot transferability of segmentation performance, suggest that the type of language model used here learns some abstract linguistic pattern(s) that are generalizable across languages, and even to new ones.", "It is possible that these generalizations could take the form of abstract stem/affix or word-order patterns, corresponding roughly to the lengths and order of morphosyntactic units.", "Because MSLMs operate on the character level (and in these languages orthographic characters mostly correspond to phones), it is also possible the model could recognize syllable structure in the data (the ordering of consonants and vowels in human languages is relatively constrained), and learn to segment on syllable boundaries.", "It is also helpful to remember that we select the training suite and target language to have some characteristics in common that may help facilitate transfer.", "The AmericasNLP languages are almost all morphologically rich, with many considered polysynthetic (Mager et al., 2021), a feature that K'iche' shares (Suárez, 1983).", "Further, all of the languages, including K'iche', are spoken in countries where either Spanish or Portuguese is the official language, and have very likely had close contact with these Iberian languages and borrowed lexical items.", "Finally, the target language family (Mayan) has also been shown to have close historical contact with the families of several of the pre-training languages.",
"[Table 2: Segmentation F1 on the combined validation and test set for each model at each target training-set size (0 / 256 / 512 / 1024 / 2048 / 4096 / 8192 / $2^{14}$ / $2^{15}$ / 47,729 full). MULTI-PT full: 20.6 / 34.0 / 37.4 / 37.4 / 38.2 / 40.5 / 38.6 / 40.7 / 38.9 / 38.2. MULTI-PT down: 15.0 / 25.1 / 25.7 / 29.3 / 32.5 / 33.2 / 33.3 / 31.5 / 33.6 / 31.9. QUECHUA-PT: 7.6 / 29.9 / 31.0 / 30.4 / 30.7 / 31.0 / 29.9 / 33.6 / 31.8 / 33.3. MONOLINGUAL: 0.002 / 4.0 / 3.3 / 10.3 / 39.2 / 44.8 / 29.4 / 39.5 / 44.1 / 43.2.]", "It is possible that one or several of these shared characteristics facilitates the strong transfer shown here, in both our multilingual and monolingual pretrained models.", "However, our current study does not conclusively show this to be the case.", "Lin et al. (2019) show that factors like linguistic similarity and geographic contact are often not as important for transfer success as non-linguistic features such as the raw size of the source dataset.", "Indeed, the fact that our Quechua pre-trained model performs similarly to the comparable multilingual model (at least at larger target sizes) suggests that the benefit of using MULTI-PT full could be interpreted as a combined advantage of pre-training data size and target vocabulary coverage.", "The nuanced question of whether multilingual pre-training itself enables better transfer than monolingual pre-training requires more study.", "However, taking a more pragmatic point of view, multilingual training can be seen as a methodology to 1) acquire more data than is available from any one language and 2) ensure broader vocabulary overlap with the target language.", "Our character-based model is of course different from more common word- or subword-based approaches, but with these too, attaining pre-trained embeddings that cover a novel target language is an important step in cross-lingual transfer (Garcia et al., 2021; Conneau et al., 2020a; Artetxe et al., 2020, inter alia).", "Future Work We believe some future studies would shed light on the nuances of segmentation transfer learning.", "First, pre-training either multilingually or monolingually on languages that are not linguistically similar to the target language could help isolate the advantage given by pre-training on any language data (vs. similar language data).", "Second, we have noted that monolingual pretraining on a language that does not have near-full vocabulary coverage of the target language leaves some embeddings randomly initialized, yielding worse performance at small target sizes.", "Pretraining a model on a single language that happens to have near-complete vocabulary coverage of the target could give a better view of whether multilingual training intrinsically yields advantages, or whether monolingual training is disadvantaged mainly due to this lack of vocabulary coverage.", "Finally, because none of the present authors have any training in the K'iche' language, we are unable to perform a linguistically-informed error analysis of our model's output (e.g.
examining the types of words and morphemes which are erroneously (un)segmented, rather than calculating an overall precision and recall for the predicted and true morpheme boundaries, as we do in this study).", "However, we make all of our model outputs available in our public repository, so that future work may provide a more nuanced analysis of the types of errors unsupervised segmentation models are prone to make.", "This study has shown that unsupervised sequence-segmentation ability can be transferred via multilingual pre-training to a novel target language with little or no target data.", "The target language also need not be from the same family as a pre-training language for successful transfer.", "While training a monolingual model from scratch on large amounts of target data results in good segmentation quality, our experiments show that pre-trained models, especially multilingual ones, far exceed the baseline at small target sizes ($\leq$ 1024), and seem to be much more robust to hyperparameter variation at medium sizes (2048, 8192, $2^{14}$).", "One finding that may have broader implications is that pre-training can be conducted over a set of low-resource languages with some typological or geographic connection to the target, rather than over a crosslingual suite centered around high-resource languages like English and other European languages.", "Most modern crosslingual models have huge numbers of parameters (XLM has 570 million, mT5 has up to 13 billion, Xue et al., 2021), and are trained on enormous amounts of data, usually bolstered by hundreds of gigabytes in the highest-resource languages (Conneau et al., 2020a).", "In contrast, our results suggest that effective transfer may be possible at smaller scales, by combining the data of low-resource languages and training moderately-sized, more targeted pre-trained multilingual models (our model has 3.15 million parameters).", "Of course, this study can only support this possibility within the unsupervised segmentation task, so future work will be needed to investigate whether transfer to and from low-resource languages can be extended to other tasks." ]
[ "result", "result", "abstain", "method", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "result", "abstain", "result", "method", "method", "method", "method", "result", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "objective", "abstain", "result", "abstain", "abstain", "objective", "objective" ]
[ "Unfamiliar terminology and complex language can present barriers to understanding science.", "Natural language processing stands to help address these issues by automatically defining unfamiliar terms.", "We introduce a new task and dataset for defining scientific terms and controlling the complexity of generated definitions as a way of adapting to a specific reader's background knowledge.", "We test four definition generation methods for this new task, finding that a sequence-to-sequence approach is most successful.", "We then explore the version of the task in which definitions are generated at a target complexity level.", "We introduce a novel reranking approach and find in human evaluations that it offers superior fluency while also controlling complexity, compared to several controllable generation baselines.", "Unfamiliar concepts and complex language can make understanding scientific information difficult for readers (Brossard and Shanahan, 2006; Shea, 2015; Martnez and Mammola, 2021), especially because understanding such terms is highly dependent on their domain knowledge.", "Given the wide variation in such knowledge, providing a one-size-fits-all definition may not be sufficiently understandable for all readers.", "We envision a software tool designed to aid readers with varying domain knowledge by automatically defining scientific terms.", "Such a tool would afford readers control over generated definitions, including their complexity.", "This hypothetical system motivates research on automated generation of scientific definitions and generation-time control of definition complexity.", "Prior work in generating definitions and personalizing generations to a reader falls short of these goals.", "Most definition generation has focused on common, general-usage words in English (Noraset A molecule that binds to a hydrophobic surface.", "Scientific terms rarely reach common usage (Shea, 2015; Britt et al., 2014) and the contexts in which their definitions might appear (e.g., a research paper) are often much more complex than general-purpose resources for definitions (e.g., dictionaries or standard word embeddings).", "Previous methods focused on reader personalization have aimed at generating based on a reader's prior knowledge and interests (Acharya et al., 2018; Murthy et al., 2021).", "These approaches work well when models can leverage a reader profile (Murthy et al., 2021) or incorporate reader feedback over time.", "However, in many cases a model might not have access to this additional information, such as for newcomers in an online forum discussing scientific findings (August et al., 2020a).", "We are interested instead in providing readers the ability to explicitly set definition complexity suited to their technical comfort (McNamara and Kintsch, 1996; Kintsch, 1994; Kim et al., 2016).", "We introduce a new task for generating definitions of scientific and medical terms with varying complexity (2; Joshi et al., 2017; Fan et al., 2019).", "Our dataset (3) is constructed from consumer medical questions and science glossaries containing words that vary in their complexity and frequency.", "We start by evaluating four modeling approaches for generating definitions, finding that, among them, a finetuned BART model is most successful at this new task (4).", "As a first step to adjusting definition complexity, we introduce methods to explicitly set definition complexity as either high or low at generation time.", "To our knowledge, this is the first paper using decoding-time 
controllable generation techniques on text complexity.", "We operationalize complexity based on readability and science communication research (Pitler and Nenkova, 2008; Gardner and Davies, 2013; Leroy et al., 2010) and evaluate several state-of-the-art controllable generation methods on this task (§5).", "We also develop a new, lightweight method for controlling generation based on discriminator ranking.", "Our automatic and human evaluations show that our lightweight method is effective at varying complexity while maintaining high fluency and reducing factual errors.", "We release our dataset and code to encourage future work on this task.", "Definition Tasks Generating definitions has been approached as a word-to-sequence task, where language models used a word's embedding to generate its definition (Noraset et al., 2017).", "Recent work used a sequence-to-sequence setup for generating definitions instead, where the defined word was a highlighted token in a sequence (Mickus et al., 2019).", "This conceptualization of definition modeling is an important starting point for addressing our task.", "However, new scientific terms are introduced regularly and many never appear in dictionaries or reach common usage (Shea, 2015; Britt et al., 2014), making it difficult to rely on general-purpose dictionaries (Kim et al., 2016).", "Scientific terms are also notoriously esoteric (e.g., hidden Markov model) or else overload definitions of common words (e.g., transformer the model architecture versus transformer the electrical device), both of which complicate the use of standard word representations from pretrained models (Beltagy et al., 2019).", "Specifically, we frame our task as generating an answer to the question What is (are) X?", "This reframing allows us to leverage scientific definitions from more diverse sources (e.g., QA datasets) and to incorporate domain-specific knowledge into definition generation by including supporting information (§3.2; Chen et al., 2017; Joshi et al., 2017).", "We collect a new dataset of definitions that are answers to the question What is (are) X?, where X is a scientific term or concept (e.g., carbon nanotubes).", "These questions are drawn roughly equally from an existing QA dataset or templated from scientific glossaries.", "Medical consumer questions Ben Abacha and Demner-Fushman (2019) collected 47,457 medical questions from 12 National Institutes of Health (NIH) websites into the MedQuAD dataset.", "The dataset covers 37 different question types.", "Three question categories are focused on defining and providing information on medical terms: Information, How can I learn more, and Other information.", "Manual inspection of these question categories shows that all questions are of the form What is (are) X? or Do you have more information on X?", "Responses to these questions begin with a brief definition of X.",
"Wikipedia The MedQuAD questions are an excellent source of definitions, but only cover medical terms.", "Because we are interested in tackling scientific terms more broadly, we augment this set with terms drawn from Wikipedia science glossaries (https://en.wikipedia.org/wiki/Category:Glossaries_of_science).", "We extract all science-related terms and their definitions, yielding another 3,738 terms for a total dataset of 8,263 terms.", "We also explored using other QA datasets that included scientific information to expand our coverage of scientific domains outside of medicine.", "These include the Explain Like I am Five (Fan et al., 2019) and ARC science exam question datasets (Clark et al., 2018).", "We found these questions to be less focused on definitions, though future work might find ways to make use of them.", "We split our dataset into training, development, and test sets (8/1/1).", "Examples of terms in this dataset are in Table 1. 3.2 Support Documents We next collect scientific abstracts related to each term to allow models to incorporate related scientific knowledge (Fan et al., 2019; Clark et al., 2018).", "Specifically, given a term question (i.e., What is (are) X?), we query S2ORC (Lo et al., 2020), a corpus of over 81 million scientific articles, for the top 10 related abstracts.", "Query scoring and retrieval are done with Elasticsearch (https://www.elastic.co/).", "These abstracts are concatenated together and, along with the term question, form the input to our models (§4).", "We use scientific abstracts, rather than general audience text like Wikipedia or the Common Crawl, for two reasons.", "First, scientific terms are originally introduced and most commonly used in research papers, making them the most reliable source for these terms.", "Second, terms can be contextual, having different meanings in common usage.", "Additional details for collecting the dataset and creating the support documents are in Appendices A.1 and A.2.", "Our goal is to create a definition dataset with", "(i) coverage of scientific and medical terminology and", "(ii) diverse levels of complexity, to support the application envisioned in §1.", "We conjecture that general-purpose dictionaries will lack coverage of such terms and tend to have complex definitions for those terms that they do include.", "Indeed, we found that less than 20% (191) of a sample of 1,000 terms in the medical consumer portion of our dataset have entries in the Merriam-Webster Dictionary (MW).", "The dictionary definitions also use substantially more academic vocabulary: an average of 39% (s.d. 12%) of words in those dictionary definitions were in the Academic Vocabulary List (Gardner and Davies, 2013), a list of words that occur more frequently in academic writing than in common usage, compared to 29% (s.d. 12%) in the medical consumer definitions.", "Examples of definitions from our dataset and from MW are in Table 2. While complex definitions are not necessarily bad, we want diverse complexity levels in our input.", "While medical consumer questions tend to use fewer specialized terms than a dictionary, we also find that a random sample of 1,000 Wikipedia terms in our dataset use close to as much specialized terminology as a dictionary (37%, s.d. 12%).",
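To make the support-document retrieval described in §3.2 above concrete, here is a minimal sketch using the official Elasticsearch Python client; the index name (s2orc), field name (abstract), and local connection URL are illustrative assumptions rather than details taken from the paper.

```python
from elasticsearch import Elasticsearch

# Hypothetical index of S2ORC abstracts; index/field names are assumptions.
es = Elasticsearch("http://localhost:9200")

def get_support_document(term: str, k: int = 10) -> str:
    """Retrieve the top-k abstracts for a term question and concatenate them."""
    question = f"What is (are) {term}?"
    response = es.search(
        index="s2orc",
        query={"match": {"abstract": question}},  # default BM25 relevance scoring
        size=k,
    )
    abstracts = [hit["_source"]["abstract"] for hit in response["hits"]["hits"]]
    # The support document is the concatenation of retrieved abstracts, later
    # combined with the term question to form the model input.
    return " ".join(abstracts)

print(get_support_document("carbon nanotubes")[:200])
```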
12%).", "This provides us with a wider range of complexity levels than were we to use a single source of scientific definitions.", "We later explore how this exposure to different complexity levels in the input makes it possible to control the complexity of generated definitions (5.2).", "Our first goal is to generate fluent definitions that include relevant and accurate information about the term being defined.", "Because this is a new task and there are multiple reasonable approaches to generating fluent text (Prabhumoye et al., 2020), we experiment with four methods that have performed strongly in question answering and general-purpose definition generation and evaluate their effectiveness in this novel domain.", "For additional details on 4 For this analysis, we exclude the Wikipedia science glossary terms since Wikipedia is also often used as a generalpurpose resource of definitions, and the Merriam Webster API restricts us to 1,000 queries.", "Sequence-to-Sequence: BART ( BART SD and BART NO SD ) BART (Lewis et al., 2020) has been used to define general English terms in context (Bevilacqua et al., 2020) and reached state-of-the-art results on the Explain Like I am Five (ELI5; Lewis et al., 2020) QA dataset, which includes some questions requiring scientific knowledge (e.g., What is a Turing Machine and why is it so important?).", "We experiment with finetuning the BART pretrained model on our task and dataset in two ways.", "In the first, BART is trained with the term question concatenated with the supporting document (referred to as BART SD ).", "In the second, we train a BART model with just the term question and definition answer, without the support documents ( BART NO SD ).", "This second version is included to assess how important the support documents are for generating definitions of scientific terms.", "We use BART-large as our base model for both versions.", "5 Out-of-the-Box Causal Language Modeling ( OOTB GPT -2 and OOTB GPT -3) Recent work has also shown that large pretrained causal language models, such as GPT-2 and GPT-3, can generate fluent answers to factual questions without finetuning (Brown et al., 2020).", "We experiment with using both GPT-2 and GPT-3 out-of-the-box ( OOTB GPT -2 and OOTB GPT -3).", "We use GPT-2 medium 6 and GPT-3 davinci 7 for 5 https://huggingface.co/facebook/ bart-large 6 https://huggingface.co/gpt2-medium .", "We obtain similar results when using GPT2-large.", "7 https://beta.openai.com/ these experiments.", "For OOTB GPT -3, we evaluate with 100 terms due to OpenAI API limits.", "For generation, we follow the few-shot setting proposed in Brown et al. 
"We do not include the support documents in this few-shot setting, since doing so extends beyond GPT-2's context window of 1024 tokens, and preliminary results showed that the additional text led to fewer definitions and more repetition from the abstracts.", "Finetuning GPT-2 (FT GPT-2): Because OOTB GPT-2 and OOTB GPT-3 involve no finetuning or use of the support documents, we suspect that they will underperform BART SD.", "We experiment with finetuning the GPT-2 medium model (FT GPT-2) with the question and support document, separated by new special tokens.", "Information Retrieval (OOTB BIDAF): Information retrieval (IR) methods are an important part of many open-domain QA systems and have presented a strong baseline in scientific question answering (Clark et al., 2018).", "We experiment with using a pretrained BiDAF model (Seo et al., 2018) to extract the highest-scoring span in the support document based on the term question (OOTB BIDAF).", "We use AllenNLP's implementation of BiDAF trained on SQuAD. 4.2 Results Table 4 shows the ROUGE scores and BERTScore for each modeling method on the development set of our dataset.", "The finetuned BART models outperform the other approaches.", "OOTB GPT-3 performs surprisingly well, outperforming even FT GPT-2. BART NO SD also performs closely to BART SD, suggesting that while the support documents can boost performance in this task, the effect can be small.", "OOTB BIDAF extracts spans that don't answer the question.", "Table 3 provides examples of the generated definitions for each modeling approach.", "BART SD provides the most concise answer while also remaining informative, compared to FT GPT-2's definition, which is circular (e.g., Acanthoma (cancer) is a type of cancer).", "While most models show impressive background knowledge, there is evidence of incorrect or hallucinated information, such as acanthoma being a type of skin cancer (OOTB GPT-2 and BART NO SD); these hallucinations are marked in Table 3.",
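For reference, automatic metrics of the kind reported in Table 4 (ROUGE and BERTScore) can be computed as below; the rouge-score and bert-score packages are one plausible choice of tooling, not necessarily the paper's.

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Toy reference/prediction pair; real evaluation runs over the full dev set.
references = ["Acanthoma is a skin neoplasm composed of epidermal keratinocytes."]
predictions = ["Acanthoma is a benign tumor arising from skin cells."]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for ref, pred in zip(references, predictions):
    scores = scorer.score(ref, pred)
    print({name: round(s.fmeasure, 3) for name, s in scores.items()})

# BERTScore compares contextual embeddings of candidate and reference tokens.
P, R, F1 = bert_score(predictions, references, lang="en")
print("BERTScore F1:", round(F1.mean().item(), 3))
```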
"We explore the amount of hallucinated information further in §7.2.", "For the rest of the paper we use the BART SD model since it outperforms other methods.", "Automatically generating definitions is an important first step in supporting readers who come across unfamiliar scientific terms.", "However, individuals can have different tolerances for the complexity of a definition depending on their domain knowledge (Britt et al., 2014).", "The models we tested in §4 were not trained to vary the complexity of definitions; they do not adapt definitions to different readers.", "Here we explore how to control the complexity of generated definitions.", "Controlling or guiding text generation is an active research area with important applications like toxicity control (Gehman et al., 2020) and language debiasing (Ma et al., 2020).", "For a review, see Prabhumoye et al. (2020).", "To the best of our knowledge, ours is the first work to evaluate decoding-time controllable generation methods for text complexity.", "One task that has considered changing text complexity is text simplification.", "Work on text simplification has mostly used a machine translation setup based on parallel corpora (Zhu et al., 2010; Cao et al., 2020) to translate complex sentences into simple ones.", "These parallel corpora are rare and often expensive to create (Xu et al., 2015).", "This setup also assumes an input text to be simplified (Surya et al., 2019), whereas our task expects that the text will be generated with varying complexity.", "Below we describe prior methods, which we use as baseline generation control methods.", "In each case, we focus on a binary distinction between low complexity and high complexity definitions, leaving more fine-grained distinctions to future work.", "We also introduce a novel lightweight approach based on reranking candidate generations in §5.2.", "Additional details for training are in Appendix A.4.", "Plug-and-play language models PPLM (Dathathri et al., 2020) is a technique to guide generation using the gradients of a classifier for a particular desired text attribute.", "At each generation step, the classifier's gradients are used to update the language model's hidden representations.", "Due to the computational expense of PPLM, we evaluate with 100 randomly sampled test set terms.", "We train our attribute classifier on sentences from scientific journal abstracts and scientific news articles.", "Journal abstracts are sampled from the ArXiv dataset (Clement et al., 2019) and used to guide generation toward more complex language.", "Scientific news articles are sampled from a corpus of science news articles (August et al., 2020b) and used to guide generation toward less complex language.", "Generative discriminators The GeDi method (Krause et al., 2021) uses a class-conditioned language model trained on text with a certain desired (or undesired) feature (e.g., toxicity) to guide generation.", "At each generation step, the model provides next-token probabilities to the generator via Bayes' rule.", "We train a new GeDi on the same dataset of science news and journal articles as for PPLM.", "Ensemble of language models DExperts (Liu et al., 2021) combines multiple pretrained language models in an ensemble of experts and anti-experts.", "Specifically, a base language model is combined with a language model trained on text with desirable attributes (expert) and a language model trained on text with undesirable attributes (anti-expert).", "At generation time, the base model's logits are combined with the difference of the expert's and anti-expert's logits.",
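The DExperts combination rule just described amounts to a one-line modification of the next-token distribution. The sketch below uses three copies of GPT-2 purely as stand-ins (the paper's experts are BART-large models continued-pretrained on journal vs. news text) and shows a single greedy decoding step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
expert = AutoModelForCausalLM.from_pretrained("gpt2")       # stand-in: journal-tuned LM
anti_expert = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in: news-tuned LM

alpha = 1.0  # strength of the expert/anti-expert steering
ids = tok("A carbon nanotube is", return_tensors="pt").input_ids

with torch.no_grad():
    z_base = base(ids).logits[:, -1, :]
    z_exp = expert(ids).logits[:, -1, :]
    z_anti = anti_expert(ids).logits[:, -1, :]

# DExperts: add the difference between expert and anti-expert logits to the base.
z = z_base + alpha * (z_exp - z_anti)
next_id = z.argmax(dim=-1)
print(tok.decode(next_id))
```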
logits.", "Our expert and anti-expert are pretrained BART-large models that we continue to pretrain on the data used to train the PPLM discriminator.", "One model is pretrained on the journal abstracts and one on the science news articles.", "To generate more complex definitions, the expert is the model trained on journal abstracts while the anti-expert is the model trained on science news.", "To generate less complex definitions, the roles are reversed.", "We introduce a new, lightweight method to generate definitions with different complexity via reranking.", "Past work has explored selecting candidate generations based discriminator scores to control for specific topics or discourse structure but found that it did not provide strong control (Dathathri et al., 2020; Gabriel et al., 2021).", "Because our generation task does not require topic shifts and our input has naturally varying complexity (3.3), we adapt this method by scoring and selecting candidates based on complexity discriminators.", "Specifically, at test time we use our BART model ( BART SD ) to generate 100 candidate definitions for each definition.", "We then rerank these candidate generations based on logits from a discriminator trained to distinguish scientific journal text from science news text.", "While this method requires regenerating the definition many times, it does not require gradient or probability distribution updating during generation or any prior pretraining, allowing for much greater flexibility during generation (e.g., generating from a language model without access to vocabulary logits during generation).", "We consider two discriminators.", "Both are trained on the same dataset of science news and journal articles as PPLM.", "BERT We use the SciBERT uncased pretrained model (Beltagy et al., 2019).", "For more complex definitions we select definitions with high predicted probability for journal text, and for less complex definitions we select definitions with high prediction probability for science news text.", "Linear We also experiment with using a linear SVM classifier.", "The SVM's features are complexity measures drawn from science communication and readability literature, discussed in 5.3.", "The complexity of scientific writing is affected by many factors and it is difficult to operationalize it into a single dimension.", "We therefore use multiple measures of scientific writing complexity based on prior work in science communication and readability.", "These measures are not meant to be an exhaustive list (for a review, see Pitler and Nenkova, 2008), but a selection of measures that capture different elements of complexity important to definitions.", "Table 17 in the Appendix has examples of model outputs that scored either very high or very low for each measure.", "Scientific language is often associated with language formality (Lahiri, 2016; Heylighen et al., 1999).", "This might lead to some methods introduced in 5.1 to generate definition with higher formality when guided towards higher complexity.", "We therefore focus our measures on aspects of complexity important for reader comprehension (e.g., unfamiliar terminology or dense text) rather than the formality of the definition.", "ways.", "Five of them are the features in our linear SVM reranker.", "We also use them as a preliminary automatic evaluation of the various controllable generation approaches in 5.1 and 5.2.", "Obviously, we expect the linear SVM reranker to outperform the other approaches in this automatic evaluation since it 
"Our human evaluations (§6.2 and §7) provide a more complete picture of the systems' performance.", "Academic Vocabulary List (AVL) occurrences The AVL is a list of academic vocabulary drawn from corpora spanning many scientific disciplines (Gardner and Davies, 2013).", "We measure the fraction of AVL words in a generated definition.", "Thing Explainer out-of-vocabulary The popular book Thing Explainer explains scientific concepts using only the 1,000 most frequent words in English (measured by Wiktionary's contemporary fiction frequency list, https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists/Contemporary_fiction; Munroe, 2017).", "We measure the fraction of words in the definition outside of the top 1,000 used in Thing Explainer.", "Function words In health communication, function words (e.g., prepositions, auxiliary verbs, or question words) positively correlate with perceived and actual readability (Leroy et al., 2008, 2010).", "Sentence length Sentence length is a commonly used metric for document-level complexity and is part of many classic readability measures (Pitler and Nenkova, 2008; Feng et al., 2010).", "While we set a maximum generation length for our definitions (64 tokens), we enable early stopping.", "While longer sentences are often considered more complex, we hypothesize that in our dataset longer definitions will be associated with less complex language due to elaborative simplification, where complex terms are explained as a way of simplifying them (Srikanth and Li, 2021).", "Language model perplexity Language model perplexity has been found to correlate with perceived and actual reading difficulty (Pitler and Nenkova, 2008; Collins-Thompson, 2014).", "We use the GPT model to measure language model perplexity, as it was trained on common English (as opposed to scientific text).", "Flesch-Kincaid grade level This score (FK) uses simple calculations based on sentence length, word length, and syllable counts (Kincaid et al., 1975).", "Although findings are mixed on how well the FK predicts readability in science or medical documents (Leroy et al., 2008), it is a standard, widely used measure of text complexity (Redmiles et al., 2019).", "The FK expects a document with multiple sentences, but our definitions are a single sentence.", "To address this, we calculate the FK based on the concatenation of all definitions generated by a particular method.", "For the same reason, we do not include the FK score as a feature in our SVM reranker (§5.1).", "As a preliminary analysis of complexity using our measures, we evaluate how the journal abstracts and science news articles used for guiding complexity generation (described in §5.1) vary across our measures.", "Table 5 includes a row representing the difference in each measure between the training set of journal abstracts and science news articles (Journal − News).", "We see that all the measures behave in the expected direction except sentence length, which goes in the opposite direction.", "This might signify that journal abstracts still use longer sentences even if science news articles are explaining complex topics to simplify them (Srikanth and Li, 2021).", "Here we evaluate how well our baseline and novel generation control methods can vary the complexity of definitions.", "For each generation method, we generate and evaluate 10 definitions for each term.", "We automatically evaluate each control method by calculating the difference in each complexity measure (§5.3) for the high and low complexity generations.",
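Several of the measures above reduce to simple token statistics. The sketch below computes three of them (AVL fraction, Thing Explainer out-of-vocabulary rate, and Flesch-Kincaid grade via the textstat package); the two word sets are tiny placeholders for the real AVL and top-1,000 lists.

```python
import textstat

# Placeholder vocabularies; the real lists come from Gardner and Davies (2013)
# and Wiktionary's contemporary-fiction frequency list, respectively.
AVL = {"molecule", "hypothesis", "synthesis", "significant"}
TOP_1000 = {"a", "the", "is", "of", "to", "and", "that", "binds", "surface"}

def fraction_in(words, vocab):
    return sum(w in vocab for w in words) / max(len(words), 1)

def complexity_profile(definition: str) -> dict:
    words = [w.strip(".,;()").lower() for w in definition.split()]
    return {
        "avl_fraction": fraction_in(words, AVL),        # academic vocabulary rate
        "te_oov": 1.0 - fraction_in(words, TOP_1000),   # outside top-1,000 words
        "fk_grade": textstat.flesch_kincaid_grade(definition),
    }

print(complexity_profile("A molecule that binds to a hydrophobic surface."))
```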
"Table 5 details these differences.", "While each measure captures a different element of complexity, counting the number of words outside of the top 1,000 most common English words (TE) seems to be one of the most consistent measures, with all higher complexity generations having differences in the expected direction.", "DExperts and the BERT reranker have the largest differences, with 5% and 4% more out-of-vocabulary words per sentence, respectively.", "Higher complexity generations also have higher GPT perplexity, with DExperts having the largest difference.", "The two rerankers (BERT and SVM) perform better than other models on most measures.", "This is unsurprising for the SVM since it was trained with these complexity features, but it is interesting that reranking with the BERT classifier also provides effective control over complexity.", "Table 6 provides example generations from each approach.", "Automatic classification of text complexity is difficult and domain-specific (Collins-Thompson, 2014; Redmiles et al., 2019).", "Even in combination, we believe the measures in §5.3 are insufficient for a full evaluation of our approaches.", "We therefore carry out a human evaluation to assess how each method influences perceived definition complexity.", "We select the models that performed best overall in our automatic evaluation: DExperts, GeDi, and the SVM reranker.", "We randomly sample 50 terms from our test split to evaluate.", "We use a high and a low complexity generation from each model, leaving us with 50 × 2 × 3 = 300 definitions.", "We broke down complexity into two ratings: how complicated a definition was and how difficult it was to understand.", "For each, participants rated definitions on a 1-4 Likert scale.", "We recruited participants on Amazon Mechanical Turk.", "Each participant was paid US$0.50, based on a rate of US$10/hour.", "This study was approved by our institutional review board.", "Participants 233 participants took part in our evaluation (mean age 35 years, s.d. 11).",
11).", "Table 18 in the Appendix provides more details on their demographics.", "We removed 4 participants due to low effort responses (i.e., responding to all prompts with the same rating within 15 seconds).", "Results Figure 2 shows the average ratings for each model type.", "DExperts generations differentiate most between high and low complexity.", "GeDi definitions behave in a way that is the opposite of what we expected, with the low complexity generations rated as more complicated and difficult to understand than the high complexity generations.", "The SVM-reranked definitions perform in the expected direction, with high complexity generations 11 We do not include PPLM in this analysis due to its computational cost and similar performance to GeDi.", "being rated as more complicated and difficult to understand.", "Examples of ratings and raw counts are in Table 19 and Figure 4 in the Appendix.", "Our results suggest that our reranking method is a simple intervention that can control complexity with similar performance as other state-of-the-art methods.", "However, definitions of scientific terms also must be fluent, relevant, and factual.", "Factuality can be especially difficult to achieve in generations (Maynez et al., 2020).", "In science communication such failures could spread misinformation with fluent but incorrect definitions (Britt et al., 2019).", "We do two additional human evaluations for fluency and relevance (7.1), and factuality (7.2).", "We used two trained annotators, one of them an author, to rate the same 300 definitions used in the complexity evaluation (6.2).", "Neither annotator saw the model generations before evaluation or knew which method had generated each definition.", "Annotators rated definitions for fluency and relevance using 14 Likert scales (1 = Not at all to 4 = Very).", "Table 7 shows the average fluency and relevance ratings.", "The SVM-reranked definitions were rated close to Very fluent and relevant (both above 3.5 on a 4 point scale), and significantly more fluent compared to GeDi ( t 198 = 5 . 99 p < 0 . 001 , Cohen's d = 0 . 60 ) and DExperts ( t 198 = 18 . 85 p < 0 . 001 , d = 1 . 88 ).", "For each definition, annotators identified if there was any factually incorrect information in the definition (a binary label) and if so, rated how extensive these errors were on the same 14 scale.", "Table 7 reports on the average rating for how extensive these errors were.", "Below we report on the binary label.", "Overall 60% of our generations were labeled as factually incorrect by at least one annotator (40% by both).", "The SVM had significantly fewer factual errors (38% by one annotator, 16% by both), compared to GeDi (52% and 33%, t 198 = 4 . 71 p < 0 . 001 , Cohen's d = 0 . 47 ) and DExperts (86% and 67%, t 198 = 12 . 29 p < 0 . 001 , d = 1 . 
24 ).", "We introduce a new task and dataset for generating definitions of scientific terms with controllable complexity as a way of adapting to different readers' scientific background.", "We evaluate conventional generation methods and introduce a lightweight approach of reranking candidate generations based on a discriminator to control complexity.", "We find that this reranking is effective at controlling text complexity while also maintaining fluency and factuality.", "We release our dataset and code to encourage more work on making scientific terms more accessible to readers of diverse background knowledge.", "12 9 Ethical Considerations The goal of this paper is to enable a wider audience of readers to understand and engage with scientific writing.", "A risk, though, is that such attempts might instead widen the gap to accessing scientific information.", "The texts in the datasets we train our models on are in General or Academic American 12 https://github.com/talaugust/ definition-complexity 8306 English.", "Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English.", "This risks further alienating the readers we hope to serve.", "This is a common issue in NLP systems (Sap et al., 2019), since the majority of datasets are in General American English.", "An important and exciting direction in NLP is making models more flexible to dialects and low-resource languages (e.g., the ACL 2022 special theme being Language Diversity).", "While our results suggest that the lighter control of reranking generations leads to less hallucinated information, strong supervision of definition factuality is important for any future deployment of such a system.", "While hallucinated information can be damaging in any generation context, incorrect scientific definitions could mislead readers and potentially contribute to broader scientific misinformation.", "Furthermore, a bad actor could use these models to generate fluent but incorrect definitions at scale, potentially contributing to misinformation campaigns with a veneer of scientific language (Britt et al., 2019).", "We trained our models on data we believe is trustworthy (e.g., questions and answers from NIH websites); and we release our training data and models to allow for further work on encouraging factuality in these model generations.", "We thank Anita Silva for her help annotating definitions and the Mechanical Turk annotators for their work on the project.", "We also thank the anonymous reviewers and members of the UWNLP and DUB community for their helpful feedback.", "This work was supported in part by the Office of Naval Research under MURI grant N00014-18-1-2670 and by a Twitch Research Fellowship." ]
[ "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "objective", "objective", "method", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "method", "other", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Streaming cross document entity coreference (CDC) systems disambiguate mentions of named entities in a scalable manner via incremental clustering.", "Unlike other approaches for named entity disambiguation (e.g., entity linking), streaming CDC allows for the disambiguation of entities that are unknown at inference time.", "Thus, it is well-suited for processing streams of data where new entities are frequently introduced.", "Despite these benefits, this task is currently difficult to study, as existing approaches are either evaluated on datasets that are no longer available, or omit other crucial details needed to ensure fair comparison.", "In this work, we address this issue by compiling a large benchmark adapted from existing free datasets, and performing a comprehensive evaluation of a number of novel and existing baseline models.", "1 We investigate: how to best encode mentions, which clustering algorithms are most effective for grouping mentions, how models transfer to different domains, and how bounding the number of mentions tracked during inference impacts performance.", "Our results show that the relative performance of neural and feature-based mention encoders varies across different domains, and in most cases the best performance is achieved using a combination of both approaches.", "We also find that performance is minimally impacted by limiting the number of tracked mentions.", "The ability to disambiguate mentions of named entities in text is a central task in the field of information extraction, and is crucial to topic tracking, knowledge base induction and question answering.", "Recent work on this problem has focused almost solely on entity linkingbased apWork done during an internship at Google Research.", "1 Code and data available at: https://github.com/ rloganiv/streaming-cdc proaches, i.e., models that link mentions to a fixed set of known entities.", "While significant strides have been made on this frontwith systems that can be trained end-to-end (Kolitsas et al., 2018), on millions of entities (Ling et al., 2020), and link to entities using only their textual descriptions (Lo-geswaran et al., 2019)all entity linking systems suffer from the significant limitation that they are restricted to linking to a curated list of entities that is fixed at inference time.", "Thus they are of limited use when processing data streams where new entities regularly appear, such as research publications, social media feeds, and news articles.", "In contrast, the alternative approach of cross-document entity coreference (CDC) (Bagga and Baldwin, 1998; Gooi and Allan, 2004; Singh et al., 2011; Dutta and Weikum, 2015), which disambiguates mentions via clustering, does not suffer from this shortcoming.", "Instead most CDC algorithms suffer from a different failure mode: lack of scalability.", "Since they run expensive clustering routines over the entire set of mentions, they are not well suited to applications where mentions arrive one at a time.", "There are, however, a subset of streaming CDC methods that avoid this issue by clustering mentions incrementally (Figure 1).", "Unfortunately, despite such methods' apparent fitness for streaming data scenarios, this area of research has received little attention from the NLP community.", "To our knowledge there are only two existing works on the task (Rao et al., 2010; Shrimpton et al., 2015), and only the latter evaluates truly streaming systems, i.e., systems that process new mentions in constant time with constant memory.", "One 
"One crucial factor limiting research on this topic is a lack of free, publicly accessible benchmark datasets; datasets used in existing works are either small and impossible to reproduce (e.g., the dataset collected by Shrimpton et al. (2015) only contains a few hundred unique entities, and many of the annotated tweets are no longer available for download) or lack the necessary canonical ordering and are expensive to procure (e.g., the ACE 2008 and TAC-KBP 2009 corpora used by Rao et al. (2010)).", "[Figure 1(a): example mentions, e.g., SARS CoV fusion peptides induce membrane surface ordering and curvature... and ...gather information on the membrane fusion mechanism promoted by two putative SARS FPs...]", "[Figure 1(b): Mentions are encoded as points in a vector space and incrementally clustered.]", "[As the space grows, some points are removed to ensure that the amount of memory used does not exceed a given threshold.]", "To remedy this, we compile a benchmark of three datasets for evaluating English streaming CDC systems along with a canonical ordering in which evaluation data should be processed.", "These datasets are derived from existing datasets that cover diverse subject matter: biomedical texts (Mohan and Li, 2019), news articles (Hoffart et al., 2011), and Wikia fandoms (Logeswaran et al., 2019).", "We evaluate a number of novel and existing streaming CDC systems on this benchmark.", "Our systems utilize a two-step approach in which: 1) each mention is encoded using a neural or feature-based model, and 2) the mention is then clustered with existing mentions using an incremental clustering algorithm.", "We investigate the performance of different mention encoders (existing feature-based methods, pretrained LMs, and encoders from entity linkers such as RELIC (Ling et al., 2020) and BLINK (Wu et al., 2020)), and incremental clustering algorithms (greedy nearest-neighbors clustering, and a recently introduced online agglomerative clustering algorithm, GRINCH (Monath et al., 2019)).", "Since GRINCH does not use bounded memory, which is required for scalability in the streaming setting, we introduce a novel bounded memory variant that prunes nodes from the cluster tree when the number of leaves exceeds a given size, and compare its performance to existing bounded memory approaches.", "Our results show that the relative performance of different mention encoders and clustering algorithms varies across different domains.", "We find that existing approaches for streaming CDC (e.g., feature-based mention encoding with greedy nearest-neighbors clustering) outperform neural approaches on two of three datasets (+1-3% abs. improvement in CoNLL F1), while a RELIC-based encoder with GRINCH performs better on the last dataset (+9% abs. improvement in CoNLL F1).", "In cases where existing approaches perform well, we also find that better performance can be obtained by using a combination of neural and feature-based mention encoders.", "Lastly, we observe that by using relatively simple memory management policies, e.g., removing old and redundant mentions from the mention cache, bounded memory models can achieve performance nearly on par with unbounded models while storing only a fraction of the mentions (in one case we observe a 2% abs. drop in CoNLL F1 caching only 10% of the mentions).", "The key goal of cross-document entity coreference (CDC) is to identify mentions that refer to the same entity.",
, m |M| (cid:9) denote a corpus of mentions, where each mention consists of a surface text m.", "surface (e.g., the colored text in Figure 1a), as well as its surrounding context m.", "context (e.g., the text in black).", "Provided M as an input, a CDC system produces a disjoint clustering over the mentions C = (cid:8) C 1 , . . . , C | C | (cid:9) , | C | |M| , as the output, where each cluster C e = { m M | m.", "entity = e } is the set of mentions that refer to the same entity.", "In streaming CDC, there are two additional requirements: 1) mentions arrive in a fixed order ( M is a list) and are clustered incrementally, and 2) memory is constrained so that only a fixed number of mentions can be stored.", "This can be formulated in terms of the above notation by adding a time index t , so that MT = { m t M | t T } is the set of all mentions observed at or before time T , (cid:102) MT MT is a subset of active mentions whose size does not exceed a fixed memory bound k , e.g., | (cid:102) MT | k , and CT is comprised of clusters that only contain mentions in (cid:102) MT .", "Due to the streaming nature, (cid:102) MT { m T } (cid:102) MT 1 , i.e., a mention cannot be added back to (cid:102) MT if it was previously removed.", "When the memory bound is reached, mention are removed from (cid:102) M according to a memory management policy .", "An illustrative example is provided in Figure 1.", "Mentions arrive in left-to-right order (Figure 1a), with the clustering process depicted in Figure 1b (memory bound k = 3 ).", "At time T = 4 , the mention m 1 is removed from (cid:102) M 4 .", "Note that, even though m 1 is removed, it is still possible to disambiguate mentions of all previously observed entities, whereas this would not be possible had m 3 or m 4 been removed.", "This illustrates the effect the memory management policy can have on performance.", "Cross Document Entity Coreference As we show later, we employ a two-stage CDC pipeline where mentions are first encoded as vectors, and subsequently clustered.", "This approach is used in most existing work on CDC (Bagga and Baldwin, 1998; Mann and Yarowsky, 2003; Gooi and Allan, 2004).", "In the past decade, research on CDC has mainly focused in improving scalability (Singh et al., 2011), and jointly learning to perform CDC with other tasks such as entity linking (Dutta and Weikum, 2015) and event coreference (discussed in the next paragraph).", "This work similarly investigates whether entity linking is beneficial for CDC, however we use entity linkers that are pretrained separately and kept fixed during inference.", "Recently, there has been a renewed interest in performing CDC jointly with cross-document event coreference (Barhom et al., 2019; Meged et al., 2020; Cattan et al., 2020; Caciularu et al., 2021) on the ECB+ dataset (Cybulska and Vossen, 2014).", "Although we do not evaluate methods from this line of research in this work, we hope that the benchmark we compile will be useful for future evaluation of these systems.", "Streaming Cross Document Coreference The methods mentioned in the previous paragraphs disambiguate mentions all at once, and are thus unsuitable for applications where a large number of mentions appear over time.", "Rao et al. (2010) propose to address this issue using an incremental clustering approach where each new mention is either placed into one of a number of candidate clusters, or a new cluster if similarity does not exceed a given threshold (Allaway et al. 
"Shrimpton et al. (2015) note that this incremental clustering does not process mentions in constant time/memory, and thus is not truly streaming.", "They present the only truly streaming approach for CDC by introducing a number of memory management policies that limit the size of M~, which we describe in more detail in Section 3.3.", "One of the key problems inhibiting further research on streaming CDC is a lack of suitable evaluation datasets for measuring system performance.", "The datasets used in Rao et al. (2010) are either small in size (a few hundred mentions), contain few annotated entities, or are expensive to procure.", "Additionally, they do not include any canonical ordering of the mentions, which precludes consistent evaluation of streaming systems.", "Meanwhile, the Tweets annotated by Shrimpton et al. (2015) only cover two surface texts (Roger and Jessica) and are no longer accessible via the Twitter API (at the time of writing, only 56 of the first 100 tweets were available).", "To address this we collect a new evaluation benchmark, comprised of 3 existing publicly available datasets, covering a diverse collection of topics (News, Biomedical Articles, Wikias) with natural orderings (e.g., chronological, categorical).", "This benchmark is described in detail in Section 4.1.", "Entity Linking CDC is similar to the task of entity linking (EL; Mihalcea and Csomai, 2007), which also addresses the problem of named entity disambiguation, with the key distinction that EL is formulated as a supervised classification problem (the list of entities is known at training and test time), while CDC is an unsupervised clustering problem.", "In particular, CDC is similar to time-aware EL (Agarwal et al., 2018), where temporal context is used to help disambiguate mentions, and zero-shot EL (Zeshel; Logeswaran et al., 2019), where the set of entities linked to during evaluation does not overlap with the set of entities observed during training.", "Streaming CDC can also be considered a method for time/order-aware zero-shot named entity disambiguation.", "However, it is strictly more challenging, as it does not assume access to a curated list of entities at prediction time, or any supervised training data.", "Although CDC is formulated as a strictly unsupervised clustering task, this does not preclude the usage of labeled data for transfer learning.", "One of the primary goals in this work is to investigate whether the mention encoders learned by entity linking systems provide useful representations in the first step of the CDC pipeline.", "Specifically, we consider mention encoders for two state-of-the-art entity linking architectures: RELIC (Ling et al., 2020) and the BLINK bi-encoder (Wu et al., 2020).", "Emerging Entity Detection Streaming CDC is also related to the task of emerging entity detection (EED; Färber et al., 2016), which, given a mention that cannot be linked, seeks to predict whether it should produce a new KB entry.",
"Although both tasks share similar motivations, they adopt different approaches (EED is formulated as a binary classification task), and CDC does not require deciding which entities should and should not be added to a knowledge base.", "However, in many practical applications, it may make sense to apply streaming CDC only to emerging entities.", "Following previous work, we adopt a two-step approach to performing streaming cross-document coreference.", "In the first step, an encoder is used to produce a vector representation of the incoming mention, m_t = Enc(m_t).", "In the second step, these vectors are input into an incremental clustering algorithm to update the predicted clustering, C_t = Clust(C_{t-1}, m_t).", "In the following sections we describe in detail the mention encoders and clustering algorithms used in this work.", "The primary goal of the mention encoder Enc(m_t) is to produce a compact representation of the mention, including both the surface and the context text.", "Feature-Based Encoders Existing models for streaming cross-document coreference exclusively make use of feature-based mention encoders.", "While there are many feature engineering options explored in the literature, in this work we consider the mention encoding approach proposed by Shrimpton et al. (2015), which uses character skip-bigram indicator vectors to encode the surface text, and tf-idf vectors to represent contexts.", "When using this encoding scheme, similarity scores are computed independently for the surface and context embeddings, and a weighted average is taken to produce the final similarity score.", "We use the same setup and parameters as Shrimpton et al. (2015).", "Masked Language Model Encoders We also consider mention encodings produced by masked language models, particularly BERT (Devlin et al., 2019).", "We encode the mention by feeding the contiguous text of the mention (containing both the surrounding and surface text) into BERT and concatenating the contextualized vectors associated with the first and last word-piece of the surface text.", "That is, let s, e ∈ N denote the start and end of the mention surface text within the complete mention, and let M = BERT(m) denote the contextualized word vectors output by BERT.", "Then the mention encoding is given by: Enc_MLM(m) = [M[s]; M[e]].", "Entity Linker-Based Encoders We consider producing mention encodings using the bi-encoder-based neural entity linkers RELIC (Ling et al., 2020) and BLINK (Wu et al., 2020).", "The bi-encoder architecture is comprised of two components (a mention encoder Enc_m and an entity encoder Enc_e) and is trained to maximize a similarity score (e.g., dot product) between the mention encoding and the encoding of its underlying entity, while simultaneously minimizing the score for other entities.", "We use Enc_m from pretrained entity linkers to encode mentions for CDC.", "Hybrid Encoder We also consider a hybrid encoder which combines feature-based and neural mention encoders.", "We retain the feature-based character skip-bigram surface text encoder, but use one of the neural encoders from entity linkers in place of the tf-idf context representation.", "Similarity scores are computed by averaging the two without any weights, unlike Shrimpton et al. (2015).",
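A sketch of the masked language model mention encoder Enc_MLM defined above, using HuggingFace BERT; mapping the character span of the surface text onto word pieces via offset mappings is an implementation choice, not something specified in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def encode_mention(context: str, start: int, end: int) -> torch.Tensor:
    """Enc_MLM(m) = [M[s]; M[e]]: concatenate the contextualized vectors of the
    first and last word-pieces of the surface text (character span [start, end))."""
    enc = tok(context, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    # Word-piece indices whose character spans overlap the mention span.
    piece_idx = [i for i, (a, b) in enumerate(offsets.tolist())
                 if b > start and a < end]
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]  # (num_pieces, 768)
    s, e = piece_idx[0], piece_idx[-1]
    return torch.cat([hidden[s], hidden[e]])       # 1536-dim mention encoding

text = "SARS CoV fusion peptides induce membrane surface ordering."
span = "fusion peptides"
vec = encode_mention(text, text.index(span), text.index(span) + len(span))
print(vec.shape)  # torch.Size([1536])
```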
(2015).", "Here we describe incremental clustering approaches, Clust( C t 1 , m t ) , that compute a new clustering when m t is added to the mentions under consideration ( (cid:102) M ).", "CDC using a single linkage incremental clustering approach that clusters each new mention m to its nearest neighbor m (cid:48) = arg min m (cid:48) (cid:102) M sim( m, m (cid:48) ) , if the similarity exceeds some threshold .", "We use a similar approach here, however we cluster m with all m (cid:48) (cid:102) M such that sim( m, m (cid:48) ) > thus allowing previously separate clusters to be merged if m is similar to both of them.", "GRINCH Gooi and Allan (2004) find that average link hierarchical agglomerative clustering can outperform greedy single link approaches.", "However, agglomerative approaches are typically not used for streaming CDC because running the algorithm at each time step is too expensive, and incremental variants of the approach are not able to recover from incorrect choices made early on (Fig-ure 2a).", "The recently introduced GRINCH clustering algorithm (Monath et al., 2019) uses rotate and graft operations that reconfigure the tree, thereby avoiding these issues (Figure 2b).", "We defer to the original paper for details, however note that, for our application, each interior node of the cluster tree is computed as a weighted average of its children's representations (where the weights are proportional to the number of leaves).", "Thus at each interior node, it is possible to compute the similarity score between that node's children.", "This allows us to produce a flat clustering from the cluster tree by thresholding the similarity score, just as in the greedy clustering case.", "As described in Section 2.1, memory management policies decide which mentions to remove from", "(cid:102) M to prevent its size from exceeding the memory bound, providing scalable, memory-bound variants of the clustering algorithms.", "Bounded Memory Greedy NN Clustering For bounded memory greedy nearest neighbors clustering, we consider the following memory management policies of Shrimpton et al. (2015): Window : Remove the oldest mention in (cid:102) M .", "Cache : Remove the oldest mention in the least recently updated cluster CLRU .", "Diversity : Remove the most similar mention to mention just added, i.e. 
"Diversity-Cache: A combination of the diversity and cache strategies, where the diversity strategy is used if the similarity score exceeds a given threshold, sim(m, m_t) > τ, and the cache strategy is used otherwise.", "Bounded Memory GRINCH Memory management for GRINCH is more complicated than for greedy clustering: instead of maintaining a flat clustering of mentions, GRINCH maintains a cluster hierarchy in the form of a binary cluster tree.", "Every time a mention is inserted into the tree, two new nodes are created: one node for the mention itself, and a new parent node linking the mention to its sibling (Figure 2a).", "Accordingly, when the memory bound is reached, the memory management policy for GRINCH must remove two nodes from the tree.", "Furthermore, in order to preserve the tree's binary structure, the removed nodes must be leaf nodes as well as siblings.", "Because the original GRINCH algorithm only includes routines for inserting nodes into the tree and reconfiguring the tree's structure, we modify GRINCH to include a new remove operation that prunes two nodes satisfying these criteria.", "[Table 1: Dataset Statistics, reporting |M|, |E|, % Seen, and MAE per split. AIDA: Train 18.5K, 4.1K, 100%, 1.1K; Dev 4.8K, 1.6K, 23%, 290; Test 4.5K, 1.6K, 16%, 263. MedMentions: Train 121K, 18K, 100%, 4.7K; Dev 42K, 8.8K, 27%, 1.8K; Test 39K, 8.3K, 26%, 1.7K. Zeshel: Train 81K, 32K, 100%, 9.3K; Dev 18K, 7.5K, 0%, 2.9K; Test 17K, 7.2K, 0%, 3.3K.]", "The parent of these nodes then becomes a leaf node, whose vector representation is produced by combining the vector representations of its former children using a weighted average (this is conceptually similar to the collapse operation described in Kobren et al. (2017)).", "We consider the following policies here: Window: Remove the nodes whose parent was least recently added to the tree.", "Diversity: Remove the pair of nodes that are most similar to each other.", "Current research on CDC is inhibited by a lack of large, publicly accessible datasets.", "We address this by compiling datasets for streaming CDC by adapting existing entity linking datasets: AIDA CoNLL-YAGO, MedMentions, and Zeshel.", "AIDA AIDA CoNLL-YAGO (Hoffart et al., 2011) contains news articles from the Reuters Corpus written between August and December 1996 with annotations linking mentions to YAGO and Wikipedia.", "We create a canonical ordering for this dataset by ordering articles by date.", "As the original train, dev, and test splits respect this ordering, we use the original splits in our benchmark.", "MedMentions The MedMentions (Mohan and Li, 2019) corpus contains abstracts for biomedical articles published to PubMed in 2016, annotated with links to the UMLS medical ontology.", "We order abstracts by publication date to create a canonical ordering (6 abstracts were omitted due to missing metadata).", "Since the original dataset is not ordered by date, we create new train, dev, and test splits of comparable size that respect this ordering.", "Zeshel The Zeshel (Logeswaran et al., 2019) dataset consists of Wikia articles for different FANDOMs.", "In addition to the original set of annotated mentions, we use the provided entity descriptions as an additional source of mentions.", "We impose an ordering that groups all mentions belonging to the same Wikia together, and otherwise retains their original order in the Zeshel data.", "This is an interesting scenario for streaming CDC as no clusters need be retained when transitioning to a new Wikia.", "Analysis Statistics for the benchmark data are provided in Table 1, which lists the number of mentions and unique entities for each dataset.",
"We also list the percentage overlap between entities in the training set and entities in the dev and test sets (% Seen), as well as the maximum active entities (MAE).", "MAE is a quantity introduced by Toshniwal et al. (2020), which measures the maximum number of active entities (e.g., entities that have been previously mentioned, and will be mentioned in the future) for a given dataset; it can alternatively be interpreted as the smallest possible memory bound that can be used in order to ensure that a CDC system can cluster each mention with at least one other mention of the same entity.", "Importantly, this number is a small fraction of the total number of mentions in each dataset, indicating that these datasets are appropriate for the streaming setting and for comparing memory management policies.", "We evaluate CDC performance using the standard evaluation metrics: MUC (Vilain et al., 1995), B^3 (Bagga and Baldwin, 1998), CEAF_e (Luo, 2005), and CoNLL F1, which is an average of the previous three.", "In order to perform evaluation when memory is bounded, we perform the following bookkeeping to track nodes which have been removed by the memory management policy.", "For bounded memory greedy NN clustering, we keep track of the removed node's predicted cluster (e.g., if the node was removed from cluster C, then it is considered an element of C during evaluation).", "This is similar to the evaluation used by Toshniwal et al. (2020).", "For bounded memory GRINCH, we keep track of the removed node's place within the tree structure, and produce a flat clustering using the thresholding approach described in Section 3.2 as if the node were never removed.", "Because leaf nodes (and accordingly removed nodes) are never updated by insertion or removal operations, nodes belonging to the same cluster before they are pruned will always remain in the same cluster during evaluation, which is the same assumption used for the greedy NN evaluation.", "Vocabulary and inverse document frequency (idf) weights are estimated using each dataset's train set.", "For masked language model encoders, we use an unmodified BERT-base architecture, with model weights provided by the HuggingFace transformers library (Wolf et al., 2020).", "For BLINK, we use the released BERT-large bi-encoder weights.", "Our bounded memory variant of GRINCH is based on the official implementation (https://github.com/iesl/grinch).", "Note that GRINCH does not currently support sparse inputs, so we do not include results for feature-based mention encoders.", "RELIC model weights are initialized from BERT-base, and then finetuned to perform entity linking in the following settings: RELIC (Wiki): Trained on the same Wikipedia data used to train the BLINK bi-encoder.", "RELIC (In-Domain): Trained on the respective benchmark's training dataset; a separate model is trained for each benchmark.", "Training is performed using hyperparameters suggested by Ling et al. (2020).",
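As a concrete companion to the memory management policies of §3.3, the sketch below implements the four greedy-clustering eviction rules; the Mention record, cosine similarity, and the reading of "least recently updated cluster" as the cluster with the smallest maximum arrival time are simplifying assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Mention:
    vec: np.ndarray   # encoded mention
    cluster: int      # predicted cluster id
    time: int         # arrival time index

def sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evict(cache, new: Mention, policy: str, tau: float = 0.8) -> int:
    """Return the index of the cached mention to remove, per the policies of
    Shrimpton et al. (2015) as described in Section 3.3."""
    if policy == "window":        # oldest mention overall
        return min(range(len(cache)), key=lambda i: cache[i].time)
    if policy == "cache":         # oldest mention in least recently updated cluster
        last_update = {}
        for m in cache:
            last_update[m.cluster] = max(last_update.get(m.cluster, -1), m.time)
        lru = min(last_update, key=last_update.get)
        members = [i for i, m in enumerate(cache) if m.cluster == lru]
        return min(members, key=lambda i: cache[i].time)
    if policy == "diversity":     # most similar to the newly added mention
        return max(range(len(cache)), key=lambda i: sim(cache[i].vec, new.vec))
    if policy == "diversity-cache":  # diversity if redundant, else fall back to cache
        best = max(range(len(cache)), key=lambda i: sim(cache[i].vec, new.vec))
        if sim(cache[best].vec, new.vec) > tau:
            return best
        return evict(cache, new, "cache", tau)
    raise ValueError(policy)
```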
(2020).", "6 For each benchmark, the hybrid mention encoder uses the best performing RELIC variant on that benchmark.", "Cluster thresholds are chosen so that the number of predicted clusters on the dev dataset approximately matches the number of unique entities.", "In this section, we provide a comprehensive evaluation of the design choices that define the existing and proposed approaches for streaming CDC.", "BLINK 5 https://github.com/iesl/grinch 6 Trained on a server w/ 754 GB RAM, Intel Xeon Gold 5218 CPU and 4x NVIDIA Quadro RTX 8000 GPUs.", "Choice of Encoder We include the results for CDC systems with unbounded memory on the benchmark datasets in Table 2, as well as results for two baselines: 1) a system that clusters together all mentions with the same surface forms ( exact match ), and 2) a system that only considers gold within-document clusters and does not merge clusters across documents ( oracle within-doc ).", "We observe that, in general, neural mention encoders are not sufficient to obtain good CDC performance.", "With the exception of the RELIC (In-Domain) on MedMentions, no neural mention encoders are able to outperform the feature-based greedy NN approach, and furthermore, the MLM and BLINK mention encoders do not even surpass the exact match baseline.", "However, note that for AIDA and Zeshel, best results are obtained using a hybrid mention encoder.", "Thus, in these domains, we can conclude that while neural mention encoders are useful for encoding contexts, CDC systems require an additional system to model surface texts to achieve good performance.", "The results on MedMentions provide an interesting contrast to this conclusion.", "Here the RELIC (In-Domain) mention encoder outperforms both the feature-based and hybrid mention encoders.", "In the error analysis below, we find that this is due mainly to improved performance clustering mentions of entities seen when training the mention encoder.", "Choice of Clustering Algorithm Comparing greedy nearest neighbors clustering to GRINCH, we do not observe a consistent trend across mention encoders or datasets.", "While the best performance on AIDA and Zeshel is achieved using greedy nearest neighbor clustering, the best performance on MedMentions is achieved using GRINCH.", "These results highlight the importance of benchmarking CDC systems on a number of different datasets; patterns observed on a single dataset do not extrapolate well to other settings.", "It is also interesting to observe that a much simpler approach often works better than the more complex one.", "Error Analysis We characterize the errors of these models by investigating:", "a) the entities whose mentions are conflated (e.g., are wrongly clustered together) and split (e.g., wrongly grouped into separate clusters) using the approach of Kummerfeld and Klein (2013), and", "b) differences in performance on entities that are seen vs. unseen during training for models that use in-domain data.", "A sub-AIDA MedMentions Zeshel MUCB 3 CEAF Avg.", "set of our results is provided in Table 3, with full results available in Tables 411 in the Appendix.", "In aggregate, these error metrics closely track the results in Table 2, where better models make fewer errors of all types.", "We do, however, observe that in-domain training improves RELIC's performance considerably on MedMentions (+15 CoNLL F 1 on seen entities, and +18 on unseen entities), and is the primary reason underlying the improved performance over feature-based encoders (72.6 vs. 
"Comparing mentions of the most conflated entities provides a qualitative sense of the failure modes of each method.", "We note that the feature-based method tends to fail at distinguishing entities with the same surface form, e.g., world cups of different sports, while neural entity linkers tend to conflate entities with similar contexts, particularly when surface forms are split into multiple word pieces in the model's vocabulary (each surface form in the bottom of Table 3 gets broken into 3+ word pieces).", "Effect of Bounded Memory Results for the bounded memory setting are illustrated in Figure 3.", "In these experiments we take the best neural mention encoder for each benchmark dataset (RELIC (Wiki) for AIDA and Zeshel, and RELIC (In-Domain) for MedMentions), and plot the CoNLL F1 score for each of the memory management policies described in Section 3.3 (sketched below).", "We measure performance for memory bounds at the maximum number of active entities (MAE) and the total number of unique entities (|E|) for each dataset, as well as at 1/2x and 2x multiples of these numbers.", "In sum, these results provide strong evidence that CDC systems can reliably cluster mentions in a truly streaming setting, even when memory is bounded to a small fraction of the number of entities encountered by the system.", "Most impressively, using the diversity-cache memory management policy, a greedy nearest neighbors bounded memory model achieves a CoNLL F1 score within 2% of the best performing unbounded memory model, while only storing approximately 10% (i.e., |E|/2) of the mentions.", "We notice a few fairly consistent trends across datasets.", "The first is that increasing the memory bound has diminishing returns; while there is a large benefit from increasing the bound from MAE/2 to MAE, the difference in performance attained by increasing the bound from |E| to 2|E| is often negligible.", "We also find that naive memory management policies that store recent mentions (i.e., window, W, and cache, C) tend to perform better than the policy that attempts to remove redundant mentions (i.e., diversity, D).", "This effect is particularly pronounced for small memory bounds.", "While this is somewhat surprising (storing many mentions of the same entity should be particularly harmful when memory is limited, so encouraging diversity ought to be a good thing), one possible explanation is that the diversity policy is actually removing mentions of entities that appear within the same context, as we saw earlier that neural mention encoders appear to focus more on mention context than surface text.", "Lastly, regarding the comparison of greedy nearest neighbors clustering to GRINCH, we again see inconsistency in performance across datasets; GRINCH appears to perform better at larger cache sizes for AIDA and MedMentions, while greedy nearest neighbors clustering has much better performance than GRINCH on Zeshel.",
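The following hedged sketches show one plausible reading of the window (W), cache (C), and diversity (D) eviction policies compared above; the paper's exact implementations (in its Section 3.3, not reproduced here) may differ in details such as tie-breaking.

```python
import numpy as np

def evict_window(order):
    # W: drop the oldest mention in memory.
    return order[0]

def evict_cache(order, last_access):
    # C: drop the least-recently-used mention.
    return min(order, key=lambda m: last_access[m])

def evict_diversity(order, vectors):
    # D: find the most similar (most redundant) pair of stored mentions and
    # drop one of them; dropping the newer member is a design choice here.
    X = np.stack([vectors[m] for m in order])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    return order[max(i, j)]
```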
"Streaming cross document coreference has a number of compelling applications, especially concerning the processing of data streams such as research publications, social media feeds, and news articles, where new entities are frequently introduced.", "Despite being well motivated, this task has received little attention from the NLP community.", "In order to foster a more welcoming environment for research on this task, we compile a diverse benchmark for evaluating CDC, comprised of existing datasets that are free and publicly available.", "We additionally evaluate the performance of a collection of existing approaches for CDC, as well as introduce new approaches that leverage modern neural architectures.", "Our results highlight a number of challenges for future CDC research, such as how to better incorporate surface-level features into neural mention encoders, as well as alternative policies for memory management that improve upon the naive baselines studied in this work.", "[Figure 3: Effect of Bounded Memory. CoNLL F1 (%) as a function of the memory bound (MAE, |E|, 2|E|) on AIDA, MedMentions, and ZESHEL, for Greedy NN and GRINCH under the window (W), cache (C), diversity (D), and diversity-cache (D-C) policies.]", "Benchmark data and materials needed to reproduce our results are provided at: https://github.com/rloganiv/streaming-cdc.", "The authors would like to thank Sanjay Subramanian, Nitish Gupta, Keith Hall, Ryan McDonald, Livio Baldini Soares, Nicholas FitzGerald, and Tom Kwiatkowski for their technical guidance and helpful comments while conducting this work.", "We would also like to thank the anonymous ACL reviewers for their valuable feedback.", "This project is supported in part by NSF award no. 1817183, and the DARPA MCS program under Contract No. N660011924033.", "This paper focuses on systems that perform entity disambiguation without reliance on an external knowledge base.", "The potential benefit of such systems is an improved ability to track mentions of rare and emergent entities (e.g., natural disasters, novel disease variants, etc.); however, this is also relevant in digital surveillance settings, and may result in reduced privacy." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "objective", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "objective", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "method", "objective", "result", "result", "other", "abstain", "other", "other", "other", "other", "method", "abstain" ]
[ "In this work, we develop SimulSpeech, an end-to-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently.", "SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a waitk strategy for simultaneous translation.", "SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)).", "We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of SimulSpeech.", "Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.", "Simultaneous speech to text translation (Fugen et al., 2007; Oda et al., 2014; Dalvi et al., 2018), which translates source-language speech into target-language text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and now widely used in many scenarios including live video streaming and international conferences.", "However, it is widely considered as one of the challenging tasks in machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy and delay.", "Conventional approaches to simultaneous speech to text translation (Fugen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016), which cannot be optimized jointly and result in inferior accuracy, and also incurs more translation delay due to two stages.", "In this paper, we move a step further to translate the source speech to target text simultaneously, and develop SimulSpeech, an end-to-end simultaneous speech to text translation system.", "The SimulSpeech model consists of 1) a speech encoder where each speech frame can only see its previous frames to simulate streaming speech inputs; 2) a text decoder where the encoder-decoder attention follows the waitk strategy (Ma et al., 2018) to decide when to listen and write on the source speech and target text respectively (see Figure 1); 3) a speech segmenter that builds upon the encoder and leverages a CTC loss to detect the word boundary, which is used to decide when to stop listening according to the How is the weather today Source Audio T a r g e t t e x t Listen Write Figure 1: The waitk strategy for simultaneous speech to text translation.", "Considering the difficulty of this task, we elaborately design two techniques to boost 
"Considering the difficulty of this task, we carefully design two techniques to boost the performance of SimulSpeech: 1) attention-level knowledge distillation, which transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to SimulSpeech to help the training of its attention mechanism; and 2) data-level knowledge distillation, which transfers the knowledge from a full-sentence NMT model to SimulSpeech and also reduces the complexity of the data distribution (Zhou et al., 2019) to help the optimization of the SimulSpeech model.", "Compared with the cascaded pipeline that trains simultaneous ASR and NMT models separately, SimulSpeech can alleviate the error propagation problem and optimize all model parameters jointly towards the end goal, as well as reduce the delay of simultaneous translation.", "Experiments on the MuST-C English-Spanish and English-German spoken language translation datasets demonstrate that SimulSpeech 1) achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and 2) obtains better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.", "In this section, we briefly review some background for simultaneous speech to text translation, including speech to text translation, simultaneous translation based on the wait-k strategy, and the CTC loss for segmentation.", "Speech to Text Translation Given a set of bilingual speech-text sentence pairs $D = \{(x, y) \in (X \times Y)\}$, a speech to text machine translation model learns the parameter $\theta$ by minimizing the negative log-likelihood $-\sum_{(x,y) \in D} \log P(y|x; \theta)$.", "$P(y|x; \theta)$ is calculated based on the chain rule $\prod_{t=1}^{T_y} P(y_t | y_{<t}, x; \theta)$, where $y_{<t}$ represents the text tokens preceding position $t$, and $T_y$ is the length of the text sentence $y$.", "An encoder-attention-decoder framework is usually adopted to model the conditional probability $P(y|x; \theta)$, where the encoder maps the input audio to a set of hidden representations $h$ and the decoder generates each target token $y_t$ using the previously generated tokens $y_{<t}$ as well as the speech representations $h$.", "Previous works (Berard et al., 2016; Weiss et al., 2017; Liu et al., 2019) on speech to text translation focus on full-sentence translation, where the full source speech can be seen when predicting each target token.", "Simultaneous Translation Based on Wait-k Simultaneous translation aims to translate sentences before they are finished, according to certain strategies.", "We use the wait-k strategy (Ma et al., 2018) in this work: given a set of speech and text pairs $D = \{(x, y) \in (X \times Y)\}$, the model with the wait-k strategy learns the parameter $\theta$ by minimizing the negative log-likelihood loss $-\sum_{(x,y) \in D} \log P(y|x; k; \theta)$, where $k$ corresponds to the wait-k strategy.", "$P(y|x; k; \theta)$ is calculated based on the chain rule: $P(y|x; k; \theta) = \prod_{t=1}^{T_y} P(y_t | y_{<t}, x_{<t+k}; \theta)$, (1) where $y_{<t}$ represents the tokens preceding position $t$, $T_y$ is the length of the target sentence $y$, and $x_{<t+k}$ represents the speech segments preceding position $t+k$.", "The wait-k strategy ensures that the model can see $t+k-1$ source segments when generating the target token $y_t$, and can see the whole sentence once there are no more source segments.",
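The prefix restriction in Equation (1) can be implemented as an encoder-decoder attention mask; the sketch below (our own, in PyTorch) marks which source segments target step t may attend to under wait-k.

```python
import torch

def wait_k_mask(T_y: int, S_x: int, k: int) -> torch.Tensor:
    # mask[t, s] is True iff the t-th target token (0-indexed) may attend to
    # source segment s; token t sees the first t + k segments, i.e. s < t + k.
    t = torch.arange(T_y).unsqueeze(1)  # [T_y, 1]
    s = torch.arange(S_x).unsqueeze(0)  # [1, S_x]
    return s < t + k
```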
speech sequence).", "For a text sequence y , CTC introduces a set of intermediate representation paths ( y ) called CTC paths, which has a many-to-one mapping to y since multiple CTC paths can correspond to the same text sequence.", "For example, both the frame-level classification outputs (CTC paths) HHE L LOO and HHEEL LO are mapped to text sequence HELLO , where is the blank symbol.", "The likelihood of y can thus be evaluated as a sum of the probabilities of its CTC paths: P ( y | x ) = (cid:88) z ( y ) P ( z | x ) , (2) where x is the utterance consisting of speech frames and z is one of the CTC path.", "Similar to many sequence to sequence generation tasks, SimulSpeech adopts the encoder-decoder framework.", "As shown in Figure 2a, both the encoder and decoder follow the basic network structure of Transformer (Vaswani et al., 2017a) for neural machine translation.", "SimulSpeech is different from Transformer in several aspects: To handle speech inputs, we employ a speech pre-net (Shen et al., 2018) to extract speech features, which consists of multiple convolutional layers with the same hidden size as Transformer.", "To enable simultaneous translation, we design different attention mechanisms for the encoder and decoder.", "The encoder adopts masked self-attention, which masks the future frames of a speech frame when encoding it and ensures that each speech frame can only see its previous frames to simulate the real-time streaming inputs.", "The decoder adopts the waitk strategy (Ma et al., 2018), as shown in Equation 1, which guarantees that each target token can only see the source segments following the wait-k strategy.", "As the waitk strategy requires source speech to be discrete segments, we introduce a speech segmenter to split a speech sequence into discrete segments, each representing a word or phrase.", "The segmenter takes the outputs of the speech encoder as inputs, passes through multiple non-linear dense layers and then a softmax linear layer to predict the character in frame level.", "When a word boundary token (the space character in our case) is predicted by the segmenter, SimulSpeech knows a word is ended.", "Multiple consecutive word boundary tokens are merged into one boundary.", "The training of the SimulSpeech model is more difficult than that of an NMT model or an ASR model, since SimulSpeech involves multiple modalities (i.e., speech and text) and multiple languages.", "In this section, we discuss how to train the SimulSpeech model.", "As shown in Figure 2b, we introduce the CTC loss for the training of the speech segmenter, and attention-level and data-level knowledge distillation for the training of the overall SimulSpeech model.", "In SimulSpeech training, the training data are provided in the format of (source speech, source text, target text) tuples.", "In SimulSpeech, the speech segmenter is used to detect word boundaries, and detected boundaries are used to determine when to stop listening and switch to translation, which is critical for the performance of simultaneous translation.", "As it is hard to find frame-level label to guide the output of the softmax linear layer in speech segmenter, we leverage connectionist temporal classification (CTC) loss to train the speech segmenter.", "According to Equation 2, the CTC loss is formulated as L ctc = (cid:88) ( x,y ) ( XY src ) (cid:88) z ( y ) P ( z | x ) , (3) where ( X Y src ) denotes the set of source speech and source text sequence pairs, and ( y ) denotes the set of CTC paths for y .", "During 
"To better train the SimulSpeech model, we propose a novel attention-level knowledge distillation that is specially designed for speech to text translation, which transfers the knowledge from the multiplication of the attention weight matrices of simultaneous ASR and NMT models into the attention of the SimulSpeech model.", "In order to obtain the attention weights of simultaneous ASR and NMT, we add auxiliary simultaneous ASR and NMT tasks, which share the same encoder or decoder with the SimulSpeech model, respectively, as shown in Figure 2b.", "The two auxiliary tasks both leverage a wait-k strategy similar to that used in the SimulSpeech model.", "Denote the sequence lengths of the source speech, source text, and target text as $S_{src}$, $T_{src}$, and $T_{tgt}$, respectively.", "Denote the attention weights of simultaneous ASR and NMT as $A_{T_{src} \times S_{src}}$ and $A_{T_{tgt} \times T_{src}}$, respectively.", "Ideally, the attention weights $A_{T_{tgt} \times S_{src}}$ of SimulSpeech should satisfy $A_{T_{tgt} \times S_{src}} = A_{T_{tgt} \times T_{src}} \cdot A_{T_{src} \times S_{src}}$. (4)", "However, attention weights are difficult to learn, and the attention weights of the SimulSpeech model are more difficult to learn than those of the simultaneous ASR and NMT models, since SimulSpeech is much more challenging.", "Therefore, we propose to distill the knowledge from the multiplication of the attention weights of the simultaneous ASR and NMT models, as shown in Figure 2b and Figure 3.", "We first multiply the attention matrix of simultaneous NMT by that of simultaneous ASR, and then binarize the resulting matrix with a threshold.", "We then match the attention weights predicted by the SimulSpeech model to the binarized attention matrix, with the loss function $\mathcal{L}_{att\text{-}kd} = \| B(A_{T_{tgt} \times T_{src}} \cdot A_{T_{src} \times S_{src}}) - A_{T_{tgt} \times S_{src}} \|$, (5) where $B$ is the binarization operation, which sets an element of the matrix to 1 if it is above the threshold of 0.05, and to 0 otherwise.",
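A sketch of Equation (5) in PyTorch; we match the matrices with a plain squared-error criterion, which is one concrete choice for the (unspecified) norm, so treat it as an assumption.

```python
import torch
import torch.nn.functional as F

def attention_kd_loss(A_nmt: torch.Tensor,   # [T_tgt, T_src]
                      A_asr: torch.Tensor,   # [T_src, S_src]
                      A_st: torch.Tensor,    # [T_tgt, S_src], SimulSpeech attention
                      threshold: float = 0.05) -> torch.Tensor:
    # Multiply NMT attention by ASR attention, binarize, and match.
    target = (A_nmt @ A_asr > threshold).float()
    return F.mse_loss(A_st, target)
```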
"Data-Level Knowledge Distillation Data-level knowledge distillation is widely used to help model training in various tasks and situations (Kim and Rush, 2016; Tan et al., 2019) and can boost the performance of a student model.", "In this work, we leverage knowledge distillation to transfer the knowledge from a full-sentence NMT teacher model to the SimulSpeech model.", "We first train a full-sentence NMT teacher model and then generate target text $y'$ given the source text $y$ that is paired with the source speech $x$.", "Finally, we train the student SimulSpeech model with the generated target text $y'$ paired with the source speech $x$.", "The loss function is formulated as $\mathcal{L}_{data\text{-}kd} = -\sum_{(x,y') \in (X \times Y^{tgt'})} \log P(y'|x)$, (6) where $(X \times Y^{tgt'})$ denotes the set of speech-text sequence pairs in which the text is generated by the NMT teacher model.", "The total loss function used to train the SimulSpeech model is $\mathcal{L} = \lambda_1 \mathcal{L}_{ctc} + \lambda_2 \mathcal{L}_{att\text{-}kd} + \lambda_3 \mathcal{L}_{data\text{-}kd}$, (7) where $\lambda_1$, $\lambda_2$, $\lambda_3$ are hyperparameters to trade off the three losses.",
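To illustrate the training recipe, here is a short sketch of data-level distillation and the combined objective of Equation (7); `teacher.translate` is an assumed API, and the loss weights follow the values reported later in the experimental setup.

```python
def build_distilled_pairs(triples, teacher):
    # triples: (source_speech, source_text, target_text); the gold target is
    # replaced with the full-sentence NMT teacher's output.
    return [(speech, teacher.translate(src)) for speech, src, _ in triples]

def total_loss(l_ctc, l_att_kd, l_data_kd, lambdas=(1.0, 0.1, 1.0)):
    # Equation (7): weighted sum of the three losses.
    return lambdas[0] * l_ctc + lambdas[1] * l_att_kd + lambdas[2] * l_data_kd
```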
"In this section, we evaluate SimulSpeech on the MuST-C corpus (Di Gangi et al., 2019).", "First we describe the experimental settings and details, then we show the experimental results, and finally we conduct some analyses of our model.", "Datasets We use the MuST-C English-Spanish (En-Es) and English-German (En-De) speech translation corpora in our experiments.", "Both datasets contain audio clips in the source language, together with the corresponding source-language transcripts and target-language translated text.", "The official data statistics and train/dev/test splits are shown in Table 1.", "For the speech data, we transform the raw audio into mel-spectrograms following Shen et al. (2018), with a 50 ms frame size and a 12.5 ms hop size.", "To simplify model training, we remove some non-verbal annotations in the text, such as (Laughing) and (Music).", "All the sentences are first tokenized with the Moses tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) and then segmented into subword symbols using Byte Pair Encoding (BPE) (Sennrich et al., 2016), except for the labels used to train the speech segmenter, for which we use the character sequence of the source text.", "We learn the BPE merge operations across the source and target languages.", "We use the speech segmenter proposed in Section 3 to split the speech mel-spectrograms into segments, where each segment is regarded as a discrete token and represents a word or short phrase.", "Model Configuration We use the Transformer (Vaswani et al., 2017b) as the basic SimulSpeech model structure, since it achieves state-of-the-art accuracy and has become a popular choice in recent NMT research.", "The model hidden size, number of attention heads, number of encoder layers, and number of decoder layers are set to 384, 4, 6, and 4, respectively.", "Considering that adjacent hidden states are closely related in speech tasks, we replace the feed-forward network in the Transformer with a 2-layer 1D convolutional network (Gehring et al., 2017) with ReLU activation.", "Left padding is used in the 1D convolutional network on the target side (Ren et al., 2019) to avoid an output token seeing its subsequent tokens during training.", "The filter size and kernel size of the 1D convolution are set to 1536 and 9, respectively.", "The pre-net (bottom left in Figure 2a) is a 3-layer convolutional network with left padding, whose output dimension is the same as the hidden size of the Transformer encoder.", "The decoder of the auxiliary ASR model and the encoder of the auxiliary NMT model, as well as the encoder and decoder of the NMT teacher model, share the same model structures described above.", "Training and Inference SimulSpeech is trained on 2 NVIDIA Tesla V100 GPUs, with a total batch size of 64 sentences.", "We use the Adam optimizer with the default parameters (Kingma and Ba, 2014) and the learning rate schedule of Vaswani et al. (2017a).", "We train SimulSpeech with the auxiliary simultaneous ASR and NMT tasks by default.", "We set $\lambda_1$, $\lambda_2$, $\lambda_3$ in Equation 7 to 1.0, 0.1, and 1.0, respectively, according to the validation performance.", "SimulSpeech is trained and tested with the same $k$ unless otherwise stated.", "Translation quality is evaluated by tokenized case-sensitive BLEU (Papineni et al., 2002) with the perl scripts.", "The Metric of Translation Delay Many previous works propose metrics of translation delay for simultaneous text to text translation, such as average proportion (AP) (Cho and Esipova, 2016) and average latency (AL) (Ma et al., 2018).", "The former calculates the mean absolute delay cost incurred by each target token, while the latter measures the degree of being out of sync with the speaker.", "In this work, we extend the AP and AL metrics, originally calculated on word sequences, to speech sequences for the simultaneous speech to text translation task.", "Our extended AP is defined as follows: $AP(x, y) = \frac{1}{|x|_{time}\,|y|} \sum_{i=1}^{|y|} t(i)$, (8) where $x$ and $y$ are the source speech and target text, $|x|_{time}$ is the total time duration of the source speech, $|y|$ is the length of the target text, and $t(i)$ is the real-time delay in terms of the source speech when generating the $i$-th word of the target sequence, i.e., the duration of source speech listened to by the model before writing the $i$-th target token.", "Our extended AL is defined as follows: $AL(x, y) = \frac{1}{\tau(|x|_{seg})} \sum_{i=1}^{\tau(|x|_{seg})} \left( g(i) - \frac{i-1}{r} \right)$, (9) where $|x|_{seg}$ is the number of speech segments and $g(i)$ is the delay at step $i$, i.e., the number of source segments listened to by the model before writing the $i$-th target token.", "$\tau(|x|_{seg})$ denotes the earliest timestep at which the model has consumed the entire source sequence, $\tau(|x|_{seg}) = \arg\min_t \left( g(t) = |x|_{seg} \right)$, (10) and $r = |y| / |x|_{seg}$ is the length ratio between the target and source sequences.",
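For concreteness, a direct transcription of Equations (9) and (10) into code (our own sketch): `g(i)` returns the number of source segments consumed before writing the i-th target token (1-indexed).

```python
def average_latency(g, num_segments: int, target_len: int) -> float:
    r = target_len / num_segments  # target/source length ratio
    # tau: earliest step at which the full source has been consumed (Eq. 10);
    # fall back to the last step if the model never consumes everything.
    tau = next((i for i in range(1, target_len + 1) if g(i) == num_segments),
               target_len)
    return sum(g(i) - (i - 1) / r for i in range(1, tau + 1)) / tau
```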
"Translation Accuracy First, we evaluate the performance of the SimulSpeech model under different $k$; the BLEU scores for En-Es and En-De are shown in Table 2.", "We can see that the performance of our model does not drop much when $k$ is small, compared to full-sentence translation (training with $k = \infty$).", "Translation Delay We plot the translation quality (in terms of BLEU score) against the delay metrics (AP and AL) of our SimulSpeech model and a test-time wait-k model (trained with full-sentence translation but tested with wait-k, denoted as train-full test-k) in Figures 4a and 4b.", "We can see that the BLEU scores increase as $k$ increases, at the sacrifice of translation delay.", "The accuracy of the SimulSpeech model is always better than that of test-time wait-k, which demonstrates the effectiveness of SimulSpeech.", "[Figure 4: Translation quality against latency in terms of (a) AP and (b) AL.]", "Comparison with Cascaded Models Finally, we implement the cascaded simultaneous speech to text translation pipeline and compare the accuracy of SimulSpeech with it under the same translation delay, by using the same $k$.", "For the cascaded method, we try all possible combinations of wait-k ASR and wait-k NMT models and report the best one.", "The accuracy of the two methods is shown in Table 3.", "It can be seen that 1) SimulSpeech outperforms the cascaded method when $k < 9$, which covers most simultaneous translation scenarios, and 2) the cascaded model only outperforms SimulSpeech at larger $k$.", "These results demonstrate the advantages of SimulSpeech specifically for the simultaneous translation scenario.", "We further plot the BLEU scores of the two methods in Figure 6.", "It can be seen that SimulSpeech with wait-3 can achieve the same BLEU score as the cascaded method under wait-5.", "To sum up, SimulSpeech achieves higher translation accuracy than the cascaded method under the same translation delay, and achieves lower translation delay at the same translation accuracy.", "We evaluate the effectiveness of each component and show the results in Table 4.", "From the BLEU scores in Rows 2 and 3, it can be seen that the translation accuracy under different wait-k can be boosted by adding the auxiliary tasks to the naive simultaneous speech to text translation model (denoted as Naive S2T).", "The Effectiveness of Data-Level Knowledge Distillation We further evaluate the effectiveness of data-level knowledge distillation (Row 4 vs. Row 3).", "The result shows that data-level knowledge distillation achieves a large accuracy improvement.", "The Effectiveness of Attention-Level Knowledge Distillation We further evaluate the effectiveness of attention-level knowledge distillation.", "We add attention-level knowledge distillation to the model (Row 5 vs. Row 3) and find that the accuracy can also be improved.",
"As a result, we combine all the techniques together (Row 6, SimulSpeech) and obtain the best BLEU scores across different wait-k, which demonstrates the effectiveness of all the techniques we propose for the training of SimulSpeech.", "The Effectiveness of the Speech Segmenter To evaluate the effectiveness of our segmenter, we compare the accuracy of the SimulSpeech model using our segmentation method and using the ground-truth segmentation, where we extract the segmentation from the ground-truth speech and corresponding transcripts using the alignment tool Gentle (https://github.com/lowerquality/gentle) and regard it as the ground-truth segmentation.", "As shown in Table 5, the BLEU scores of SimulSpeech using our segmentation method are close to those using the ground-truth segmentation, which demonstrates the effectiveness of our speech segmenter.", "Note that we cannot obtain the ground-truth segmentation during inference; therefore, the accuracy gap in Table 5 is reasonable.", "Case Analysis We further conduct case studies to demonstrate the advantages of our end-to-end translation over the previous cascaded models.", "As shown in Figure 5, the simultaneous ASR model makes a mistake which further affects the accuracy of the downstream simultaneous NMT model, while SimulSpeech does not suffer from this problem.", "[Figure 5: A case study comparing SimulSpeech with the cascaded model on the source utterance 'the first on here is the classic apple'.]", "As a result, SimulSpeech outperforms the cascaded models.", "Speech to text translation has recently been a hot research topic in the field of artificial intelligence (Berard et al., 2016; Weiss et al., 2017; Liu et al., 2019).", "Early works on speech to text translation rely on a two-stage method cascading ASR and NMT models.", "Berard et al. (2016) proposed an end-to-end speech to text translation system, which does not leverage source language text during training or inference.", "Weiss et al. (2017) further leveraged an auxiliary ASR model with a shared encoder with the speech to text model, regarding it as a multi-task problem.", "Vila et al. (2018) applied the Transformer (Vaswani et al., 2017b) architecture to this task and achieved good accuracy.", "Bansal et al. (2018) explored speech to text translation in the low-resource setting, where both data and computation are limited.", "Sperber et al. (2019) proposed a novel attention-passing model for end-to-end speech to text translation and achieved accuracy comparable to that of cascaded models.", "Simultaneous translation aims to translate sentences before they are finished (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018).", "A traditional speech to text simultaneous translation system usually first recognizes and segments the incoming speech stream based on an automatic speech recognition (ASR) system, and then translates it into text in the target language.", "Most of the previous works focus on the simultaneous machine translation part (Zheng et al., 2019): Gu et al. (2016) proposed a framework for simultaneous NMT in which an agent learns to make decisions on when to translate from its interaction with a pre-trained NMT environment.", "Ma et al. (2018) introduced a very simple but effective wait-k strategy for simultaneous NMT based on a prefix-to-prefix framework, which predicts the next target word conditioned on the partial source sequence the model has seen, instead of the full source sequence.",
"The wait-k strategy first waits for the first $k$ source words and then starts to generate target words.", "After that, once it receives a new source word, the decoder generates a new target word, until there are no more source words, at which point the translation degrades to full-sentence translation.", "In this work, we developed SimulSpeech, an end-to-end simultaneous speech to text translation system that directly translates source speech into target text concurrently.", "SimulSpeech consists of a speech encoder, a speech segmenter, and a text decoder with the wait-k strategy for simultaneous translation.", "We further introduced several techniques, including data-level and attention-level knowledge distillation, to boost the accuracy of SimulSpeech.", "Experiments on the MuST-C spoken language translation datasets demonstrate the advantages of SimulSpeech in terms of both translation accuracy and delay.", "For future work, we will design more flexible policies to achieve better translation quality and lower delay in simultaneous spoken language translation.", "We will also investigate simultaneous translation from speech in a source language to speech in a target language.", "This work was supported in part by the National Key R&D Program of China (Grant No. 2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No. 61836002), National Natural Science Foundation of China (Grant No. U1611461), and National Natural Science Foundation of China (Grant No. 61751209).", "This work was also partially funded by Microsoft Research Asia." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "result", "objective", "other", "other" ]
[ "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times.", "We address these issues by applying eigendecompo-sition to each slice matrix of a tensor to reduce the number of parameters.", "We evaluate our proposed NTN models in two tasks.", "First, the proposed models are evaluated in a knowledge graph completion task.", "Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task.", "The experimental results show that our proposed models learn better and faster than the original (R)NTNs.", "Alongside the nonlinear activation functions, linear mapping by matrix multiplication is an essential component of neural network (NN) models, as it determines the feature interaction and thus the expressiveness of models.", "In addition to the matrix-based mapping, neural tensor networks (NTNs) (Socher et al., 2013a) employ a 3-dimensional tensor to capture direct interactions among input features.", "Due to the large expressive capacity of 3D tensors, NTNs have been successful in an array of natural language processing (NLP) and machine learning tasks, including knowledge graph completion (KGC) (Socher et al., 2013a), sentiment analysis (Socher et al., 2013b), and reasoning with logical semantics (Bowman et al., 2015).", "However, since a 3D tensor has a large number of parameters, NTNs need longer time to train than other NN models.", "Moreover, the millions of parameters often make the model suffer from overfitting (Yang et al., 2015).", "techniques drastically decrease the number of parameters in an NTN without diminishing its expressiveness.", "We use the matrix decomposition techniques that are utilized for KGC in Yang et al. (2015) and Trouillon et al. (2016).", "Yang et al. (2015) imposed a constraint that a matrix in the bilinear term in their model had to be diagonal.", "As mentioned in a subsequent section, this is essentially equal to assuming that the matrix be symmetric and performing eigendecomposition.", "Trouillon et al. 
"Following these studies, we perform simultaneous diagonalization on all slice matrices of an NTN tensor.", "As a result, mapping by a 3D ($n \times n \times k$) tensor is replaced with an array of $k$ triple inner products of two input vectors and a weight vector.", "Thus, we obtain two new NTN models in which the number of parameters is reduced from $O(n^2 k)$ to $O(nk)$.", "On a KGC task, these parameter-reduced NTNs (NTN-Diag and NTN-Comp) alleviate overfitting and outperform the original NTN.", "Moreover, our proposed NTNs can learn faster than the original NTN.", "We also show that our proposed models perform better and learn faster in a recursive setting, by examining a logical reasoning task.", "We consider mapping in a neural network (NN) layer that takes two vectors as input, as in recursive neural networks.", "Recurrent neural networks also have this structure, with one input vector being the hidden state from the previous time step.", "As the mapping before activation in an NN layer, linear mapping (matrix multiplication) is commonly used: $W_1 x_1 + W_2 x_2 = [W_1, W_2] \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = W x$.", "Here, since $x_1, x_2 \in \mathbb{R}^n$ and $W_1, W_2 \in \mathbb{R}^{k \times n}$, this linear mapping is a transformation from $\mathbb{R}^{2n}$ to $\mathbb{R}^k$.", "Linear mapping, a standard component of NNs, has been applied successfully in many tasks.", "However, it cannot consider the interaction between different components of the two input vectors, which makes it less suitable for modeling complex compositional structures such as trees and graphs.", "To alleviate this problem, some models such as NTNs (Socher et al., 2013a) have explored 3D tensors to yield a more expressive mapping: $x_1^\top W^{[1:k]} x_2 = \begin{bmatrix} x_1^\top W^{[1]} x_2 \\ \vdots \\ x_1^\top W^{[k]} x_2 \end{bmatrix} = \begin{bmatrix} \mathrm{sum}(W^{[1]} \circ (x_1 \otimes x_2)) \\ \vdots \\ \mathrm{sum}(W^{[k]} \circ (x_1 \otimes x_2)) \end{bmatrix}$.", "The output of this mapping is an array of $k$ bilinear products of the form $x_1^\top W^{[i]} x_2$; thus, this is also a transformation from $\mathbb{R}^{2n}$ to $\mathbb{R}^k$.", "Each element of the output of this mapping equals the sum of the elements of $W^{[i]} \circ (x_1 \otimes x_2)$, where $\circ$ and $\otimes$ represent, respectively, the Hadamard and the outer product.", "Hence this mapping captures the direct interaction between different components (or features) of the two input vectors.", "Thanks to this expressiveness, NTNs are effective in tasks such as knowledge graph completion (Socher et al., 2013a), sentiment analysis (Socher et al., 2013b), and logical reasoning (Bowman et al., 2015).", "Although mapping by a 3D tensor provides expressiveness, it has a large number ($O(n^2 k)$) of parameters.", "Because of this, NTNs often suffer from overfitting and long training times.", "To reduce the number of parameters of a slice matrix $W^{[i]} \in \mathbb{R}^{n \times n}$ in a tensor, simple matrix decomposition (SMD) is commonly used (Bai et al., 2009).", "SMD factorizes $W^{[i]}$ into a product of two low-rank matrices $S^{[i]} \in \mathbb{R}^{n \times m}$ and $T^{[i]} \in \mathbb{R}^{m \times n}$ ($m < n$): $W^{[i]} \approx S^{[i]} T^{[i]}$. (1)", "By plugging (1) into the bilinear term $x_1^\top W^{[i]} x_2$, we obtain the approximation $x_1^\top S^{[i]} T^{[i]} x_2$.", "SMD reduces the number of parameters of $W^{[i]}$ from $n^2$ to $2nm$.", "However, the dimension $m$ of $S$ and $T$ is a hyperparameter and must be determined prior to training.",
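The two mappings can be written compactly with einsum; this sketch (ours, not the authors' code) contrasts the full tensor term with its SMD approximation and makes the parameter counts explicit.

```python
import torch

n, m, k = 100, 10, 4
x1, x2 = torch.randn(n), torch.randn(n)

W = torch.randn(k, n, n)                              # O(n^2 k) parameters
full = torch.einsum('i,kij,j->k', x1, W, x2)          # k bilinear products

S, T = torch.randn(k, n, m), torch.randn(k, m, n)     # O(nmk) parameters
approx = torch.einsum('i,kir,krj,j->k', x1, S, T, x2) # SMD approximation
```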
"This section introduces two techniques that can simultaneously diagonalize all slice matrices $W^{[1]}, \ldots, W^{[k]} \in \mathbb{R}^{n \times n}$.", "As described in Liu et al. (2017), we make use of the fact that if the matrices $V^{[1:k]}$ form a commuting family, i.e., $V^{[i]} V^{[j]} = V^{[j]} V^{[i]}$ for all $i, j \in \{1, 2, \ldots, k\}$, they can be diagonalized by a shared orthogonal or unitary matrix.", "Both techniques reduce the number of parameters of $W^{[i]}$ from $O(n^2)$ to $O(n)$.", "Many NLP datasets contain symmetric patterns.", "For example, if the binary relation (Bob, is relative of, Alice) holds in a knowledge graph, then (Alice, is relative of, Bob) should also hold in it.", "The English phrases 'dog and cat' and 'cat and dog' have identical meaning.", "For symmetric structures, we can reasonably suppose that each slice matrix $W^{[i]}$ of a 3D tensor is symmetric, because $x_1^\top W^{[i]} x_2$ must equal $x_2^\top W^{[i]} x_1$.", "When $W^{[i]} \in \mathbb{R}^{n \times n}$ is symmetric, it can be diagonalized as $W^{[i]} = O^{[i]} \bar{W}^{[i]} O^{[i]\top}$, where $O^{[i]} \in \mathbb{R}^{n \times n}$ is an orthogonal matrix and $\bar{W}^{[i]} \in \mathbb{R}^{n \times n}$ is a diagonal matrix.", "Note that the orthogonal matrix $O^{[i]}$ may not be equal to $O^{[j]}$ when $i \neq j$.", "However, if all of the slice matrices $W^{[1]}, \ldots, W^{[k]} \in \mathbb{R}^{n \times n}$ are commuting, we can diagonalize every slice matrix with the same orthogonal matrix $O$.", "By substituting $O \bar{W}^{[i]} O^\top$ for $W^{[i]}$ in the bilinear term $x_1^\top W^{[i]} x_2$, we can rewrite it as follows: $x_1^\top W^{[i]} x_2 = x_1^\top O \bar{W}^{[i]} O^\top x_2 = y_1^\top \bar{W}^{[i]} y_2 = \langle y_1, w^{[i]}, y_2 \rangle$, (2) where $y_1 = O^\top x_1$, $y_2 = O^\top x_2$, $w^{[i]} = \mathrm{diag}(\bar{W}^{[i]}) \in \mathbb{R}^n$, and $\langle a, b, c \rangle$ denotes the triple inner product defined by $\langle a, b, c \rangle = \sum_{l=1}^{n} a_l b_l c_l$.", "This reduces the number of parameters in a single slice matrix from $n^2$ to $n$.",
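A quick numerical check of Eq. (2), added here for illustration: for a symmetric slice W = O diag(w) Oᵀ, the bilinear form equals the triple inner product of the rotated inputs and w.

```python
import torch

n = 8
w = torch.randn(n)
O, _ = torch.linalg.qr(torch.randn(n, n))   # a random orthogonal matrix
W = O @ torch.diag(w) @ O.T                 # symmetric slice matrix
x1, x2 = torch.randn(n), torch.randn(n)

lhs = x1 @ W @ x2                           # bilinear form
y1, y2 = O.T @ x1, O.T @ x2
rhs = (y1 * w * y2).sum()                   # triple inner product <y1, w, y2>
assert torch.allclose(lhs, rhs, atol=1e-5)
```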
"Since most structures in NLP data are not symmetric, the symmetric matrix assumption is usually violated.", "To obtain a more expressive diagonal form, we regard each slice matrix $W^{[i]}$ as the real part of a complex matrix and consider its eigendecomposition.", "For any real matrix $W^{[i]}$, there exists a complex normal matrix $Z^{[i]}$ whose real part is equal to it: $W^{[i]} = \Re(Z^{[i]})$, where $\Re(\cdot)$ represents the operation that takes the real part of a complex number, vector, or matrix.", "Further, any complex normal matrix can be diagonalized by a unitary matrix.", "With these two properties, any real matrix $W^{[i]}$ can be diagonalized as follows (Trouillon et al., 2016): $W^{[i]} = \Re(Z^{[i]}) = \Re(U^{[i]} \bar{Z}^{[i]} U^{[i]*})$.", "Here, $U^{[i]} \in \mathbb{C}^{n \times n}$ is a unitary matrix, $\bar{Z}^{[i]} \in \mathbb{C}^{n \times n}$ is a diagonal matrix, and $U^{[i]*}$ is the conjugate transpose of $U^{[i]}$.", "To guarantee that every slice matrix can be diagonalized with the same unitary matrix $U$ instead of $U^{[i]}$, we assume that all of the normal matrices $Z^{[1]}, \ldots, Z^{[k]} \in \mathbb{C}^{n \times n}$ are commuting, as in Section 3.2.1.", "Substituting $\Re(U \bar{Z}^{[i]} U^*)$, with the same unitary matrix $U$ for all slice matrices, we can rewrite every bilinear term $x_1^\top W^{[i]} x_2$ as follows: $x_1^\top W^{[i]} x_2 = \Re(\langle y_1, w^{[i]}, y_2 \rangle) = \langle \Re(y_1), \Re(w^{[i]}), \Re(y_2) \rangle + \langle \Re(y_1), \Im(w^{[i]}), \Im(y_2) \rangle + \langle \Im(y_1), \Re(w^{[i]}), \Im(y_2) \rangle - \langle \Im(y_1), \Im(w^{[i]}), \Re(y_2) \rangle$, (3) where $y_1 = U^\top x_1$, $y_2 = U^\top x_2$, $w^{[i]} = \mathrm{diag}(\bar{Z}^{[i]}) \in \mathbb{C}^n$, and $\langle y_1, w^{[i]}, y_2 \rangle$ is the triple Hermitian inner product of $y_1$, $w^{[i]}$, and $y_2$, defined by $\langle a, b, c \rangle = \sum_{l=1}^{n} a_l b_l \bar{c}_l$.", "This technique reduces the number of parameters of each slice matrix from $n^2$ to $2n$.", "As shown on the right-hand side of Eq. (3), $\Re(\langle y_1, w^{[i]}, y_2 \rangle)$ can be replaced with three additions and a subtraction of triple inner products of real vectors.", "This section introduces the baseline and our proposed models.", "After describing them, we explain how to extend them to handle compositional structures like binary trees.", "First, we describe a standard single-layer neural network (NN) model for two vectors $x_1, x_2 \in \mathbb{R}^n$.", "The model uses a linear mapping $V \in \mathbb{R}^{k \times 2n}$ to combine the two input vectors: $f(V \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + b)$, where $b \in \mathbb{R}^k$ is a bias term and $f$ is a non-linear activation function.", "The NN model has only $(2n+1)k$ parameters and does not consider the direct interactions between $x_1$ and $x_2$.", "Socher et al. (2013a) proposed the neural tensor network (NTN) model, which uses a 3D tensor $W^{[1:k]} \in \mathbb{R}^{n \times n \times k}$ to combine two input vectors: $f(x_1^\top W^{[1:k]} x_2 + V \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + b)$.", "Unlike the standard NN model, the NTN can directly relate the two input vectors using the tensor.", "However, it has many parameters: $(n^2 + 2n + 1)k$.", "Although the NTN model has tremendous expressive power, it is extremely time-consuming to compute, since a naive 3D tensor product incurs $O(n^2 k)$ computation time.", "To overcome this weakness, Zhao et al. (2015) and Liu et al. (2015) independently introduced simple matrix decomposition (SMD) into the NTN model by replacing each slice matrix $W^{[i]}$ with its factorized approximation given by Eq. (1): $f(x_1^\top S^{[1:k]} T^{[1:k]} x_2 + V \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + b)$, where $S^{[1:k]} \in \mathbb{R}^{n \times m \times k}$ and $T^{[1:k]} \in \mathbb{R}^{m \times n \times k}$.", "When $m \ll n$, the NTN-SMD model drastically reduces the number of parameters compared with the original NTN model, from $(n^2 + 2n + 1)k$ to $(2mn + 2n + 1)k$.", "In this paper, we introduce two new NTN models, NTN-Diag and NTN-Comp, both of which reduce the number of parameters in a 3D tensor further than NTN-SMD, with little loss in the model's generalization performance.", "Table 1 summarizes the number of parameters in each model.", "NTN-Diag We replace all slice matrices $W^{[i]}$ of $W^{[1:k]}$ with the triple inner product formulation of Eq. (2), by assuming that they are symmetric and commuting.", "As a result, we derive the following new NTN formulation: $f\left(\begin{bmatrix} \langle x_1, w^{[1]}, x_2 \rangle \\ \vdots \\ \langle x_1, w^{[k]}, x_2 \rangle \end{bmatrix} + V \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + b\right)$, where $w^{[i]} \in \mathbb{R}^n$, $i \in \{1, 2, \ldots, k\}$.", "Thus, under the symmetric and commuting matrix constraints, we regard mapping by a 3D tensor as an array of $k$ triple inner products.", "The total number of parameters is just $(3n+1)k$.",
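A minimal sketch of an NTN-Diag layer (our own PyTorch illustration of the formulation above): the tensor term is an array of k triple inner products, so the whole mapping costs O(nk) instead of O(n²k).

```python
import torch

def ntn_diag(x1, x2, w, V, b):
    # x1, x2: [n]; w: [k, n]; V: [k, 2n]; b: [k]
    tensor_term = torch.einsum('i,ki,i->k', x1, w, x2)  # <x1, w^[i], x2> per slice
    return torch.tanh(tensor_term + V @ torch.cat([x1, x2]) + b)
```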
"NTN-Comp By assuming that $W^{[1]}, \ldots, W^{[k]}$ are the real parts of normal matrices forming a commuting family, we can replace each slice matrix of the tensor term in the NTN with the triple Hermitian inner product shown in Eq. (3): $f\left(\begin{bmatrix} \Re(\langle x_1, w^{[1]}, x_2 \rangle) \\ \vdots \\ \Re(\langle x_1, w^{[k]}, x_2 \rangle) \end{bmatrix} + \Re(V \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}) + b\right)$, where $x_1, x_2 \in \mathbb{C}^n$, $V \in \mathbb{C}^{k \times 2n}$, and $w^{[i]} \in \mathbb{C}^n$, $i \in \{1, 2, \ldots, k\}$.", "Similar to NTN-Diag, we regard mapping by a 3D tensor as an array of $k$ triple Hermitian inner products.", "The total number of parameters is just $(6n+1)k$.", "As is clear from its form, NTN-Diag is a special case of NTN-Comp whose vectors $x_1$, $x_2$, and $w^{[i]}$ are constrained to be real.", "We extend the above NTN models to handle compositional structures.", "As a representative of compositional structures, we consider a binary tree where each NTN layer computes a vector representation for a node by combining the two vectors of its child nodes in the lower layer.", "Except for NTN-Comp, the models implement mappings $\mathbb{R}^{2n} \to \mathbb{R}^k$, so each of their layers can receive its lower layer's output directly if $k$ equals $n$.", "Thus, these models do not have to be modified.", "However, NTN-Comp cannot receive its lower layer's output as-is, because NTN-Comp is a mapping from $\mathbb{C}^n$ to $\mathbb{R}^k$.", "To solve this problem, we set $k$ to $2n$ and treat the output $y \in \mathbb{R}^{2n}$ as the concatenation of vectors representing the real and imaginary parts of a complex vector $y' \in \mathbb{C}^n$: $\Re(y') = (y_1, \ldots, y_n)$, $\Im(y') = (y_{n+1}, \ldots, y_{2n})$.", "Note that this approach is valid because Eq. (3) can actually be defined in real vector space by transforming the complex vectors in $\mathbb{C}^n$ into real vectors in $\mathbb{R}^{2n}$.",
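Since Eq. (3) expands into real triple inner products, the NTN-Comp tensor term can be computed without complex arithmetic; the sketch below (ours) keeps the real and imaginary parts as separate real tensors, matching the R^{2n} representation used in the recursive setting.

```python
import torch

def triple(a, b, c):
    # <a, b, c> = sum_l a_l * b_l * c_l, evaluated per slice; inputs are [k, n].
    return torch.einsum('ki,ki,ki->k', a, b, c)

def ntn_comp_tensor_term(x1_re, x1_im, w_re, w_im, x2_re, x2_im):
    # Re<x1, w, x2> (third argument conjugated) expands per Eq. (3) into
    # <Re,Re,Re> + <Re,Im,Im> + <Im,Re,Im> - <Im,Im,Re>.
    k = w_re.size(0)
    e = lambda v: v.unsqueeze(0).expand(k, -1)  # broadcast inputs over slices
    return (triple(e(x1_re), w_re, e(x2_re))
            + triple(e(x1_re), w_im, e(x2_im))
            + triple(e(x1_im), w_re, e(x2_im))
            - triple(e(x1_im), w_im, e(x2_re)))
```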
"In KGC, researchers usually design a scoring function $\phi$ for a given triplet $(s, r, o)$ to judge whether it is a fact or not.", "Here $(s, r, o)$ denotes that entity $s$ is linked to entity $o$ by relation $r$.", "RESCAL (Nickel et al., 2011) uses $e_s^\top W_r e_o$ as $\phi$, where $e_s, e_o$ are entity embedding vectors and $W_r$ is an embedding matrix of relation $r$.", "This bilinear operation is effective for the task, but its computational cost is high and it suffers from overfitting.", "To overcome these problems, DistMult (Yang et al., 2015) adopts the triple inner product $\langle e_s, w_r, e_o \rangle$ as $\phi$, where $w_r$ is an embedding vector of relation $r$.", "This solves those problems, but it degrades the model's ability to capture the directionality of relations, because the scoring function of DistMult is symmetric with respect to $s$ and $o$, i.e., $\langle e_s, w_r, e_o \rangle = \langle e_o, w_r, e_s \rangle$.", "To reconcile the complexity and expressiveness of the model, ComplEx (Trouillon et al., 2016) uses complex vectors for entity and relation embeddings.", "As the scoring function $\phi$, they adopted the triple Hermitian inner product $\Re(\langle e_s, w_r, \bar{e}_o \rangle)$, where $\bar{e}_o$ denotes the complex conjugate of $e_o$.", "Since $\Re(\langle e_s, w_r, \bar{e}_o \rangle)$ is in general not symmetric in $s$ and $o$, ComplEx solves the expressiveness problem of DistMult without requiring full matrices as relation embeddings.", "We can regard DistMult as a special case of RESCAL with a symmetric matrix constraint on $W_r$.", "ComplEx is also a RESCAL variant, with $W_r$ as the real part of a normal matrix.", "Our research is based on these works, but to the best of our knowledge, no previous work applied this approach to reduce the number of parameters in a tensor.", "To give additional expressive power to standard (R)NNs, many architectures have been proposed, such as LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014), and CNN (LeCun et al., 1998).", "NTN (Socher et al., 2013a) and RNTN (Socher et al., 2013b) are other such architectures.", "However, (R)NTNs differ in that they only add 3D tensor mapping to standard neural networks.", "Thus, they can also be regarded as a powerful basic component of NNs, because 3D tensor mapping can be applied to more complicated architectures such as those examples.", "Several researchers have reduced the number of parameters of NNs by using specific parameter sharing mechanisms.", "Cheng et al. (2015) used circulant matrix mapping instead of conventional linear mapping and improved the time complexity of the matrix-vector product by using the Fast Fourier Transform (FFT).", "The circulant matrix $C(w) = \begin{bmatrix} w_1 & w_n & \cdots & w_3 & w_2 \\ w_2 & w_1 & \cdots & w_4 & w_3 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ w_{n-1} & w_{n-2} & \cdots & w_1 & w_n \\ w_n & w_{n-1} & \cdots & w_2 & w_1 \end{bmatrix}$ for $w^\top = (w_1, \ldots, w_n)$ can be factorized into $F^{-1} \mathrm{diag}(Fw) F$ with the Fourier matrix $F$.", "By assuming that each slice matrix $W^{[i]}$ of $W^{[1:k]}$ is circulant, we get the same scoring function as that in Eq. (3): $x_1^\top W^{[i]} x_2 = x_1^\top F^{-1} \mathrm{diag}(F w^{[i]}) F x_2 = \Re(\langle \hat{x}_1, \hat{w}^{[i]}, \hat{x}_2 \rangle)$, where $\hat{x}_1 = \bar{F} x_1$, $\hat{x}_2 = \bar{F} x_2$, and $\hat{w}^{[i]} = \frac{1}{n} \mathrm{diag}(F w^{[i]})$ are complex vectors in $\mathbb{C}^n$.", "In this sense, NTN-Comp is equivalent to an NTN in which the slice matrices of the 3D tensor are restricted to be circulant.", "Hayashi and Shimbo (2017) established a more detailed proof of this equivalence.", "Lu et al. (2016) employed a Toeplitz-like structured matrix, reducing the parameters of an LSTM.", "Chen et al. (2015) used a feature hashing technique to reduce the parameters of an RNN.", "Although these techniques can also be extended to reduce the number of tensor-related parameters in an NTN, the former needs FFT operations, i.e., $O(n \log n)$ computation time, and the latter's contribution is only a reduction in memory consumption.", "To evaluate their performance for link prediction on knowledge graphs, we compared our proposed methods (NTN-Diag and NTN-Comp) to baseline methods (NTN (Socher et al., 2013a) and NTN-SMD).", "Let $E$ and $R$ denote the sets of entities and relations, respectively.", "A relational triplet, or simply a triplet, $(s, r, o)$ is a triple with $s, o \in E$ and $r \in R$.", "It represents the proposition that relation $r$ holds between subject entity $s$ and object entity $o$.", "A triplet is called a fact if the proposition it denotes is true.", "A knowledge graph is a collection of such triplets, with the understanding that all its member triplets are facts.", "It is called a graph because each triplet can be regarded as an edge in a directed graph; the vertices in this graph represent entities in $E$, and each edge is labeled by a relation in $R$.", "Let $G$ be a knowledge graph, viewed as a collection of facts.", "Knowledge graph completion (KGC) is the task of predicting whether an unknown triplet $(s', r', o') \notin G$ with $s', o' \in E$, $r' \in R$ is a fact or not.", "The standard approach to KGC is to design a score function $\phi : E \times R \times E \to \mathbb{R}$ that assigns a large value when a triplet seems to be a fact.",
"Socher et al. (2013a) defined it as follows: $\phi(s, r, o) = u_r^\top f\left(e_s^\top W_r^{[1:k]} e_o + V_r \begin{bmatrix} e_s \\ e_o \end{bmatrix} + b_r\right)$.", "Here, $e_s, e_o \in \mathbb{R}^n$ are entity embeddings, and $W_r$, $V_r$, $b_r$, $u_r$ are parameters for each relation $r$.", "$u_r$ is a $k$-dimensional vector that maps the output of $f$ in $\mathbb{R}^k$ to a score in $\mathbb{R}$.", "$f$ is the hyperbolic tangent.", "To compare the performance of the baselines and the proposed models, we change the mapping before the activation.", "For NTN-SMD, we change the term $e_s^\top W_r^{[1:k]} e_o$ to $e_s^\top S_r^{[1:k]} T_r^{[1:k]} e_o$.", "To apply NTN-Diag and NTN-Comp in this model, we replace the same term with the triple inner product forms of Eq. (2) and Eq. (3), respectively.", "The loss function used to train the models is shown below: $\sum_{i=1}^{N} \sum_{c=1}^{C} \max\left(0,\, 1 - \phi(T^{(i)}) + \phi(T_c^{(i)})\right) + \lambda \|\Theta\|_2^2$, where $\lambda \|\Theta\|_2^2$ is an L2 regularization term, $T^{(i)}$ denotes the $i$-th example of training data of size $N$, and $T_c^{(i)}$ is one of $C$ randomly sampled negative examples for the $i$-th training example.", "We generated negative samples for a triplet $(s, r, o)$ by corrupting its subject or object entity.", "We used the WordNet (WN18) and Freebase (FB15k) datasets to verify the benefits of our proposed methods.", "The dataset statistics are given in Table 2.", "We selected hyperparameters based on Socher et al. (2013a) and Yang et al. (2015): for all of the models, the size of mini-batches was set to 1000, the dimensionality of the entity vectors to $d = 100$, and the regularization parameter $\lambda$ to 0.0001; the tensor slice size was set to $k = 4$ for all models except NTN, for which we also tested $k = 1$ to see the influence of the slice size on performance.", "We performed 300 epochs of training on WordNet and 100 on Freebase using Adagrad (Duchi et al., 2011), with the initial learning rate set to 0.1.", "For evaluation, we removed the subject or object entity of each test example and then replaced it with each of the entities in $E$.", "We computed the scores of these corrupted triplets and ranked them in descending order of score.", "We report results collected in the filtered and raw settings.", "In the filtered setting, given a test example $(s, r, o)$, we remove from the ranking all the other positive triplets that appear in the training, validation, or test dataset, whereas the raw metrics do not remove these triplets.", "Experimental results are shown in Table 3.",

Table 3: Mean Reciprocal Rank (MRR) and Hits@n for the models tested on WN18 and FB15k.

| Model | WN18 MRR (Filter) | WN18 MRR (Raw) | WN18 Hits@1 | WN18 Hits@3 | WN18 Hits@10 | FB15k MRR (Filter) | FB15k MRR (Raw) | FB15k Hits@1 | FB15k Hits@3 | FB15k Hits@10 |
|---|---|---|---|---|---|---|---|---|---|---|
| NN | 0.111 | 0.106 | 7.0 | 11.7 | 18.3 | 0.259 | 0.165 | 17.9 | 28.1 | 41.7 |
| NTN (k=1) | 0.740 | 0.512 | 67.6 | 78.4 | 85.2 | 0.347 | 0.188 | 24.1 | 39.3 | 55.2 |
| NTN (k=4) | 0.754 | 0.530 | 69.3 | 79.5 | 86.3 | 0.380 | 0.198 | 27.1 | 43.0 | 59.2 |
| NTN-SMD (m=1) | 0.243 | 0.216 | 15.9 | 26.1 | 40.9 | 0.278 | 0.172 | 19.3 | 30.1 | 44.7 |
| NTN-SMD (m=2) | 0.224 | 0.199 | 15.1 | 23.8 | 37.2 | 0.298 | 0.177 | 20.7 | 32.7 | 47.8 |
| NTN-SMD (m=3) | 0.299 | 0.255 | 20.4 | 32.4 | 49.2 | 0.312 | 0.183 | 21.7 | 34.5 | 49.9 |
| NTN-SMD (m=10) | 0.533 | 0.413 | 42.2 | 59.4 | 74.5 | 0.333 | 0.188 | 22.8 | 37.5 | 53.8 |
| NTN-SMD (m=25) | 0.618 | 0.463 | 52.1 | 67.8 | 80.0 | 0.341 | 0.187 | 23.2 | 38.6 | 55.5 |
| NTN-Diag | 0.824 | 0.590 | 74.8 | 89.6 | 92.7 | 0.443 | 0.238 | 31.5 | 51.2 | 68.5 |
| NTN-Comp | 0.857 | 0.610 | 80.1 | 90.9 | 93.1 | 0.490 | 0.246 | 36.3 | 56.7 | 71.9 |
| DistMult | 0.822 | 0.532 | 72.8 | 91.4 | 93.6 | 0.654 | 0.242 | 54.6 | 73.3 | 82.4 |
| ComplEx | 0.941 | 0.587 | 93.6 | 94.5 | 94.7 | 0.692 | 0.242 | 59.9 | 75.9 | 84.0 |
"We observe the following.", "The performance of NN and the NTNs differs considerably; apparently, NN is inadequate for this task.", "Comparing the results of NTNs with different slice sizes, we see that $k = 4$ performs better than $k = 1$.", "The NTN-SMD models perform better than NN but are all inferior to the NTNs, although their results improve as $m$ (the rank of the decomposed matrices) increases.", "NTN-Diag achieved better results than NTN, even though it has far fewer parameters than NTN and the datasets contain many non-symmetric triplets.", "This demonstrates that NTN-Diag solves the overfitting problem of NTN without sacrificing expressive power.", "NTN-Diag also has fewer parameters than the smallest ($m = 1$) NTN-SMD.", "Thus, we conclude that NTN-Diag is a better alternative to NTN than NTN-SMD is, in terms of both accuracy and computational cost.", "NTN-Comp outperformed NTN-Diag, showing that its more flexible constraint on the matrices yields additional expressiveness.", "However, NTN-Diag and NTN-Comp do not exceed DistMult and ComplEx, respectively, on almost all measures.", "To validate the performance of our proposed models in a recursive neural network setting, we experimentally tested them by having them solve a semantic compositionality problem in logic.", "This task definition basically follows Bowman et al. (2015): given a pair of artificially generated propositional logic formulas, classify the relation between the formulas into one of the seven basic semantic relations of natural logic (MacCartney and Manning, 2009).", "Table 5 shows these seven relation types.", "The formulas consist of propositional variables, negation, and the conjunction and disjunction connectives.", "Although Bowman et al. (2015) generated formulas with no constraint on their form, we restricted them to disjunctive normal form (DNF) or conjunctive normal form (CNF) (Table 4).", "[Table 4: Conjunctive normal form $\bigwedge_{i=1}^{m} \bigvee_{j=1}^{n_i} A_{ij}$; disjunctive normal form $\bigvee_{i=1}^{m} \bigwedge_{j=1}^{n_i} A_{ij}$.]", "[Table 6: Short examples of the types of formulas and their relations in the datasets.]", "Recall that any propositional formula can be transformed into these forms.", "Following Bowman et al. (2015), we constructed a model that infers the relations between formula pairs, as described in Table 6.", "The model consists of two layers: a composition layer and a comparison layer (Figure 1); a sketch of the composition step is given after this paragraph.", "The composition layer outputs the embeddings of both the left and the right formula using recursive neural networks.", "Subsequently, the comparison layer compares the two embeddings using a single-layer neural network, and a softmax classifier receives its output.", "In the composition layer, we set different parameters for the 'and' and 'or' operations.", "As a loss function, we used cross entropy with L2 regularization; we apply the NTNs of Section 4 to the comparison layer and use their recursive (RNTN) counterparts as the composition layer.",
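Here is the promised sketch of the composition step (our own illustration): formula embeddings are built bottom-up over the parse tree with an RNTN-Diag cell, using separate parameters for 'and' and 'or'; negation is omitted for brevity.

```python
import torch

def rntn_diag_compose(x1, x2, w, V, b):
    # One RNTN-Diag cell; here the output size k equals the embedding size n.
    tensor_term = torch.einsum('i,ki,i->k', x1, w, x2)
    return torch.tanh(tensor_term + V @ torch.cat([x1, x2]) + b)

def embed(tree, params, leaf_emb):
    # tree: a variable name, or a tuple (op, left, right) with op in {'and', 'or'}.
    if isinstance(tree, str):
        return leaf_emb[tree]
    op, left, right = tree
    w, V, b = params[op]                 # separate parameters per connective
    return rntn_diag_compose(embed(left, params, leaf_emb),
                             embed(right, params, leaf_emb), w, V, b)
```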
We obtained 62,589 training examples, 13,413 validation examples, and 55,150 test examples.", "Each formula in the training and validation examples contains up to four logical operators, whereas those in the test examples have up to 12 logical operators.", "[Table 7: test accuracies for each number of logical operators (1-12) and their average.]", "Every formula consists of up to four variables taken from six propositional variables that are shared among all the examples.", "Hyperparameters and optimization are based on Bowman et al. (2015): the embedding size is d = 25 (d = 45 for RNN), the output size of the comparison layer is k = 75, and we used AdaDelta (Zeiler, 2012) as the optimizer.", "We searched for the best coefficient $\lambda$ of L2 regularization in {0.0001, 0.0003, 0.0005, 0.0007, 0.0009, 0.001}, whereas Bowman et al. (2015) set it to 0.001 for RNN and 0.0003 for RNTN.", "The results are shown in Table 7.", "From the table, we observe the following: as with KGC, the large difference in performance between RNN and RNTN suggests that this logical reasoning task requires feature interactions to be captured.", "(Footnote 1: Bowman (2016) also evaluated TreeLSTM, but its advantage over RNN was unclear in their experiment; for that reason, we did not test TreeLSTM in this paper.)", "RNTN-Diag achieved the best accuracy except for Tests 2 and 12 and outperformed RNTN except for Test 2.", "This is not surprising, because both and and or are symmetric: p1 and p2 equals p2 and p1.", "This matches the tensor term in RNTN-Diag, which is symmetric with respect to x1 and x2.", "RNTN-Comp was the second best except for Tests 1-3 and 10-12.", "For all tests, its accuracy was comparable with or superior to that of RNTN.", "RNTN-SMD (m = 1) was inferior to RNTN for most test sets, although some good results were observed with m = 1, 2, 3 on Tests 11 and 12.", "Indeed, except for Tests 9-12, RNTN-SMD (m = 1) was inferior even to RNN, despite the larger number of parameters in RNTN-SMD.", "RNTN-SMD (m = 2) obtained better results than m = 1, but it is still worse than RNTN except for Tests 10-12.", "Further increases in m (m = 4, 8, 16) worsened the accuracy despite the increase in the number of parameters.", "We also evaluated the stability of the models over different trials and hyperparameters.", "Table 8 shows the best average accuracy for each compared model (among all the tested $\lambda$ values) on the validation set.", "The parenthesized figures (in the rightmost column) show the standard deviation over the five independent trials used for computing the average, i.e., all five trials used the same $\lambda$ value, the one that achieved the best average accuracy.", "We see that RNTN-SMDs have larger standard deviations than RNTN, RNTN-Diag, and RNTN-Comp.", "This indicates that RNTN-SMD is a less reliable model.", "RNTN-SMDs are also unstable not only within the same $\lambda$ but also between different $\lambda$s.", "Figure 2 describes how accuracies are impacted by $\lambda$.", "The top graph shows validation accuracies for different $\lambda$ values.", "RNTN, RNTN-Diag, and RNTN-Comp are stable, whereas RNN and RNTN-SMDs have steep drops.", "The bottom one describes the accuracies for Test 12.", "This also shows that RNTN-SMDs are unstable and that RNTN-Diag achieves distinctive performance.", "Finally, Figure 3 shows that training times increase quadratically with the dimension for RNTN, which has O(n²k) parameters, but not for our methods, which have only O(nk) parameters.
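The O(n²k) versus O(nk) claim for the tensor term rests on eigendecomposition of the constrained slice matrices. A small NumPy check of the symmetric case, with an arbitrary random matrix standing in for a learned slice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                      # a symmetric slice matrix
e_s, e_o = rng.normal(size=n), rng.normal(size=n)

# W = Q diag(lam) Q^T: absorbing Q into the embeddings leaves only the
# n diagonal entries per slice as parameters.
lam, Q = np.linalg.eigh(W)
full = e_s @ W @ e_o                   # O(n^2) bilinear form
diag = (Q.T @ e_s) @ (lam * (Q.T @ e_o))   # O(n) diagonal form
print(np.allclose(full, diag))         # True
```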
We proposed two new parameter reduction methods for the tensors in NTNs.", "The first method constrains the slice matrices to be symmetric, and the second assumes them to be normal matrices.", "In both methods, the number of 3D tensor parameters is reduced from O(n²k) to O(nk) after the constrained matrices are eigendecomposed.", "[Figure 3: Training times of the models.]", "By removing the tensor's surplus parameters, our methods learn better and faster, as was shown in the experiments.", "Future work will test the versatility of our proposals, RNTN-Diag and RNTN-Comp, on other tasks that deal with data sets exhibiting various structures." ]
[ "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "result", "abstain" ]
[ "Learning multi-hop reasoning has been a key challenge for reading comprehension models, leading to the design of datasets that explicitly focus on it.", "Ideally, a model should not be able to perform well on a multi-hop question answering task without doing multi-hop reasoning.", "In this paper, we investigate two recently proposed datasets, WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018).", "First, we explore sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets.", "Furthermore, we find spurious correlations in the unmasked version of WikiHop, which make it easy to achieve high performance considering only the questions and answers.", "Finally, we investigate one key difference between these datasets, namely span-based vs. multiple-choice formulations of the QA task.", "Multiple-choice versions of both datasets can be easily gamed, and two models we examine only marginally exceed a baseline in this setting.", "Overall, while these datasets are useful testbeds, high-performing models may not be learning as much multi-hop reasoning as previously thought.", "Question answering from text (Richardson et al., 2013; Hill et al., 2015; Hermann et al., 2015; Ra-jpurkar et al., 2016) is a key challenge problem for NLP that tests whether models can extract information based on a query.", "However, even sophisticated models that perform well on QA benchmarks (Seo et al., 2017; Shen et al., 2017; Yu et al., 2018) may only be doing shallow pattern matching of the question against the supporting passage (Weissenborn et al., 2017).", "More recent work (Ku-mar et al., 2016; Joshi et al., 2017; Welbl et al., 2018) has emphasized gathering information from different parts of a passage to answer the question, leading to a number of models designed to do multi-hop reasoning .", "Two recent large-scale datasets have been specifically designed to test multi-hop reasoning: WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018).", "In this paper, we seek to answer two main questions.", "First, although the two datasets are explicitly constructed for multi-hop reasoning, do models really need to do multi-hop reasoning to do well on them?", "Recent work has shown that large-scale QA datasets often do not exhibit their advertised properties (Chen et al., 2016; Kaushik and Lipton, 2018).", "We devise a test setting to see whether multi-hop reasoning is necessary: can a model which treats each sentence independently select the sentence containing the answer?", "This provides a rough estimate of the fraction of questions solvable by a non-multi-hop system.", "Our results show that more than half of the questions in WikiHop and HotpotQA do not require multihop reasoning to solve.", "Surprisingly, we find that a simple baseline which ignores the passage and only uses the question and answer can achieve strong results on WikiHop and a modified version of HotpotQA, further confirming this view.", "Second, we study the nature of the supervision on the two datasets.", "One critical difference is that HotpotQA is span-based (the answer is a span of the passage) while WikiHop is multiple-choice.", "How does this difference affect learning and evaluation of multi-hop reasoning systems?", "We show that a multiple-choice version of HotpotQA is vulnerable to the same baseline that performs well on WikiHop, showing that this distinction may be important from an evaluation standpoint.", 
"Furthermore, we show that a state-of-the-art model, BiDAF++, trained on span-based HotpotQA and adapted to the multiple-choice setting outperforms the same model trained natively on the multiple-choice setting.", "However, even in the span-based setting, the high performance of the sentence-factored models raises questions about whether multi-hop reasoning is being learned.", "Our conclusions are as follows: (1) Many examples in both WikiHop and HotpotQA do not require multi-hop reasoning to solve, as the sentence-factored model can find the answers.", "(2) On WikiHop and a multiple-choice version of HotpotQA, a no context baseline does very well.", "(3) Span-based supervision provides a harder testbed than multiple choice by having more answers to choose from, but given the strong performance of the sentence-factored models, it is unclear whether any of the proposed models are doing a good job at multi-hop reasoning in any setting.", "WikiHop Welbl et al. (2018) introduced this English dataset specially designed for text understanding across multiple documents.", "The dataset consists of 40k+ questions, answers, and passages, where each passage consists of several documents collected from Wikipedia.", "Questions are posed as a query of a relation r followed by a head entity h , with the task being to find the tail entity t from a set of entity candidates E .", "Annotators followed links between documents and were required to use multiple documents to get the answer.", "HotpotQA Yang et al. (2018) proposed a new dataset with 113k English Wikipedia-based question-answer pairs.", "The questions are diverse, falling into several categories, but all require finding and reasoning over multiple supporting documents to answer.", "Models should choose answers by selecting variable-length spans from these documents.", "Sentences relevant to finding the answer are annotated in the dataset as supporting facts so models can use these at training time as well.", "In this section, we seek to answer whether multihop reasoning is really needed to solve these two multi-hop datasets.", "If a question requires a multi-hop model, then we should not be able to figure out the answer by only looking at the question and each sentence separately.", "Based on this idea, we propose a sentence-factored modeling setting, where Method Random Factored Factored BiDAF WikiHop 6.5 60.9 66.1 HotpotQA 5.4 45.4 57.2 SQuAD 22.1 70.0 88.0 Table 1: The accuracy of our proposed sentence-factored models on identifying answer location in the development sets of WikiHop, HotpotQA and SQuAD.", "a model must predict which sentence contains the answer but must score each sentence independently, i.e., without using information from other sentences in this process.", "Identifying the presence of the answer is generally easier than predicting the answer directly, particularly if a sentence is complicated, and is still sufficient to provide a bound on how strongly multi-hop reasoning is required.", "Figure 1 shows a typical example from these datasets, where identifying the answer ( Delhi ) requires bridging to an entity not mentioned in the question.", "Simple Factored Model We encode each passage sentence s i and the question q into a contextual representation h s i and h q using a bidirectional GRU (Chung et al., 2014).", "Then, S i = h (cid:62) s i W h q ; that is, compute a bilinear product of these representations with trainable weights W to get the score of the i th sentence.", "Finally, let p i = softmax i ( S i ) ; softmax over 
"We maximize the marginal log probability of picking a sentence containing the correct answer: $\log\big(\sum_{i: s_i \in s^*} p_i\big)$, where $s^*$ is the set of sentences containing the answer.", "During evaluation, we pick the sentence $s$ with the highest score and treat it as correct if it contains the answer.", "Factored BiDAF We encode the question and each sentence separately using bi-GRUs.", "Then, we generate a question-aware token representation for each token of a sentence using a co-attention layer (Seo et al., 2017).", "Finally, we max-pool over each sentence to get the sentence representation and feed those to an FFNN to compute the sentence score.", "Training and inference are the same as for the simple model.", "We run this test on both datasets as well as SQuAD (Rajpurkar et al., 2016), where multi-hop reasoning is only needed in a few questions.", "[Figure 1: An example from the HotpotQA dev set. Question: 'The Oberoi family is part of a hotel company that has a head office in what city?' Passage: 'The Oberoi family is an Indian family that is famous for its involvement in hotels, namely through The Oberoi Group. ...']", "Results in Table 1 indicate that, although these datasets were intentionally created for multi-hop reasoning, for more than half of the questions in WikiHop and HotpotQA we can figure out where the answer is without doing multi-hop reasoning.", "This result is initially surprising, but one reason it may be possible is suggested by the example from HotpotQA shown in Figure 1.", "We can see that the model could easily figure out the answer sentence without looking at the bridging entities, using lexical cues alone.", "This observation is also in accordance with the work of Jansen (2018), which demonstrates that high performance for a simple baseline can be achieved in cases when passages have increasing lexical overlap with the question.", "We note that this method potentially overestimates the performance of a non-multi-hop model on HotpotQA, since there are some examples where many plausible answers are in the same sentence and require other context to resolve.", "However, these still form a minority in the dataset (see Table 3 of Yang et al. (2018)).", "The results of the previous section show that a model can identify correspondences between questions and answer sentences.", "One other pair of correlations we can study is suggested by the work of Kaushik and Lipton (2018), namely examining question-answer correlations independent of the passage.", "We construct a no-context baseline to verify whether it is possible to pick the correct answer without consulting the passage.", "[Figure 2: An example of a question and candidates from WikiHop. Query: employer of Gilberto Aceves Navarro; answer: National Autonomous University of Mexico; other candidates: Arte, Capital, Life, Monterrey, School, Time. Question and candidates are encoded with bi-GRUs and scored with a bilinear dot product.]", "In a similar fashion to the factored model, we encode the query $q$ and each answer candidate $c_i$ using a bi-GRU and once again compute a bilinear product between them to get the scores over candidates, making no reference to the document.", "Results of this model on the multiple-choice WikiHop dataset are shown in Table 2.
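As a concrete reading of the simple factored model and its marginal objective, here is a minimal PyTorch sketch; the pooling choice, dimensions, and class names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FactoredScorer(nn.Module):
    """Scores each sentence independently against the question
    with a bilinear product, as in the simple factored model."""
    def __init__(self, vocab_size, dim=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.W = nn.Parameter(torch.randn(2 * dim, 2 * dim) * 0.01)

    def encode(self, ids):                       # (1, seq_len) -> (2*dim,)
        out, _ = self.gru(self.emb(ids))
        return out[0, -1]                        # last-state pooling (assumed)

    def forward(self, question, sentences):
        h_q = self.encode(question)
        scores = torch.stack([self.encode(s) @ self.W @ h_q for s in sentences])
        return torch.log_softmax(scores, dim=0)  # log p_i over sentences

def marginal_nll(log_p, answer_idx):
    """Negative marginal log-likelihood over the answer-bearing set s*."""
    return -torch.logsumexp(log_p[answer_idx], dim=0)

scorer = FactoredScorer(vocab_size=1000)
q = torch.randint(0, 1000, (1, 8))
sents = [torch.randint(0, 1000, (1, 12)) for _ in range(5)]
loss = marginal_nll(scorer(q, sents), torch.tensor([1, 3]))
loss.backward()
```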
"Surprisingly, the no-context baseline achieves high performance, comparable to some recently published systems, showing that WikiHop is actually possible to solve reasonably well without using the document at all.", "One possible reason for this is that this model can filter possible answers based on the expected answer type (Sugawara et al., 2018), as shown in the example of Figure 2, or perhaps capture other correlations between training and test.", "This model substantially outperforms the unlearned baseline reported in the WikiHop paper (Welbl et al., 2018) (38.8%) as well as the BiDAF (Seo et al., 2017) results reported there (42.9%).", "The no-context model indicates that having multiple-choice questions may provide an avenue for a dataset to be gamed.", "In order to investigate the difference between multiple-choice and span supervision while controlling for other aspects of dataset difficulty, we first recast each dataset in the other's framework, then investigate the performance of two models in each of these settings.", "
Table 3: The performance of different models on the dev sets of HotpotQA-MC and WikiHop-MC (accuracy).
Model           | HotpotQA-MC | WikiHop-MC
NoContext       |    68.01    |   59.70
MC-BiDAF++      |    70.01    |   61.32
MC-MemNet       |    68.75    |   61.80
Span2MC-BiDAF++ |    76.01    |   59.85
", "To modify HotpotQA to be multiple-choice, we randomly select 9 entities from all of the documents as distractors and add the answer to make a 10-choice candidate set.", "To modify WikiHop to be span-based, we concatenate all documents and treat the first appearance of the answer mention as the gold span for training.", "Any answer occurrence is treated as correct for evaluation.", "MemNet Memory networks (Weston et al., 2015) define a generic model class which can gather information from different parts of the passage.", "Kumar et al. (2016) and Miller et al. (2016) have demonstrated its effectiveness in certain multi-hop settings.", "These models process a document over several timesteps.", "On the $i$-th step, the model takes a question representation $q_i$, attends to the context representation $p$, gets an attention distribution $\alpha_i$, computes a new memory cell value $m_i = \sum_j \alpha_{i,j}\, p_j$, then forms an updated $q_{i+1} = f(m_i, q_i)$.", "The final memory cell $m_T$ is used to compute a score $s_j = g(m_T, c_j)$ with the $j$-th candidate representation $c_j$.", "We modify this architecture slightly using a standard hierarchical attention module (Li et al., 2015).", "We can also modify this architecture to predict an answer span: we use the memory cell $m_T$ of the last step and take a bilinear product with the context representation $p$ to compute a distribution over start points, $P_{start} = \mathrm{softmax}(p\, W_{start}\, m_T)$, and a distribution over end points, $P_{end} = \mathrm{softmax}(p\, W_{end}\, m_T)$, where $W_{start}$ and $W_{end}$ are two parameter matrices to be learned.", "We call this Span-MemNet.", "BiDAF++ Recently proposed by Clark and Gardner (2018), this is a high-performing model on SQuAD.", "It combines bi-directional attention flow (Seo et al., 2017) and self-attention mechanisms.", "
Table 4: The performance of different models on the dev sets of HotpotQA-Span and WikiHop-Span (EM / F1).
Model              | HotpotQA-Span EM |  F1   | WikiHop-Span EM |  F1
BiDAF++ (Yang+ 18) |      42.79       | 56.19 |        -        |   -
Span-BiDAF++       |      42.45       | 56.46 |      24.23      | 46.13
Span-MemNet        |      18.75       | 26.11 |      13.54      | 19.23
", "We use the implementation described in Yang et al. (2018).
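The memory-network update can be stated in a few lines. Below is a minimal PyTorch sketch of one hop, with a GRU cell standing in for the update function f and random tensors for the encodings; all of these choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MemNetHop(nn.Module):
    """One hop: attend over the context p with query q, read a memory
    vector m_i = sum_j alpha_j p_j, then update q_{i+1} = f(m_i, q_i)."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.GRUCell(dim, dim)

    def forward(self, q, p):                     # q: (dim,), p: (len, dim)
        alpha = torch.softmax(p @ q, dim=0)      # attention over positions
        m = alpha @ p
        q_next = self.f(m.unsqueeze(0), q.unsqueeze(0)).squeeze(0)
        return q_next, m

dim = 64
hop = MemNetHop(dim)
q, p = torch.randn(dim), torch.randn(20, dim)
for _ in range(3):                               # T = 3 hops
    q, m = hop(q, p)
W_start = torch.randn(dim, dim) * 0.01
p_start = torch.softmax(p @ W_start @ m, dim=0)  # span-start distribution sketch
```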
(2018).", "We can modify this model for the multiple-choice setting as well.", "Specifically, we use the start P start and end P end distribution to do a weighted sum over the context p to get a summarized representation D start = (cid:80) P start i p i , D end = (cid:80) P end i p i of the context.", "Then we concatenate them to do a bilinear dot product with each candidate representation to get the answer score as we described for MemNet.", "We call this model MC-BiDAF++.", "Table 3 and Table 4 show our results in the two settings.", "As a baseline on multiple-choice HotpotQA, we also test the no-context baseline, which achieves an accuracy of 68.01%, around 10% absolute higher than on WikiHop.", "Our candidates were randomly chosen, so this setting may not be quite as challenging as a well-constructed multiple-choice dataset.", "From Table 3 and Table 4 we draw the following conclusions.", "When trained and tested on multiple-choice datasets, our models do not learn multi-hop reasoning.", "Comparing MC-BiDAF++ and MC-MemNet on the multiple-choice setting of both datasets as shown in Table 3, the models appear to have similar capabilities to learn multi-hop reasoning.", "However, looking at the no-context baseline for comparison, we find that it is only around 2% lower than the two relatively more complex models.", "This indicates that much of the performance is achieved by cheating through the correlation between the candidates and question/context.", "Surprisingly, this is true even for HotpotQA, which seems stronger based on the analysis in Table", "1. Span-based data is less hackable, but models still may not be doing multi-hop reasoning.", "We then compare the results of Span-BiDAF++ E x ac t M a t c h S c o r e 30 40 50 60 70 80 Number of Options 10 20 30 40 50 NoContext BiDAF++ MemNet Span-BiDAF++ Figure 3: Performance of different options on HotpotQA-MC.", "and Span-MemNet on the span-based settings of both datasets, which are substantially different from the multiple-choice setting as shown in Table", "4. BiDAF++ substantially outperforms the MemNet on both datasets, indicating that BiDAF++ is a stronger model for multi-hop reasoning, despite being less explicitly designed for this task.", "However, this model still underperforms the Factored BiDAF model, indicating that it could just be doing strong single-sentence reasoning.", "Adding more options does not qualitatively change the multiple choice setting.", "The span-based model requires dealing with a much larger output space than the multiple-choice setting.", "To test the effects of this, we conduct another experiment by making more spurious options on HotpotQA-MC using the method described in Section", "4. The results are shown in Figure", "3. 
"As we increase the number of options, we can see that the performance of all models drops.", "However, even with more options, the no-context baseline can still achieve performance comparable to the other two more complex models, which indicates that these models still aren't learning multi-hop reasoning even in this strengthened setting.", "Span-based training data is more powerful.", "To further understand the two different supervision signals, we conduct another experiment where we train using span-based supervision and evaluate in the multiple-choice setting.", "Specifically, during evaluation, we select all document spans that map onto some answer candidate, then take the max over the scores of all spans to pick the predicted answer candidate.", "The multiple-choice options therefore filter the span model's predictions.", "From the results in Table 3, we can see that Span2MC-BiDAF++ achieves higher performance compared to MC-BiDAF++ on HotpotQA and nearly comparable performance on WikiHop, even with random span selection during training.", "This shows that with span-based supervision, the model can learn at least as much as with multiple-choice supervision while avoiding cheating through learned question-candidate correspondences.", "There exist several other multi-hop reasoning datasets, including WorldTree (Jansen et al., 2018), OpenBookQA (Mihaylov et al., 2018), and MultiRC (Khashabi et al., 2018).", "These datasets are more complex to analyze, since the answers may not appear directly in the passage and may simply be entailed by passage content.", "We leave a detailed investigation of these for future work.", "For researchers working on the problem of multi-hop reasoning, we think the following points should be considered: (1) prefer models using span-based supervision, to avoid cheating by using the extra candidate information; (2) if using multiple-choice supervision, check the no-context baseline to see whether there are strong correlations between questions and candidates; (3) when constructing a multi-hop oriented dataset, it would be best to do an adversarial test using a sentence-factored model to see whether multi-hop reasoning is really needed.", "Both HotpotQA and WikiHop contain good examples for evaluating multi-hop reasoning, but this evaluation is clouded by the presence of easily solvable examples, which can confuse the learning process as well.", "This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a Bloomberg Data Science Grant, and an equipment grant from NVIDIA.", "The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.", "Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation.", "Thanks as well to the anonymous reviewers for their helpful comments." ]
[ "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "method", "abstain", "abstain", "result", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "We study the problem of learning a named entity recognition (NER) tagger using noisy labels from multiple weak supervision sources.", "Though cheap to obtain, the labels from weak supervision sources are often incomplete, inaccurate, and contradictory, making it difficult to learn an accurate NER model.", "To address this challenge, we propose a conditional hidden Markov model (CHMM), which can effectively infer true labels from multi-source noisy labels in an unsupervised way.", "CHMM enhances the classic hidden Markov model with the contextual representation power of pretrained language models.", "Specifically, CHMM learns token-wise transition and emission probabilities from the BERT embeddings of the input tokens to infer the latent true labels from noisy observations.", "We further refine CHMM with an alternate-training approach (CHMM-ALT).", "It fine-tunes a BERT-NER model with the labels inferred by CHMM, and this BERT-NER's output is regarded as an additional weak source to train the CHMM in return.", "Experiments on four NER benchmarks from various domains show that our method outperforms state-of-the-art weakly supervised NER models by wide margins.", "Named entity recognition (NER), which aims to identify named entities from unstructured text, is an information extraction task fundamental to many downstream applications such as event detection (Li et al., 2012), relationship extraction (Bach and Badaskar, 2007), and question answering (Khalid et al., 2008).", "Existing NER models are typically supervised by a large number of training sequences, each pre-annotated with token-level labels.", "In practice, however, obtaining such labels could be prohibitively expensive.", "On the other hand, many domains have various knowledge resources such as knowledge bases, domain-specific dictionaries, or labeling rules provided by domain experts (Far-makiotou et al., 2000; Nadeau and Sekine, 2007).", "These resources can be used to match a corpus and quickly create large-scale noisy training data for NER from multiple views.", "Learning an NER model from multiple weak supervision sources is a challenging problem.", "While there are works on distantly supervised NER that use only knowledge bases as weak supervision (Mintz et al., 2009; Shang et al., 2018; Cao et al., 2019; Liang et al., 2020), they cannot leverage complementary information from multiple annotation sources.", "To handle multi-source weak supervision, several recent works (Nguyen et al., 2017; Safranchik et al., 2020; Lison et al., 2020) leverage the hidden Markov model (HMM), by modeling true labels as hidden variables and inferring them from the observed noisy labels through unsupervised learning.", "Though principled, these models fall short in capturing token semantics and context information, as they either model input tokens as one-hot observations (Nguyen et al., 2017) or do not model them at all (Safranchik et al., 2020; Lison et al., 2020).", "Moreover, the flexibility of HMM is limited as its transitions and emissions remain constant over time steps, whereas in practice they should depend on the input words.", "We propose the conditional hidden Markov model (CHMM) to infer true NER labels from multi-source weak annotations.", "CHMM conditions the HMM training and inference on BERT by predicting token-wise transition and emission probabilities from the BERT embeddings.", "These token-wise probabilities are more flexible than HMM's constant counterpart in modeling how the true labels should evolve according to the 
input tokens.", "The context representation ability they inherit from BERT also relieves the Markov constraint and expands HMM's context-awareness.", "Further, we integrate CHMM with a supervised BERT-based NER mode with an alternate-training method (CHMM-ALT).", "It fine-tunes BERT-NER with the denoised labels generated by CHMM.", "Taking advantage of the pre-trained knowledge contained in BERT, this process aims to refine the denoised labels by discovering the entity patterns neglected by all of the weak sources.", "The fine-tuned BERT-NER serves as an additional supervision source, whose output is combined with other weak labels for the next round of CHMM training.", "CHMM-ALT trains CHMM and BERT-NER alternately until the result is optimized.", "Our contributions include: A multi-source label aggregator CHMM with token-wise transition and emission probabilities for aggregating multiple sets of NER labels from different weak labeling sources.", "An alternate-training method CHMM-ALT that trains CHMM and BERT-NER in turn utilizing each other's outputs for multiple loops to optimize the multi-source weakly supervised NER performance.", "A comprehensive evaluation on four NER benchmarks from different domains demonstrates that CHMM-ALT achieves a 4 .", "83 average F1 score improvement over the strongest baseline models.", "The code and data used in this work are available at github.com/Yinghao-Li/CHMM-ALT.", "Weakly Supervised NER There have been works that train NER models with different weak supervision approaches.", "Distant supervision , a spe-cific type of weak supervision, generates training labels from knowledge bases (Mintz et al., 2009; Yang et al., 2018; Shang et al., 2018; Cao et al., 2019; Liang et al., 2020).", "But such a method is limited to one source and falls short of acquiring supplementary annotations from other available resources.", "Other works adopt multiple additional labeling sources, such as heuristic functions that depend on lexical features, word patterns, or document information (Nadeau and Sekine, 2007; Rat-ner et al., 2016), and unify their results through multi-source label denoising .", "Several multi-source weakly supervised learning approaches are designed for sentence classification (Ratner et al., 2017, 2019; Ren et al., 2020; Yu et al., 2020).", "Although these methods can be adapted for sequence labeling tasks such as NER, they tend to overlook the internal dependency relationship between token-level labels during the inference.", "Fries et al. (2017) target the NER task, but their method first generates candidate named entity spans and then classifies each span independently.", "This inde-pendence makes it suffer from the same drawback as sentence classification models.", "A few works consider label dependency while dealing with multiple supervision sources.", "Lan et al. (2020) train a BiLSTM-CRF network (Huang et al., 2015) with multiple parallel CRF layers, each for an individual labeling source, and aggregate their transitions with confidence scores predicted by an attention network (Bahdanau et al., 2015; Lu-ong et al., 2015).", "HMM is a more principled model for multi-source sequential label denoising as the true labels are implicitly inferred through unsupervised learning without deliberately assigning any additional scores.", "Following this track, Nguyen et al. (2017) and Lison et al. (2020) use a standard HMM with multiple observed variables, each from one labeling source.", "Safranchik et al. 
"Safranchik et al. (2020) propose linked HMM, which differs from an ordinary HMM by introducing unique linking rules as a supervision source adjunct to general token labels.", "However, these methods fail to utilize the context information embedded in the tokens as effectively as CHMM, and their NER performance is further constrained by the Markov assumption.", "Neuralizing the Hidden Markov Model Some works attempt to neuralize HMM in order to relax the Markov assumption while maintaining its generative property (Kim et al., 2018).", "For example, Dai et al. (2017) and Liu et al. (2018) incorporate recurrent units into the hidden semi-Markov model (HSMM) to segment and label high-dimensional time series; Wiseman et al. (2018) learn discrete template structures for conditional text generation using a neuralized HSMM.", "Wessels and Omlin (2000) and Chiu and Rush (2020) factorize HMM with neural networks to scale it and improve its sequence modeling capacity.", "The work most related to ours leverages neural HMM for sequence labeling (Tran et al., 2016).", "CHMM differs from neural HMM in that the tokens are treated as a dependency term in CHMM instead of as the observation in neural HMM.", "Besides, CHMM is trained with generalized EM, whereas neural HMM optimizes the marginal likelihood of the observations.", "In this section, we formulate the multi-source weakly supervised NER problem.", "Consider an input sentence that contains $T$ tokens $w^{(1:T)}$; NER can be formulated as a sequence labeling task that assigns a label to each token in the sentence.", "Assuming the set of target entity types is $\mathcal{E}$ and the tagging scheme is BIO (Ramshaw and Marcus, 1995), NER models assign one label $l \in \mathcal{L}$ to each token, where the size of the label set is $|\mathcal{L}| = 2|\mathcal{E}| + 1$; e.g., if $\mathcal{E} = \{\text{PER}, \text{LOC}\}$, then $\mathcal{L} = \{\text{O}, \text{B-PER}, \text{I-PER}, \text{B-LOC}, \text{I-LOC}\}$.", "Suppose we have a sequence with $K$ weak sources, each of which can be a heuristic rule, a knowledge base, or an existing out-of-domain NER model.", "Each source serves as a labeling function that generates token-level weak labels from the input corpus, as shown in Figure 1.", "For the input sequence $w^{(1:T)}$, we use $x^{(1:T)}_k$, $k \in \{1, \ldots, K\}$, to represent the weak labels from source $k$, where $x^{(t)}_k \in \mathbb{R}^{|\mathcal{L}|}$, $t \in \{1, \ldots, T\}$, is a probability distribution over $\mathcal{L}$.", "Multi-source weakly supervised NER aims to find the underlying true sequence of labels $y^{(1:T)}$, $y^{(t)} \in \mathcal{L}$, given $\{w^{(1:T)}, x^{(1:T)}_{1:K}\}$.", "In this section, we describe our proposed method, CHMM-ALT.", "We first sketch the alternate-training procedure (Section 4.1), then explain the CHMM component (Section 4.2) and how BERT-NER is involved (Section 4.3).", "The alternate-training method trains two models, a multi-source label aggregator (CHMM) and a BERT-NER model, in turn with each other's output.", "CHMM aggregates multiple sets of labels from different sources into a unified sequence of denoised labels.", "(Footnote 1: We represent vectors, matrices, or tensors with bold fonts and scalars with regular fonts; $1{:}a \triangleq \{1, 2, \ldots, a\}$.)
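The BIO label-set arithmetic above is easy to check directly; a tiny illustrative helper:

```python
def bio_label_set(entity_types):
    """|L| = 2|E| + 1 labels under the BIO scheme."""
    labels = ["O"]
    for e in entity_types:
        labels += [f"B-{e}", f"I-{e}"]
    return labels

print(bio_label_set(["PER", "LOC"]))
# ['O', 'B-PER', 'I-PER', 'B-LOC', 'I-LOC']  -> |L| = 5 = 2*2 + 1
```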
, a } .", "In phase I , CHMM takes the annotations x (1: T ) 1: K from existing sources and gives a set of denoised labels y (1: T ) , which are used to fine-tune the BERT-NER model.", "Then, we regard the fine-tuned model as an additional labeling source, whose outputs y (1: T ) are added into the original weak label sets to give the updated observation instances: x (1: T ) 1: K +1 = { x (1: T ) 1: K , y (1: T ) } .", "In phase II , CHMM and BERT-NER mutually improve each other iteratively in several loops.", "Each loop first trains CHMM with the observation x (1: T ) 1: K +1 from the previous one.", "Then, its predictions are adopted to fine-tune BERT-NER, whose output updates x (1: T ) K +1 .", "Figure 2 illustrates the alternate-training method.", "In general, CHMM gives high precision predictions, whereas BERT-NER trades recall with precision.", "In other words, CHMM can classify named entities with high accuracy but is slightly disadvantaged in discovering all entities.", "BERT-NER increases the coverage with a certain loss of accuracy.", "Combined with the alternate-training approach, this complementarity between these models further increases the overall performance.", "The conditional hidden Markov model is an HMM variant for multi-source label denoising.", "It models true entity labels as hidden variables and infers them from the observed noisy labels.", "Traditionally, discrete HMM uses one transition matrix to model the probability of hidden label transitioning and one emission matrix to model the probability of the observations from the hidden labels.", "These two matrices are constant, i.e. , their values do not change over time steps.", "CHMM, on the contrary, conditions both its transition and emission matrices on the BERT embeddings e (1: T ) of the input tokens w (1: T ) .", "This design not only allows CHMM to leverage the rich contextual representations of the BERT embeddings but relieves the constant matrices constraint as well.", "In phase I, CHMM takes K sets of weak labels from the provided K weak labeling sources.", "In phase II, in addition to the existing sources, it takes CHMM Aggregated labels Train with Generalized EM Train with KLD loss Phase I Phase IIBERT predictions BERT-NERBERT embeddings Weak labels 1: BERT Input sentence Weak Source 1 ... ...", "another set of labels from the previously fine-tuned BERT-NER, making the total number of sources K + 1 .", "For convenience, we use K as the number of weak sources below.", "Model Architecture Figure 3 shows a sketch of CHMM's architecture.", "2 z (1: T ) denotes the discrete hidden states of CHMM with z ( t ) L , representing the underlying true labels to be inferred from multiple weak annotations.", "( t ) R |L||L| is the transition matrix, whose element ( t ) i,j = p ( z ( t ) = j | z ( t 1) = i, e ( t ) ) , i, j { 1 , . . . 
, |L|} denotes the probability of moving from label i to label j at time step t .", "( t ) k R |L||L| is the emission matrix of weak source k , each element in which ( t ) i,j,k = p ( x ( t ) j,k = 1 | z ( t ) = i, e ( t ) ) represents the probability of source k observing label j when the 2 We relax plate notation here to present details.", "hidden label is i at time step t .", "For each step, e ( t ) R d emb is the output of a pre-trained BERT with d emb being its embedding dimension.", "( t ) and ( t ) 1: K are calculated by applying a multi-layer perceptron (MLP) to e ( t ) : s ( t ) R |L| 2 = MLP( e ( t ) ) , (1) h ( t ) R |L||L| K = MLP( e ( t ) ) .", "To achieve the proper probability distributions, we apply the Softmax function along the label axis so that these values are positive and sum up to 1 :", "( t ) i, 1: |L| = ( S ( t ) i, 1: |L| ) , ( t ) i, 1: |L| ,k = ( H ( t ) i, 1: |L| ,k ) , where ( a ) i = exp ( a i ) (cid:80) j exp ( a j ) .", "(5) a is an arbitrary vector.", "The formulae in the following discussion always depend on e (1: T ) , but we will omit the dependency term for simplicity.", "Model Training According to the generative process of CHMM, the joint distribution of the hidden states and the observed weak labels for one sequence p ( z (0: T ) , x (1: T ) | ) can be factorized as: p ( z (0: T ) , x (1: T ) | ) = p ( z (0) ) p ( x (1: T ) | z (1: T ) ) = p ( z (0) ) T (cid:89) t =1 p ( z ( t ) | z ( t 1) ) T (cid:89) t =1 p ( x ( t ) | z ( t ) ) , (6) where represents all the trainable parameters.", "HMM is generally trained with an expectation-maximization (EM, also known as Baum-Welch) algorithm.", "In the expectation step (E-step), we compute the expected complete data log likelihood: Q ( , old ) (cid:44) E z [ (cid:96) c ( ) | old ] .", "(7) old is the parameters from the previous training step, E z [ ] is the expectation over variable z , and (cid:96) c ( ) (cid:44) log p ( z (0: T ) , x (1: T ) | ) is the comptelete data log likelihood.", "Let ( t ) R |L| be the observation likelihood where ( t ) i (cid:44) p ( x ( t ) | z ( t ) = i ) = K (cid:89) k =1 |L| (cid:88) j =1 ( t ) i,j,k x ( t ) j,k .", "(8) Combining (6)(8) together, we have Q ( , old ) = |L| (cid:88) i =1 (0) i log i + T (cid:88) t =1 |L| (cid:88) i =1 |L| (cid:88) j =1 ( t ) i,j log ( t ) i,j + T (cid:88) t =1 |L| (cid:88) i =1 ( t ) i log ( t ) i , (9) where 1 = 1 , 2: |L| = 0 ; 3 ( t ) i (cid:44) p ( z ( t ) = i | x (1: T ) ) is the smoothed marginal; ( t ) i,j (cid:44) p ( z ( t 1) = i, z ( t ) = j | x (1: T ) ) is the expected number of transitions.", "These parameters are computed using the forward-backward algorithm.", "4 In the maximization step (M-step), traditional HMM updates parameters HMM = { , , } by optimizing (7) with pseudo-statistics.", "5 However, as the transitions and emissions in CHMM are not standalone parameters, we cannot directly optimize CHMM by this method.", "Instead, we update the model parameters through gradient descent w.r.t. CHMM using (9) as the objective function: CHMM = Q ( CHMM , oldCHMM ) CHMM .", "In practice, the calculation is conducted in the logarithm domain to avoid the loss of precision issue that occurs when the floating-point numbers become too small.", "To solve the label sparsity issue, i.e. 
, some entities are only observed by a minority of the weak 3 This assumes the initial hidden state is always O .", "sources, we modify the observations x (1: T ) before training.", "If one source k observes an entity at time step t : x ( t ) j (cid:54) =1 ,k > 0 , the observation of nonobserving sources at t will be modified to x ( t ) 1 , = (cid:15) ; x ( t ) j (cid:54) =1 , = (1 (cid:15) ) / |L| , { 1 , . . . , K }\\ k , where (cid:15) is an arbitrary small value.", "Note that x ( t ) 1 , corresponds to the observed label O .", "CHMM Initialization Generally, HMM has its transition and emission probabilities initialized with the statistics and computed from the observation set.", "But it is impossible to directly set ( t ) and ( t ) in CHMM to these values, as these matrices are the output of the MLPs rather than standalone parameters.", "To address this issue, we choose to pre-train the MLPs before starting CHMM's training by minimizing the mean squared error (MSE) loss between their outputs and the target statistics: (cid:96) MSE = 1 T (cid:88) t (cid:107) S ( t ) (cid:107) 2 F + (cid:107) H ( t ) (cid:107) 2 F , where (cid:107) (cid:107) F is the Frobenius norm.", "Right after initialization, MLPs can only output similar probabilities for all time steps: ( t ) , ( t ) , t { 1 , 2 , . . . , T } .", "But their token-wise prediction divergence will emerge when CHMM has been trained.", "The initial hidden state z (0) is fixed to O as it has no corresponding token.", "most probable sequence of hidden labels z (1: T ) along with the probabilities of all labels y (1: T ) .", "z (1: T ) = arg max z (1: T ) p CHMM ( z (1: T ) | x (1: T ) 1: K , e (1: T ) ) , y ( t ) i = p CHMM ( z ( t ) = i | x (1: T ) 1: K , e (1: T ) ) , where CHMM represents the trained parameters.", "These results can be calculated by either the Viterbi decoding algorithm (Viterbi, 1967) or directly maximizing the smoothed marginal (1: T ) .", "The pre-trained BERT model encodes semantic and structural knowledge, which can be distilled to further refine the denoised labels from CHMM.", "Specifically, we construct the BERT-NER model by stacking a feed-forward layer and a Softmax layer on top of the original BERT to predict the probabilities of the classes that each token belongs to (Sun et al., 2019).", "The probability predictions of CHMM, y (1: T ) , often referred to as soft labels , are chosen to supervise the fine-tuning procedure.", "Compared with the hard labels z (1: T ) , soft labels lead to a more stable training process and higher model robustness (Thiel, 2008; Liang et al., 2020).", "We train BERT-NER by minimizing the Kullback-Leibler divergence (KL divergence) between the soft labels y and the model output y : BERT = arg min BERTD [ y (1: T ) (cid:107) y (1: T ) ] = arg min BERTT (cid:88) t =1 |L| (cid:88) i =1 y ( t ) i log y ( t ) i y ( t ) i , (11) where BERT denotes all the trainable parameters in the BERT model.", "We obtain the refined labels y (1: T ) RT |L| from the fine-tuned BERT-NER directly through a forward pass.", "Different from CHMM, we continue BERT-NER's training with parameter weights from the last loop's checkpoint so that the model is initialized closer to the optimum.", "Correspondingly, phase II trains BERT-NER with a smaller learning rate, fewer epoch iterations, and batch gradient descent instead of the mini-batch version.", "6 This strategy speeds up phase II training without sacri-ficing the model performance as y (1: T ) does not change significantly from loop to loop.", "We 
benchmark CHMM-ALT on four datasets against state-of-the-art weakly supervised NER baselines, including both distant learning models and multi-source label aggregation models.", "We also conduct a series of ablation studies to evaluate the different components in CHMM-ALT's design.", "Datasets We consider four NER datasets covering the general, technological and biomedical domains: 1) CoNLL 2003 (English subset) (Tjong Kim Sang and De Meulder, 2003) is a general domain dataset containing 22 , 137 sentences manually labelled with 4 entity types.", "2) LaptopReview dataset (Pontiki et al., 2014) consists of 3 , 845 sentences with laptop-related entity mentions.", "3) NCBI-Disease dataset (Dogan et al., 2014) contains 793 PubMed abstracts annotated with disease 6 Hyper-parameter values are listed in appendix C. Co03 NCBI CDR LR # Instance 22 , 137 793 1 , 500 3 , 845 # Training 14 , 041 593 500 2 , 436 # Development 3 , 250 100 500 609 # Test 3 , 453 100 500 800 Ave# Tokens 14 .", "mentions.", "4) BC5CDR (Li et al., 2016), the dataset accompanies the BioCreative V CDR challenge, consists of 1 , 500 PubMed articles, annotated with chemical disease mentions.", "Table 1 shows dataset statistics, including the average number of tokens, entities and weak labeling sources.", "We use the original word tokens in the dataset if provided and use NLTK (Bird and Loper, 2004) otherwise for sentence tokenization.", "For weak labeling sources, we use the ones from Lison et al. (2020) for CoNLL 2003, and the ones from Safranchik et al. (2020) for LaptopReview, NCBI-Disease and BC5CDR.", "7 Baselines We compare our model to the following state-of-the-art baselines: 1) Majority Voting returns the label for a token that has been observed by most of the sources and randomly chooses one if it's a tie; 2) Snorkel (Ratner et al., 2017) treats each token in a sequence as i.i.d. and conducts the label classification without considering its context; 3) SwellShark (Fries et al., 2017) improves Snorkel by predicting all the target entity spans before classifying them using nave Bayes; 4) AutoNER (Shang et al., 2018) augments distant supervision by predicting whether two consecutive tokens should be in the same entity span; 5) BOND (Liang et al., 2020) adopts self-training and high-confidence selection to further boost the distant supervision performance.", "6) HMM is the multi-observation generative model used in Lison et al. (2020) that does not have the integrated neural network; 7) Linked HMM (Safranchik et al., 2020) uses linking rules to provide additional inter-token structural information to the HMM model.", "For the ablation study, we modify CHMM to another type of i.i.d. model by taking away its transition matrices.", "This model, named CHMM-i.i.d. , 7 Details are presented in appendix B. 
Models CoNLL 2003 NCBI-Disease BC5CDR LaptopReview Supervised BERT-NER (cid:92) 90.74 (90.37/91.10) 88.89 (87.05/90.82) 88.81 (87.12/90.57) 81.34 (82.02/80.67) best consensus (cid:92) 89.18 (100.0/80.47) 81.60 (100.0/68.91) 87.58 (100.0/77.89) 77.72 (100.0/63.55) SwellShark (noun-phrase) -67.10 (64.70/69.70) 84.23 (84.98/83.49) SwellShark (hand-tuned) -80.80 (81.60/80.10) 84.21 (86.11/82.39) AutoNER 67.00 (75.21/60.40) 75.52 (79.42/71.98) 82.13 (83.23/81.06) 65.44 (72.27/59.79) Snorkel 66.40 (71.40/62.10) 73.41 (71.10/76.00) 82.24 (80.23/84.35) 63.54 (64.09/63.09) Linked HMM -79.03 (83.46/75.05) 82.96 (82.65/83.28) 69.04 (77.74/62.11) BOND-MV (cid:92) 65.96 (64.22/67.82) 80.33 (84.77/76.34) 83.18 (82.90/83.49) 67.19 (68.90/65.75) Majority Voting (cid:92) 58.40 (49.01/72.24) 73.94 (79.76/68.91) 80.73 (83.79/77.88) 67.92 (72.93/63.55) HMM (cid:92) 68.84 (70.80/66.98) 73.06 (83.88/64.70) 80.57 (88.75/73.76) 66.96 (77.46/58.96) CHMM-i.i.d. (cid:92) 68.57 (69.67/67.50) 71.69 (83.49/62.87) 79.37 (85.68/73.92) 65.89 (75.70/58.34) CHMM (cid:92) 70.11 (72.98/67.47) 78.88 ( 93.37 /68.28) 82.39 ( 89.93 /76.02) 73.02 ( 87.23 /62.79) CHMM + BERT-NER (cid:92) 74.30 (75.02/73.58) 82.87 (89.42/77.22) 84.33 (85.58/83.12) 69.67 (75.48/64.70) CHMM-ALT (cid:92) 75.54 ( 76.22 / 74.86 ) 85.02 (87.92/ 82.47 ) 85.12 (84.97/ 85.28 ) 76.55 (81.39/ 72.32 ) Table 2: Evaluation results on four datasets.", "directly predicts the hidden steps from the BERT embeddings, while otherwise identical to CHMM.", "We also investigate how CHMM-ALT performs with other aggregators other than CHMM.", "We also introduce two upper bounds from different aspects: 1) a fully supervised BERT-NER model trained with manually labeled data is regarded as a supervised reference; 2) the best possible consensus of the weak sources.", "The latter assumes an oracle that always selects the correct annotations from these weak supervision sources.", "According to the definition, its precision is always 100% and its recall is non-decreasing with the in-crease of the number of weak sources.", "Evaluation Metrics We evaluate the performance of NER models using entity-level precision, recall, and F1 scores.", "All scores are presented as percentages.", "The results come from the average of 5 trials with different random seeds.", "Implementation Details We use BERT pretrained on different domains for different datasets, both for embedding construction and as the component of the supervised BERT-NER model.", "The original BERT (Devlin et al., 2019) is used for CoNLL 2003 and LaptopReview datasets, bioBERT (Lee et al., 2019) for NCBI-Disease and SciBERT (Belt-agy et al., 2019) for BC5CDR.", "Instances with lengths exceeding BERT's maximum length limitation ( 512 ) are broken into several shorter segments.", "The only tunable hyper-parameter in CHMM is the learning rate.", "But its influence is negligible benefitted from the stability of the generalized EM, the model is guaranteed to converge to a local optimum if the learning rate is small enough.", "For all the BERT-NER models used in our experiments, the hyper-parameters except the batch size are fixed to the default values (appendix C).", "To prevent overfitting, we use a two-scale early stopping strategy for model choosing at two scales based on the development set.", "The micro-scale early stopping chooses the best model parameters for each individual training process of both CHMM and BERT-NER; the macro-scale early stopping selects the best-performing model in phase II iterations, which reports the 
test results.", "In our experiments, phase II exits if the macro-scale development score has not increased in 5 loops or the maximum number of loops ( 10 ) is reached.", "Table 2 presents the model performance from different domains.", "We find that our alternate-training framework outperforms all weakly supervised baseline models.", "In addition, CHMM-ALT approaches or even exceeds the best source consensus, which sufficiently proves the effectiveness of the design.", "For general HMM-based label aggregators such as CHMM, it is impossible to exceed the best consensus since they can only predict an entity observed by at least one source.", "Based on this fact, CHMM is designed to select the most accurate observations from the weak sources without shrinking their coverage.", "In comparison, BERT's language 68 70 72 74 76 PIPII1 PII2 PII3 PII4 PII5 PII6 PII7 PII8 PII9 PII10 CoNLL 2003 CHMMBERT-NERStrongest Baseline", "representation ability enables it to generalize the entity patterns and successfully discovers those entities annotated by none of the sources.", "Comparing CHMM + BERT to CHMM, we can conclude that BERT basically exchanges recall with precision, and its high-recall predictions can improve the result of CHMM in return.", "The complementary nature of these two models is why CHMM-ALT improves the overall performance of weakly supervised NER.", "Looking at Table 2, we notice that CHMM performs the best amongst all generative models including majority voting, HMM and CHMM-i.i.d. The performance of conventional HMM is largely limited by the Markov assumption with the unchanging transition and emission probabilities.", "The results in the table validate that conditioning the model on BERT embedding alleviates this limitation.", "However, the transition matrices in HMM are indispensable, implied by CHMM-i.i.d.'s results, as they provide supplemental information about how the underlying true labels should evolve.", "Performance Evolution Figure 4 reveals the details of the alternate-training process.", "For less ambiguous tasks including NCBI-Disease, BC5CDR and LaptopReview with fewer entity types, BERT generally has better performance in phase I but gets surpassed in phase II.", "Interestingly, BERT's performance never exceeds that of CHMM on the LaptopReview dataset.", "This may be because BERT fails to construct sufficiently representative patterns from the denoised labels for this dataset.", "For CoNLL 2003, where it is harder for the labeling sources to model the language structures, the strength of a pre-trained language model in pattern recognition becomes more prominent.", "From the reModels Co03 NCBI CDR Laptop MV (cid:92) 58.40 73.94 80.73 67.92 MV-ALT (cid:92) 66.64 80.83 82.78 70.45 HMM (cid:92) 68.84 73.06 80.57 66.96 HMM-ALT (cid:92) 74.04 82.99 83.34 72.90 i.i.d. 
"From the results, it seems that the performance increment of the denoised labels $y^{(1:T)}$ provides only marginal extra information to BERT after phase II, as most of the increment comes from the information provided by BERT itself.", "Even so, keeping phase II is reasonable when we want to get the best out of the weak labeling sources and the pre-trained BERT.", "BERT-NER Initialization CHMM-ALT initializes BERT-NER's parameters from its previous checkpoint at the beginning of each loop in phase II to reduce the training time (Section 4.3).", "If we instead fine-tune BERT-NER from the initial parameters of the pre-trained BERT model for each loop, CHMM-ALT gets 84.30, 84.71, and 76.68 F1 scores on the NCBI-Disease, BC5CDR, and LaptopReview datasets.", "These scores are close to the results in Table 2, but the training takes much longer.", "Consequently, our BERT-NER initialization strategy is the more practical choice overall.", "Applying Alternate-Training to Other Methods Table 3 shows the alternate-training performance obtained with different label aggregators.", "
Table 3: Alternate-training F1 scores with different label aggregators.
Model      | Co03  | NCBI  | CDR   | Laptop
MV         | 58.40 | 73.94 | 80.73 | 67.92
MV-ALT     | 66.64 | 80.83 | 82.78 | 70.45
HMM        | 68.84 | 73.06 | 80.57 | 66.96
HMM-ALT    | 74.04 | 82.99 | 83.34 | 72.90
i.i.d.     | 68.57 | 71.69 | 79.37 | 65.89
i.i.d.-ALT | 73.84 | 83.15 | 83.17 | 72.61
CHMM       | 70.11 | 78.88 | 82.39 | 73.02
CHMM-ALT   | 75.54 | 85.02 | 85.12 | 76.55
", "The accompanying BERT-NER models are identical to those described in Section 5.1.", "The results in the table suggest that the performance improvement obtained by applying alternate-training to the label aggregators is stable and should generalize to other models yet to be proposed.", "In this work, we present CHMM-ALT, a multi-source weakly supervised approach that does not depend on manually labeled data to learn an accurate NER tagger.", "It integrates a label aggregator (CHMM) and a supervised model (BERT-NER) into an alternate-training procedure.", "CHMM conditions an HMM on BERT embeddings to achieve greater flexibility and stronger context-awareness.", "Fine-tuned with CHMM's predictions, BERT-NER discovers patterns unobserved by the weak sources and complements CHMM.", "By training these models in turn, CHMM-ALT uses the knowledge encoded in both the weak sources and the pre-trained BERT model to improve the final NER performance.", "In the future, we will consider imposing more constraints on the transition and emission probabilities, or manipulating them according to sophisticated domain knowledge.", "This technique could also be extended to other sequence labeling tasks such as semantic role labeling or event extraction.", "This work was supported by ONR MURI N00014-17-1-2656, NSF III-2008334, Kolon Industries, and research gifts from Google and Amazon.", "In addition, we would like to thank Yue Yu for his insightful suggestions for this work." ]
[ "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "Using a case study, we show that variation in oral reading rate across passages for professional narrators is consistent across readers and much of it can be explained using features of the texts being read.", "While text complexity is a poor predictor of the reading rate, a substantial share of variability can be explained by timing and story-based factors with performance reaching r =0.75 for unseen passages and narrator.", "Listening to and performing oral reading are activities that permeate daily life, from parents reading aloud to young children, through reading instruction in elementary school, to audiobook narrations increasingly chosen by adults as the form of book-reading that fits in a busy schedule.", "Oral reading is also used in assessment of language skills for children and language learners, and in professions such as teaching and news broadcasting.", "Reading rate is a common metric used to control or evaluate oral reading.", "It is usually computed as a number of words read per minute, and is used in many applications.", "For example, research in second language acquisition has considered both optimal reading rates for listening materials aimed at English language learners and reading rates that ensure the highest comprehensibility of accented speech (Munro and Derwing, 1998).", "Speech rate is a standard feature in systems for automated scoring of second language proficiency (Higgins et al., 2011) including read aloud tasks (Zechner et al., 2012; Evanini et al., 2015).", "Reading rate is also one of the main measures used to assess the fluency of oral reading (Hasbrouck and Tindal, 2006).", "The assumption underlying these uses is that reading rate is a property of the reader (or controlled by the reader).", "However, variation in reading rate across different passages for the same readers has also been reported (Foulke, 1968; Tauroza and Allison, 1990; Ardoin et al., 2005; Compton et al., 2004; Beigman Klebanov et al., 2017).", "Improving the understanding of the properties of oral reading, such as reading rate, is thus an important theoretical goal.", "We also have a specific practical reason to study text-based variation in reading rate.", "We are developing an intervention for improving literacy that would encourage sustained reading by having the student read aloud multiple passages from an engaging novel-length book, taking turns with others.", "While it is technically easy to compute reading rate by timing the readers, if a reader's rate across different texts is not stable given his current reading skill, it is not clear that tracking the rate over time would yield a valid measurement of improvement in skill.", "However, if such variation is systematically dependent on the text being read, rather than a random or idiosyncratic fluctuation, we might be able to adjust the measurement to account for text effect.", "1. Is reading rate constant for a given reader across various texts?", "2. If not, do different readers show similar patterns of variation across texts, or is variation idiosyncratic?", "3. 
If variation exists and is systematic across readers, can we identify the properties of texts that impact reading rate?", "In this paper, we study reading rates in two professional narrations of the same book-length text.", "By using professional narrations we are able to eliminate other factors that might cause variation in reading rate, such as reader fatigue or disfluencies.", "While these would play a role in a practical application, we seek first to answer the research questions in a setup that allows focusing on the relationship between reading rate and the passage being read, controlling for other factors.", "Passage effects in reading have been addressed most directly in the context of assessment of reading.", "Since the intention is to measure the student's reading ability, any difference in performance that is not due to reading ability confounds the measurement.", "In particular, since the comprehension complexity of a passage is known to impact reading comprehension, it seems reasonable to assume that it would also impact other aspects of reading skill, including oral reading fluency.", "In fact, this assumption underlies text selection for tests of oral reading fluency such as DIBELS (Good and Kaminski, 2002) that rely on readability to select comparable passages (Francis et al., 2008).", "Yet research also suggests that controlling for readability does not entirely solve the problem of text-based variation in reading fluency.", "Ardoin et al. (2005) examined readability formulas for their ability to predict fluency and generally found only low-to-moderate correlations (r < 0.5).", "Researchers also observed that fluency measurements for the same students varied across texts even for passages of comparable readability (Ardoin et al., 2005; Compton et al., 2004; Petscher and Kim, 2011; Francis et al., 2008).", "Moreover, Francis et al. (2008) found that while actual fluency scores vary across different readability-controlled passages, the relative ranking of students is only minimally different when estimated using different passages, suggesting that variation in fluency has some consistency across readers; results to a similar effect were reported by Beigman Klebanov et al. (2017).", "Oral reading fluency is commonly measured using words correct per minute, a combination of reading accuracy and reading rate.", "It is thus not clear whether the observations above pertain more to the accuracy aspect of oral reading (not considered in the current paper) or to reading rate, although Beigman Klebanov et al.
(2017) noted that consistent variation across students was observed both for reading rate and for reading fluency.", "To summarize, it appears that while readability could explain some of the variation in oral reading performance, there are also indicators that it is not sufficient on its own to effectively control for variation in oral reading performance caused by the properties of the passage being read.", "Since oral reading involves saying the text aloud, the durations of individual segments, words, and phrases, as well as the location and duration of silent pauses, are subject to constraints that have been extensively studied in the literature on phonetic timing; see White (2014); Hirschberg (2002) for a review.", "Thus it has long been known that different segments have different intrinsic durations which account for a lot of variation in segmental durations (Peterson and Lehiste, 1960; Klatt, 1976; van Santen, 1992): for example, high vowels tend to be shorter than low vowels.", "At the syllable level, in many languages vowels tend to be shorter when followed by a voiceless consonant than when followed by a voiced consonant (House and Fairbanks, 1953; Crystal and House, 1988), while consonants within a consonant cluster tend to be shorter than single consonants (Klatt, 1976).", "Further constraints are at play at the word, phrase, and sentence level.", "White (2014) summarizes these as domain-head and domain-edge lengthening effects.", "Domain-head lengthening refers to lengthening of salient elements such as syllables bearing lexical stress and words in prominent positions (Peterson and Lehiste, 1960; Crystal and House, 1988; van Santen, 1992).", "Domain-edge effects include lengthening of segments in word-initial position or sentence-final lengthening (Turk and Shattuck-Hufnagel, 2000, 2007).", "Finally, these domain-head and domain-edge lengthening effects do not apply uniformly: some segments and some positions are more resistant to lengthening than others (Peterson and Lehiste, 1960; Klatt, 1976; van Santen, 1992; White, 2014).", "The magnitude of lengthening also depends on the number of elements within each domain: in monosyllabic words, the stressed syllable receives all of the prosodic lengthening, but in disyllabic and trisyllabic words, some of the lengthening spreads to the unstressed syllables and the lengthening of the stressed syllable is attenuated (Turk and Shattuck-Hufnagel, 2000; White and Turk, 2010).", "In addition to segmental lengthening, phrase and sentence boundaries are often associated with some amount of pause.", "The location and duration of sentence-internal pauses depend both on the syntactic structure and the number of syllables in each adjacent unit: sentence-internal pauses associated with punctuation or major syntactic boundaries tend to be longer than other sentence-internal pauses, with sentence-final pauses being the longest (Pfitzinger and Reichel, 2006; Burrows et al., 2005; Bailly and Gouvernayre, 2012).", "Beyond domain-head and domain-edge effects, the duration of segments and pauses is also affected by other aspects of text content.", "Frequent words tend to have shorter duration than phonologically similar less frequent words.", "Words that are more predictable in a given context tend to be shorter than words with a higher information load, and repeated words are pronounced shorter than the first mention (see Zhao and Jurafsky (2009); Bell et al.
(2009) for reviews).", "Note that these effects persist after one controls for the domain-head effects described in the previous section (Bell et al., 2009).", "Further factors come into play in the context of story-telling, where the speaker is either reading or narrating a well-rehearsed story.", "Montaño and Alías (2017) review approaches used to characterize story-telling speech.", "Several studies observed that the duration of pauses between sentences and paragraphs in a longer story is not uniform.", "In their analysis of pausing in book reading, Bailly and Gouvernayre (2012) reported that pauses between paragraphs were longer than pauses between sentences.", "They also found that the thematic relationships between sentences affect breathing patterns, although these were not immediately related to pause duration.", "Reading rate has also been shown to depend on the emotional state of the speaker, whether genuine or performed as part of a dramatic reading: for example, actors tend to speak slower when expressing anger, fear, or sorrow (Williams and Stevens (1972); see Scherer (2003) for a comprehensive review).", "Doukhan et al. (2011) analyzed pause distribution in a corpus of tales and reported speakers' expressive reinterpretation of sentence syntactic structure, which they attributed to the expressiveness of the reader.", "There is also evidence that prosody may be affected by the narrative structure.", "Theune et al. (2006) observed in an informal analysis that Dutch actors narrating fairy-tales reduced their speech tempo when approaching the story climax.", "They also noticed an increase in duration in some words that indicated an extreme value of a property.", "Doukhan et al. (2011) analyzed prosody in a corpus of French tales using Propp's Morphology of the Folktale (Propp, 1968).", "They found that narrative structures had a significant effect on various prosodic properties.", "For duration, epilogues were associated with a lower articulation rate (syllables/min without pauses), while refrains had the lowest pausing time percentage.", "Finally, several studies found that impersonation of different characters by the narrator leads to clear differences in pitch, intensity, and spectral quality (Doukhan et al., 2011; Wang et al., 2006).", "In short, previous research suggests that multiple factors may affect phone and pause duration in a reading of a story: from the phonetic properties of individual segments to where the passage falls within the narrative structure.", "However, most of these studies considered durations of individual segments, words, or pauses.", "It is not clear which of these effects will still persist when durations are averaged over a longer text, as is the case for reading rate computation.", "In fact, studies in phonetics talk about an emergent speech rate that can be relatively consistent over long stretches of speech (White, 2014).", "Furthermore, pause duration is likely to have a substantial effect on the reading rate (Kendall, 2013), yet previous research on pausing in story-telling suggests that this can be highly idiosyncratic.", "We use Harry Potter and the Sorcerer's Stone by J.K.
Rowling (Rowling, 2015) as the case study for this paper.", "The book consists of 79,508 words spread across 17 chapters.", "We divided the text into 313 non-overlapping passages of about 250 words each (mean = 249 words; range: 190-309).", "[Footnote 1: This is roughly the intended length of a reading turn in the turn-taking reading intervention described in the Introduction.]", "Boundaries of passages were set to be the starts and ends of paragraphs, where the end of a passage consists of a paragraph whose addition brings the passage closer to 250 words than without adding the paragraph.", "When generating passages, we took into account chapter boundaries so that no passage spanned two chapters: the word-count for passage generation was always re-set from the beginning of the chapter, and any short fragments left at the end of a chapter were not included in the analysis.", "We randomly assigned 156 passages to the training set and 157 passages to the test set.", "We used data from two narrators.", "The first dataset, hereafter referred to as JD, comes from a narration by the actor Jim Dale published as an audiobook (Rowling and Dale, 2016).", "The book is released as 17 .mp3 files with one file per chapter.", "The second dataset comes from the audiobook with a female narrator, provided to us by Learning Ally; we will refer to it as LA.", "[Footnote 2: https://www.learningally.org/]", "These recordings are created by volunteers and are made available on a subscription basis to students diagnosed with disabilities that impact their ability to read print-based materials.", "Learning Ally recordings are subject to quality control similar to that of commercial audiobooks.", "[Footnote 3: We cannot use another well-known commercially available narration, by Stephen Fry, since he narrates the British version of the book.]", "3.3 Calculating reading rate", "We used forced alignment to automatically align the audio of the JD narration for each chapter with the book text and establish the passage boundaries.", "We used the Kaldi toolkit (Povey et al., 2011) and publicly available acoustic models trained on the LibriSpeech corpus (Panayotov et al., 2015).", "The forced alignment was spot-checked manually for accuracy and found to be very accurate.", "The LA audio was already aligned with the book text.", "The LA recordings were split across multiple audio files.", "To avoid any artifacts of the recording process, we only used the passages where the whole audio was in the same file.", "Out of the original 314 passages, 270 passages (86%) satisfied this condition; of these, 134 were in the training set and 136 in the test set.", "We used these matching training and testing passages for both narrators, in order to facilitate comparisons.", "For both narrators we used the time stamps for the beginning of the first word in the passage and the end of the last word in the passage to compute the total duration of the passage, which was then divided by the number of words in the passage to yield the reading rate (words per minute, WPM).", "To answer our first question, we looked at the distribution of the reading rate across the passages in the training set.", "The distribution of WPM for both narrators was close to normal.", "JD: mean = 164.01; SD = 12.66; min = 129.2; max = 197.7.", "LA: mean = 125.12; SD = 11.4; min = 86.8; max = 156.9.",
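The WPM computation just described can be illustrated with a minimal sketch (ours, not the authors' code; the (start, end) per-word alignment format is an assumption):

```python
def words_per_minute(word_alignments, n_words):
    """word_alignments: (start_sec, end_sec) pairs, one per word, in order."""
    # Passage duration runs from the onset of the first word
    # to the offset of the last word.
    duration_min = (word_alignments[-1][1] - word_alignments[0][0]) / 60.0
    return n_words / duration_min

# Toy example: a 250-word passage spanning 90 seconds of audio -> ~166.7 WPM.
alignments = [(3.2, 3.6), (3.6, 4.0), (92.8, 93.2)]
print(words_per_minute(alignments, 250))
```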
"Based on discussions in the literature regarding syllables per second being a more stable measure of reading rate than WPM (Tauroza and Allison, 1990; Griffiths, 1991; Munro and Derwing, 1998), we calculated the rate in syllables per second and observed a similar pattern of variation (JD: mean = 3.52, SD = 0.30; LA: mean = 2.72, SD = 0.27).", "We also found that WPM and syllables per second were highly correlated for each of the narrators (r ≥ 0.9).", "We therefore continue with WPM, as this is the commonly used measure in the reading assessment context.", "The distribution of WPM for each of the narrators is shown in Figure ??.", "The answer to RQ1 is that while there is clearly a sense in which one narrator generally reads slower than the other, it is not the case that a narrator keeps the same rate of reading across different passages.", "We next compared the reading rates of the two narrators on the 134 training set passages.", "We found them to be highly correlated: Pearson's r = .81.", "This suggests that a substantial share of the variation across passages is systematic rather than idiosyncratic.", "We therefore proceed to the next question: what factors can explain this variation?", "We use a standard model building approach to answer RQ3.", "We used the train partition with JD's WPM (hereafter JD-train) to identify possible textual features as well as the best learner to combine these features.", "We then trained separate models on the training data for the two narrators.", "We evaluated the two models on:", "(a) different passages from the test partition as read by the same narrator;", "(b) the same passages as used for model training but read by a different narrator;", "(c) different passages (test partition) read by a different narrator.", "We used text complexity as our baseline, following the practice in the reading assessment community.", "While we do not expect either of the narrators to experience any reading comprehension difficulties, one might reasonably assume that a skilled narrator would slow down on fragments which are harder for the listener to comprehend.", "We used TextEvaluator, a state-of-the-art measure of the comprehension complexity of a text (Napolitano et al., 2015; Sheehan et al., 2014, 2013; Nelson et al., 2012).", "[Footnote 4: https://textevaluator.ets.org/]", "[Footnote 5: TextEvaluator appears in the Nelson et al. (2012) benchmark as SourceRater.]", "TextEvaluator extracts a range of linguistic features and uses them to compute a complexity score on the scale of 100-2000.", "TextEvaluator computes three complexity scores based on models optimized for literary, informational, and mixed texts.", "We used the literary metric.", "The average complexity score for passages in the training set was 613.1, with a large variation across passages: min = 240, max = 1,019.",
"We used the passage text to extract 107 features that capture different factors that might affect durations in oral reading.", "These could be grouped into four categories.", "We hypothesized that the timing effects described in Section 2.2.1 are likely to be the source of at least some variation in reading rates across the text.", "Due to the complexity of these effects, building an accurate model that would predict segmental durations based on the text is not a trivial task.", "This problem has been extensively discussed in the literature on modeling prosody for text-to-speech synthesis systems (TTS), which generally combined the insights from the phonetic studies with statistical learning in order to establish the optimal duration for each segment and pause in synthesized audio.", "Therefore, rather than attempting to build our own model, we synthesized the audio for each passage using Apple's built-in TTS engine (OS X 10.11.6).", "We used the male Alex voice, which in terms of overall quality and default speaking rate appeared closest to JD.", "According to Capes et al. (2017), linguistic features used for training this system include segment identity and segmental context, stress, part-of-speech context, prominence, sentence type, and initial/final positional features for syllable, word, phrase, and sentence; in other words, features directly related to the timing factors discussed in Section 2.2.1.", "[Footnote 6: See, for example, Hirschberg (1993) for features used to establish prominence.]", "[Footnote 7: Note that Capes et al. (2017) describe a different engine from the one used in this study; however, as noted in the paper, it shares the front-end for linguistic feature extraction with other Mac OS TTS systems. The same features are also described in Zen et al. (2009).]", "We used the generated audio to compute the WPM for each passage.", "The mean reading rate of TTS was close to that of JD: 157.1 vs. 164.0.", "There was variation across passages, with WPM varying from 129.2 to 197.7 (SD = 9.13).",
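A rough sketch of how such a TTS-based timing feature could be computed on macOS follows; `tts_wpm` and the temporary file names are assumptions, and the stdlib `aifc` reader (removed in Python 3.13) is used for the AIFF output that `say` produces:

```python
import subprocess
import aifc  # stdlib AIFF reader (removed in Python 3.13)

def tts_wpm(passage_text, n_words, voice="Alex"):
    """Synthesize a passage with the macOS `say` engine and return its WPM."""
    with open("passage.txt", "w") as f:
        f.write(passage_text)
    # `say` writes an AIFF file when given a .aiff output path.
    subprocess.run(["say", "-v", voice, "-f", "passage.txt",
                    "-o", "passage.aiff"], check=True)
    audio = aifc.open("passage.aiff", "rb")
    duration_min = audio.getnframes() / audio.getframerate() / 60.0
    audio.close()
    return n_words / duration_min
```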
"Next, we considered lexico-syntactic properties of the passages.", "Some of these (lexical frequency, emotion, arousal) may be associated with local changes in segment and pause durations (see 2.2.2).", "Many of the features are used as low-level features in readability estimation (Graesser et al., 2004; Sheehan et al., 2014), and are thus likely to capture facets of a reader's experience when reading the text.", "These included: (1) vocabulary features capturing the presence of, as well as the average score along, some meaning dimension, such as concreteness, imageability, emotion, arousal, motion, and academic register (Coltheart, 1981; Warriner et al., 2013; Coxhead, 2000); (2) morphological features (e.g., count of nominalizations, count of syllables); (3) distributional features such as average word frequency; (4) syntactic features such as counts of different parts of speech, as well as features based on specific constructions (relative clauses, preposed clauses, etc.); and (5) discourse features that deal with paragraphing (e.g., word count of the longest paragraph, average paragraph length in sentences) and overall cohesion (e.g., average lexical overlap across adjacent sentences).", "Considering previous work on prosody in storytelling, we also built features that relate to the overall story development.", "These included: (1) the number of occurrences of names of the main characters and other proper nouns important to the plot (Harry, Hermione, Weasley, Dumbledore, Ollivander, Quidditch), under the assumption that there might be systematic ways in which the narrators act out certain kinds of people (older vs. younger, for example), as well as events that could indicate a fast-paced event, such as a commentated game of Quidditch; (2) the order in which the passage appears in the book (as a numeric continuous variable); and (3) the plot arc as estimated by the syuzhet package (Jockers, 2015).", "This package uses sentiment analysis to attempt to reveal the latent structure of the narrative.", "We used the default sentiment lexicon developed by the Nebraska Literary Lab supplied with the package.", "Finally, we considered typographic features that provide clues to how the text should be performed when read aloud.",
"These included exclamation marks (!), ellipses (...), words printed in all capitals, and indications that a character stutters.", "We used 3-fold cross-validation on JD-train to compare the performance of 9 regressors available via the SKLL package, including Random Forest, SVR, and various regularized linear models.", "[Footnote 8: We used v1.3 from https://github.com/EducationalTestingService/skll.]", "We used grid search with 3-fold cross-validation within each fold to fine-tune the parameters for all learners.", "We found that Lasso regression achieved the highest average performance and therefore used it as the learner for subsequent evaluations.", "Table 1 shows the performance of all models on the four datasets in our study.", "Since we are interested in explaining variation across passages rather than predicting the actual reading rate of a given narrator for a given passage, we use Pearson's r as our evaluation metric, as it captures the extent to which the predicted and the observed values deviate similarly from their respective means, and thus is not affected by differences in absolute values between the two narrators.", "The correlations between the baseline (estimates of comprehension complexity) and the WPM of the two narrators were r = 0.37-0.45.", "We also note that the direction of the correlation was opposite to our expectation: more complex passages were in fact read faster.", "Our models substantially outperformed the baseline, with r increasing from 0.4 to 0.7-0.8.", "In other words, the final models explain much of the variability in reading rates.", "Furthermore, this level of performance holds for predicting variation in reading rate for a set of unseen passages read by a different narrator (M_LA on JD-test and M_JD on LA-test), suggesting fairly strong generalization.", "Results also suggest that the prediction is somewhat easier for LA than for JD, in that evaluations on the former are in the 0.75-0.80 range, and for the latter in the 0.71-0.74 range, no matter which narrator supplied the training data.", "This could be due to Jim Dale's narration being more theatrical/artistic, hence somewhat more idiosyncratic.", "We used 3-fold cross-validation on JD-train and LA-train to further consider how much of the variation can be explained by the different groups of features discussed in Section 6.3.", "The results are shown in Table 2.",
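A minimal sketch of this model-selection and evaluation setup, using scikit-learn directly rather than the SKLL wrapper the authors used; the alpha grid is an assumption:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

def fit_wpm_model(X_train, wpm_train):
    """Tune a Lasso regressor by grid search with 3-fold cross-validation."""
    grid = GridSearchCV(Lasso(max_iter=10000),
                        param_grid={"alpha": np.logspace(-3, 1, 9)},  # assumed
                        cv=3)
    grid.fit(X_train, wpm_train)
    return grid.best_estimator_

def evaluate_r(model, X_test, wpm_test):
    """Pearson's r between predicted and observed WPM on held-out passages."""
    r, _ = pearsonr(model.predict(X_test), wpm_test)
    return r
```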
"For both narrators, models based on all groups of features outperformed models based on individual groups of features, but all groups of features were effective in explaining at least some variance in reading rate across passages.", "Timing as modeled by TTS was the highest performing feature, followed by lexico-syntactic features and story-based features.", "To summarize, we found text complexity to be a poor predictor of passage-to-passage variability in the reading rates of adult narrators.", "These findings are consistent with recent work in the oral reading fluency community which found variation in children's reading fluency across passages after controlling for grade level (see Section 2.1).", "We found that textual factors that explain a substantial share of passage-to-passage variability in reading rates include sentence-level timing factors, such as the distribution of segments, stressed syllables, sentences, and pauses, as well as features related to passage vocabulary and syntax, story, and performance.", "Given the good generalization of our results to both a new narrator and to new passages, we believe they hold promise for explaining some of the unaccounted-for variation in reading rates observed in the oral reading fluency studies; more research is necessary to explore this direction.", "Out of 107 original features, 17 features had nonzero coefficients in M_JD and 14 in M_LA, with 6 features in the overlap: timing, ellipsis, number of verbs in past tense, preposition count, Weasley, and Dumbledore.", "Additional features selected in only one of the two models included various vocabulary features (such as age of acquisition and imageability for M_JD), syntax (average word count before the main verb and contractions for M_JD), discourse (average lexical overlap in adjacent sentences), as well as story features (syuzhet and Dudley for M_LA; Ollivander and Quidditch for M_JD).", "Some of these features lend themselves to an easy explanation.", "Thus, in our study, a strong predictor of narrator slowdown was the occurrence of ellipsis (...), a mark of hesitation or thoughtfulness; these were not modeled as such by TTS.", "Similarly, the positive weight allotted to the average lexical overlap in adjacent sentences is consistent with the expectation that repeat instances would be read faster.", "Effective character features included Ollivander and Dumbledore; mentions of both of these indicate a slowdown in narration.", "One possible explanation is that passages with multiple mentions of these characters are likely to be those where they speak.", "Both of these characters are elderly; acting them out could yield a slower rate of speech.", "[Footnote 9: Barbara Roseblatt, an audiobook narration coach, explicitly advises slowing down when reading the contribution of an old character in a conversation with a young one: https://www.youtube.com/watch?v=MVmywsM9h4, 5:17. Jim Dale himself describes his image of Dumbledore as a hesitant, wheezy old man: https://www.youtube.com/watch?v=whzhEIB9Qkg, 2:45.]", "In other cases the interpretation of the feature was less straightforward.", "Thus, the feature with the second highest coefficient after timing for both narrators was the one that counted occurrences of members of the Weasley family.", "Why?", "Figure 1 plots the standardized reading rates of JD (blue), LA (orange), and TTS (black) as a function of the location in the book.", "It is clear from the plot that in addition to passage-by-passage variation there is a global pattern in narrator WPM: the narrators slow down over the first few chapters, then speed up, and slow down again in the last third.",
"https://www.youtube.com/watch?v=whzhEIB9Qkg: 2:45.", "then speed up, and slow down again in the last third.", "It is also apparent that the TTS curve is flat-ter, suggesting that some of the slowdown and especially the speedup are not due to sentence-level timing factors.", "This book-level trend can help explain the strong performance of Weasley .", "This feature covers a number of characters that are prominent in the magic world as experienced by Harry (Ron, his brothers, sister, mother); they play no role at all in the first part of the story that is based in the Muggle world.", "The large red dot in Figure 1 indicates the first passage with a non-zero count for Weasley .", "This is very close to the onset of the speedup that is not captured by TTS.", "Apparently, the speedup coincides with an important plot transition (see Behr (2005) on plot transitions in Harry Potter), which is, in turn, indicated by a character mention pattern.", "Next, we looked closely at one of the vocabulary features, specifically, imageability, calculated as the number of word tokens in a passage that belong to the MRC Imageability list (Coltheart, 1981).", "This feature has a partial correlation of -0.186 with JD controlling for TTS.", "In an attempt to identify the subset of the 1,194 words on the list that drive the correlation, we removed stopwords, all words that appeared in only one training passage, as well as short words (2-3 letters) and long words (7 letters or more).", "The partial correlation remained virtually the same (-0.178, p < 0 . 05 ).", "These manipulations left us with 573 non-stop reasonably frequent 4-6 letter words.", "These words tend to name common everyday objects and properties (henceforth, everyday list), such as body parts ( knee, skin, neck, hair, nose, face, teeth ), colors ( blue, gray, green, white, black, orange, yellow ), family ( aunt, uncle, mother, father, sis-ter, wife ), elements ( fire, water, wind, rain ) and materials ( silver, gold, stone, metal, glass, paper, silk ), eating ( cake, wine, dinner, hungry, eating ), common properties of objects ( warm, cold, broad, narrow, soft, hard, tall, short, long, clean, dirty ) and humans ( kind, evil, rude, polite, eager, proud, stupid, famous ), standard house interior ( chair, table, mirror, door, wall, room, clock ), feelings and emotions ( fear, hurt, hate, pain, anger, gloom, tired, panic, safe, boring, afraid, relief ), as well as numbers ( first, nine, half, dozen ), directional ( inside, back, front, behind, bottom ) and time expressions ( soon, hour, late, week, month, early, minute, moment ).", "These words carry the story, so to speak, in that on average about one third of all nonstop words in each passage belong to this list, albeit with substantial variation (min = 0.20; max = 0.49).", "If the effect of the feature was simply due to higher incidence of the short high frequency everyday words, we would expect a positive correlation with the reading rate; in fact, the correlation is negative, suggesting that perhaps the feature is useful as an indirect indicator of something else, rather than for the phonological properties of the words on the list.", "Variation across passages in the use of everyday words appears non-random.", "In particular, the first third of the book averages 41.4 matches per passages; the rest of the book averages 37.3.", "Given the above observations with Weasley , this is easy to explain in reference to the story line the first part of the book mainly happens in Muggle (nor-mal) world, while 
"Since overlap with the everyday list has a negative correlation with reading rate, we flip the sign of the standardized everyday token counts and overlay the plot with that of the reading rates; see the green dotted line in Figure 1.", "It is apparent that the global pattern of JD's WPM is closely traced by the feature, especially in the middle area where JD is speeding up and then slowing down again.", "The observed global slowdown, speedup, and slowdown appear to align with the traditional three-part narrative structure (exposition, complication, and resolution) (Chandler and Munday, 2016).", "One of our features (syuzhet) was based on the plot arc.", "While this feature was selected in one of the models, its partial correlation with JD after controlling for TTS was not significant.", "Our results suggest that important plot transitions can sometimes be captured indirectly by tracing patterns of word usage for specific classes of words, such as characters or everyday words, and that for skilled readers these transitions can be associated with systematic changes in reading rate.", "The main contributions of this paper are as follows.", "First, we demonstrate using a case study that variation in reading rate across passages for professional narrators is consistent across readers and that much of it can be explained using features of the texts being read.", "These findings suggest that it is possible to estimate the expected variation in durations of oral reading across texts.", "In the assessment context, this has the potential of providing a powerful control mechanism for selecting comparable passages for parallel forms of a test of oral reading; in a context where one cannot adjust the materials (such as a reading intervention using a particular book), it might be possible to adjust the measurement of reading rate to compensate for the effects of the text on the observed performance.", "Secondly, we found that timing is a very powerful feature, yet not a perfect predictor of reading rate (the two narrators are still highly correlated controlling for timing, partial r = 0.64).", "This opens up the possibility of a sophisticated assessment of oral reading using both TTS and a human benchmark to separate reading that adheres to the basic timing constraints of English speech (which constitutes a demonstrably big part of fluent reading) from the more nuanced expressive reading that TTS is not currently doing, but good human readers are.", "Thus, beyond the assessment context, our findings can also inform work on text-to-speech synthesis for book-length texts.", "Extending and validating the results reported here using additional types of text, and separating the effect of text factors on the two components of reading rate, articulation rate and pausing, is an important next step to get a more comprehensive picture of the impact of text on oral reading.", "We thank John Sabatini and Tenaha O'Reilly for many comments and suggestions that inspired this work; Binod Gyawali and Patrick Lange for preprocessing the book; Diane Napolitano for help with extracting TextEvaluator features; Learning Ally
for allowing us to use their version of the narration; Nitin Madnani, Michael Flor, Keelan Evanini, and three anonymous NAACL reviewers for their comments and suggestions." ]
[ "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "objective", "method", "method", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other" ]
[ "This paper describes the Critical Role Dungeons and Dragons Dataset (CRD3) and related analyses.", "Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.", "The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns.", "It also includes corresponding abstractive summaries collected from the Fandom wiki.", "The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction.", "For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues.", "In addition, we provide a data augmentation method that produces 34,243 summary-dialogue chunk pairs to support current neural ML approaches, and we provide an abstractive summarization benchmark and evaluation.", "Artificial intelligence applied to human conversation remains an incredibly challenging task in computer science.", "Task-oriented dialogues, which are more narrowly scoped and information dense than conversational dialogue, have been the focus of recent progress in dialogue understanding (Budzianowski et al., 2018).", "A difficulty for hypothesis testing on non-task oriented dialogues is a lack of large datasets that are fully representative of the spontaneity and noise of real world conversation, especially in the areas of storytelling and narrative beyond long-form text or monologue.", "Many potential dialogue processing tasks involve multi-speaker dialogues where narrative elements are conveyed through interaction between two or more speakers.", "These narrative elements can include changes in the states of narrative objects, Sample Dialogue Chunk 0 TRAVIS: i felt like i almost died and i had n't taken care of any of the shit that got me here in the first place . i was so worried about trying to learn about these new abilities that i felt like i got distracted . i have people i want to find and things i want to remedy . 1 MARIHSA: yeah . how did jester do ? no offense , but she seems like she 's a little bit more willfully stronger than you are . 2 TRAVIS: i mean , fuck , it 's really disturbing . like , she came out of there like a little kettle of popcorn , just no problem . i mean can i see jester ? is she nearby ? 3 MATT: jester , are you nearby ? 4 LAURA: i 'm across the bar just fucking dancing alone . -lrblaughter -rrb. 5 LIAM: just sixteen candles-ing it . 6 MARIHSA: yep . 7 TRAVIS: i was worried . there were really dark times . i would hear jester singing to herself at night and then she 'd change lyrics , and then my name would be in the lyrics sometimes . every morning , she would try and cheer everybody up that was around her , but she had the muffle ? so i could n't tell if my brain was playing tricks on me , or if she was just i do n't think there 's much that gets her down . 
it 's kind of inspiring .", "descriptions of events, or changes in the states of speakers themselves.", "Some explored sub-tasks for narrative understanding are topic understanding, character state tracking, and abstractive summarization.", "Though progress has been made in these areas, it has been on datasets where conversation has been constrained to specific topics, constrained by medium of communication, or scripted (in the case of television or movies) (Forchini, 2009).", "With datasets that involve naturally occurring dialogue, the small amount of data per narrative or speaker makes modeling challenging.", "The Critical Role show 1 is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.", "Critical Role is set in a fictional world created by the Dungeon Master (DM) Matthew Mercer.", "Separate from Matthew, there are eight other players who participate in his world as role-played characters; whose actions in the game influence the fictional world (as per the DM) along with their own character's state.", "There are multiple objectives to the game, both hidden and explicitly stated by both parties.", "For example, the DM might explicitly state a quest for the players to complete or a player's character might have an explicit personal goal that needs to be met.", "Examples of implicit objectives are non-player characters objectives created by the DM, and a player's character's back-story that influence their actions.", "This definition and expansion of the fictional world, the interaction with the world, and the development of the narrative is done entirely through unscripted spoken dialogue between the DM and the other players.", "Fans have maintained dialogue transcriptions for each episode as well as an online knowledge base (the Fandom wiki 2 ) where details about the players, characters, world, and game sessions are continuously added to.", "By extracting dialogues from the Critical Role transcripts, CRD3 aims to provide the community with a narrative-centered dataset that is unscripted, noisy, and spontaneous; while being coherent, consistent in latent speaker attributes and personalities, and considerably longer in dialogue length than similar conversational dialogue datasets.", "From the wiki, we obtain human-authored, structured summaries for each episode that support tasks of narrative understanding and extraction, topic understanding and segmentation, and summarization from conversational dialogue.", "We make five contributions in this paper.", "First, we produce a cleaned and structured dialogue dataset 1 critrole.com 2 criticalrole.fandom.com extracted from the Critical Role transcripts (CRD3-Dialogues) 3 .", "Second, we provide corresponding structured abstractive summaries for each episode, mined from the Fandom wiki (CRD3-Summaries).", "Third, we analyze the dataset and compare it to similar datasets.", "Fourth, we describe our method of data augmentation via text alignment to make this data scale-appropriate for neural ML approaches, and provide these summary-dialogue chunk pairs (CRD3-SD-pairs).", "Finally, we construct an abstractive summarization baseline from these pairs and discuss its evaluation (CRD3-Baseline).", "We believe that better abstractive summarization tools to distill information is essential given the ongoing growth of unscripted, multi-person dialogues in entertainment and business scenarios.", "We hope that CRD3 will support research and development for such tools.", "The Critical Role 
"As such, it can be compared to existing dialogue datasets and summarization datasets.", "There are currently many existing dialogue datasets (disregarding machine-to-machine) that can be roughly grouped into task-oriented, conversational, scripted, constrained, and spontaneous dialogues (Serban et al., 2015).", "Task-oriented datasets address specific tasks and are constrained by an ontology (Budzianowski et al., 2018).", "If the task is sufficiently constrained, even a human-to-human task-oriented dialogue can lack the spontaneity and noise of open-domain conversation (Haber et al., 2019), (Vaidyanathan et al., 2018), (Lison and Tiedemann, 2016).", "Agents trained on such datasets cannot be expected to model spontaneous conversational dialogue.", "Scripted dialogue datasets are closer to conversational dialogue.", "Popular scripted dialogues come from TV shows, movies, and novels, sometimes featuring further annotations (Poria et al., 2019a), (Lison and Tiedemann, 2016), (Banchs, 2012).", "Though the lack of noise can be helpful in training a dialogue system, they do contain artificialities in their linguistic properties (Forchini, 2009).", "With datasets that do have natural conversation, either with provided topics (Rashkin et al., 2019), (Godfrey et al., 1992), (Carletta et al., 2006) or truly naturally occurring (Ritter et al., 2010), (Schrading et al., 2015), (Li et al., 2017), (Leech, 1992), (Misra et al., 2015), the larger scope and noise, along with the small amount of data for individual domains, latent speaker attributes, and linguistic attributes, make tasks like response generation, abstractive summarization, and speaker personality modeling more difficult (Vinyals and Le, 2015), (Black et al., 2011), (Stent et al., 2005), (Poria et al., 2019b).", "Story-telling and game-playing dialogues can have properties from both task-oriented and conversational dialogues, as they have specific topics or tasks and are primarily human-to-human (Gratch et al., 2007), (Hung and Chittaranjan, 2009), (Afantenos et al., 2012), (Djalali et al., 2012), (Hu et al., 2016).", "In storytelling dialogues there is a clear topic constraint and purpose of conveying narratives.", "In game-play dialogues, there are clear tasks that the speakers try to complete, to either win or progress the game.", "This helps reduce topic noise and increase information density, but retains natural noise like disfluencies, false starts, fragments, and spontaneity.", "CRD3 has extensive storytelling and narrative building through dialogue, as well as game-playing, since Dungeons and Dragons is the show's focus.", "The episodes are unscripted and live-streamed, so the dialogue is naturally occurring and contains a large amount of context-switching and chit-chat.", "Since it is spoken then transcribed to text, there exists linguistic noise as usually present in naturally spoken dialogue.", "Finally, the large amount of turns combined with a consistent cast and persistent environments makes modelling based on latent speaker and linguistic attributes more feasible.", "Most of the recent abstractive summarization research is conducted on document datasets (news, scientific papers, and patents) (Hermann et al., 2015), (Cohan et al., 2018), (Sharma et al., 2019).",
"However, the methods used to perform well in these domains are less effective on dialogue (movies, personal interviews, multi-person dialogues, etc.) (Kedzie et al., 2018).", "As (Narayan et al., 2018) noted, many of the current summarization datasets highly reward extractive approaches due to the large amount of phrasal overlap in document and summary.", "Dialogue summarization is under-explored in datasets.", "For abstractive summarization, the most popular spoken dialogue datasets are AMI and Switchboard.", "Others exist, but are more constrained or purely textual (Zhou et al., 2018), (Gella et al., 2018), (Misra et al., 2015), (Louis and Sutton, 2018), (Pan et al., 2018).", "Notably, (Gorinski and Lapata, 2015), (Gorinski and Lapata, 2018) combine movie scripts with Wikipedia plot summaries and other metadata.", "Though this brings us closer to longer-form abstractive dialogue summarization data, there is significant information about the plot conveyed through script notes and descriptions, and not spoken dialogue.", "Briefly, Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling.", "[Footnote 4: dnd.wizards.com/dungeons-and-dragons]", "Players create characters to participate in a fictional world created by the Dungeon Master (DM).", "They interact with the world entirely through dialogue with the DM and use dice rolls as a way to introduce randomness to the consequences of their actions.", "Actions can include exploring the environment, talking to fictional characters (role-played by the DM), battle, and puzzle solving.", "3.2 Critical Role Video Stream Transcripts", "The CRD3 dataset consists of 159 episodes (dialogues) from two campaigns.", "Campaign 1 has 113 episodes and Campaign 2 has 46 episodes, with new episodes being actively added.", "The episodes are unscripted and live-streamed, then archived and transcribed; they are usually several hours long.", "Detailed episode information can be found on the Fandom wiki.", "[Footnote 5: criticalrole.fandom.com/wiki/List of episodes]", "The episodes usually start with some out-of-narrative logistics, then proceed to the actual D&D game, where the players communicate character action by in-character role-playing or by describing the characters' actions in third person.", "There is also substantial out-of-narrative chit-chat and context switching.", "For each episode, we extract the names and turns from the dialogue transcript and clean the data as much as possible.", "We try to resolve the inconsistencies in spelling of speaker names, use of quotes, onomatopoeia, speaker aliases (and character aliases), parse multiple speakers for turns if needed, and others that exist due to the transcripts being written over time by fans.", "Table 1: Turn and token counts for CRD3 and comparison datasets.
Metric | CRD3 | MELD | M.WOZ | AMI | CNN | DailyMail
Dialogue Count | 159 | 190 | 10438 | 142 | 92465 | 219506
Turn Count | 398682 | 13708 | 143048 | 79672 | 3074340 | 6189038
Total token count in dialogues | 5056647 | 120913 | 1886018 | 706803 | 60476397 | 154282948
Unique token count in dialogues | 42509 | 6251 | 20197 | 9958 | 341451 | 596032
Avg. ...", "We also replace all instances of character aliases in the speaker field with the real speakers' names to reduce noise.", "Along with the cleaned data, we provide the raw transcription data to document the changes via diff.", "The summaries for each episode were mined from the Critical Role Fandom wiki.", "The summaries are unique in that they are structured and offer different levels of summarization.", "Most episodes have a (1) wiki opening blurb, which offers the briefest level of summarization.", "This is followed by a synopsis section which is (usually) comprised of several parts: (2) pre-show and announcements, where some logistical information is mentioned; (3) recap, where the previous episode is summarized (usually done by Matt in the episode, and narrative focused); and (4) the episode's plot, which is the largest part and summarizes the narrative developments of the episode.", "The plot sections are also usually divided into sub-sections aligned to narrative topics.", "Sometimes the wiki also has break and post-episode sections (usually non-narrative), which we include in the dataset.", "Refer to Table 1 for turn and token count comparisons.", "CRD3's total turn count, turns per dialogue, and unique token count are substantially larger than MELD (Poria et al., 2019a) (scripted Friends TV show dataset), Multi-WOZ (Budzianowski et al., 2018) (unscripted task-oriented dialogue dataset), and AMI (Carletta et al., 2006) (unscripted meetings dataset).", "For AMI, we only consider the dialogues with available abstractive summaries.", "Multi-WOZ is dyadic, while AMI, MELD, and CRD3 have multiple speakers per dialogue.", "We extract 72 total speakers from the entire CRD3 dataset, 9 of whom are the main cast (players and DM) and make up 99.48% of the total turns; the DM alone makes up 111,994 turns.", "In comparison, the 6 main cast of MELD make up 83.27% of the total turns.", "In addition to real (human) speakers, there are also purely in-game characters role-played by the DM.", "The indication of the DM role-playing through the use of quotes seems to be mostly consistent in the transcripts.", "As a loose measure of role-playing, we find the turns that contain quotes from the DM (21,383) and compare to all other players (2,497).", "A core aspect of the game is players querying the DM, so we also measure the instances of questions from a player (turn ending in '?') followed by a DM response; a mean of 199 per dialogue with a standard deviation of 58.", "Finally, we apply the spaCy English NER model on all dialogues as a loose measure of named entity presence.", "We get a mean of 1275 entities per dialogue with a standard deviation of 344.5.", "For the summaries, we measure the token counts per summary and compare to AMI, CNN, and Daily Mail (Table 1).", "Again, CRD3 is substantially larger (though smaller in total tokens than the news datasets).", "The news datasets also feature more summary-article pairs, making them more amenable to current neural ML approaches; we address this for CRD3 in Section 4.", "We also measure the compression of the original text to summary via the ratio of tokens per summary to tokens per original text, and find they correspond to the ratios of total tokens to unique tokens.",
"Finally, we measure the average token count and standard deviation of each section of the structured summaries for the CRD3 dataset (outlined in Section 3.3): (1) wiki opening blurb: 50 ± 16.7; (2) pre-show and announcements: 183 ± 254; (3) recap: 335 ± 123.9; and (4) episode plot: 1544 ± 1553.7.", "The CRD3 dataset can be applied to many tasks, but we find abstractive dialogue summarization the most compelling task to explore in this paper.", "Due to the extensive length of the dialogues and summaries, and the frequent context switching and noise, we are presented with challenges that are poorly addressed by current modeling and evaluation methods: 1. The dataset has relatively few episodes (159); as is, this is not enough samples to train, test, and validate using current neural approaches.", "2. The current, most successful summarization approaches do not explicitly attempt to capture coreference, semantics, and pragmatics in very long documents or conversations.", "3. Current automatic summarization evaluation methods have specific failures in evaluating narrative summarization.", "We do not attempt to propose a solution for either the second or third challenges, as they are beyond the scope of this paper.", "Instead, we address the first challenge by proposing a novel data augmentation method to dramatically scale up the number of available summary-dialogue turn sequence pairs.", "That outcome enables the community to start modeling and evaluation for the dialogue summarization task, and we discuss initial benchmark results over this augmented set in Section 5.", "We found that the summaries written by fans on the wiki are detailed, mostly ordered with respect to the corresponding episode, and mostly non-repetitive.", "Due to the large number of sentences in the summaries, we can break up the summaries into chunks and align each chunk to some continuous segment of the dialogue.", "Formally, given a dialogue D consisting of T turns {t_i | i ∈ 1...T} and a summary S split into n contiguous chunks {s_i | i ∈ 1...n}, we try to determine A = {a_i | i ∈ 1...n}, where a_i is a contiguous set of turns from D (a_i = t_{j:k}) and where t_j and t_k (j ≤ k) are the earliest and latest turns in D to align to s_i; refer to Figure 2.
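To make this formalization concrete, here is a minimal sketch (ours, not the authors' code) of the scaled ROUGE-F1 scoring σ defined in Equation (2) below and of the greedy, window-bounded search described next; the set-based overlap is a simplification of the multiset n-gram counts ROUGE actually uses:

```python
def tau(text):
    """Unigrams + bigrams of a text (as a set, for a simple overlap count)."""
    tokens = text.lower().split()
    return set(tokens) | set(zip(tokens, tokens[1:]))

def sigma(chunk_tokens, window_tokens):
    """Scaled ROUGE-F1: |overlap| * ROUGE-F1 = 2*|overlap|^2 / (|s| + |a|)."""
    if not chunk_tokens or not window_tokens:
        return 0.0
    overlap = len(chunk_tokens & window_tokens)
    return 2.0 * overlap * overlap / (len(chunk_tokens) + len(window_tokens))

def greedy_align(chunks, turns, w):
    """Independently align each summary chunk to its best turn window t_{j:k}
    with at most w turns (cross-turn bigrams are ignored for simplicity)."""
    turn_tokens = [tau(t) for t in turns]
    alignment = []
    for s in chunks:
        s_tok, best = tau(s), (0.0, (0, 0))
        for j in range(len(turns)):
            window = set()
            for k in range(j, min(j + w, len(turns))):
                window |= turn_tokens[k]
                best = max(best, (sigma(s_tok, window), (j, k)))
        alignment.append(best[1])  # (j, k) bounds of the aligned window
    return alignment
```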
To determine A, we try two approaches.", "Greedy Algorithm: We make an independence assumption for all s and t and try to maximize an alignment score Φ(A; S, D), where φ(s, a) calculates an alignment score between a single s and a; the bounds for the search window w used in this maximization are determined empirically.", "For several dialogues, we tested 0 ≤ w ≤ T, but this had no change in the final assignments A and greatly increased computation time.", "To choose φ, we tried several scoring functions, including variations of ROUGE (Lin, 2004), variations of TF-IDF (Jones, 1988), and other n-gram overlap scorings.", "We selected a scaled version of the ROUGE-F1 score: φ(s, a) = |τ(s) ∩ τ(a)| · ROUGE-F1 = 2 · |τ(s) ∩ τ(a)|^2 / (|τ(s)| + |τ(a)|) (2), where τ is a tokenization function for the given text.", "The scaling via the |τ(s) ∩ τ(a)| term gives extra importance to the absolute token overlap count.", "To calculate the tokens, we found that just unigrams and bigrams gave us the least noisy alignments.", "We also found that lemmatization and stop-word removal greatly reduce the alignment quality, because of the large number of n-grams (n ≥ 2) from the turn windows that are directly used in the summaries.", "In Figure 3(a), we plot the turn indices as a function of the summary chunk indices.", "We notice the greedy alignment approach can largely preserve the order of the summary chunks relative to the dialogue turns, without any ordering constraints.", "However, there are some issues with this method.", "First, it allows out-of-order alignments of summary chunks, which we have assessed as almost always erroneous in this dataset.", "Second, the recall can be low due to early cutoffs at boundaries, generally because of extensive chit-chat in between two salient utterances (Figure 3).", "Forcing boundaries between a_i and a_{i+1} to be contiguous leads to lower precision, due to salient utterances being incorrectly assigned near the borders of the turn windows.", "Needleman-Wunsch Algorithm: The recursive approach to determining A involves imposing strict order constraints using the sequence alignment algorithm Needleman-Wunsch (Needleman and Wunsch, 1970), similar to Nelken and Shieber (2006).", "The algorithm imposes order by forcing a_i and a_{i+1} to be assigned to contiguous turn windows.", "We can also forgo the maximization over some window w, as the algorithm does this by virtue of its score maximization function.", "We tried several functions for φ, including the TF-IDF function proposed by Nelken and Shieber (2006), and found that (2) still performs best.", "To use the algorithm, we first apply φ independently for each turn (of size 1) and summary chunk to generate a match-score matrix M of size T × n.", "We then build an alignment score matrix H of size (T + 1) × (n + 1) using: H_{y,x} = max(H_{y-1,x-1} + M_{y-1,x-1}, H_{y-1,x} + M_{y-1,x-1}, H_{y,x-1} + M_{y-1,x-1}) (3), with M_{y-1,x-1} = φ(s_{x-1}, t_{y-1}); 1 ≤ y ≤ T; 1 ≤ x ≤ n; and the first column and row of H initialized to y and x respectively.", "We perform the traceback from H_{T+1,n+1} to H_{0,0} to generate the alignment A, where each a ∈ A can be seen as a vertical line in the traced path (Figure 4 visualizes this traceback along the H matrix).", "We exclude gap penalties when generating H, since we want to allow multiple turns to be assigned to a summary chunk and we want to allow a single turn to overlap several summary chunks.", "We also notice that column-wise normalization on M reduced the quality of the alignments substantially, because large scores can act as an anchor for the algorithm to localize erroneous alignments.",
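The two scoring components just defined translate directly into code. Below is a minimal Python sketch of the scaled ROUGE-F1 from (2) (with unigram-plus-bigram tokenization) and of the gap-free H-matrix fill and traceback from (3). The zero initialization and diagonal-first tie-breaking are simplifying assumptions, not the authors' exact implementation.

from collections import Counter

def tau(text):
    # Tokenization for Eq. (2): unigrams plus bigrams, which the authors
    # found gave the least noisy alignments.
    words = text.lower().split()
    return Counter(words) + Counter(zip(words, words[1:]))

def phi(s, a):
    # Scaled ROUGE-F1 from Eq. (2): |tau(s) & tau(a)| * ROUGE-F1.
    ts, ta = tau(s), tau(a)
    overlap = sum((ts & ta).values())
    denom = sum(ts.values()) + sum(ta.values())
    return 2.0 * overlap * overlap / denom if denom else 0.0

def align(turns, chunks):
    # Gap-free Needleman-Wunsch fill of H following Eq. (3); since the
    # same match score m is added to all three predecessors, the max can
    # be taken over H alone. Simplification: H starts at 0 everywhere.
    T, n = len(turns), len(chunks)
    M = [[phi(s, t) for s in chunks] for t in turns]  # T x n match scores
    H = [[0.0] * (n + 1) for _ in range(T + 1)]
    for y in range(1, T + 1):
        for x in range(1, n + 1):
            H[y][x] = max(H[y - 1][x - 1], H[y - 1][x], H[y][x - 1]) + M[y - 1][x - 1]
    # Traceback: vertical runs in the path assign several turns to one
    # summary chunk; ties are broken in favor of the diagonal move.
    a, y, x = [[] for _ in range(n)], T, n
    while y > 0 and x > 0:
        a[x - 1].append(y - 1)
        best = max(H[y - 1][x - 1], H[y - 1][x], H[y][x - 1])
        if best == H[y - 1][x - 1]:
            y, x = y - 1, x - 1
        elif best == H[y - 1][x]:
            y -= 1
        else:
            x -= 1
    return [sorted(turn_ids) for turn_ids in a]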
"It forces the algorithm to 'catch up' or 'pull back' the turn alignments to include the high M_{y,x} in the final path.", "Normalization also reduces incentives to keep the path going down a column and heavily favors moving to the next column (summary chunk).", "We can visualize the improvements in Figure 3(b), where we also notice the algorithm captures turns past t_1833 (up to t_1878) that were previously ignored, leading to higher recall; we manually verified this.", "The strong ordering constraint is also the source of some noise.", "For example, if a summary alignment overshoots the correct turn window by a large margin, it is likely that the subsequent summaries will also be misaligned due to the contiguity constraint.", "However, the localization effect due to large M scores helps mitigate this.", "Another source of noise is the forced alignment of the first and last turns in dialogues that continue past the summary.", "We also analyze the distribution of the scores along the paths (each path normalized to 1) traced on M with respect to the nine main players (Table 2).", "This gives us the distribution of the player contributions to the summaries.", "Matt's turns contribute most to the summaries, since he contributes the most salient narrative points.", "As the Dungeon Master, he is responsible for world building and the narrative's interaction with the other players.", "We can see the other players have much lower mean scores (only the first row of Table 2 survives here: Player MATT, mean 0.0307).", "One explanation for this is that they engage in more non-narrative chit-chat than Matt, which leads to a lower mean score.", "Data Augmentation: Running the Needleman-Wunsch algorithm for a dialogue D will give us N ⟨s, a⟩ pairs.", "We can extend this by calculating S as S_0 … S_{C-1}, where C is the chunk size and S_x is the shift in the starting point of the contiguous chunking windows.", "For each of these S_x, we can then determine an alignment A_x.", "This method increases our ⟨s, a⟩ pairs by a factor of C.", "We can go further by running this for different chunk sizes.", "For our experiment, we chose to run this algorithm for C = 2, 3, and 4 sentences.", "We remove dialogues with |S| ≤ 10 chunks (since there are some incomplete wikis) and get 55385 ⟨s, a⟩ pairs.", "To reduce noise, we also: (1) impose 2 < |t_{j:k}| ≤ 100; and (2) strip out pairs where s_i contains 'Q:' (which signifies a differently formatted question-answer segment in an episode).", "We end up with 34243 pairs (Table 3), a substantial increase from the original 159 ⟨summary, dialogue⟩ pairs.", "Refer to Figure 1 and to the Appendix for examples of the summaries and alignments.", "These are then split into 26232 training, 3470 validation, and 4541 testing ⟨s, a⟩ pairs; refer to the Appendix for details.", "We calculate precision and recall with respect to the turns on a random sample of 100 pairs from the training split of these 34243 pairs and obtain a precision of 0.8692 and a recall of 0.9042.", "Refer to the Appendix for the precision and recall calculation method.", "Example summary excerpt: 'The Mighty Nein make their way up the ladder and through the hatch into the Keystone Pub proper, where they order breakfast. A hooded female wearing a long green cloak covering her left face and side approaches and asks if they're heading into the swamp today; she's desperate to go there herself. Calianna apologizes for bothering them, but she couldn't help but overhear their conversation last night.'",
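The chunk-shift augmentation described in the Data Augmentation paragraph above can be sketched as follows; the function and variable names are hypothetical. Each yielded chunking would then be fed to the aligner, multiplying the number of ⟨s, a⟩ pairs by C for every chunk size.

def shifted_chunkings(summary_sentences, chunk_sizes=(2, 3, 4)):
    # For each chunk size C, shifting the start of the contiguous
    # chunking windows by x = 0..C-1 yields C distinct chunkings S_x.
    for C in chunk_sizes:
        for shift in range(C):
            chunks, i = [], 0
            if shift:  # leading partial chunk created by the shift
                chunks.append(summary_sentences[:shift])
                i = shift
            while i < len(summary_sentences):
                chunks.append(summary_sentences[i:i + C])
                i += C
            yield C, shift, [" ".join(c) for c in chunks]

Aligning every yielded chunking with the dialogue is what multiplies the original 159 pairs into the tens of thousands of pairs reported above.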
"We find precision errors are mostly from extraneous trailing or leading turns attached to the properly aligned set of turns, and almost never from complete misalignment.", "We find recall errors are from turn sequences that start too late or end too early, and also almost never from complete misalignment.", "In most cases where a contains a recall error, we notice the precision for that a is 1.0, because a ends up being a subset of the correct t_{j:k}.", "We posit this is due to the strong order constraints of the algorithm and our post-alignment filtering, which removes the pairs with the highest risk of complete misalignment.", "As a measure of the quality of the human-written summaries, we also perform a question-answering task on a random sample of 50 ⟨s_i, a_i⟩ pairs from the filtered set.", "First, the questioner records two questions and answers per pair, with the questions and answers coming only from the summaries s_i.", "For each pair, there is one factoid question with an open-ended answer and one multiple-choice question with four possible answers.", "The factoid question can be answered by yes/no responses, entity names, or short text.", "The multiple-choice question has at most one correct answer of the four contained in the summary chunks (Figure 5).", "The questions are then answered by another person, using only the aligned turns a_i from the pair.", "The scores are recorded in Table 4.", "Out of the 19 incorrect answers, we found that 17 of them were due to summary alignment errors.", "This is where the correct information was in the dialogue, but not in the aligned set of turns.", "The other 2 were due to misinterpretation of the question when answering.", "Table 4 (correct and incorrect answers for the Q&A evaluation method, measuring precision w.r.t. the human-written summaries in the ⟨s_i, a_i⟩ pairs): Free Form: 39 correct, 11 incorrect, 78% precision; Multiple Choice: 42 correct, 8 incorrect, 84% precision; Total: 81 correct, 19 incorrect, 81% precision.", "This indicates that, with perfect alignment, all questions could have been answered correctly, meaning that what is in the summaries is an accurate reflection of what is in the transcript.",
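The turn-level precision and recall reported earlier reduce to interval overlaps between a predicted turn window and a gold one; a minimal sketch with a hypothetical helper (not the Appendix's exact method):

def window_precision_recall(pred, gold):
    # pred and gold are inclusive (j, k) turn-index windows for one pair.
    p = set(range(pred[0], pred[1] + 1))
    g = set(range(gold[0], gold[1] + 1))
    overlap = len(p & g)
    return overlap / len(p), overlap / len(g)

# A predicted window nested inside the gold one has precision 1.0 but
# imperfect recall, matching the recall-error pattern described above:
# window_precision_recall((12, 20), (10, 25))  ->  (1.0, 0.5625)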
"However, we recognize that not all of the information in the transcripts is necessarily in the summaries; for example, out-of-game information.", "We also notice that multiple-choice questions have a higher accuracy due to easier questions and the additional context provided by the set of answers themselves, and not due to random guessing.", "We also found that 12 incorrect answers were due to 'no answer', meaning the answerer did not feel they had enough information to attempt an answer.", "For the other 7, the answerer felt that at least some information pertaining to the question was available in the aligned turns.", "Unlike ROUGE precision, which relies on word overlap, this evaluation can incorporate latent semantic and contextual information.", "It is important to note that the latent information used when answering varies greatly between people, making this method subjective with respect to the answerer.", "In future work, it would be interesting to measure the variance of accuracy and information in the answers using a large number of people.", "We establish a baseline for abstractive summarization by using the neural summarization architecture introduced by Chen and Bansal (2018) (code: github.com/ChenRocks/fast_abs_rl).", "The generated data has noise due to imperfections in the alignment method and due to potentially broken coreference, so we use the model in a semi-supervised fashion.", "We choose this architecture as a baseline for several reasons: (1) The paradigm for narrative summarization from noisy dialogue is close to the paradigm assumed by Chen and Bansal.", "Namely, first extract salient sentences, then abstractively rewrite them, with an included copy mechanism to deal with OOV words.", "(2) The ability to analyze the extractor behavior separately from the abstractor, due to the independence of training (before connection by the reinforcement learning mechanism).", "(3) The speed of training due to the shortened input-target pairs.", "We briefly describe the model: first, the model optimizes a sentence extraction module and an abstractive rewrite module independently using maximum-likelihood objectives.", "Then, end-to-end training is achieved by applying policy gradient methods (due to the non-differentiable hard extraction performed by the extractor).", "The extractor uses a temporal convolutional model to obtain hierarchical sentence representations, then selects sentences using a pointer network.", "The abstractor is an encoder-aligner-decoder network with a copy mechanism for OOV words.", "Due to the large number of non-narrative chit-chat turns between salient turns, we train the extractor on a sequence of turns rather than individual sentences.", "We use precision, recall, and F-1 scores of ROUGE-1, 2, and L, along with METEOR (Denkowski and Lavie, 2014), to evaluate the generated summaries (Table 5; only a fragment of the table survives here: Extractive (rnn-ext + RL), P, R-1 = 20.83).", "We run these metrics on the test set, using both the combined extractive-abstractive model and the purely extractive model, for analysis of what turns are considered salient.", "The purely extractive model significantly outperforms the combined model in recall and in F-1, due to the much higher recall.", "In the validation set, we notice the recall measures are improved by the n-grams in summary chunks that have indirect speech ('fjord says', 'he says', etc.).",
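One way to reproduce the metric computation described above is with off-the-shelf packages; the rouge-score and NLTK choices below are assumptions, not necessarily the authors' tooling (METEOR additionally requires NLTK's WordNet data to be downloaded).

from rouge_score import rouge_scorer
from nltk.translate.meteor_score import meteor_score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

def evaluate(reference, generated):
    # ROUGE gives precision/recall/F-1 per variant; METEOR a single score.
    rouge = scorer.score(reference, generated)
    meteor = meteor_score([reference.split()], generated.split())
    return rouge, meteor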
"In the validation Generated Abstractive Summary he says he feels worried about trying to learn about these abilities and abilities .", "This high rate of 3-gram overlap motivates changes to the modeling architecture that are more lenient towards phrasal copy instead of just enabling word copy and depending on the learned language model and the word level copy probability.", "The grammatical person shift and significant paraphrasing of turns lower the precision of the purely extractive model, leading to a higher precision in the combined model.", "For example in Figure 1, beau asks about jester . from the human-authored summary is entirely from turn 1, but the only overlapping word is jester.", "From Figure 6, we can see the encoder-decoder model learns the grammatical shift behavior but doesn't include the proper nouns, so the resulting summary misses important speaker information that is included in the human generated summaries.", "For example, Beau is the character alias for Marisha, which is latent information that was not available to the model at the time of decoding/generation.", "We also note the encoder-decoder module's learned language model is biased by the narrative elements present in the training dialogue chunks.", "This causes decoding of similar, but fundamentally different, narrative focused turns to be noisy and nonfactual.", "Compared to news summarization metrics with the same model architectures, the dialogue summarization metrics are substantially lower.", "The disparity in model performance can be attributed to content selection differences between news where effective summary information is available early in an article (position bias) and dialogue where the positional effects are not observed.", "Other factors include the grammatical and stylistic differences explored earlier.", "Our findings also confirm the findings of (Kedzie et al., 2018), which compares content selection methods for summarization across various domains (CNN/DM, NYT, DUC, Reddit, AMI, and PubMed).", "They find a similar disparity in R-2 (recall) and METEOR scores between the news domain and the AMI meeting dialogue domain.", "They also include an oracle measurement as a performance ceiling; it achieves a max METEOR score of 17.8 and R-2 recall of 8.7 on the AMI corpus.", "Though ROUGE and METEOR are more useful for relative measurements than absolute, we find the current evaluation methods in summarization lead to skewed and less informative scores in dialogue domains.", "The problem is compounded in narrative summarization due to narrative specific lexical information, including speaker aliases.", "For example, METEOR specifically considers synonyms, paraphrases, and function words; all of which can change a lot from narrative to narrative.", "Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics.", "In this paper, we contribute the Critical Role Dungeons and Dragons Dataset (CRD3), a linguistically rich dataset with dialogue extracted from the unscripted, live-streamed show Critical Role and long, abstractive summaries extracted from the Critical Role Fandom wiki.", "We provide a data augmentation method to help the community start modeling and evaluation for the dialogue summarization task and discuss the initial modeling benchmark results.", "We find current paradigms in summarization modeling to have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain 
"We hope CRD3 offers useful, unique data for the community to further explore dialogue modeling and summarization.", "We also hope that the dataset can be added to in the future with multi-modal extractions, more granular annotations, and deeper mining of the wiki.", "First and foremost, we thank the Critical Role team for creating a fun, entertaining, organized, and growing set of livestreams that we used in this dataset.", "Next, we thank the CRTranscript team for providing high-quality transcripts of the show for the community, and we thank all the contributors of the Critical Role Wiki.", "Finally, we thank Rahul Jha for providing feedback and Oli Bailey for contributing evaluation questions." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "method", "method", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "method", "other", "other", "other" ]
[ "Current advances in machine translation (MT) increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and reduces errors.", "This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals.", "Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while they are of limited use for longer insertions.", "On the other hand, speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions but offer less potential for deletion and reordering.", "Overall, participants were enthusiastic about the new modalities and saw them as good extensions to mouse & keyboard, but not as a complete substitute.", "As machine translation (MT) has been making substantial improvements in recent years 1 , more and more professional translators are integrating this technology into their translation workflows (Zaret-skaya et al., 2016; Zaretskaya and Seghiri, 2018).", "The process of using a pre-translated text as a basis and improving it to create the final translation is called post-editing (PE).", "Older research showed a strong dislike of translators towards PE (Lagoudaki, 2009; Wallis, 2006), and more recent studies agree that translators are still cautious about PE and question its benefits (Gaspari et al., 2014; Koponen, 1 WMT 2019 translation task: http://matrix.statmt.org/, accessed 16/04/2020 2012), partly because they see it as a threat to their profession (Moorkens, 2018).", "Experienced translators in particular exhibit rather negative attitudes (Moorkens and O'Brien, 2015).", "Conversely, novice translators have been shown to have more positive views on PE (Yamada, 2015).", "Green et al. (2013) demonstrated that some translators actually strongly prefer PE and argue that users might have dated perceptions of MT quality.", "Apart from translators' preference, productivity gains of 36% when using modern neural MT for PE (Toral et al., 2018) already result in substantial changes in translation workflows (Zaretskaya and Seghiri, 2018) and will probably continue to do so the better MT becomes.", "Thus, PE requires thorough investigation in terms of interface design, since the task changes from mostly text production to comparing and adapting MT and translation memory (TM) proposals, or put differently, from control to supervision.", "Previous elicitation-based research (Herbig et al., 2019a) investigated how translation environments could better support the PE process and found that translators envision PE interfaces relying on touch, pen, and speech input combined with mouse and keyboard as particularly useful.", "A small number of prototypes exploring some of these modalities also showed promising results (Teixeira et al., 2019).", "This paper presents MMPE, the first translation environment combining standard mouse & keyboard input with touch, pen, and speech interactions for PE of MT. 
The results of a study with 11 professional translators show that participants are enthusiastic about having these alternatives, even though time measurements and subjective ratings do not always agree.", "Overall, pen and touch modalities are well suited for deletion and reordering operations, while speech and multi-modal interaction are suitable for insertions and replacements.", "In this section, we present related research on translation environments, with a particular focus on existing multi-modal approaches to PE.", "Most professional translators nowadays use so-called CAT (computer-aided translation) tools (van den Bergh et al., 2015).", "These provide features like MT and TM together with quality estimation and concordance functionality (Federico et al., 2014), alignments between source and MT (Schwartz et al., 2015), interactive MT offering assistance like auto-completion (Green et al., 2014b,a), or intelligibility assessments (Coppers et al., 2018; Vandeghinste et al., 2016, 2019).", "While TM is still often valued higher than MT (Moorkens and O'Brien, 2017), a recent study by Vela et al. (2019) shows that professional translators who were given a choice between translation from scratch, TM, and MT chose MT in 80% of the cases, highlighting the importance of PE of MT.", "Regarding the time savings achieved through PE, Zampieri and Vela (2014) find that PE was on average 28% faster for technical translations, Aranberri et al. (2014) show that PE increases translation throughput for both professionals and lay users, and Läubli et al. (2013) find that PE also increases productivity in realistic environments.", "Furthermore, it has been shown that PE not only leads to reduced time but also reduces errors (Green et al., 2013).", "In addition, PE changes the interaction pattern (Carl et al., 2010), leading to a significantly reduced amount of mouse and keyboard events (Green et al., 2013).", "Therefore, we believe that other modalities or combinations thereof might be more useful for PE.", "Dictating translations dates back to the time when secretaries transcribed dictaphone content on a typewriter (Theologitis, 1998); however, the use of automatic speech recognition also has a long history for translation (Dymetman et al., 1994; Brousseau et al., 1995).", "A more recent approach, called SEECAT (Martinez et al., 2014), investigates the use of automatic speech recognition (ASR) in PE and argues that its combination with typing could boost productivity.", "A survey regarding speech usage with PE trainees (Mesa-Lao, 2014) finds that they have a positive attitude towards speech input and would consider adopting it, but only as a complement to other modalities.", "In a small-scale study, Zapata et al. (2017) found that ASR for PE was faster than ASR for translation from scratch.",
"Due to these benefits, commercial CAT tools like memoQ and MateCat are also beginning to integrate ASR.", "The CASMACAT tool (Alabau et al., 2013) allows the user to input text by writing with e-pens in a special area.", "A vision paper (Alabau and Casacuberta, 2012) proposes to instead use e-pens to post-edit sentences with few errors in place, and showcases symbols that could be used for this.", "Studies on mobile PE via touch and speech (O'Brien et al., 2014; Torres-Hostench et al., 2017) show that participants especially liked reordering words through touch drag and drop, and preferred voice when translating from scratch, but used the iPhone keyboard for small changes.", "Zapata (2016) also explores the use of voice- and touch-enabled devices; however, the study did not focus on PE, and used Microsoft Word instead of a proper CAT environment.", "Teixeira et al. (2019) explore a combination of touch and speech for translation from scratch, translation using TM, and translation using MT.", "In their studies, touch input received poor feedback since (a) their tile view (where each word is a tile that can be dragged around) made reading more complicated, and (b) touch insertions were rather complex to achieve within their implementation.", "In contrast, integrating dictation functionality using speech was shown to be quite useful and was even preferred to mouse and keyboard by half of the participants.", "The results of an elicitation study by Herbig et al. (2019a) indicate that pen, touch, and speech interaction should be combined with mouse and keyboard to improve PE of MT; in contrast, other modalities like eye tracking or gestures were seen as less promising.", "In summary, previous research suggests that professional translators should switch to PE to increase productivity and reduce errors; however, translators themselves are not always eager to do so.", "It has been argued that the PE process might be better supported by using different modalities in addition to the common mouse and keyboard approaches, and an elicitation study suggests concrete modalities that should be well suited for various editing tasks.", "A few of these modalities have already been explored in practice, showing promising results.", "However, the elicited combination of pen, touch, and speech, together with mouse and keyboard, has not yet been implemented and evaluated.", "We present the MMPE prototype (see Figure 1), which combines these modalities for PE of MT.", "A more detailed description of the prototype can be found in Herbig et al. (2020), and a video demonstration is available at https://youtu.be/H2YM2R8Wfd8.", "On the software side, we decided to use Angular for the frontend and node.js for the backend.", "As requested in Herbig et al. (2019a), we use a large tiltable touch & pen screen for the study (see Figure 1b): the Wacom Cintiq Pro 32-inch display with the Flex Arm, which allows the screen to be tilted and moved flat on the table, or to be moved up to work in a standing position.",
"We further use the Sennheiser PC 8 headset for speech input.", "The goal of this hardware setup was to limit induced bias as much as possible, in order to get results on the modalities and not on a flawed apparatus.", "We implemented a horizontal source-target layout (see Figure 1a), where each segment's status (unedited, edited, confirmed) is visualized between source and target.", "On the far right, support tools are offered as requested in Herbig et al. (2019a): (1) the unedited MT output, to which users can revert their editing using a button, and (2) a corpus combined with a dictionary.", "The current segment is enlarged, thereby offering space for handwritten input and allowing the user to view a lot of context while still seeing the current segment in a comfortable manner (Herbig et al., 2019a; see Figure 1a).", "The view for the current segment is further divided into the source segment (left) and two editing planes for the target: one for handwriting and drawing gestures (middle), and one for touch deletion & reordering as well as standard mouse and keyboard input (right).", "Both initially show the MT proposal and synchronize on changes to either one.", "The reason for having two editing fields instead of only one is that some interactions are overloaded; e.g., a touch drag can be interpreted as both handwriting (middle) and reordering (right).", "Undo and redo functionality, as well as confirming segments, are also implemented through buttons between the source and target texts, and can further be triggered through hotkeys.", "The target text is spell-checked, as a lack of this feature was criticized in Teixeira et al. (2019).", "For handwriting recognition (see Figure 1c), we use the MyScript Interactive Ink SDK.", "Apart from merely recognizing the written input, it offers gestures like strike-through or scribble for deletions (see https://developer.myscript.com/docs/concepts/editing-gestures/, accessed 16/04/2020).", "For inserting words, one can directly write into an empty space, or create such a space first by breaking the line (drawing a long line from top to bottom) and then handwriting the word.", "All changes are immediately interpreted; i.e., striking through a word deletes it immediately, instead of showing it in a struck-through visualization.", "The editor further shows the recognized text immediately at the very top of the drawing view in a small gray font, where alternatives for the current recognition are offered.", "Apart from using the pen, the user can also use his/her finger or the mouse on the left-hand editing view for handwriting.", "On the right-hand editing view, the user can delete words by simply double-tapping them with pen/finger touch, or reorder them through a simple drag-and-drop procedure (see Figure 1d), which visualizes the picked-up word as well as the current drop position, and automatically fixes spaces between words and punctuation marks.", "This reordering functionality is strongly related to Teixeira et al. (2019); however, only the currently dragged word is temporarily visualized as a tile, to offer better readability.",
"Naturally, the user can also edit using mouse and keyboard, where all common navigation inputs work as expected from other software.", "For speech recognition, we stream the audio recorded by the headset to IBM Watson servers to receive a transcription, which is then analyzed in a command-based fashion.", "Thus, our speech module not only handles dictations as in Teixeira et al. (2019), but can also correct mistakes in place.", "As commands, the user has the option to insert, delete, replace, and reorder words or subphrases.", "To specify the position, if it is ambiguous, one can define anchors as in 'after'/'before'/'between', or define the occurrence of the entity ('first'/'second'/'last').", "A full example is 'insert A after second B', where A and B can be words or subphrases.", "Character-level commands are not supported, so instead of e.g. deleting a suffix, one should replace the word.", "Last, the user can use a multi-modal combination, i.e., pen/touch/mouse combined with speech.", "For this, the cursor first needs to be positioned on or next to a word, or the word needs to be long-pressed with pen/touch, resulting in a pickup visualization.", "Afterwards, the user can then use a simplified voice command like 'delete', 'insert A', 'move after/before A / between A and B', or 'replace by A', without needing to specify the position/word.", "In a log file, we store all concrete keypresses, touched pixel coordinates, etc.", "Much more importantly, we directly log all UI interactions (like segmentChange), as well as all text manipulations (like replaceWord), together with the concrete changes (e.g., with the oldWord, newWord, and complete segmentText).", "The prototype was evaluated by professional translators (the study was approved by the university's ethical review board; freelance participants were paid their usual fee, while in-house translators participated during working hours; the data and analysis scripts can be found at https://mmpe.dfki.de/data/ACL2020/).", "We used EN-DE text, as our participants were German natives and we wanted to avoid ASR recognition errors as reported in Dragsted et al. (2011).",
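The speech commands described above lend themselves to simple pattern-based analysis. The following is a purely hypothetical, heavily simplified parser for transcribed commands of the 'insert A after second B' form; MMPE's actual command analysis is not published in this excerpt.

import re

# Simplified grammar for the command shapes described above; a sketch,
# not the real implementation.
COMMAND = re.compile(
    r"^(?P<op>insert|delete|replace|move)\s+(?P<arg>.+?)"
    r"(?:\s+(?P<anchor>after|before|between)\s+"
    r"(?:(?P<occurrence>first|second|last)\s+)?(?P<ref>.+))?$",
    re.IGNORECASE,
)

def parse_command(transcript):
    m = COMMAND.match(transcript.strip())
    return m.groupdict() if m else None

# parse_command("insert great after second translation") ->
# {'op': 'insert', 'arg': 'great', 'anchor': 'after',
#  'occurrence': 'second', 'ref': 'translation'}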
"In the following, 'modalities' refers to Touch (T), Pen (P), Speech (S), Mouse & Keyboard (MK), and Multi-Modal combinations (MM, see Section 3.5), while 'operations' refers to Insertions, Deletions, Replacements, and Reorderings.", "The experiment consisted of the following phases and took approximately 2 hours per participant.", "4.1 Introduction & Independent PE: First, participants filled in a questionnaire capturing demographics as well as information on CAT usage.", "Then the experimenter introduced all of the prototype's features in a prepared order to ensure a similar presentation for all participants.", "After that, participants were given 10-15 minutes to explore the prototype on their own.", "We specifically told them that we were more interested in them exploring the presented features than in receiving high-quality translations.", "This phase had two main purposes: (1) to let the participants become familiar with the interface (e.g., how best to hold the pen) and to resolve questions early on; (2) to see how participants intuitively work with the prototype.", "Two experimenters carefully observed the participants and took notes on interesting behavior and questions asked.", "The central part of the study was a structured test of each modality for each of our four operations.", "For this, we used text from the WMT news test set 2018.", "Instead of actually running an MT system, we manually introduced errors into the reference set to ensure that there was only a single error per segment.", "Overall, four sentences had to be corrected per operation using each modality, which results in 4 × 4 × 5 = 80 segments per participant.", "Within the four sentences per operation, we tried to capture slightly different cases, like deleting single words or a group of words.", "For this, we adapted the prototype such that a pop-up occurs when changing the segment, which shows (1) the operation to perform and which modality to use, (2) the source and the MT, which is the reference with the introduced error, as well as (3) the correction to apply, which uses color, bold font, and strike-through to easily show the required change to perform.", "The reason why we provided the correction to apply was to ensure a consistent editing behavior across all participants, thereby making subjective ratings and feedback as well as time measurements comparable.", "The logging functionality was extended such that times between clicking 'Start' and confirming the segment were also logged.", "To avoid ordering effects, the participants went through the operations in counter-balanced order, and through the modalities in random order.", "After every operation (i.e., after 4 × 5 = 20 segments), and similar to Herbig et al. (2019a), participants rated each modality for that operation on three 7-point Likert scales ranging from 'strongly disagree' to 'strongly agree', namely as to whether the interaction is a good match for its intended purpose, whether it is easy to perform, and whether it is a good alternative to the current mouse and keyboard approach.",
"Furthermore, we asked the translators to give us their thoughts on advantages and disadvantages of the modalities, and how they could be improved.", "Afterward, participants further had to order the 5 modalities from best to worst.", "In the end, after completing all 80 segments, we performed a final unstructured interview to capture high-level feedback on the interface as well as things we missed in our implementation.", "While a direct comparison to state-of-the-art CAT tools would be interesting, the results would be highly questionable, as the participants would be expert users of their day-to-day tool and novice users of our tool.", "Furthermore, the focus of our prototype was on the implemented modalities, while widely used features like a TM or a consistency checker are currently missing.", "Since our main question was whether the newly implemented features have potential for PE of MT or not, we focus on qualitative feedback, ratings, and timing information, which is more relevant to this research question.", "Overall, 11 (f=10, m=1, 2 left-handed) professional EN-DE translators participated in the experiment: 3 freelance and 8 in-house translators.", "Their ages ranged from 30 to 64 (avg = 41.6, SD = 9.3), with 3 to 30 years of professional experience (avg = 13.3, SD = 7.4) and a total of 27 language pairs (avg = 2.6); the small number of participants and their age distribution (10 participants aged 30 to 48, and only one aged 64) did not allow us to analyze the effect of age on the results.", "All translators translate from EN to DE, and all describe their German language skills as native and their English skills as C1 to native level.", "For most participants, the self-rated CAT knowledge was good (6 times) or very good (4 times, 1 neutral).", "However, participants were less confident about their PE skills (4 neutral, 4 good, 3 very good), thereby matching well with the CAT usage surveys.", "Years of experience with CAT tools ranged from 3 to 20 (avg = 11.5, SD = 5.1), where participants had used between 1 and 10 distinct CAT tools (avg = 4.9, SD = 2.7).", "Figure 2 shows the subjective ratings provided for each modality and operation on the three scales Goodness, Ease of use, and Good alternative to mouse & keyboard, after having tested each feature (see Section 4.2).", "As can be seen, participants tended to give similar ratings on all three scales.", "For insertions and replacements, which required the most text input, the classical mouse & keyboard approach was rated highest; however, the multi-modal combination and speech were also perceived as good, while pen and especially touch received lower scores.", "For deletions and reorderings, pen, touch, and mouse & keyboard were all perceived as very good, where P and T were ranked even slightly higher than MK for reorderings.", "Speech and multi-modal were considered worse here.", "After each operation, participants ordered the modalities from best to worst, with ties being allowed.", "As an example, for 'MM & S best, then P, then MK, and last T', we assigned 0.5 times the 1st and 0.5 times the 2nd position to both MM and S, while P got 3rd, MK 4th, and T the 5th position.", "To get an overall ordering across participants, we then multiplied the total number of times a modality was rated 1st/2nd/3rd/4th/5th by 1/2/3/4/5 (similar to Zenner and Krüger (2017)).",
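The tie-aware ordering score just described can be computed as follows; this sketch uses fractional shared positions, which is equivalent to multiplying the (possibly fractional) counts of 1st/2nd/3rd/4th/5th placements by 1/2/3/4/5.

def ordering_scores(rankings, modalities=("MK", "P", "T", "S", "MM")):
    # rankings: one list per participant of tie groups from best to worst,
    # e.g. [["MM", "S"], ["P"], ["MK"], ["T"]] for the example above.
    totals = {m: 0.0 for m in modalities}
    for ranking in rankings:
        pos = 1
        for tie_group in ranking:
            # Tied modalities share their positions fractionally.
            shared = sum(range(pos, pos + len(tie_group))) / len(tie_group)
            for m in tie_group:
                totals[m] += shared
            pos += len(tie_group)
    return totals  # a lower total means better suited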
"Consequently, a lower score indicates that this modality is better suited for the operation.", "The scores for each modality and operation are: Insertions: 1st MK (20.5), 2nd MM (26.5), 3rd S (31.5), 4th P (38.5), 5th T (48); Deletions: 1st P (21.5), 2nd MK (29), 3rd T (31.5), 4th MM (41), 5th S (42); Replacements: 1st MK (21), 2nd MM (29), 3rd S (30), 4th P (35), 5th T (50); Reorderings: 1st P (21.5), 2nd T (31), 3rd S (35.5), 4th MK (36), 5th MM (41).", "5.4 Timings: We analyzed the logged duration of each modality-operation pair.", "Note that this is the time from clicking 'Start' until confirming the segment; thus, it includes recognition times (for speech and handwriting) and really measures how long it takes until a participant is satisfied with the edit.", "Even though participants were instructed to provide feedback or ask questions only while the pop-up is shown, i.e., while the time is not measured, participants infrequently did so during editing.", "We filtered out such outliers and averaged the 4 sentences of each modality-operation pair per participant to get a single value, thereby making the samples independent for the remaining analyses.", "Figure 3 shows boxplots of the dataset for the 20 modality-operation pairs.", "For statistical analysis, we first conducted Friedman tests per operation, showing us that significant differences exist for each operation (all p < 0.001).", "Afterward, post-hoc analyses using Wilcoxon tests with Bonferroni-Holm correction showed which pairs of modalities are significant and how large the effect r is.", "For insertions, MK was by far the fastest modality, followed by MM and S. All differences except for MM vs. S and T vs. P are statistically significant with large effect sizes (all p < 0.01, all r > 0.83).", "As expected, deletions were faster than insertions.", "Here, MK, T, and P were the fastest, followed by S; MM was slowest by far.", "Regarding significance, all modalities were significantly faster than MM, and MK was significantly faster than S (all p < 0.01, all r > 0.88).", "For reordering, P and T were the fastest, followed by MK and S. The statistical analysis revealed that T is significantly faster than all modalities except P, both P and MK are significantly faster than S, and S is significantly faster than MM (all p < 0.05, all r > 0.83).", "Replacements with MK were the fastest, followed by P, T, S, and MM.", "MK was significantly faster than all other modalities, and P significantly faster than S and MM (all p < 0.05, all r > 0.83), while no significant differences exist between the other three.",
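The reported procedure (a Friedman omnibus test per operation, then pairwise Wilcoxon tests with Bonferroni-Holm correction) can be sketched with SciPy as follows; this is an illustrative pipeline, not the authors' analysis script, and effect sizes r are omitted.

from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def analyze_operation(times):
    # times: modality -> list of per-participant mean durations.
    _, p_friedman = friedmanchisquare(*times.values())
    pairs = list(combinations(times.keys(), 2))
    raw = [wilcoxon(times[a], times[b]).pvalue for a, b in pairs]
    # Bonferroni-Holm step-down adjustment of the pairwise p-values.
    order = sorted(range(len(raw)), key=raw.__getitem__)
    adjusted, running = [0.0] * len(raw), 0.0
    for rank, i in enumerate(order):
        running = max(running, raw[i] * (len(raw) - rank))
        adjusted[i] = min(1.0, running)
    return p_friedman, dict(zip(pairs, adjusted))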
"Apart from the ratings and timings, we present the main qualitative feedback from the interviews.", "Especially for short insertions and replacements, handwriting was seen as a suitable input mode; for more extended changes, one should instead fall back on typing or dictation.", "Both touch/pen deletion mechanisms (strike-through and double-tap) and touch/pen reordering were highlighted as 'very useful' or even 'perfect', as they nicely resemble a standard correction task.", "Most participants seemed to prefer the pen to finger handwriting for insertions and replacements due to its precision, although it was considered less direct.", "A major concern was thinking about and creating sufficient space to handwrite into.", "A suggested improvement was to make the available space configurable to one's own handwriting.", "Furthermore, placing the palm of the hand on the screen should not be interpreted as input.", "Six participants also noted that the text jumps around when reordering a word from the end of a line, as the picked-up word is removed from the text, resulting in all remaining words being moved to the front; this could be prevented by adapting the text only on drop.", "Perceptions regarding speech recognition were somewhat mixed, with some thinking it worked 'super', while two participants found it exhausting to formulate commands while mentally working with text.", "Furthermore, speech was considered impractical for translators working in shared offices.", "Both insertions and replacements using speech received lots of positive feedback (from 8 and 7 participants, respectively), an interesting finding being that the longer the insertion, the more interesting speech becomes.", "Speech deletion was considered to work fine and to be simpler than insertion, as there is usually no need to specify the position.", "However, it would be unsatisfactory to have to read 10 words to delete them.", "The main advantage of the multi-modal approach was that one has to speak/think less.", "However, it was also argued that 'when you talk, you can also just say everything', meaning that the simplified MM command was not seen as an advantage by this participant.", "An interesting statement was that 'if there are no ambiguities, speech is better, but if there are, multi-modal is cool'.", "Ideas on how to improve speech ranged from better highlighting the changes in the target view to adding the possibility to restate the whole segment.", "While the ASR tool used (IBM Watson) is one of the state-of-the-art APIs, it might still have negatively impacted the results for S and MM, as a few times a word was wrongly recognized (e.g., when replacing an ending, the ASR did not always correctly recognize the word form).", "To improve this aspect, participants discussed the idea of passing the text to the speech recognition (Dymetman et al., 1994) or training the ASR towards the user.", "Due to daily usage, participants stated they were strongly biased regarding mouse and keyboard, where muscle memory helps.", "However, many actually considered MK very unintuitive if they imagined never having used it before, especially compared to pen and touch; as one participant stated for reordering: 'why do I have to do all of this, why is it not as simple as the pen'.", "In general, we received lots of positive feedback in the final discussion about the prototype, where participants made statements such as 'I am going to buy this once you are ready' or expressed respect for the prototype.",
"Multiple participants reported that it would be nice to have multiple options to vary between the modalities.", "It was frequently suggested to combine the two editing views, e.g. by having a switch to enable/disable the drawing mode.", "Participants also commented positively on the large typeface for the current segment ('you really see what you are working on').", "Suggestions for further improvements included adaptation possibilities for the size of the editing fields and a switch between vertical and horizontal source-target layouts.", "This section discusses the main takeaways regarding each modality.", "According to ordering scores, subjective ratings, and comments, we see that the pen is among the best modalities for deletions and reordering.", "However, other modalities are superior for insertions and replacements, where the pen was seen as suitable for short modifications but to be avoided for more extended changes.", "In terms of timings, P was also among the fastest for deletions and reorderings, and among the slowest for insertions.", "What is interesting, however, is that P was significantly faster than S and MM for replacements, even though it was rated lower.", "The main concern for handwriting was the need to think about space and to create space before actually writing.", "Results for touch were similar, but it was considered worse for insertions and replacements.", "Furthermore, and as we expected due to its precision, the pen was preferred to finger touch by most participants.", "However, in terms of timings, the two did not differ significantly apart from replace operations; and even for replacements, where touch was clearly rated as the worst modality, it actually turned out to be (non-significantly) faster than S and MM.", "5.6.3 Speech & Multi-modal Combinations: Speech and multi-modal PE were considered the worst and were also the slowest modalities for reordering and deletions.", "For insertions and replacements, however, these two modalities were rated and ordered 2nd (after MK), and in particular much better than P and T.", "Timing analysis agrees for insertions, with them being 2nd after MK.", "For replacements, however, S and MM were the slowest, even though the ratings put them before P and T.",
"An explanation of why MM was slower than S for deletion is that our implementation did not support MM deletions of multiple words in a single command.", "Still, we would have expected a comparable speed of MM and S for reordering.", "Insertions are the only operation where the multi-modal approach was (non-significantly) faster than S, since the position did not have to be verbally specified.", "Furthermore, the participants' comments highlighted their concern regarding formulating commands while already mentally processing text.", "Still, S and MM received a lot of positive feedback for insertions and replacements, where they become more interesting the more text is to be added.", "The main advantage of the MM approach, as argued by the participants, was that one has to speak less, albeit at the cost of doing two things at once.", "Mouse & keyboard received the best scores for insertions and replacements, where it was the fastest modality.", "Furthermore, it got good ratings for deletions and reorderings, where it was also fast (but not the fastest) for reordering.", "However, some participants commented negatively, stating that it only works well because of years of expertise.", "Interestingly, our findings are not entirely in line with translators' intuitions reported in our previous elicitation study (Herbig et al., 2019a): while touch worked much better than expected, handwriting of whole subphrases did not work as well as they thought.", "Additionally, it is interesting to note that some newly introduced modalities could compete with mouse & keyboard even though participants are biased by years of training with the latter.", "Overall, many participants provided very positive feedback on this first prototype combining pen, touch, speech, and multi-modal combinations for PE of MT, encouraging us to continue.", "Furthermore, several promising ideas for improving and extending the prototype have been proposed.", "The focus of our study was to explore the implemented interactions in detail, i.e., each modality for each operation, irrespective of frequency.", "The chosen methodology guaranteed that we receive comparable feedback on all interactions from professional translators by having them correct the same mistakes using different modalities.", "Nevertheless, a more realistic, natural-workflow follow-up study should be conducted in the future, which will also show whether participants swap modalities within sentences depending on the error type, or whether they stick to single modalities to avoid frequent modality switches.", "While more and more professional translators are switching to the use of PE to increase productivity and reduce errors, current CAT interfaces still heavily focus on traditional mouse and keyboard input, even though the literature suggests that other modalities could support PE operations well.", "This paper therefore presents MMPE, a CAT prototype combining pen, touch, speech, and multi-modal interaction together with common mouse and keyboard input possibilities, and explores the use of these modalities by professional translators.", "The study shows a high level of interest and enthusiasm for using these new modalities.", "For deletions and reorderings, pen and touch both received high subjective ratings, with pen being rated even better than mouse & keyboard.", "In terms of timings, they were also among the fastest for these two operations.", "For insertions and replacements, speech and multi-modal interaction were seen as suitable interaction modes; however, mouse & keyboard were still favored and faster here.",
"As a next step, we will integrate the participants' valuable feedback to improve the prototype.", "While the presented study provided interesting first insights regarding participants' use of and preferences for the implemented modalities, it did not allow us to see how they would use the modalities over a longer time period in day-to-day work, which we also want to investigate in the future.", "Furthermore, participants in Herbig et al. (2019a) were positive regarding the idea of a user interface that adapts to measured cognitive load, especially if it automatically provides additional resources like TM matches or MT proposals.", "An exploration of multi-modal measuring approaches (Herbig et al., 2019b) shows the feasibility of this, so we will try to combine explicit multi-modal input, as done in this work, with implicit multi-modal sensor input to better model and support the user during PE.", "This research was funded in part by the German Research Foundation (DFG) under grant number GE 2819/2-1 (project MMPE).", "We thank AMPLEXOR (https://www.amplexor.com) for their excellent support in providing access to professional human translators for our experiments." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "other", "other" ]
[ "The goal of event detection (ED) is to detect the occurrences of events and categorize them.", "Previous work solved this task by recognizing and classifying event triggers, which is defined as the word or phrase that most clearly expresses an event occurrence.", "As a consequence, existing approaches required both annotated triggers and event types in training data.", "However, triggers are nonessential to event detection, and it is time-consuming for annotators to pick out the most clearly word from a given sentence, especially from a long sentence.", "The expensive annotation of training corpus limits the application of existing approaches.", "To reduce manual effort, we explore detecting events without triggers.", "In this work, we propose a novel framework dubbed as T ype-aware B ias N eural N etwork with A ttention M echanisms (TBN-NAM), which encodes the representation of a sentence based on target event types.", "Experimental results demonstrate the effectiveness.", "Remarkably, the proposed approach even achieves competitive performances compared with state-of-the-arts that used annotated triggers.", "This work tackles the task of event detection (ED), whose goal is to detect the occurrences of predefined events and categorize them.", "For example, consider the following sentence In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel. , an ideal event detection system should recognize two events, Death and Attack (suppose that both Death and Attack are in the predefined event set) .", "Previous work typically solved this task by recognizing and classifying event triggers.", "According to ACE (Automatic Context Extraction) event evaluation program, event trigger is defined as the word or phrase that most clearly expresses an event occurrence.", "died is the trigger word of Death event, and fired is the trigger word of Attack event.", "The majority of existing approaches modeled this task as word classification (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013; Nguyen and Grishman, 2015; Liu et al., 2016b,a; Chen et al., 2017), which predicted whether each word in a given sentence is an event trigger and what type of event it triggered.", "As a consequence, these approaches required both annotated triggers and event types for training.", "However, event triggers are nonessential to this task.", "Remind that the goal of event detection is to recognize and categorize events, thus triggers could be viewed as intermediate results of this task.", "Furthermore, it is time-consuming for annotators to pick out the most clearly word from a given sentence, especially from a long sentence, which limits the application of existing ED approaches.", "To reduce manual effort, we explore detecting events without triggers.", "In this study, the only annotated information of each sentence is the types of events occurred in it.", "Consider the aforementioned example S again, its annotation is { Death, Attack } .", "On the contrast, previous work also required an annotated trigger for each event, which means the annotated information of S is { Death :died, Attack :fired } in previous work.", "Without event triggers, it is intuitive to model this task via text classification.", "However, there are two challenges: (1) Multi-label problem : each sentence may contain arbitrary number of events, which means it could have zero or multiple target labels.", "In machine learning, this problem is called multi-label problem.", "(2) Trigger absence 
"It is challenging to model this information without annotated triggers.", "To solve the first challenge, we transform multi-label classification into multiple binary classification problems.", "Specifically, a given sentence s paired with each predefined event type t forms an instance, which is expected to be labeled with 0 or 1 according to whether s contains an event of type t.", "For example, suppose there are three predefined event types in total (denoted t1, t2, and t3), and sentence s contains two events of types t1 and t3; then it can be transformed into the following three instances: <s, t1> with label 1, <s, t2> with label 0, and <s, t3> with label 1 (Table 1: Example of instances in the binary classification formulation for sentence s, which contains events of types t1 and t3).", "In this paradigm, sentences that convey multiple events will yield multiple positive pairs; thus the multi-label problem can be well solved.", "Furthermore, each type of event is usually triggered by a set of specific words, which are called event trigger words.", "For example, Death events are usually triggered by 'die', 'passed away', 'gone', etc.", "Therefore, event trigger words are important clues to this task.", "Since existing approaches explicitly exploited annotated trigger words, they could directly model this observation.", "However, in our case, annotated triggers are unavailable.", "To model this information, we propose a simple but effective model, called Type-aware Bias Neural Network with Attention Mechanisms (TBNNAM).", "Figure 1 illustrates the framework of TBNNAM.", "The input consists of two parts: a tokenized sentence with NER tags and a target event type.", "The output o is expected to be 1 if the given sentence conveys an event of the target type, otherwise 0 (the output should be 1 for the example given in Figure 1).", "Specifically, given a sentence, the proposed model first transforms the input tokens into embeddings, and applies an LSTM layer to calculate a context-dependent representation for each token.", "Then it computes an attention vector, α, based on the target event type, where the trigger word is expected to obtain a higher score.", "Finally, the sentence representation s_att is calculated based on α.", "Here, s_att is expected to focus on local information (the trigger word).", "To capture global information, the final output, o, is also connected to the last LSTM unit, which encodes the global information of the input sentence.", "Furthermore, to reinforce the influence of positive samples, we devise a bias objective function in our model.", "We call our model type-aware because the representation of a sentence, s_att, is calculated based on the target event type.", "We have conducted experimental comparisons on a widely used benchmark dataset, ACE 2005 (https://catalog.ldc.upenn.edu/LDC2006T06).", "The results illustrate that our approach outperforms all the compared baselines, and even achieves competitive performance compared with existing approaches that use annotated triggers.", "We publish our code (https://github.com/liushulinle/event_detection_without_triggers) for further study by the NLP community.", "In summary, the main contributions of this work are: (1) to the best of our knowledge, this is the first work that focuses on detecting events without triggers.", "Compared with existing approaches, the proposed method requires less manual annotation.", "(2) Without triggers, this task encounters two challenges: the multi-label problem and the trigger absence problem.",
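To make the sentence-to-instances transformation in Table 1 above concrete, here is a minimal Python sketch; the function name and the toy type set are illustrative assumptions, not the paper's code.

```python
# A small sketch of the multi-label-to-binary transformation from Table 1:
# each sentence paired with each predefined event type becomes one binary
# instance. Names and the toy label set are assumptions for illustration.

def to_binary_instances(sentence, gold_types, all_types):
    """Yield one (<sentence, type>, 0/1) pair per predefined event type."""
    return [((sentence, t), int(t in gold_types)) for t in all_types]

all_types = ["t1", "t2", "t3"]
s = "In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel."
# Suppose s contains events of types t1 and t3, as in the Table 1 example.
for pair, label in to_binary_instances(s, {"t1", "t3"}, all_types):
    print(pair[1], "->", label)   # t1 -> 1, t2 -> 0, t3 -> 1
```

A sentence with no events simply yields all-zero instances under this scheme, which is how the NA case is handled without a dedicated class.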
"We propose a simple but effective model, which even achieves competitive results compared with approaches that use annotated triggers.", "(3) Since this is the first work on detecting events without triggers, we implement a series of baseline models for this task, and systematically evaluate and analyze them.", "The event detection task requires that certain specified types of events, which are mentioned in the annotated data, be detected.", "The most commonly used benchmark dataset in previous work is the ACE 2005 corpus.", "This corpus includes 8 types of events, with 33 subtypes.", "Following previous work (Ahn, 2006; Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013; Chen et al., 2015; Nguyen and Grishman, 2016), we treat them simply as 33 separate event types and ignore the hierarchical structure among them.", "(Figure 1: The framework of the type-aware bias neural network with attention mechanisms.)", "Consider the following sentence: 'In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel'; an ideal event detector should detect two events from this sentence: a Die event and an Attack event.", "Event detection is one of the important topics in NLP.", "Many approaches have been proposed for this task.", "Nearly all the existing methods for the ACE event task follow the supervised paradigm.", "We further divide them into feature-based methods and representation-based methods.", "In feature-based methods, a diverse set of strategies has been exploited to convert classification clues into feature vectors.", "Ahn (2006) uses lexical features (e.g., the full word), syntactic features (e.g., dependency features), and external-knowledge features (WordNet; Miller, 1998) to extract events.", "Inspired by the hypothesis of One Sense Per Discourse (Yarowsky, 1995), Ji and Grishman (2008) combined global evidence from related documents with local decisions for event extraction.", "To capture more clues from the texts, Gupta and Ji (2009), Liao and Grishman (2010), and Hong et al. (2011) proposed cross-event and cross-entity inference for the ACE event task.", "Li et al. (2013) proposed a joint model to capture the combinational features of triggers and arguments.", "Liu et al. (2016b) proposed a global inference approach to employ both latent local and global information for event detection.", "In recent years, representation-based methods have dominated the research.", "In this paradigm, candidate event mentions are represented by embeddings, which typically are fed into neural networks.", "Chen et al. (2015) and Nguyen and Grishman (2015) were the first works in this paradigm.", "Their models are based on CNNs (Convolutional Neural Networks).", "To model the dependency of triggers and arguments, Nguyen and Grishman (2016) proposed a joint event extraction approach based on RNNs (Recurrent Neural Networks).", "Liu et al. (2017) proposed to encode argument information in event detection via supervised attention mechanisms.", "Recently, Nguyen and Grishman (2018) and Sha et al. (2018) proposed to exploit syntactic information for event detection.",
"All the existing approaches required annotated triggers.", "The expensive annotation of training data limits the application of these approaches.", "To reduce manual effort, we perform this task without event triggers.", "To deal with the multi-label problem, we model this task via multiple binary classifications.", "Given a sentence, it will be fed into a binary classifier with each candidate event type.", "We add the label NA to sentences that do not contain any events.", "To capture the hidden trigger information, we propose a simple but effective model, called Type-aware Bias Neural Network with Attention Mechanisms (TBNNAM).", "Our model is type-aware because it calculates the representation of a sentence based on the target event type.", "Figure 1 illustrates the framework of TBNNAM.", "The input consists of two parts: a tokenized sentence with NER tags and a target event type.", "The output o is expected to be 1 if the given sentence conveys an event of the target type, and otherwise 0.", "Next, we describe the structure of this model in bottom-up order.", "Given a sentence, we use the Stanford CoreNLP tools (http://stanfordnlp.github.io/CoreNLP; Manning et al., 2014) to convert text into tokens.", "The ACE 2005 corpus annotates not only events but also entities for each given sentence.", "Following previous work, we exploit the annotated entity tags in our model (Li et al., 2013; Chen et al., 2015; Nguyen and Grishman, 2015, 2016; Liu et al., 2016b).", "Word embeddings learned from a large amount of unlabeled data have been shown to be able to capture meaningful semantic regularities of words (Bengio et al., 2003; Erhan et al., 2010).", "Much work (Socher et al., 2012; Zeng et al., 2014) has shown their power in many NLP tasks.", "In this work, we use the Skip-gram model (Mikolov et al., 2013) to learn word embeddings on the NYT corpus (https://catalog.ldc.upenn.edu/LDC2008T19).", "Furthermore, we randomly initialized an embedding table for the entity tags.", "All the input word tokens and entity tags will be transformed into low-dimensional vectors by looking up these embedding tables.", "In this work, we denote the dimension of word embeddings by d_w, and that of entity embeddings by d_e.", "As illustrated in Figure 1, an event type is transformed into two embedding vectors: t_1 and t_2.", "The first one (colored brown) is designed to capture local information (the hidden trigger word), and the second one (colored red) is designed to capture global information.", "Both of them are randomly initialized.", "The dimension of event type embeddings is denoted by d_evt.", "As shown in Figure 1, the LSTM layer is run over the sequence of concatenated word and entity embeddings.", "An LSTM has three gates (input i, forget f, and output o) and a cell memory vector c.", "The input gate can determine how incoming vectors x(t) alter the state of the memory cell.", "The output gate can allow the memory cell to have an effect on the outputs.", "Finally, the forget gate allows the cell to remember or forget its previous state.", "Each type of event is usually triggered by a set of specific words, which are called event trigger words.", "For example, Death events are usually triggered by 'die', 'passed away', 'gone', etc.", "Therefore, event trigger words are important clues to this task.", "However, this information is hidden in our task, because annotated triggers are unavailable.",
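Before turning to the attention mechanism, here is a hedged PyTorch sketch of the input encoding just described: word embeddings concatenated with entity-tag embeddings, fed to an LSTM. The vocabulary sizes, hidden size, and random inputs are assumptions made only for illustration; d_w = 200 and d_e = 50 follow the settings reported later in this paper, but nothing else here is the authors' code.

```python
# A minimal sketch of the sentence encoder described above, under assumed
# vocabulary sizes and hidden size. In the paper the word table holds
# Skip-gram vectors and the tag table covers annotated ACE entity tags.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, n_words=1000, n_tags=20, d_w=200, d_e=50, d_h=128):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_w)  # word embedding table
        self.tag_emb = nn.Embedding(n_tags, d_e)    # randomly initialized entity tags
        self.lstm = nn.LSTM(d_w + d_e, d_h, batch_first=True)

    def forward(self, word_ids, tag_ids):
        # Concatenate word and entity embeddings per token, then run the LSTM.
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        h, _ = self.lstm(x)   # context-dependent representation for each token
        return h              # shape: (batch, sentence_len, d_h)

enc = SentenceEncoder()
h = enc(torch.randint(0, 1000, (1, 12)), torch.randint(0, 20, (1, 12)))
print(h.shape)  # torch.Size([1, 12, 128])
```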
"To model the hidden triggers, we introduce attention mechanisms in our approach.", "As illustrated in Figure 1, the attention vector α is calculated based on the target event type embedding t_1 and the hidden states h yielded by the LSTM.", "Specifically, the attention score for the k-th token in a given sentence is calculated by the following equation: α_k = exp(h_k t_1^T) / Σ_i exp(h_i t_1^T) (1).", "In this model, trigger words of the target event type are expected to obtain higher scores than other words.", "Finally, the representation of the sentence, s_att, is computed by the following equation: s_att = α^T H (2), where α = [α_1, ..., α_n] is the attention vector, H = [h_1, h_2, ..., h_n] is a matrix whose k-th row h_k is the LSTM's output for the k-th token, and s_att is the representation of the given sentence.", "As illustrated in Figure 1, the final output o is connected to two components: v_att and v_global.", "On one hand, v_att is calculated by the dot product of s_att and t_1, which is designed to capture local features (specifically, features about hidden trigger words).", "On the other hand, the last output of the LSTM layer, h_n, encodes global information of the whole sentence; thus v_global = h_n t_2^T is expected to capture global features of the sentence.", "Finally, o is defined as the weighted sum of v_att and v_global: o = σ(λ v_att + (1 − λ) v_global) (3), where σ is the sigmoid function and λ ∈ [0, 1] is a hyper-parameter trading off v_att and v_global.", "We devise a bias loss function to reinforce the influence of positive samples.", "We do so for the following reasons.", "1) Positive samples are far fewer than negative samples.", "In our approach, each training sample is a <sentence, event type> pair, whose label is 1 or 0 according to whether the given sentence conveys an event of type t.", "For example, we have 33 target event types in total; if a sentence contains only one event, it will be transformed into 32 negative pairs and 1 positive pair.", "The majority of sentences convey at most two events; thus negative samples far outnumber positive samples.", "2) Positive samples are more informative than negatives.", "A positive pair <s, t> means that s conveys an event of type t, whereas a negative pair means s does not convey any event of type t.", "Apparently, the former is more informative.", "Given all of the (say T) training instances (x^(i), y^(i)), the loss function is defined as follows: J(θ) = (1/T) Σ_{i=1}^{T} (o(x^(i)) − y^(i))^2 (1 + β y^(i)) + μ ||θ||^2 (4), where x is a pair consisting of a sentence and a target event type, y ∈ {0, 1}, θ denotes the parameters of our model, and μ > 0 is the weight of the L2 regularization term.", "(1 + β y^(i)) is the bias term.", "Specifically, the value of this term is 1 for negative samples (y^(i) = 0) and 1 + β for positive samples (y^(i) = 1), where β ≥ 0.", "We train the model by using a simple optimization technique called stochastic gradient descent (SGD) over shuffled mini-batches with the Adadelta rule (Zeiler, 2012).", "Regularization is implemented via dropout and the L2 norm.", "Given an instance x, the model assigns it a label ỹ according to the following rule: ỹ = 0 if o(x) < 0.5, and ỹ = 1 otherwise (5).",
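A minimal NumPy sketch of the scoring head and bias loss in Equations 1-4 follows. The random stand-ins for the LSTM states and type embeddings, the dimensions, and the sample values are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the TBNNAM scoring head (Eqs. 1-4). In the paper the
# rows of H come from an LSTM over word + entity embeddings; here they are
# random stand-ins so the example runs on its own.
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 8                   # sentence length, hidden size (assumed)
H = rng.normal(size=(n, d))    # h_1..h_n: per-token LSTM outputs (stand-in)
h_last = H[-1]                 # last LSTM state, encodes global information
t1 = rng.normal(size=d)        # type embedding for local (trigger) cues
t2 = rng.normal(size=d)        # type embedding for global cues
lam = 0.25                     # lambda in Eq. 3, the value tuned in the paper

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Eq. 1: attention over tokens, driven by the type embedding t1.
scores = H @ t1
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()

# Eq. 2: attended sentence representation s_att = alpha^T H.
s_att = alpha @ H

# Eq. 3: combine local (v_att) and global (v_global) evidence.
v_att = s_att @ t1
v_global = h_last @ t2
o = sigmoid(lam * v_att + (1 - lam) * v_global)

# Eq. 4 for a single instance, omitting the L2 term: the bias factor
# (1 + beta * y) up-weights positive <sentence, type> pairs.
def bias_loss(o, y, beta=1.0):
    return (o - y) ** 2 * (1.0 + beta * y)

print(f"o = {o:.3f}, loss(y=1) = {bias_loss(o, 1):.3f}, "
      f"loss(y=0) = {bias_loss(o, 0):.3f}")
```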
"Since this is the first work to perform event detection without triggers, we implement a series of baseline systems for comparison, which can be divided into two categories: binary-classification-based methods and multi-class-classification-based methods.", "Similar to the proposed approach, baseline systems in this group solve this task via binary classification.", "Figure 2 illustrates the framework of these methods.", "These models take a sentence and a target event type as input.", "Then all the inputs are transformed into embeddings by looking up embedding tables.", "These models have the same loss function as the proposed approach (see Equation 4).", "The key component of these models is the sentence encoder.", "According to the sentence-encoding strategy, we implement three models for comparison: BC-CNN, BC-LSTM_last, and BC-LSTM_avg.", "BC-CNN employs a CNN model to encode the sentence.", "BC-LSTM_last employs an LSTM model and uses the hidden state of the last token as the representation of a given sentence.", "BC-LSTM_avg also employs an LSTM model, but uses the average of all hidden states as the representation of a given sentence.", "All existing approaches model the task of event detection (with triggers) via multi-class classification.", "Given a sentence, these methods predict whether each token is an event trigger and what type of event it triggered.", "We also implement several multi-class-classification-based systems for comparison.", "Since annotated triggers are unavailable in our task, the sentence is the input of our model.", "Figure 3 illustrates the framework of these models.", "Following existing work (Chen et al., 2015; Liu et al., 2017), we employ a negative log-likelihood loss function with the softmax classifier: J(θ) = −(1/T) Σ_{i=1}^{T} log p(y^(i) | x^(i), θ), where (x^(i), y^(i)) is a training sample, y^(i) is a label from the valid label set (all the predefined event types plus NA for no event), T is the total number of training instances, and θ denotes the parameters of the model.", "According to the sentence-encoding strategy, we implement three models: MC-CNN, MC-LSTM_last, and MC-LSTM_avg.", "MC-CNN employs a CNN model to encode the sentence.", "MC-LSTM_last employs an LSTM model and uses the hidden state of the last token as the representation of a given sentence.", "MC-LSTM_avg also employs an LSTM model, but uses the average of all hidden states as the representation of a given sentence.", "Multi-class classification means a classification task with more than two classes, where each sample belongs to only one class.", "Multi-class is thus different from multi-label.", "In this section, we introduce the dataset, evaluation metrics, and the settings of hyper-parameters.", "Our experiments are conducted on the ACE 2005 dataset.", "Following the evaluation of previous work (Li et al., 2013; Chen et al., 2015; Nguyen and Grishman, 2016; Liu et al., 2017), we randomly selected 30 articles from different genres as the development set, and subsequently conducted a blind test on a separate set of 40 ACE 2005 newswire documents.", "We used the remaining 529 articles as our training set.", "This work focuses on detecting events without triggers.", "Therefore, we remove trigger annotations from the corpus.", "Specifically, we employ the Stanford CoreNLP toolkit to split each document into sentences, and assign each sentence a set of labels according to the original annotations in the ACE 2005 corpus.", "If a sentence does not contain any event, we assign it a special label, NA.", "If a sentence contains multiple events of the same type (fewer than 3% of sentences in the ACE corpus), we keep only one label for each type.", "Table 2 shows several samples of our corpus.", "Following previous work (Liao and Grishman, 2010; Li et al., 2013; Chen et al., 2015; Liu et al., 2017), we use precision (P), recall (R), and F1 measure (F1) to evaluate the results.",
"Precision: the proportion of correctly predicted events among all predicted events.", "Recall: the proportion of correctly predicted events among all gold events of the dataset.", "Hyper-parameters are tuned on the development set via grid search.", "In all experiments, we set the dimension of word embeddings to 200, the dimension of entity-type embeddings to 50, the batch size to 100, the hyper-parameter μ for the L2 norm to 10^-5, and β in the bias term to 1.0.", "(Figure 4: Experimental results on the development set with different settings of λ.)", "Furthermore, we also tune λ in Equation 3 on the development set.", "Figure 4 illustrates the experimental results with different settings of λ; finally, we set λ to 0.25.", "In all the CNN-based baseline systems, the filter window sizes are set to 1, 2, and 3, with 100 feature maps each.", "Table 3 illustrates the experimental results, where methods named MC-* are based on multi-class classification, and methods named BC-* are based on binary classification.", "According to the sentence-encoding strategy, methods in Table 3 are grouped into three parts.", "From the table, we make the following observations: in each group, the binary-classification-based approach significantly outperforms the multi-class-classification-based approach.", "The reason is that BC-* can solve the multi-label problem, but MC-* cannot.", "Moreover, MC-* achieve much lower recall than BC-*, because they predict at most one event for each sentence.", "The reason is that trigger words are important clues to event detection, and CNN is good at extracting such local features.", "In this section, we present the results of the proposed approach (see Table 4).", "The results of baseline systems are listed in the first group.", "Methods in the second group are the proposed approaches.", "They have the same model structure as Figure 1.", "In BC-LSTM_att, λ (see Equation 3) is set to 1.0, which is designed to show the effect of the proposed attention strategy.", "In TBNNAM, λ is set to 0.25, which is designed to employ both local information (captured by the attention mechanism) and global information (captured by the last output of the LSTM).", "Methods in the last group are state-of-the-art ED systems on the ACE 2005 dataset.", "We give a brief introduction of them as follows:", "1) Nguyen's CNN: the CNN model proposed by Nguyen and Grishman (2015).", "2) Chen's DMCNN: the dynamic multi-pooling CNN model proposed by Chen et al. (2015).", "3) Liu's PSL: the probabilistic soft logic model proposed by Liu et al. (2016b).", "4) DS-DMCNN: the DMCNN model augmented with automatically labeled data, proposed by Chen et al. (2017).", "From the table, we make the following observations. (Figure 5: Visualization of the attention weight vectors of sample instances learned by our model.)", "BC-LSTM_att outperforms all the baseline systems with remarkable gains, which demonstrates the effectiveness of the proposed attention mechanism.", "TBNNAM achieves better performance than BC-LSTM_att (69.9% vs. 66.3%), which means that global information captured by the last state of the LSTM is also important to this task.",
"Such global information and the local information captured by the attention mechanism are complementary to each other.", "All state-of-the-art ED systems require annotated triggers.", "Without trigger annotations, our approach achieves competitive results, and even outperforms some of them.", "Figure 5 shows several examples of the attention vectors learned by our model.", "In the first case, 'died' is the most significant keyword for the Death event, and our model succeeded in capturing this feature by assigning it a large attention score.", "Similarly, in the second case, 'fired' is a key clue of the Attack event, and our model also learned it and assigned it a large attention score.", "Actually, 'died' and 'fired' are the trigger words of the Death and Attack events, respectively.", "Therefore, we argue that, although annotated triggers are unavailable, our model can still exploit trigger information for this task.", "Moreover, our approach can also model the dependencies among different events, which has been demonstrated to be useful for this task (Liao and Grishman, 2010; Liu et al., 2016b).", "For example, Attack events often co-occur with Death events.", "In Case 1 and Case 2 (Figure 5), our approach models such information by paying attention to both the words 'died' and 'fired'.", "Furthermore, the 3rd case is a negative sample. (Table 5: Results of systems without/with the bias term in the loss function, where *\\Bias denotes systems that do not use the bias term. BC-LSTM_att\\Bias: P 74.5%, R 57.2%, F1 64.7%; BC-LSTM_att: P 68.3%, R 64.5%, F1 66.3%; TBNNAM\\Bias: P 76.6%, R 59.8%, F1 67.2%; TBNNAM: P 76.2%, R 64.5%, F1 69.9%.)", "In this section, we illustrate the effectiveness of the bias term in Equation 4.", "Table 5 shows the experimental results.", "Methods named *\\Bias do not use the bias term.", "From the table, we observe that systems with the bias term in the loss function significantly outperform those without it, which demonstrates the correctness of our analysis in Section 3.7 that positive samples should be reinforced during training.", "Existing event detection approaches require annotated triggers, which limits their application because of the expensive annotation.", "To reduce manual effort, we investigate performing this task without event triggers.", "In this setting, the event detection task encounters two challenges: the multi-label problem and the trigger absence problem.", "We propose a simple but effective model to solve them, which computes the representation of a sentence according to the target event type.", "Experimental results demonstrate its effectiveness.", "Remarkably, the proposed approach even achieves performance competitive with state-of-the-art methods that use annotated triggers." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain" ]
[ "Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.", "In this paper, we explore retrieval-based methods for intent classification and slot filling tasks in few-shot settings.", "Retrieval-based methods make predictions based on labeled examples in the retrieval index that are similar to the input, and thus can adapt to new domains simply by changing the index without having to retrain the model.", "However, it is nontrivial to apply such methods to tasks with a complex label space like slot filling.", "To this end, we propose a span-level retrieval method that learns similar contextualized representations for spans with the same label via a novel batch-softmax objective.", "At inference time, we use the labels of the retrieved spans to construct the final structure with the highest aggregated score.", "Our method outperforms previous systems in various few-shot settings on the CLINC and SNIPS benchmarks.", "Few-shot learning is a crucial problem for practical language understanding applications.", "In the few-shot setting, the model (typically trained on source domains with abundant data) needs to adapt to a set of unseen labels in the target domain with only a few examples.", "For instance, when developers introduce a new product feature, a query understanding model has to learn new semantic labels from a small dataset they manage to collect.", "Few-shot learning is challenging due to the imbalance in the amount of data between the source and target domains.", "Traditional classification methods, even with the recent advancement of pretrained language models (Peters et al., 2018; Devlin et al., 2019), could suffer from over-fitting (Snell et al., 2017; Triantafillou et al., 2019) or catastrophic forgetting (Wu et al., 2019) when incorporating the data-scarce target domain.", "(Work done during an internship at Google Research.)", "On the other hand, metric learning methods (Weinberger et al., 2006; Vinyals et al., 2016; Snell et al., 2017) have been shown to work well in few-shot scenarios.", "These methods are based on modeling similarity between inputs, effectively allowing the model to be decoupled from the semantics of the output space.", "For example, a model would learn that the utterance 'I'd like to book a table at black horse tavern at 7 pm' (from Figure 1)", "is similar to 'make me a reservation at 8', and thus the two are likely to have similar semantic representations, even without knowing the semantic schema in use.", "Unlike learning output labels, which is difficult when examples are scarce, learning a similarity model can be done on the abundant source domain data, making such models data-efficient even in few-shot settings.", "While there are many instantiations of metric learning methods (see Section 3), we focus on retrieval-based methods, which maintain an explicit retrieval index of labeled examples.", "The most basic setting of a retrieval-based model for few-shot learning is as follows: after training a similarity model and encoding target domain data into the index, we can retrieve examples most similar to the given input, and then make a prediction based on their labels.", "Compared to methods that do not maintain an index, such as Prototypical Networks (Snell et al., 2017), retrieval-based methods are less sensitive to outliers with few data points, and are powerful when we have abundant data in the source domain (Triantafillou et al., 2019).", "However, applying retrieval-based models to tasks with a structured output space is non-trivial.",
"For example, even if we know that the utterance in Figure 1 is similar to 'make me a reservation at 8', we cannot directly use its slot values (e.g., the time slot has value '8', which is not in the input), and not all slots in the input (e.g., 'black", "horse tavern') have counterparts in the retrieved utterance. (Figure 1: Illustration of span-level retrieval for slot filling. For each span in the input utterance, including spans that are not valid slots such as 'book a table', we retrieve its most similar span from the retrieval index, and then assign the slot name as the prediction with a similarity score.)", "While previous works have exploited token-level similarity methods in a BIO-tagging framework, they had to separately simulate the label transition probabilities, which might still suffer from domain shift in few-shot settings (Wiseman and Stratos, 2019; Hou et al., 2020).", "In this work, we propose Retriever, a retrieval-based framework that tackles both classification and span-level prediction tasks.", "The core idea is to match token spans in an input to the most similar labeled spans in a retrieval index.", "For example, for the span '7 pm' in the input utterance, the model retrieves '8' as a similar span (given their surrounding contexts), thus predicting that '7 pm' has the same slot name, time, as '8'.", "During training, we fine-tune a two-tower model with BERT (Devlin et al., 2019) encoders, along with a novel batch softmax objective, to encourage high similarity between contextualized span representations sharing the same label.", "At inference time, we retrieve the most similar span from the few-shot examples for every potential input span, and then decode a structured output that has the highest average span similarity score.", "We show that our proposed method is effective on both few-shot intent classification and slot-filling tasks, when evaluated on the CLINC (Larson et al., 2019) and SNIPS (Coucke et al., 2018) datasets, respectively.", "Experimental results show that Retriever achieves high accuracy on few-shot target domains without retraining on the target data.", "For example, it outperforms the strongest baseline by 4.45% on SNIPS for the slot-filling task.", "In addition to being more robust against the overfitting and catastrophic forgetting problems, which are critical in few-shot learning settings, our proposed method has multiple advantages over strong baselines.", "For instance, if the scheme is changed or some prediction bugs need to be fixed, minimal re-training is required.", "More importantly, compared to classification models or Prototypical Networks, which require adding an arbitrary number of instances to the training data in the hope that the model will predict as expected (Yu et al., 2019; Liang et al., 2020), Retriever can guarantee the prediction when a similar query is encountered.", "At the same time, Retriever is more interpretable, as the retrieved examples can serve as explanations.", "In addition, different from the simplified assumption that one utterance may have only one intent (Bunt, 2009; Yu and Yu, 2019), Retriever can be used to predict multiple labels.",
"Lastly, because Retriever does not need to model transition probability, the decoding procedure can be parallelized and potentially modified to be non-autoregressive for speedup.", "We can also tune the threshold (explained in Section 5.2) to trade off precision and recall according to use-case requirements.", "Few-shot metric learning: Metric learning methods aim to learn representations through distance functions.", "Koch et al. (2015) proposed Siamese Networks, which differentiate input examples with contrastive and triplet loss functions (Schroff et al., 2015) on positive and negative pairs.", "While they are more data efficient for new classes than linear classifiers, Siamese Networks are hard to train due to weak pairs sampled from the training batch (Gillick et al., 2019).", "In comparison, Prototypical Networks (Snell et al., 2017) proposed to compute class representations by averaging embeddings of support examples for each class.", "These methods have been mostly explored in computer vision and text classification (Geng et al., 2019; Yu et al., 2018), and consistently outperform Siamese Networks and retrieval-based methods such as k-nearest-neighbors, especially when there are more classes and fewer annotated examples (Triantafillou et al., 2019; Sun et al., 2019).", "However, newly added examples that are outliers may change the prototypical representations dramatically, which can harm all predictions for the class.", "In addition, these methods do not perform well when there is more annotated data available per class (Triantafillou et al., 2019).", "Recently, Wang et al. (2019) showed that a simple nearest neighbor model with feature transformations can achieve competitive results with the state-of-the-art methods on image classification.", "Inspired by their work, we train our retrieval-based model with a novel batch softmax objective.", "Metric learning in language understanding: Utilizing relevant examples to boost model performance has been applied to language modeling (Khandelwal et al., 2020), question answering (Guu et al., 2020; Lewis et al., 2020), machine translation (Zhang et al., 2018), and text generation (Peng et al., 2019).", "Recently, metric learning has been applied to intent classification (Sun et al., 2019; Krone et al., 2020).", "Ren and Xue (2020) trained Siamese Networks before learning a linear layer for intent classification and showed competitive results with traditional methods in the full-data setting.", "Similar ideas are also extended to sequence labeling tasks such as named entity recognition (NER; Wiseman and Stratos, 2019; Fritzler et al., 2019) by maximizing the similarity scores between contextual token representations sharing the same label.", "Krone et al. (2020) utilized Prototypical Networks to learn intent and slot name prototype representations and classified each token to its closest prototype.", "They showed better results than meta-learning, another prevalent few-shot learning method (Finn et al., 2017; Mishra et al., 2018).", "In order to consider label dependencies, which are essential in slot tagging tasks (Huang et al., 2015), Hou et al. (2020) proposed a collapsed dependency transfer (CDT) mechanism by simulating transition scores for the target domain from transition probabilities among BIO labels in the source domain, outperforming previous methods on slot filling by a large margin.",
"Yang and Katiyar (2020) further explored the transition probability by evenly distributing the collapsed transition scores to the target domain to maintain a valid distribution.", "However, this simulation is noisy, and the difference between the source and target domains can result in biased transition probabilities.", "The most similar approach to ours is concurrent work from Ziyadi et al. (2020), which learns span boundaries and sentence similarities before retrieving the most similar span, inspired by question-answering models.", "Even though this approach predicts spans before retrieving on the span level and thus bypasses the transition-probability problem of previous research, it achieves only unsatisfactory results.", "Different from these studies, we propose to learn span representations using a batch softmax objective without having to explicitly learn span boundaries.", "Our method achieves more accurate slot and intent prediction than previous methods in the few-shot setting.", "We consider two tasks where the input is an utterance x with tokens x_1, ..., x_n and the output is some structure y.", "For the slot filling task, the output y is a set of non-overlapping labeled spans {(r_i, ℓ_i)}_{i=1}^{m}, where r_i is a span of x (e.g., '7 pm') and ℓ_i is a slot name (e.g., time).", "For the intent classification task, the output y is simply an intent label ℓ for the whole utterance x.", "For notational consistency, we view intent classification as predicting a labeled span (r, ℓ) where r = x_{1:n}.", "In the few-shot setup, examples (x, y) are divided into source and target domains.", "Examples in the target domain may contain some labels ℓ that are unseen in the source domain.", "The model will be given ample training data from the source domain, but only a few training examples from the target domain.", "For instance, the model receives only K = 5 examples for each unseen label.", "The model can be evaluated on test data from both domains.", "We propose a retrieval-based model, Retriever, for intent classification and slot filling in the few-shot setting.", "Figure 1 illustrates our approach.", "At a high level, from examples (x, y) in the target training data (and optionally the source training data), we construct a retrieval index consisting of labeled spans (r, ℓ) from y.", "Given a test utterance x, for each span of interest in x (all spans x_{i:j} for slot filling; only x_{1:n} for intent classification), we retrieve the most similar labeled spans (r, ℓ) from the index, and then use them to decode an output y that maximizes the average span similarity score.", "The use of retrieval provides several benefits.", "For instance, we empirically show in Section 7.1 that the model does not suffer from catastrophic forgetting because both source and target data are present in the retrieval index.", "Class imbalance can also be directly mitigated in the retrieval index.", "Additionally, since the trained model is nonparametric, we could replace the retrieval index to handle different target domains without having to retrain the model.", "This also means that the model does not need access to target data during training, unlike traditional classification methods.",
"The retriever is the only trainable component in our model.", "Given a query span r′ = x_{i:j} from the input x, the retriever returns a set of labeled spans (r, ℓ) with the highest similarity scores s(z, z′), where z = E(r) and z′ = E(r′) are the contextualized embedding vectors of r and r′, respectively.", "Similarity score: To compute the contextualized embeddings z and z′ of spans r and r′, we first apply a Transformer model initialized with pretrained BERT to the utterances that r and r′ come from.", "For slot filling, we follow Toshniwal et al. (2020) and define the span embedding as the concatenated embeddings of its first and last wordpieces.", "For intent classification, we use the embedding of the [CLS] token.", "We then define s(z, z′) as the dot product between z and z′.", "Training with batch softmax: We use examples from the source domain to train Retriever.", "Let ℓ_1, ..., ℓ_N be the N class labels (slot or intent labels) in the source domain.", "To construct a training batch, for each class label ℓ_i, we sample B spans r_i^1, ..., r_i^B from the training data with that label, and compute their embeddings z_i^1, ..., z_i^B.", "(We experimented with an affine transformation as well as cosine similarity but did not see any performance gain.", "For intent classification, using the [CLS] token achieves better results than averaging word embeddings.)", "Then, for each query span r_i^j, we compute similarity scores against all other spans in the batch to form a B × N similarity matrix S_i^j, whose (j′, i′)-th entry is s(z_i^j, z_{i′}^{j′}).",
"We now summarize the score between r_i^j and each label ℓ_{i′} by applying a reduction function s̄ (defined shortly) along each column to get a 1 × N vector: S̄_i^j = [s̄(z_i^j, z_1), s̄(z_i^j, z_2), ..., s̄(z_i^j, z_N)] (2).", "We use the softmax of S̄_i^j as the model's probability distribution over the label of r_i^j.", "The model is then trained to optimize the cross-entropy loss on this distribution against the gold label ℓ_i.", "s̄(z_i^j, z_{i′}) = (1/B) Σ_{j′=1}^{B} s(z_i^j, z_{i′}^{j′}) = s(z_i^j, (1/B) Σ_{j′=1}^{B} z_{i′}^{j′}) (3)", "s̄(z_i^j, z_{i′}) = max_{1 ≤ j′ ≤ B; j′ ≠ j if i = i′} s(z_i^j, z_{i′}^{j′}) (4)", "s̄(z_i^j, z_{i′}) = min_{1 ≤ j′ ≤ B} s(z_i^j, z_{i′}^{j′}) if i = i′, and max_{1 ≤ j′ ≤ B} s(z_i^j, z_{i′}^{j′}) otherwise (5)", "The mean reduction averages embeddings of the spans with the same label and is equivalent to Prototypical Networks.", "Similar to hard negative sampling to increase margins among classes (Schroff et al., 2015; Roth et al., 2020; Yang et al., 2019), max takes the most similar span to the query (excluding the query itself) as the label representation, while min-max takes the least similar span when considering spans with the same label as the query.",
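A minimal NumPy sketch of the batch-softmax objective with the max reduction (Eq. 4) follows. The random arrays stand in for BERT span embeddings, and the shapes are assumed for illustration only.

```python
# A sketch of the batch-softmax loss with the max reduction (Eq. 4):
# for every query span, reduce its similarities to each label's B spans,
# then apply softmax + cross-entropy against the gold label.
import numpy as np

rng = np.random.default_rng(0)
B, N, d = 4, 3, 16               # spans per label, labels, embedding dim (assumed)
Z = rng.normal(size=(N, B, d))   # Z[i, j] = embedding of the j-th span of label i

def batch_softmax_loss(Z):
    """Average cross-entropy over every query span in the batch."""
    N, B, _ = Z.shape
    total = 0.0
    for i in range(N):
        for j in range(B):
            q = Z[i, j]                       # query span embedding z_i^j
            sims = Z @ q                      # s(z_i^j, z_{i'}^{j'}), shape (N, B)
            sims[i, j] = -np.inf              # exclude the query itself (j' != j when i' = i)
            reduced = sims.max(axis=1)        # max reduction: one score per label
            logits = reduced - reduced.max()  # stabilized log-softmax
            log_prob = logits[i] - np.log(np.exp(logits).sum())
            total -= log_prob                 # gold label of z_i^j is label i
    return total / (N * B)

print(f"batch-softmax loss: {batch_softmax_loss(Z):.3f}")
```

Swapping `sims.max(axis=1)` for `sims.mean(axis=1)` recovers the mean reduction of Eq. 3, i.e., the Prototypical Networks variant.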
"After training, we build a dense retrieval index where each entry (r, ℓ) is indexed by z = E(r).", "The entries (r, ℓ) come from examples (x, y) in the support set, which, depending on the setting, could be just the target training data or a mixture of source and target data.", "For each query span r′ of the input utterance x, we embed the span and compute the similarity scores against all index entries.", "Intent classification: For intent classification, both index entries and query spans are restricted to whole utterances.", "The entire process thus boils down to retrieving the most similar utterance based on the [CLS] token embedding.", "We simply output the intent label of the retrieved utterance.", "Slot filling: In contrast to BIO decoding for token-level similarity models (Hou et al., 2020), decoding with span retrieval results poses unique challenges, as gold span boundaries are not known a priori.", "Hence, we use a modified beam search procedure with simple heuristics to compose the spans.", "Specifically, for each of the n × m spans in an utterance of length n (where the hyperparameter m is the maximum span length), we retrieve the most similar span from the retrieval index.", "Then we normalize the similarity scores by the L2 norm so that they are within the range [0, 1].", "Since we do not explicitly predict span boundaries, all n × m spans, including non-meaningful ones (e.g., 'book a'), will have a retrieved span.", "Such non-meaningful spans should be dissimilar to any labeled span in the retrieval index.", "We thus choose to filter the spans with a score threshold to get a smaller set of candidate spans.", "In addition, we adjust the threshold dynamically (by reducing it a few times) if no span is above the current threshold.", "Once we get candidate spans with similarity scores, we use beam search to decode a set of spans with maximum average scores.", "(We use beam search for simplicity.)", "We go through the list of candidate spans in descending order of their similarity scores.", "For each candidate span, we expand beam states if the span does not overlap with the existing spans in the beam.", "The search beams are pruned based on the average similarity score of the spans included so far.", "Lastly, we add spans in the filtered set that do not overlap with the final beam.", "Beam search can avoid suboptimal decisions that a greedy algorithm would make.", "For instance, if we greedily process the example in Figure 1, 'black' and 'tavern' would become two independent spans, even though their average similarity score is lower than that of the correct span 'black horse tavern'.", "Nevertheless, beam search is prone to mixing up span boundaries and occasionally predicts consecutive partial spans such as 'black horse' and 'tavern' as individual slots.", "Since consecutive spans of the same slot label are rare in slot filling, we merge two consecutive spans that share the same label when a criterion controlled by a merge threshold γ is met,", "where r_{i:j} and r_{j:k} are two consecutive potential spans sharing the same label, and z′ and z″ are the embeddings of their retrieved spans, respectively (r_{i:k} indicates merging the two spans into one span; γ is the merge threshold, where γ = 1 means always merge and γ = 0 means never merge).", "We evaluate our proposed approach on two datasets: CLINC (Larson et al., 2019) for intent classification and SNIPS (Coucke et al., 2018) for slot filling.", "Note that we use max (Eq. 4) as the reduction function for both tasks, since it empirically yields the best results.", "The effect of reduction functions will be analyzed later in Section 7.1.", "The CLINC intent classification dataset (Larson et al., 2019) contains utterances from 10 intent categories (e.g., travel), each containing 15 intents (e.g., flight_status, book_flight).", "To simulate the few-shot scenario where new domains and intents are introduced, we designate n_c categories and n_i intents per category as the source domain (with all 100 training examples per intent), and use the remaining 150 − n_c × n_i intents as the target domain.", "We experiment with (n_c, n_i) = (10, 10), (8, 10), and (5, 15).", "((n_c, n_i) = (10, 10) simulates the situation where all the categories are known but we adapt to new intents in all 10 categories; (n_c, n_i) = (5, 15) simulates the situation where we adapt to 5 entirely new intent categories.)", "The target training data contains either 2 or 5 examples per target intent.", "We compare our proposed method Retriever with a classification model, BERT fine-tune, and a Prototypical Network model, Proto.", "(Previous work shows that Prototypical Networks outperform other optimization-based and metric-learning models such as MAML in (intent) classification tasks (Triantafillou et al., 2019; Krone et al., 2020).)", "The former learns a linear classifier on top of BERT embeddings (Devlin et al., 2019), and the latter learns class representations based on Prototypical Networks.", "We also show results with the initial BERT checkpoint without training (Proto_frz, Retriever_frz).", "We use the same batch size for all models, and tune other hyperparameters on the development set before testing.", "Evaluation: We sample domains and intents three times for each (n_c, n_i) setting, and report average prediction accuracy.", "We report accuracy on intents from the target domain (tgt), source domain (src), and the macro average across all intents (avg).", "In addition to applying the model to the target domain after pre-training on the source domain without re-training (Pre-train on src domain), we also evaluate the model performance with fine-tuning.", "We re-train the model with either target domain data only (Fine-tune on tgt domain) or a combination of source and target domain data (Fine-tune on tgt domain with src data).",
"Moreover, we evaluate the models with the following support-set variations: with target domain data and all data in the source domain (support_set=all), with an equal number of examples (the few-shot number) per intent (support_set=balance), and with only examples from the target domain (support_set=tgt).", "The last one serves as an upper bound on the target domain accuracy.", "Results: Table 1 shows the results for (n_c, n_i) = (10, 10) and 5 examples per target intent; results in other settings exhibit the same patterns (see Appendix A.3).", "We observe that Retriever performs the best on the source domain (97.08%) before fine-tuning.", "Retriever also achieves the highest accuracy on the target domain (84.95%) after fine-tuning, while maintaining competitive performance on the source domain (95.41%) among all the methods.", "SNIPS (Coucke et al., 2018) is a slot filling dataset containing 39 slot names from 7 different domains: GetWeather (GW), PlayMusic (PM), AddToPlaylist (ATP), RateBook (RB), FindScreeningEvent (FSE), BookRestaurant (BR), and SearchCreativeWork (SCW).", "Following Hou et al. (2020), we train models on five source domains, use a sixth one for development, and test on the remaining domain.", "We directly use the K-shot split provided by Hou et al. (2020), where the support set consists of the minimum number of utterances such that at least K instances exist for each slot name.", "We also set K = 5 in our experiment.", "Appendix A.2 contains further details about the setup.", "We compare against two baselines and three models from previous work.", "BERT Tagging is a BERT-based BIO tagging model (Devlin et al., 2019) fine-tuned on the testing domain after training on the source domains, while SimilarToken_frz uses BERT embeddings to retrieve the most similar token based on cosine similarity, without any training.", "(Table 2: Results on SNIPS test data with 5-shot support sets; F1 on GW/PM/ATP/RB/FSE/BR/SCW, then average. Classification-based: BERT Tagging 59.41/42.00/46.07/20.74/28.20/67.75/58.61, avg 46.11. Token-level: SimilarToken_frz 53.46/54.13/42.81/75.54/57.10/55.30/32.38, avg 52.96; MatchingToken 36.67/33.67/52.60/69.09/38.42/33.28/72.10, avg 47.98; ProtoToken 67.82/55.99/46.02/72.17/73.59/60.18/66.89, avg 63.24; L-TapNet+CDT+Proto, avg 67.27; L-Proto+CDT_pw* 74.68/56.73/52.20/78.79/80.61/69.59/67.46, avg 68.58; L-TapNet+CDT+Proto_pw* 71.64/67.16/75.88/84.38/82.58/70.05/73.41, avg 75.01. Span-level (ours): Proto_frz 39.47/38.35/47.68/69.36/38.60/42.39/19.90, avg 42.25; Proto 64.47/53.97/54.64/73.37/42.89/62.48/27.76, avg 54.23; Retriever_frz 63.39/46.01/51.11/79.65/62.42/62.13/33.85, avg 56.94; Retriever 82.95/61.74/71.75/81.65/73.10/79.54/51.35, avg 71.72.)", "MatchingToken and ProtoToken are two token-level methods that leveraged Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017), respectively.", "L-TapNet+CDT+proto (Hou et al., 2020) is an adaptation of TapNet (Yoon et al., 2019) with label semantics, CDT transition probabilities, and Prototypical Networks.", "We experiment with several variants of our proposed method.", "Proto trains Prototypical Networks to compute span class representations.", "Retriever retrieves the most similar slot example for each span.", "Both methods use the same decoding method.", "Similar to SimilarToken_frz, Proto_frz and Retriever_frz use the original BERT embeddings without any training.", "All models are trained on source domains and early-stopped based on performance on the development domains.",
"Evaluation: We report F1 scores for each testing domain in a cross-validation episodic fashion.", "Following Hou et al. (2020), we evaluate each testing domain by sampling 100 different support sets and ten exclusive query utterances for each support set.", "We calculate F1 scores for each episode and report average F1 scores across 100 episodes.", "Our proposed method (Retriever) achieves higher average F1 than all five baselines, outperforming the strongest token-level method (L-TapNet+CDT+proto) by 4.45%.", "This shows that our model is effective at span-level predictions.", "More importantly, the better performance suggests that our span-level Retriever model is more effective at capturing span structures than simulated dependencies, as our method does not suffer from the potential discrepancy in transition probabilities between the target and source domains.", "Although Hou et al. (2020) showed that adding pairwise embeddings with cross-attention yielded much better performance, this method is expensive both in memory and computation at inference time, especially when the support set is large (Humeau et al., 2019).", "For fair comparison, we do not directly compare with methods using pairwise embeddings (methods with pw in Table 2).", "Note that our method with pre-computed support example embeddings even outperforms L-Proto+CDT_pw with less memory and computation cost.", "Models without re-training: The Pre-train on src domain section in Table 1 shows the results of models that are only pre-trained on the source domains but not fine-tuned on the target domains.", "Classification models such as BERT fine-tune cannot make predictions on target domains in this setting.", "In contrast, even without seeing any target domain examples during training, retrieval-based models can still make predictions on new domains by simply including new examples in the support sets.", "With support_set=all, Retriever achieves 97.08% on the source domain while Proto performs worse than BERT fine-tune, consistent with previous findings (Triantafillou et al., 2019).", "Retriever achieves the best accuracy (75.93%) on target domains with a balanced support set on all intents (support_set=balance).", "More importantly, Retriever also achieves competitive accuracy on source domains (95.44%), demonstrating that our proposed model achieves the best of both worlds even without re-training on new domains.", "Varying the support set at inference time: The construction of the support set is critical to retrieval-based methods.", "In Table 1, we present the model performances under different support settings (all, balance, tgt).", "The support_set=tgt setting serves as an upper bound for the target domain accuracy for both the Retriever and Proto methods.", "In general, Retriever achieves the best performance on the source domain intents when we use full support sets (support_set=all).", "In comparison, if we use a balanced support set (support_set=balance), we can achieve much higher accuracy on the target domain while incurring a slight degradation on source domain intent prediction.", "This is because full support sets contain more source domain examples, which increases confusion with the target domains.", "Data for fine-tuning: The Fine-tune on tgt domain section in Table 1 shows different model behaviors when fine-tuned on the target domain data directly.", "While BERT fine-tune achieves high accuracy (78.89%) on the target domain, it suffers from catastrophic forgetting on the source domain (43.91%).",
"On the other hand, Proto and Retriever can get high accuracy on the target domain (80.44% and 79.20%) while maintaining high performance on the source domain.", "When we combine data from the source domain, we observe performance gains in all the models under the Fine-tune on tgt domain with src data section.", "Specifically, we add few-shot source domain examples as contrastive examples so that Retriever and Proto learn better utterance/class representations.", "Results show that accuracy on the target domain increases by over 3% compared to only using target domain data.", "(Table 3: Improvement (%) over BERT fine-tune on target (tgt), source (src), and average (avg), after fine-tuning on the 5-shot support sets. Proto: +12.89 / −0.51 / +5.18; Retriever: +14.60 / −0.14 / +6.11; Retriever min-max: +10.79 / −0.20 / +4.47.) This", "suggests that, unlike other retrieval-based methods such as kNN, Retriever does not require a large support set to guarantee prediction accuracy.", "Impact of reduction functions: We compare the reduction functions proposed in Section 5.1 and find that max performs the best.", "Since mean is equivalent to Prototypical Networks, we compare to Proto directly in the experiments.", "Compared to max, min-max is more intuitive in that it contrasts with the least similar examples within the same class.", "However, its performance is worse than that of max.", "We speculate that the reason is that we retrieve the example with the maximum score at inference time, so the boundary margin may not be utilized.", "Performance over different settings: Table 3 shows the average improvement of our methods over the BERT fine-tune baseline, where all models are fine-tuned on the target domain with a balanced few-shot dataset after training on the source domain (same as the Fine-tune on tgt domain with src data section in Table 1).", "Both Proto and Retriever outperform the baseline on the target domains by a large margin, and Retriever has the best improvement across all intents on average.", "We note that Retriever outperforms the strongest baselines but reaches a low score on the SCW domain.", "This may be due to the larger difference between the test (SCW) and development (GW) domains, including the size of the support set and their respective slot names.", "We also found that, among all correctly predicted slot spans, 96.73% were assigned the correct slot names.", "This shows that the majority of the errors come from querying with invalid spans.", "We believe that span-based pretraining such as SpanBERT (Joshi et al., 2020) could make our proposed method achieve better results.", "Analyzing Proto: From Table 2, Retriever outperforms Proto by 17% when the span representations are trained.", "We conjecture that this is caused by Proto learning noisy prototypes.", "Compared to Retriever, the similarity scores between the spans and their corresponding class representations are low, indicating that the span-level prototypes may not be clearly separated.", "Ablation on decoding method: Table 4 compares beam search to greedy search.", "Results suggest that beam search with larger beam sizes achieves better F1 scores.", "As discussed in Section 5.2, we merge same-label spans during inference based on a score threshold.", "As shown in Table 4, merging spans results in a 1.67% F1 gain (70.43% vs. 72.10%) under the same beam size.", "Error Analysis: We find that the main problem of our proposed model is that tokens surrounding the gold span may contain excessive contextual information, so that these surrounding invalid spans retrieve corresponding spans with high similarities.",
contextual information so that these surrounding invalid spans retrieve corresponding spans with high similarities.", "For instance, in the query add my track to old school metal playlist, the token playlist retrieves an actual playlist span with a high similarity score.", "Another major issue is that the similarity score retrieved by a partial of the gold span sometimes is higher than that retrieved by the whole span.", "Our ablation results on merge threshold shown in Table 4 also suggest that partial spans may retrieve complete spans individually so that if we merge consecutive spans with the same slot name, we can achieve higher F1 scores.", "In this paper, we propose a retrieval-based method, Retriever , for few-shot intent classification and slot filling.", "We conduct extensive experiments to compare different model variants and baselines, and show that our proposed approach is effective in the few-shot learning scenario.", "We believe that our method can also work on open domain dialog tasks where annotations may be more scarce and other text classification tasks.", "In the future, we plan to extend our method to predict more complex structures with span-based retrieval.", "Our intended use case is few-shot domain adaption to new classes.", "Our experiments are done on English data, but the method is not English-specific.", "We use 8 Cloud TPUs V2 cores 5 for training and one V100 GPU for inference.", "Since our model does not have to be retrained for the new domains, it can reduce the resources needed when applying such systems.", "We claim that our proposed method outperforms baselines on few-shot slot filling and intent classification examples.", "Our experiments mainly focus on the 5-shot setting and the 2-shot setting, which are typical testing scenarios applied by previous work with the same claim.", "We thank Terry Koo and Emily Pitler from Google Research, and anonymous reviewers for their constructive suggestions." ]
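The reduction-function comparison in the record above (max vs. mean vs. min-max) can be made concrete with a short sketch. The snippet below is illustrative only: it assumes pre-computed, L2-normalized embeddings, and the exact min-max combination is an assumption on my part (the record does not spell out its formula); it is not the authors' released implementation.

```python
import numpy as np

def class_scores(query_emb, support_embs_by_class, reduction="max"):
    """Score each class by reducing query-to-support similarities.

    query_emb: (d,) L2-normalized query embedding.
    support_embs_by_class: dict mapping class -> (n_c, d) array of
        L2-normalized support-example embeddings.
    reduction: 'max' scores a class by its closest support example
        (retrieval-style); 'mean' scores against the averaged supports,
        i.e. a prototype; 'min-max' averages the closest and farthest
        same-class examples (one plausible reading of that variant).
    """
    scores = {}
    for label, embs in support_embs_by_class.items():
        sims = embs @ query_emb  # cosine similarities, since inputs are normalized
        if reduction == "max":
            scores[label] = float(sims.max())
        elif reduction == "mean":
            scores[label] = float(sims.mean())  # equals similarity to the class prototype
        elif reduction == "min-max":
            scores[label] = float(0.5 * (sims.max() + sims.min()))
        else:
            raise ValueError(f"unknown reduction: {reduction}")
    return scores

def predict(query_emb, support_embs_by_class, reduction="max"):
    """Retrieve the best-scoring class, mirroring inference with `max`."""
    scores = class_scores(query_emb, support_embs_by_class, reduction)
    return max(scores, key=scores.get)
```

Note that with dot products on fixed supports, averaging similarities and scoring against the averaged (prototype) embedding coincide, which is why mean is described as equivalent to Prototypical Networks.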
[ "abstain", "objective", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "other" ]
[ "We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue.", "We further offer DialogRE as a platform for studying cross-sentence RE as most facts span multiple sentences.", "We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.", "Considering the timeliness of communication in a dialogue, we design a new metric to evaluate the performance of RE methods in a conversational setting and investigate the performance of several representative RE methods on DialogRE.", "Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.", "DialogRE is available at https:// dataset.org/dialogre/ .", "Cross-sentence relation extraction, which aims to identify relations between two arguments that are not mentioned in the same sentence or relations that cannot be supported by any single sentence, is an essential step in building knowledge bases from large-scale corpora automatically (Ji et al., 2010; Swampillai and Stevenson, 2010; Surdeanu, 2013).", "It has yet to receive extensive study in natural language processing, however.", "In particular, although dialogues readily exhibit cross-sentence relations, most existing relation extraction tasks focus on texts from formal genres such as professionally written and edited news reports or well-edited websites (Elsahar et al., 2018; Yao et al., 2019; Equal contribution.", "Mesquita et al., 2019; Grishman, 2019), while dialogues have been under-studied.", "In this paper, we take an initial step towards studying relation extraction in dialogues by constructing the first human-annotated dialogue-based relation extraction dataset, DialogRE .", "Specifically, we annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends , a corpus that has been widely employed in dialogue research in recent years (Cati-zone et al., 2010; Chen and Choi, 2016; Chen et al., 2017; Zhou and Choi, 2018; Rashid and Blanco, 2018; Yang and Choi, 2019).", "Altogether, we annotate 10,168 relational triples.", "For each ( subject, relation type, object ) triple, we also annotate the minimal contiguous text span that most clearly expresses the relation; this may enable researchers to explore relation extraction methods that provide fine-grained explanations along with evidence sentences.", "For example, the bolded text span brother in Table 1 indicates the PER : SIBLINGS relation (R1 and R2) between speaker 2 (S2) and Frank .", "Our analysis of DialogRE indicates that the supporting text for most (approximately 96 . 
0% ) annotated relational triples includes content from multiple sentences, making the dataset ideal for studying cross-sentence relation extraction.", "This is perhaps because of the higher person pronoun frequency (Biber, 1991) and lower information density (Wang and Liu, 2011) in conversational texts than those in formal written texts.", "In addition, 65 .", "9% of relational triples involve arguments that never appear in the same turn, suggesting that multi-turn information may play an important role in dialogue-based relation extraction.", "For example, to justify that Pheebs is an alternate name of S2 in Table 1, the response of S2 in the second turn is required as well as the first turn.", "We next conduct a thorough investigation of the similarities and differences between dialogue-based and traditional relation extraction tasks by comparing DialogRE and the Slot Filling dataset (McNamee and Dang, 2009; Ji et al., 2010, 2011; Surdeanu, 2013; Surdeanu and Ji, 2014), and we argue that a relation extraction system should be aware of speakers in dialogues.", "In particular, most relational triples in DialogRE ( 89 . 9% ) signify either an attribute of a speaker or a relation between two speakers.", "The same phenomenon occurs in an existing knowledge base constructed by encyclopedia collaborators, relevant to the same dialogue corpus we use for annotation (Section 3.2).", "Unfortunately, most previous work directly applies existing relation extraction systems to dialogues without explicitly considering the speakers involved (Yoshino et al., 2011; Wang and Cardie, 2012).", "Moreover, traditional relation extraction methods typically output a set of relations only after they have read the entire document and are free to rely on the existence of multiple mentions of a relation throughout the text to confirm its existence.", "However, these methods may be insufficient for powering a number of practical real-time dialogue-based applications such as chatbots, which would likely require recognition of a relation at its first mention in an interactive conversation.", "To encourage automated methods to identify the relationship between two arguments in a dialogue as early as possible, we further design a new performance evaluation metric for the conversational setting, which can be used as a supplement to the standard F 1 measure (Section 4.1).", "In addition to dataset creation and metric design, we adapt a number of strong, representative learning-based relation extraction methods (Zeng et al., 2014; Cai et al., 2016; Yao et al., 2019; Devlin et al., 2019) and evaluate them on DialogRE to establish baseline results on the dataset going forward.", "We also extend the best-performing method (Devlin et al., 2019) among them by letting the model be aware of the existence of arguments that are dialogue participants (Section 4.2).", "Experiments on DialogRE demonstrate that this simple extension nevertheless yields substantial gains on both standard and conversational RE evaluation metrics, supporting our assumption regarding the critical role of tracking speakers in dialogue-based relation extraction (Section 5).", "The primary contributions of this work are as follows: ( i ) we construct the first human-annotated dialogue-based relation extraction dataset and thoroughly investigate the similarities and differences between dialogue-based and traditional relation extraction tasks, ( ii ) we design a new conversational evaluation metric that features the timeliness aspect of interactive 
communications in dialogue, and ( iii ) we establish a set of baseline relation extraction results on DialogRE using standard learning-based techniques and further demonstrate the importance of explicit recognition of speaker arguments in dialogue-based relation extraction.", "We use the transcripts of all ten seasons ( 263 episodes in total) of an American television situation comedy Friends , covering a range of topics.", "We remove all content (usually in parentheses or square brackets) that describes non-verbal information such as behaviors and scene information.", "We follow the slot descriptions 1 of the Slot Filling (SF) task in the Text Analysis Conference Knowledge Base Population (TAC-KBP) (McNamee and Dang, 2009; Ji et al., 2010, 2011; Surdeanu, 2013; Surdeanu and Ji, 2014), which primarily focuses on biographical attributes of person (PER) entities and important attributes of organization (ORG) entities.", "As the range of topics in Friends is relatively restricted compared to large-scale news corpora such as Gigaword (Parker et al., 2011), some relation types (e.g., PER : CHARGES , and ORG : SUBSIDIARIES ) seldom appear in the texts.", "Additionally, we consider new relation types such as PER : GIRL / BOYFRIEND and PER : NEIGHBOR that 1 http://surdeanu.info/kbp2014/def.php.", "frequently appear in Friends .", "We list all 36 relation types that have at least one relational instance in the transcripts in Table 2 and provide definitions and examples of new relation types in Appendix A.1.", "We focus on the annotation of relational triples (i.e., ( subject, relation type, object )) in which at least one of the arguments is a named entity.", "We regard an uninterrupted stream of speech from one speaker and the name of this speaker as a turn .", "As we follow the TAC-KBP guideline to annotate relation types and design new types, we use internal annotators (two authors of this paper) who are familiar with this task.", "For a pilot annotation, annotator A annotates relational triples in each scene in all transcripts and form a dialogue by extracting the shortest snippet of contiguous turns that covers all annotated relational triples and sufficient supportive contexts in this scene.", "The guidelines are adjusted during the annotation.", "2 We prefer to use speaker name (i.e., the first word or phrase of a turn, followed by a colon) as one argument of a speaker-related triple if the corresponding full names or alternate names of the speaker name also appear in the same dialogue, except for relation PER : ALTERNATE NAMES in which both mentions should be regarded as arguments.", "For an argument pair (i.e., ( subject, object )), there may exist multiple relations between them, and we annotate all instances of all of them.", "For each 2 As the pilot annotation only involves one annotator, we admit there may exist a certain degree of bias in defining new relation types and labeling argument pairs.", "triple, we also annotate its trigger : the smallest extent (i.e., span) of contiguous text in the dialogue that most clearly indicates the existence of the relation between two arguments.", "If there exist multiple spans that can serve as triggers, we only keep one for each triple.", "For relation types such as PER : TITLE and PER : ALTERNATE NAMES , it is difficult to identify such supportive contexts, and therefore we leave their triggers empty.", "For each relational triple, we annotate its inverse triple if its corresponding inverse relation type exists in the schema (e.g., PER : 
CHILDREN and PER : PARENTS ), while the trigger remains unchanged.", "In the second process, annotator B annotates the possible relations between candidate pairs annotated by annotator A (previous relation labels are hidden).", "Cohen's kappa among the annotators is around 0.87.", "We remove the cases where the annotators cannot reach a consensus.", "On average, each dialogue in DialogRE contains 4.5 relational triples and 12.9 turns, as shown in Table 3.", "See Table 1 for relational triple examples (R1, R2, and R3).", "After our first round of annotation, we use any two annotated arguments associated with each dialogue to generate candidate relational triples, in which the relation between two arguments is unanswerable based on the given dialogue or beyond our relation schema.", "We manually filter out candidate triples for which there is obviously no relation between an argument pair, in consideration of aspects such as argument type constraints (e.g., relation PER : SCHOOLS ATTENDED can only exist between a PER name and an ORG name).", "After filtering, we keep 2,100 triples in total whose two arguments are in no relation, and we finally have 10,168 triples for 1,788 dialogues.", "We randomly split them at the dialogue level, with 60% for training, 20% for development, and 20% for testing.", "relations between argument pairs based on a dialogue, rather than exploiting information in DialogRE beyond the given dialogue or leveraging external knowledge to predict the relations between arguments (e.g., characters) specific to a particular television show.", "Therefore, we anonymize all speaker names (Section 2.2) in each dialogue and its annotated triples and rename them in chronological order within the given dialogue.", "For example, S1 and S2 in Table 1 represent the original speaker names Rachel and Phoebe, respectively.", "As a pilot study, we examine the similarities and differences between dialogue-based and traditional relation extraction datasets that are manually annotated.", "We compare DialogRE with the official SF (2013-2014) dataset (Surdeanu, 2013; Surdeanu and Ji, 2014), as 47.2% of relation types in DialogRE originate from the SF relation types (Section 2.1), and 92.2% of the source documents in it that contain ground truth relational triples are formally written newswire reports (72.8%) or well-edited web documents (19.4%), compared to the remaining documents from discussion fora.", "We show the relation distributions in DialogRE and SF in Figure 1 and Figure 2 (Appendix A.2), respectively.", "Half of the top ten relation types in DialogRE are newly defined ( PER : GIRL / BOYFRIEND , PER : POSITIVE ( NEGATIVE ) IMPRESSION , PER : FRIENDS , and PER : ROOMMATE ), partially justifying the need for new relation types.", "Argument Type : Based on the predefined SF and DialogRE relation types, a subject is expected to be an entity of type PER, ORG, or geo-political entity (GPE).", "Notably, the subjects of most relational triples (96.8% vs. 69.7% in the SF dataset) in DialogRE are person names.", "The coarse-grained object type is entity, string, or value (i.e., a numerical value or a date).", "As shown in Table 4, we observe that a higher proportion (80.1%) of objects are entities in DialogRE compared to that in SF (65.3%).", "In particular, the subjects of 77.3% of relational triples are speaker names, and more than 90.0% of relational triples contain at least one speaker argument.", "The high percentage of speaker-centric relational triples and the low percentage of ORG and GPE arguments in DialogRE are perhaps because the transcripts for annotation are from a single situation comedy that involves a small group of characters in a very limited number of scenes (see more discussion in Section 5.3).", "Distance Between Argument Pairs : It has been shown that there is a longer distance between two arguments in the SF dataset (Surdeanu, 2013; Huang et al., 2017) compared to that in many widely used human-annotated relation extraction datasets such as ACE (Doddington et al., 2004) and SemEval (Hendrickx et al., 2010).", "However, it is not trivial to compute an accurate distance between two arguments in a dialogue, especially for cases containing arguments that are speaker names.", "We instead consider different types of distances (e.g., average and minimum) between two argument mentions in a dialogue.", "We argue that DialogRE exhibits a similar level of difficulty as SF from the perspective of the distance between two arguments.", "41.3% of arguments are separated by at least seven words even considering the minimum distance, and the percentage can reach as high as 96.5% considering the average distance, in contrast with 46.0% in SF (Huang et al., 2017) and 59.8% in a recently released cross-sentence relation extraction dataset, DocRED, in which Wikipedia articles serve as documents (Yao et al., 2019).", "Note that the provenance/evidence sentences in SF and DocRED are provided by automated systems or annotators.", "Also, 95.6% of relational triples from an annotated subset of DialogRE (Section 5.2) require reasoning over multiple sentences in a dialogue, compared with 40.7% in DocRED (Table 7).", "See Figure 3 in Appendix A.3 for more details.", "We also collect 2,341 relational triples related to Friends, which are summarized by a community of contributors, from a collaborative encyclopedia (https://friends.fandom.com/wiki/Friends).", "We remove triples of content-independent relation types such as DIRECTED BY , GUEST STARS , and NUMBER OF EPISODES .", "We find that 93.8% of all 224 relation types in these triples can be mapped to one of the 36 relation types in our relation schema (e.g., HUSBAND , EX-HUSBAND , and WIFE can be mapped to PER : SPOUSE ), except for the remaining relatively rare or implicit relation types such as PROM DATE , GENDER , and KISSED , demonstrating that the relation schema we use for annotation is capable of covering most of the important relation types labeled by the encyclopedia community of contributors.", "On the other hand, the relatively small number of existing triples and the moderate size of our annotated triples in DialogRE may suggest the low information density (Wang and Liu, 2011) in conversational speech in terms of relation extraction.", "For example, the average number of annotated triples per sentence in DialogRE is merely 0.21, compared to other exhaustively annotated datasets such as ACE (0.73) and KnowledgeNet (Mesquita et al., 2019) (1.
44), in which the corpora are formal written news reports and Wikipedia articles, respectively.", "As annotated triggers are rarely available in existing relation extraction datasets (Aguilar et al., 2014), the connections between different relation types and trigger existence are under-investigated.", "Relation Type : In DialogRE, 49.6% of all relational triples are annotated with triggers.", "We find that argument pairs are frequently accompanied by triggers when (1) the arguments have the same type, such as PER : FRIENDS , (2) strong emotions are involved (e.g., PER : POSITIVE ( NEGATIVE ) IMPRESSION ), or (3) the relation type is related to death or birth (e.g., GPE : BIRTHS IN PLACE ).", "In comparison, a relation between two arguments of different types (e.g., PER : ORIGIN and PER : AGE ) is more likely to be expressed implicitly instead of relying on triggers.", "This is perhaps because there exist fewer possible relations between such an argument pair compared to arguments of the same type, and a relatively short distance between such an argument pair might be sufficient to help listeners understand the message correctly.", "For each relation type, we report the percentage of relational triples with triggers in Table 2.", "Argument Distance : We assume the existence of triggers may allow a longer distance between argument pairs in a text, as triggers help decrease ambiguity.", "This assumption may be empirically validated by the longer average distance (68.3 tokens) between argument pairs with triggers in a dialogue, compared to the distance (61.2 tokens) between argument pairs without any triggers.", "Given a dialogue D = s_1:t_1, s_2:t_2, ..., s_m:t_m and an argument pair (a_1, a_2), where s_i and t_i denote the speaker ID and text of the i-th turn, respectively, and m is the total number of turns, we evaluate the performance of approaches in extracting relations between a_1 and a_2 that appear in D in the following two settings.", "Standard Setting : As in the standard setting of relation extraction tasks, we regard dialogue D as document d.", "The input is a_1, a_2, and d, and the expected output is the relation type(s) between a_1 and a_2 based on d.", "We adopt F1, which is the harmonic mean of precision (P) and recall (R), for evaluation.", "Conversational Setting : Instead of only considering the entire dialogue, here we can regard the first i ≤ m turns of the dialogue as d.", "Accordingly, we propose a new metric F1_c, the harmonic mean of conversational precision (P_c) and recall (R_c), as a supplement to the standard F1.", "We start by introducing some notation that will be used in the definition of F1_c.", "Let O_i denote the set of predicted relation types when the input is a_1, a_2, and the first i turns (i.e., d = s_1:t_1, s_2:t_2, ..., s_i:t_i).", "For an argument pair (a_1, a_2), let L denote its corresponding set of relation types that are manually annotated based on the full dialogue.", "R represents the set of 36 relation types.", "By definition, O_i, L ⊆ R.", "We define an auxiliary function φ(x) that returns m if x does not appear in D.", "Otherwise, it returns the index of the turn where x first appears.", "We define an auxiliary function ψ(r) as:", "(i) For each relation type r ∈ L, if there exists an annotated trigger t_r for r, ψ(r) = φ(t_r), where t_r denotes the trigger.", "Otherwise, ψ(r) = m.", "(ii) For each r ∈ R \ L, ψ(r) = 1.", "We define the set of relation types that are evaluable based on the first i turns by E_i: E_i = { r | i ≥ max{ φ(a_1), φ(a_2), ψ(r) } } (1)", "The interpretation of Equation 1 is that, given d containing the first i turns in a dialogue, a relation type r associated with a_1 and a_2 is evaluable if a_1, a_2, and the trigger for r have all been mentioned in d.", "The definition is based on our assumption that we can roughly estimate how many turns are required to predict the relations between two arguments based on the positions of the arguments and triggers, which most clearly express relations.", "See Section 5.2 for more discussion.", "The conversational precision and recall for an input instance D, a_1, and a_2 are defined as: P_c(D, a_1, a_2) = Σ_{i=1}^{m} |O_i ∩ L ∩ E_i| / Σ_{i=1}^{m} |O_i ∩ E_i| (2) and R_c(D, a_1, a_2) = Σ_{i=1}^{m} |O_i ∩ L ∩ E_i| / Σ_{i=1}^{m} |L ∩ E_i| (3). We average the conversational precision/recall scores of all instances to obtain the final conversational precision/recall.", "P_c = Σ_{(D', a'_1, a'_2)} P_c(D', a'_1, a'_2) / Σ_{(D', a'_1, a'_2)} 1 (4), R_c = Σ_{(D', a'_1, a'_2)} R_c(D', a'_1, a'_2) / Σ_{(D', a'_1, a'_2)} 1 (5), and F1_c = 2 P_c R_c / (P_c + R_c).", "Majority : If a given argument pair does not appear in the training set, output the majority relation type in the training set as the prediction.", "Otherwise, output the most frequent relation type associated with the two arguments in the training set.", "CNN, LSTM, and BiLSTM : Following previous work (Yao et al., 2019), we adapt three baselines (Zeng et al., 2014; Cai et al., 2016) that use different document encoders.", "We refer readers to Yao et al. (2019) for more details.", "BERT : We follow the framework of fine-tuning a pre-trained language model on a downstream task (Radford et al., 2018) and use BERT (Devlin et al., 2019) as the pre-trained model.", "We concatenate the given d and (a_1, a_2) with the classification token [CLS] and separator token [SEP] in BERT as the input sequence [CLS] d [SEP] a_1 [SEP] a_2 [SEP].", "We denote the final hidden vector corresponding to [CLS] as C ∈ R^H, where H is the hidden size.", "For each relation type i, we introduce a vector W_i ∈ R^H and obtain the probability P_i of the existence of relation i between a_1 and a_2 based on d by P_i = sigmoid(C W_i^T).", "The cross-entropy loss is used.", "BERTS : We propose a modification to the input sequence of the above BERT baseline with two motivations: (1) help the model locate the start positions of relevant turns based on the arguments that are speaker names, and (2) prevent the model from overfitting to the training data.", "
Formally, given an argument pair (a_1, a_2) and its associated document d = s_1:t_1, s_2:t_2, ..., s_n:t_n, we construct d' = s'_1:t_1, s'_2:t_2, ..., s'_n:t_n, where s'_i = [S_1] if s_i = a_1; [S_2] if s_i = a_2; and s_i otherwise (6), where [S_1] and [S_2] are two newly-introduced special tokens.", "In addition, we define â_k (k ∈ {1, 2}) to be [S_k] if ∃i (s_i = a_k), and a_k otherwise.", "The modified input sequence to BERT is [CLS] d' [SEP] â_1 [SEP] â_2 [SEP].", "In Appendix A.4, we investigate three alternative input sequences.", "It is worth mentioning that a modification that does not disambiguate speaker arguments from other arguments performs substantially worse than the above speaker-aware modification.", "CNN, LSTM, and BiLSTM Baselines : The CNN/LSTM/BiLSTM encoder takes as features GloVe word embeddings (Pennington et al., 2014), mention embeddings, and type embeddings.", "We assign the same mention embedding to mentions of the same argument and obtain the type embeddings based on the named entity types of the two arguments.", "We use spaCy (https://spacy.io/) for entity typing.", "Language Model Fine-Tuning : We use the uncased base model of BERT released by Devlin et al. (2019).", "We truncate a document when the input sequence length exceeds 512 and fine-tune BERT using a batch size of 24 and a learning rate of 3 × 10^-5", "for 20 epochs.", "Other parameters remain unchanged.", "The embeddings of newly-introduced special tokens (e.g., [S_1]) are initialized randomly.", "We report the performance of all baselines in both the standard and conversational settings in Table 5.", "We run each experiment five times and report the average F1 and F1_c along with the standard deviation (σ).", "The fine-tuned BERT method already outperforms other baselines (e.g., BiLSTM, which achieves 51.1% in F1 on DocRED (Yao et al., 2019)), and our speaker-aware extension to the BERT baseline further leads to 2.7% and 2.2% improvements in F1 and F1_c, respectively, on the test set of DialogRE, demonstrating the importance of tracking speakers in dialogue-based relation extraction.", "Conversational Metric : We randomly select 269 and 256 instances, which are associated with 50 dialogues from each of the dev and test sets, respectively.", "For each of the relational instances (188 in total) that were previously labeled with triggers in the subsets, annotator A labels the smallest turn i such that the first i turns contain sufficient information to justify a relation.", "The average distance between i and our estimation max{φ(a_1), φ(a_2), ψ(r)} in Equation (1) (Section 4.1) is only 0.9 turns, supporting our hypothesis that the positions of arguments and triggers may be good indicators for estimating the minimum number of turns humans need to make predictions.", "For convenience, we use BERT for the following discussions and comparisons.", "Ground Truth Argument Types : The methods in Table 5 are not provided with ground truth argument types, considering the unavailability of this kind of annotation in practical use.", "To study the impact of argument types on DialogRE, we report the performance of four methods, each of which additionally takes the ground truth argument types as input, following previous work (Zhang et al., 2017; Yao et al., 2019).", "We adopt the same baseline for a direct comparison, except that the input sequence is changed.", "In Method 1, we simply extend the original input sequence of BERT (Section 4.2) with newly-introduced special tokens that represent argument types.", "
The input sequence is [CLS] d [SEP] T_1 a_1 [SEP] T_2 a_2 [SEP], where T_i is a special token representing the argument type of a_i (i ∈ {1, 2}).", "For example, given a_1 of type PER and a_2 of type STRING, T_1 is [PER] and T_2 is [STRING].", "In Method 2, we extend the input sequence of BERTS with the T_i defined in Method 1 (i.e., [CLS] d' [SEP] T_1 â_1 [SEP] T_2 â_2 [SEP]).", "We also follow the input sequences of previous single-sentence relation extraction methods (Shi and Lin, 2019; Joshi et al., 2020) and refer to them as Methods 3 and 4, respectively.", "We provide the implementation details in Appendix A.5.", "As shown in Table 6, the best performance, achieved by Method 2, is not superior to that of BERTS, which does not leverage ground truth argument types.", "Therefore, we suspect that ground truth argument types may only provide a limited, if at all positive, contribution to the performance on DialogRE.", "Ground Truth Triggers : We investigate what performance would be ideally attainable if the model could identify all triggers correctly.", "We append the ground truth triggers to the input sequence of the baseline, and the F1 of this model is 74.9%, a 16.4% absolute improvement compared to the BERT baseline.", "In particular, through the introduction of triggers, we observe a 22.9% absolute improvement in F1 on relation types whose inverse relation types are themselves (e.g., PER : ROOMMATE and PER : SPOUSE ).", "These experimental results show the critical role of triggers in dialogue-based relation extraction.", "However, trigger identification is perhaps as difficult as relation extraction itself, and it is labor-intensive to annotate large-scale datasets with triggers.", "Future research may explore how to identify triggers based on a small number of human-annotated triggers as seeds (Bronstein et al., 2015; Yu and Ji, 2016).", "We analyze the outputs on the dev set and find that BERT tends to make more mistakes when there exists an asymmetric inverse relation of the relation to be predicted, compared to relations that have symmetric inverse relations.", "For example, the baseline mistakenly predicts S2 as the subordinate of S1 based on the following dialogue: . . . S2: Oh. Well, I wish I could say no, but you can't stay my assistant forever. Neither can you Sophie, but for different reasons. S1: God, I am so glad you don't have a problem with this, because if you did, I wouldn't even consider applying . . . .", "Introducing triggers into the input sequence leads to a relatively small gain (11.
0% in F 1 on all types with an asymmetric inverse relation) perhaps because inverse relation types share the same triggers (e.g., my assistant serves as the trigger for both PER : BOSS and PER : SUBORDINATE ).", "One possible solution may be the use of directed syntactic graphs constructed from the given dialogue, though the performance of coreference resolution and dependency parsing in dialogues may be relatively unsatisfying.", "A major limitation in DialogRE is that all transcripts for annotation are from Friends , which may limit the diversity of scenarios and generality of the relation distributions.", "It may be useful to leverage existing triples in knowledge bases (e.g., Fandom ) for thousands of movies or TV shows using distant supervision (Mintz et al., 2009), considering the time-consuming manual annotation process.", "In addition, dialogues in Friends presents less variation based on linguistic features (Biber, 1991) than natural conversations; nonetheless, compared to other registers such as personal letters and prepared speeches, there are noticeable linguistic similarities between natural conversations and television dialogues in Friends (Quaglio, 2009).", "Different from the sentence-level relation extraction (RE) datasets (Roth and Yih, 2004; Hendrickx et al., 2010; Riedel et al., 2010; Zhang and Wang, 2015; Zhang et al., 2017; Han et al., 2018), in which relations are between two arguments in the same sentence, we focus on cross-sentence RE tasks (Ji et al., 2011; Surdeanu, 2013; Surdeanu and Ji, 2014) and present the first dialogue-based RE dataset, in which dialogues serve as input contexts instead of formally written sentences or documents.", "We compare DialogRE and existing cross-sentence RE datasets (Li et al., 2016; Quirk and Poon, 2017; Yao et al., 2019; Mesquita et al., 2019) in Table 7.", "In this paper, we do not consider relations that take relations or events as arguments and are also likely to span multiple sentences (Pustejovsky and Verha-gen, 2009; Do et al., 2012; Moschitti et al., 2013).", "Relation Extraction Approaches Over the past few years, neural models have achieved remarkable success in RE (Nguyen and Grishman, 2015b,a; Adel et al., 2016; Yin et al., 2017; Levy et al., 2017; Su et al., 2018; Song et al., 2018; Luo et al., 2019), in which the input representation usually comes from shallow neural networks over pre-trained word and character embeddings (Xu et al., 2015; Zeng et al., 2015; Lin et al., 2016).", "Deep contextualized word representations such as the ELMo (Pe-ters et al., 2018) are also applied as additional input features to boost the performance (Luan et al., 2018).", "A recent thread is to fine-tune pre-trained deep language models on downstream tasks (Rad-ford et al., 2018; Devlin et al., 2019), leading to further performance gains on many RE tasks (Alt et al., 2019; Shi and Lin, 2019; Baldini Soares et al., 2019; Peters et al., 2019; Wadden et al., 2019).", "We propose an improved method that explicitly considers speaker arguments, which are seldom investigated in previous RE methods.", "Dialogue-Based Natural Language Understanding To advance progress in spoken language understanding, researchers have studied dialogue-based tasks such as argument extraction (Swanson et al., 2015), named entity recognition (Chen and Choi, 2016; Choi and Chen, 2018; Bowden et al., 2018), coreference resolution (Chen et al., 2017; Zhou and Choi, 2018), emotion detection (Zahiri and Choi, 2018), and machine reading comprehension (Ma et al., 2018; Sun 
et al., 2019; Yang and Choi, 2019).", "Besides, some pioneer studies focus on participating in dialogues (Yoshino et al., 2011; Hixon et al., 2015) by asking users relation-related questions or using outputs of existing RE methods as inputs of other tasks (Kluwer et al., 2010; Wang and Cardie, 2012).", "In comparison, we focus on extracting relation triples from human-human dialogues, which is still under investigation.", "We present the first human-annotated dialogue-based RE dataset DialogRE.", "We also design a new metric to evaluate the performance of RE methods in a conversational setting and argue that tracking speakers play a critical role in this task.", "We investigate the performance of several RE methods, and experimental results demonstrate that a speaker-aware extension on the best-performing model leads to substantial gains in both the standard and conversational settings.", "In the future, we are interested in investigating the generality of our defined schema for other comedies and different conversational registers, identifying the temporal intervals when relations are valid (Surdeanu, 2013) in a dialogue, and joint dialogue-based information extraction as well as its potential combinations with multimodal signals from images, speech, and videos.", "We would like to thank the anonymous reviewers for their constructive comments and suggestions." ]
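The conversational metric in Equations (1)-(5) of the record above translates almost line-for-line into code. The sketch below assumes the per-turn prediction sets O_i, the gold set L, and the evaluable sets E_i have already been computed as Python sets; it illustrates the definitions rather than reproducing the authors' evaluation script.

```python
def conversational_pr(O, L, E):
    """Per-instance conversational precision/recall (Equations 2-3).

    O: list of sets; O[i] holds the relation types predicted from the
       first i+1 turns.
    L: set of gold relation types annotated on the full dialogue.
    E: list of sets; E[i] holds the relation types evaluable from the
       first i+1 turns (Equation 1).
    """
    hit = sum(len(O_i & L & E_i) for O_i, E_i in zip(O, E))
    pred = sum(len(O_i & E_i) for O_i, E_i in zip(O, E))
    gold = sum(len(L & E_i) for E_i in E)
    p = hit / pred if pred else 0.0
    r = hit / gold if gold else 0.0
    return p, r

def f1_c(instances):
    """Corpus-level F1_c (Equations 4-5): average the per-instance
    conversational precision/recall, then take their harmonic mean."""
    ps, rs = zip(*(conversational_pr(O, L, E) for O, L, E in instances))
    p_c, r_c = sum(ps) / len(ps), sum(rs) / len(rs)
    return 2 * p_c * r_c / (p_c + r_c) if (p_c + r_c) else 0.0
```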
[ "objective", "method", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "other", "method", "method", "method", "abstain", "other", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "other", "abstain", "other", "abstain", "other", "abstain", "method", "objective", "objective", "objective", "abstain", "other" ]
[ "Models pre-trained on large-scale regular text corpora often do not work well for user-generated data where the language styles differ significantly from the mainstream text.", "Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST).", "CARI injects multiple rules into an end-to-end BERT-based encoder and decoder model.", "It learns to select optimal rules based on context.", "The intrinsic evaluation showed that CARI achieved the new highest performance on the FST benchmark dataset.", "Our extrinsic evaluation showed that CARI can greatly improve the regular pre-trained models' performance on several tweet sentiment analysis tasks.", "Many user-generated data deviate from standard language in vocabulary, grammar, and language style.", "For example, abbreviations, phonetic substitutions, Hashtags, acronyms, internet language, ellipsis, and spelling errors, etc are common in tweets (Ghani et al., 2019; Muller et al., 2019; Han et al., 2013; Liu et al., 2020).", "Such irregularity leads to a significant challenge in applying existing language models pre-trained on large-scale corpora dominated with regular vocabulary and grammar.", "One solution is using formality style transfer (FST) (Rao and Tetreault, 2018), which aims to transfer the input text's style from the informal domain to the formal domain.", "This may improve the downstream NLP applications such as information extraction, text classification and question answering.", "A common challenge for FST is low resource (Wu et al., 2020; Malmi et al., 2020; Wang et al., 2020).", "Therefore, approaches that integrate external knowledge, such as rules, have been developed.", "However, existing work (Rao and Tetreault, 2018; Wang et al., 2019) deploy context-insensitive rule injection methods (CIRI).", "As shown in Figure 1, when we try to use CIRI-based FST as the preprocessing for user-generated data in the sentiment classification task, according to the rule detection system, extro has two suggested changes extra or extrovert and intro corresponds to either introduction or", "introvert. The existing CIRI-based FST models would arbitrarily choose rules following first come first served (FCFS).", "As such, the input always, always they think I an extro, but Im a big intro actually could be translated wrongly as they always think I am an extra, but actually, I am a big", "introduction. 
This leads to the wrong sentiment classification since the FST result completely destroys the original input's semantic meaning.", "In this work, we propose Context-Aware Rule Injection (CARI), an end-to-end BERT-based encoder and decoder model that is able to learn to select optimal rules based on context.", "As shown in Figure 1, CARI chooses rules based on context.", "With CARI-based FST, pre-trained models can perform better on the downstream natural language processing (NLP) tasks.", "In this case, CARI outputs the correctly translated text they always think I am an extrovert, but actually, I am a big introvert, which helps the BERT-based classification model have the correct sentiment classification.", "In this study, we performed both intrinsic and extrinsic evaluation of existing FST models and compared them with the CARI model.", "The intrinsic evaluation results showed that CARI improved the state-of-the-art results from 72.7 and 77.2 to 74.31 and 78.05, respectively, on two domains of a FST benchmark dataset.", "For the extrinsic evaluation, we introduced several tweet sentiment analysis tasks.", "Considering that tweet data is typical informal user-generated data, and regular pre-trained models are usually pre-trained on formal English corpora, using FST as a preprocessing step of tweet data is expected to improve the performance of reg-User-generated input: always, always they think I an extro, but Im a big intro actually Rule Detection System extro extra extro extrovert Im I am intro introduction intro introvert Context-Insensitive Rule Injection ( CIRI ) Context-Aware Rule Injection ( CARI ) always, always they think I an extro, but Im a big intro actually always, always they think I an extra, but Im a big intro actually always, always they think I an extra, but I am a big intro actually always, always they think I an extra, but I am a big introduction actually Info1: I an extra , but Info2: I an extrovert , but Info3: , but I am a big Info4: a big introduction actually Info5: a big introvert actually Info1 + Info2 + Info3 + Info4 + Info5 Encoder-DecoderModel User-generated input + CIRI output User-generated input + CARI output Input + CIRI output: always, always they think I an extro, but Im a big intro actually [SEP] always, always they think I an extra, but I am a big introduction actually CIRI FST result: they always think I am an extra, but actually, I am a big introduction (incorrect FST result which fails the downstream tasks) Input + CARI output: always, always they think I an extro, but Im a big intro actually [SEP] I an extra , but [SEP] I an extrovert , but [SEP] , but I am a big [SEP] a big introduction actually [SEP] a big introvert actually CARI FST result: they always think I am an extrovert, but actually, I am a big introvert (correct FST result which helps the downstream tasks) CARI FST result CIRI FST result (context window size = 2) encode encode decode decode BERT-based Tweet ClassificationModel Downstream NLP Tasks (e.g. 
tweet classification) CIRI output CARI output User-generated input CIRI FST result CARI FST result input input output output incorrect classification correct classification incorrect classification input output Figure 1: An example of using Context-Insensitive Rule Injection ( CIRI ) and Context-Aware Rule Injection ( CARI ) FST models.", "ular pre-trained models on tweet downstream tasks.", "We regard measuring such improvement as the extrinsic evaluation.", "The extrinsic evaluation results showed that using CARI model as the prepocess-ing step improved the performance for both BERT and RoBERTa on several downstream tweet sentiment classification tasks.", "Our contributions are as follows:", "1. We propose a new method, CARI, to integrate rules for pre-trained language models.", "CARI is context-aware and can be trained end-to-end with the downstream NLP applications.", "2. We have achieved new state-of-the-art results for FST on the benchmark GYAFC dataset.", "3. We are the first to evaluate FST methods with extrinsic evaluation and we show that CARI outperformed existing rule-based FST approaches for sentiment classification.", "Rule-based Formality Style Transfer In the past few years, style-transfer generation has attracted increasing attention in NLP research.", "Early work transfers between modern English and the Shakespeare style with a phrase-based machine translation system (Xu et al., 2012).", "Recently, style transfer has been more recognized as a controllable text generation problem (Hu et al., 2017), where the style may be designated as sentiment (Fu et al., 2018), tense (Hu et al., 2017), or even general syntax (Bao et al., 2019; Chen et al., 2019).", "Formality style transfer has been mostly driven by the Grammarly's Yahoo Answers Formality Corpus (GYAFC) (Rao and Tetreault, 2018).", "Since it is a parallel corpus, FST usually takes a seq2seq-like approach (Niu et al., 2018; Xu et al., 2019).", "Existing research attempts to integrate the rules into the model because the GYAFC is low resource.", "However, rule matching and selection are context insensitive in previous methods (Wang et al., 2019).", "This paper focuses on developing methods for context-aware rule selection.", "Evaluating Style Transfer Previous work on style transfer (Xu et al., 2012; Jhamtani et al., 2017; Niu et al., 2017; Sennrich et al., 2016a) has repurposed the machine translation metric BLEU (Pa-pineni et al., 2002) and the paraphrase metric PINC (Chen and Dolan, 2011) for evaluation.", "Xu et al. 
(2012) introduced three evaluation metrics based on cosine similarity, a language model, and logistic regression.", "They also introduced human judgments for adequacy, fluency, and style (Xu et al., 2012; Niu et al., 2017).", "Rao and Tetreault (2018) evaluated formality, fluency, and meaning on the GYAFC dataset.", "Recent work on the GYAFC dataset (Wang et al., 2019; Zhang et al., 2020) mostly used BLEU as the evaluation metric for FST.", "However, all the aforementioned work focused on intrinsic evaluations.", "Our work has in addition evaluated FST extrinsically for downstream NLP applications.", "Lexical Normalisation Lexical normalisation (Han and Baldwin, 2011; Baldwin et al., 2015) is the task of translating non-canonical words into canonical ones.", "Like FST, lexical normalisation can also be used to preprocess user-generated data.", "The MoNoise model (van der Goot and van Noord, 2017) is a state-of-the-art model based on a feature-based Random Forest.", "The model ranks candidates provided by modules such as a spelling checker (aspell), an n-gram based language model, and word embeddings trained on millions of tweets.", "Unlike FST, MoNoise and other lexical normalisation models cannot change the data's language style.", "In this study, we explore the importance of language style transfer for user-generated data by comparing the results of MoNoise and FST models on tweet NLP downstream tasks.", "Improving language models' performance for user-generated data User-generated data often deviate from standard language.", "In addition to formality style transfer, there are some other ways to solve this problem (Eisenstein, 2013).", "Fine-tuning on downstream tasks with a user-generated dataset is most straightforward, but this is not easy for many supervised tasks without a large amount of accurately labeled data.", "Another method is to fine-tune pre-trained models on target domain corpora (Gururangan et al., 2020).", "However, it also requires sizable training data, which could be resource expensive (Sohoni et al., 2019; Dai et al., 2019; Yao et al., 2020).", "For the downstream NLP tasks where the input is user-generated data, we first used the FST model for preprocessing, and then fine-tuned the pre-trained models (BERT and RoBERTa) with both the original data D_ori and the FST data D_FST, which were concatenated with a special token [SEP], forming an input like (D_ori [SEP] D_FST).", "For the formality style transfer task, we use the BERT-initialized encoder paired with the BERT-initialized decoder (Rothe et al., 2020) as the Seq2Seq model.", "All weights were initialized from a public BERT-Base checkpoint (Devlin et al., 2019).", "The only variable that was initialized randomly is the encoder-decoder attention.", "Here, we describe CARI and several baseline methods of injecting rules into the Seq2Seq model.", "First, we fine-tuned the BERT model with only the original user-generated input.", "Given an informal input x_i and formal output y_i, we fine-tuned the model with {(x_i, y_i)}_{i=0}^{M}, where M is the number of data points.", "For baseline models, we experimented with two state-of-the-art methods for injecting rules.", "We followed Rao and Tetreault (2018) to create a set of rules that convert the original data x_i to rule-preprocessed data x'_i, and then fine-tune the model with the parallel data {(x'_i, y_i)}_{i=0}^{M}.", "This is called the Rule Base (RB) method.", "The preprocessed data, however, serves as a Markov blanket, i.e., the system is unaware of the
original data, provided that only the preprocessed one is given.", "Therefore, the rule detection system could easily make mistakes and introduce noise.", "Wang et al. (2019) improved RB by concatenating the original text x_i with the text processed by rules x'_i, with a special token [SEP] in between, forming an input like (x_i [SEP] x'_i).", "In this way, the model can make use of a rule detection system but also recognize its errors during fine-tuning.", "This is called the Rule Concatenation (RCAT) method.", "However, both the RB and RCAT methods are context-insensitive: the rules are selected arbitrarily.", "In the CIRI part of Figure 1, extra and introduction were incorrectly selected.", "This greatly limits the performance of the rule-based methods.", "As shown in Figure 1, the input of CARI consists of the original sentence x_i and supplementary information.", "Suppose that r_i is an exhaustive list of the rules that are successfully matched on x_i.", "We let r_i = {(t_{i,j}, c_{i,j}, a_{i,j})}_{j=0}^{N}, where N is the total number of matched rules in r_i.", "Here, t_{i,j} and c_{i,j} are the matched text and its context in the original sentence, respectively, for every matched rule in r_i, and a_{i,j} are the corresponding alternative texts for every matched rule in r_i.", "Each piece of supplementary information consists of one alternative text a_{i,j} and its corresponding context c_{i,j}.", "We connect all the supplementary information with the special token [SEP] and then append it after the original input.", "In this way, we form an input like (x_i [SEP] a_{i,1}, c_{i,1} [SEP] ... [SEP] a_{i,N}, c_{i,N}).", "Finally, the concatenated sequence and the corresponding formal reference y_i serve as a parallel text pair to fine-tune the Seq2Seq model.", "Like RCAT, CARI can also use the rule detection system and recognize its errors during fine-tuning.", "Furthermore, since we keep all rules in the input, CARI is able to dynamically identify which rule to use, maximizing the use of the rule detection system.", "For the intrinsic evaluation, we used the GYAFC dataset.", "It consists of handcrafted informal-formal sentence pairs in two domains, namely, Entertainment & Music (E&M) and Family & Relationship (F&R).", "Table 1 shows the statistics of the training, validation, and test sets for the GYAFC dataset.", "In the validation and test sets of GYAFC, each sentence has four references.", "To better explore the data requirements of different methods of combining rules, we followed Zhang et al.
(2020) and used the back translation method (Sennrich et al., 2016b) to obtain an additional 100,000 training examples.", "For the rule detection system, we used the grammarbot API and Grammarly to help us create a set of rules.", "For the extrinsic evaluation, we used two datasets for sentiment classification: SemEval-2018 Task 1: Affect in Tweets EI-oc (Mohammad et al., 2018), and Task 3: Irony Detection in English Tweets (Van Hee et al., 2018).", "Table 1 shows the statistics of the training, validation, and test sets for the two datasets.", "We normalized the two tweet NLP classification datasets by translating word tokens of user mentions and web/url links into the special tokens @USER and HTTPURL, respectively, and converting emoticon tokens into corresponding strings.", "We employed the transformers library (Wolf et al., 2019) to independently fine-tune the BERT-based encoder and decoder model for each method for 20,000 steps (intrinsic evaluation), and to fine-tune the BERT-based and RoBERTa-based classification models for each tweet sentiment analysis task for 10,000 steps (extrinsic evaluation).", "We used the Adam algorithm (Kingma and Ba, 2014) to train our model with a batch size of 32.", "We set the learning rate to 1e-5 and stop training if the validation loss increases in two successive epochs.", "We computed the task performance every 1,000 steps on the validation set.", "Finally, we selected the best model checkpoint to compute the performance score on the test set.", "We repeated this fine-tuning process three times with different random seeds and reported each final test result as an average over the test scores from the three runs.", "[Figure 2: The performance (BLEU) of different rule injection methods (NR, RB, RCAT, CARI) with different training sizes (30k-150k), on the Entertainment & Music (E&M) and Family & Relationship (F&R) domains.]", "During inference, we use beam search with a beam size of 4 and a beam width of 6 to generate sentences.", "The whole experiment was carried out on one TITAN X GPU.", "Each FST model finished training within 12 hours.", "We used two state-of-the-art models, which were also relevant to our methods, as strong intrinsic baseline models.", "ruleGPT Like RCAT, Wang et al. (2019) aimed to solve the problem of information loss and noise caused by directly using rules as normalization in preprocessing.", "They put forward GPT-based (Radford et al., 2019) methods to concatenate the original input sentence and the sentence preprocessed by the rule detection system.", "Like the CIRI methods (RB, RCAT), their methods could not make full use of rules, since they were also context-insensitive when selecting rules.", "BT + M-Task + F-Dis Zhang et al.
"BT + M-Task + F-Dis: Zhang et al. (2020) used three data augmentation methods, back translation (Sennrich et al., 2016b), formality discrimination, and multi-task transfer, to solve the low-resource problem.", "In our experiments, we also use the back translation method to obtain additional data because we want to verify the impact on the amount of training data required when using different methods to combine rules.", "4.4 Extrinsic Evaluation Baselines: BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) are two typical regular language models pre-trained on large-scale formal text corpora, like BooksCorpus (Zhu et al., 2015) and English Wikipedia.", "User-generated data, such as tweets, deviate from formal text in vocabulary, grammar, and language style.", "As a result, regular language models often perform poorly on user-generated data.", "FST aims to generate a formal sentence given an informal one, while keeping its semantic meaning.", "A good FST result is expected to make regular language models perform better on user-generated data.", "For the extrinsic evaluation, we chose BERT and RoBERTa as the basic models.", "We introduced several tweet sentiment analysis tasks to explore the FST models' ability to transfer user-generated data from the informal domain to the formal domain.", "Ideally, FST results for tweet data can improve the performance of BERT and RoBERTa on tweet sentiment analysis tasks.", "We regard measuring such improvement as the extrinsic evaluation.", "Besides, tweet data have much unique information, like emoji, hashtags, ellipsis, etc., which are not available in the GYAFC dataset.", "So in the extrinsic evaluation result analysis, although the final scores of FST-BERT and FST-RoBERTa were good, we paid more attention to the improvement of their performance before and after using FST, rather than to the scores themselves.", "We used two different kinds of state-of-the-art baselines, described below.", "Table 2: The extrinsic evaluation results on tweet sentiment analysis tasks (the MoNoise/RCAT/CARI columns denote BERT or RoBERTa with the corresponding preprocessing).
Irony Detection (evaluation metric: F1)
| Task    | UCDCC | BERT | MoNoise | RCAT | CARI | RoBERTa | MoNoise | RCAT | CARI |
| Irony-a | 72.4  | 71.8 | 72.2    | 72.2 | 72.5 | 72.6    | 72.6    | 73.1 | 73.7 |
| Irony-b | 50.7  | 48.6 | 48.8    | 50.2 | 50.9 | 51.2    | 51.0    | 53.3 | 53.8 |
Affect in Tweets EI-oc (evaluation metric: Pearson r)
| Task  | SeerNet | BERT | MoNoise | RCAT | CARI | RoBERTa | MoNoise | RCAT | CARI |
| Joy   | 72.0    | 69.1 | 68.6    | 69.7 | 70.4 | 71.8    | 71.5    | 72.9 | 73.5 |
| Anger | 70.6    | 71.6 | 71.7    | 71.9 | 72.0 | 72.0    | 71.7    | 72.3 | 72.2 |
| Sad   | 71.7    | 66.8 | 66.4    | 67.4 | 68.3 | 68.2    | 68.0    | 69.1 | 70.1 |
| Fear  | 63.7    | 66.9 | 66.8    | 67.1 | 69.2 | 69.8    | 69.4    | 70.5 | 71.4 |", "SeerNet and UCDCC: We used the best results from the SemEval-2018 workshop as the first comparison method.", "For the task Affect in Tweets EI-oc, the baseline is SeerNet (Duppada et al., 2018), and for the task Irony Detection in English Tweets, the baseline is UCDCC (Ghosh and Veale, 2018).", "MoNoise: MoNoise (van der Goot and van Noord, 2017) is the state-of-the-art model for lexical normalization (Baldwin et al., 2015), which aims to translate non-canonical words into canonical ones.", "Like the FST model, MoNoise can also be used as a preprocessing step in tweet classification tasks to normalize tweet input.", "So we used MoNoise as another comparison method.", "Figure 2 shows the validation performance on both the E&M and the F&R domains.", "Compared to NR, RB did not improve significantly.", "As we discussed above, even though the rule detection system brings some useful information, it also makes mistakes and introduces noise.",
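The extrinsic evaluation pipeline described above, FST as a preprocessing step before a regular pre-trained classifier, can be sketched schematically as follows; `fst_model` and `classifier` are assumed interfaces standing in for the fine-tuned Seq2Seq and BERT/RoBERTa models, not real APIs.

```python
# Schematic of the extrinsic evaluation: formality style transfer as a
# preprocessing step for tweet classification. `fst_model` and `classifier`
# are assumed interfaces standing in for the fine-tuned models.
def extrinsic_eval(tweets, labels, fst_model, classifier, metric):
    # 1) transfer each (normalized) informal tweet into formal text
    formal_texts = [fst_model.generate(t) for t in tweets]
    # 2) classify the transferred text with a regular pre-trained model
    predictions = [classifier.predict(t) for t in formal_texts]
    # 3) the gain over classifying raw tweets measures FST usefulness
    raw_predictions = [classifier.predict(t) for t in tweets]
    return metric(labels, predictions) - metric(labels, raw_predictions)
```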
"RB has no access to the original data, so it cannot distinguish helpful information from noise and mistakes.", "On the contrary, both RCAT and CARI have access to the original data, so their results improved substantially compared with RB.", "CARI obtained a better result than RCAT.", "This is because RCAT is context-insensitive while CARI is context-aware when selecting rules to modify the original input.", "Therefore, CARI is able to learn to select optimal rules based on context, while RCAT may miss many correct rules with its pipelined preprocessing step for rules.", "Figure 2 also shows the relationship between the different methods and different training sizes.", "Compared with the NR method, the three methods that use rules reach their best performance with a smaller training size.", "This result shows the positive effect of adding rules in the low-resource situation of the GYAFC dataset.", "Moreover, CARI needed a larger training set than RB and RCAT to reach its best performance, since it needs more data to learn how to dynamically identify which rule to use.", "In Table 4, we explored what context window size is appropriate for the CARI method on the GYAFC dataset.", "The results show that, for both domains, once the window size reaches two (taking two tokens each from the text before and after the match), the Seq2Seq model can match all rules to their corresponding positions in the original input, and performance plateaus thereafter.", "Table 4: CARI performance (BLEU) by different context window sizes.
| Context window size | 0    | 1    | 2    | 3    | 4    | 5    |
| E&M                 | 68.1 | 72.5 | 74.2 | 74.6 | 74.3 | 74.5 |
| F&R                 | 70.5 | 74.3 | 76.9 | 77.5 | 76.8 | 77.3 |", "Table 2 shows the effectiveness of using CARI as the preprocessing step for user-generated data when applying regular pre-trained models (BERT and RoBERTa) to downstream NLP tasks.", "Compared with the previous state-of-the-art results (UCDCC and SeerNet), the results of using BERT and RoBERTa directly were often very poor, since BERT and RoBERTa were only pre-trained on regular text corpora.", "Tweet data has very different vocabulary, grammar, and language style from regular text corpora, so it is hard for BERT and RoBERTa to perform well with a small amount of fine-tuning data.", "The results of RCAT and CARI show that FST can help BERT and RoBERTa improve their performance on tweet data, because they can transfer tweets into more formal text while keeping the original intention as much as possible.", "CARI performed better than RCAT, which is also in line with the results of the intrinsic evaluation.", "This result also shows the rationality of our extrinsic evaluation metrics.", "Comparing the results of MoNoise with BERT and RoBERTa, the input preprocessed by MoNoise cannot help the pre-trained models improve effectively.", "We think this is because lexical normalization models represented by MoNoise only translate non-canonical words in tweet data into canonical ones.", "Therefore, MoNoise can basically solve the problem of different vocabulary between regular text corpora and user-generated data, but it cannot effectively solve the problem of different grammar and language style.", "As a result, for BERT and RoBERTa, even though there is no out-of-vocabulary (OOV) problem in the input data processed by MoNoise, they still cannot accurately understand the meaning of the input.", "translation task (Owoputi et al., 2013; Nguyen et al., 2020).", "On the contrary, the positive results of the FST methods also show that FST is more suitable as a preprocessing step for downstream tasks on user-generated data.",
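The context window described above (w tokens on each side of the matched text) can be extracted as in the following minimal sketch; the helper name and whitespace tokenization are our own simplifications.

```python
# Minimal sketch of extracting a context window of w tokens on each side
# of a matched rule span; whitespace tokenization is a simplification.
def context_window(sentence, matched_text, w=2):
    tokens = sentence.split()
    span = matched_text.split()
    for i in range(len(tokens) - len(span) + 1):
        if tokens[i:i + len(span)] == span:
            left = tokens[max(0, i - w):i]
            right = tokens[i + len(span):i + len(span) + w]
            return " ".join(left + span + right)
    return matched_text  # fall back if the match is not found

print(context_window("explain 2 ur parents that u really want 2 act", "ur", w=2))
# -> "explain 2 ur parents that"
```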
"Because FST models must transfer an informal language style to a formal one while preserving its semantic meaning, a good FST model can ideally handle all the problems of vocabulary, grammar, and language style.", "This can help most language models pre-trained on regular corpora, like BERT and RoBERTa, perform better on user-generated data.", "The prior evaluation results reveal the relative performance differences between approaches.", "Here, we identify trends within and between approaches.", "We sample 50 informal sentences in total from the datasets and then analyze the outputs from each model.", "We present several representative results in Table 5.", "Examples 1 and 2 show that, for BERT and RoBERTa, FST models are more suitable for preprocessing user-generated data than lexical normalization models.", "In example 1, both methods can effectively deal with the problem at the vocabulary level (2 to to, ur to your, and U to you).", "However, in example 2, FST can further transform the source data into a language style more familiar to BERT and RoBERTa, which is not available in current lexical normalization methods such as MoNoise.", "Example 3 shows the importance of injecting rules into the FST models.", "The word idiodic is a misspelling of idiotic, which is an OOV.", "Therefore, without the help of rules, the model cannot understand the source data's meaning and produced the wrong final output I do not understand your question.", "Example 4 shows the importance of context for rule selection.", "The word concern provides the required context to understand that exo refers to an extra ticket.", "So the CARI-based model can choose the right rule (exo to extra).", "Examples 5 and 6 show the shortcomings of CARI.", "In example 5, the rule detection system did not provide the information that the fidy center should be 50 Cent (the American rapper), so CARI delivered the wrong result.", "Even though CARI helps mitigate the low-resource data challenge, it faces challenges of its own.", "CARI depends on the coverage of the rule detection system.", "(Table 5 excerpt, Example 1 Source: explain 2 ur parents that u really want 2 act.)", "In example 6, CARI mistakenly selected the rule eat me, but not eat it.", "This example also demonstrates the data sparsity that CARI faces.", "Here eat me is more commonly used than eat it.", "6 Conclusions: In this work, we proposed Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST) that injects multiple rules into an end-to-end BERT-based encoder and decoder model.", "The intrinsic evaluation showed that our CARI method achieved the highest performance on previous metrics on the FST benchmark dataset.", "Besides, we were the first to evaluate FST methods with extrinsic evaluation, specifically on sentiment classification tasks.", "The extrinsic evaluation results showed that using CARI-based FST as the preprocessing step outperformed existing rule-based FST approaches.", "Our results showed the rationality of adding such extrinsic evaluation.", "The authors are grateful to Hadi Amiri (University of Massachusetts, Lowell) for his expert help in processing Twitter data, and to the UMass BioNLP Group for many meaningful discussions.", "This work was supported in part by the Center for Intelligent Information Retrieval.", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor." ]
[ "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "other", "other" ]
[ "Simultaneous translation has many important application scenarios and attracts much attention from both academia and industry recently.", "Most existing frameworks, however, have dif-ficulties in balancing between the translation quality and latency, i.e., the decoding policy is usually either too aggressive or too conser-vative.", "We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain mount of extra words at each step to keep the audience on track with the latest information.", "At the same time, it also corrects, in a timely fashion, the mistakes in the former overgenerated words when observing more source context to ensure high translation quality.", "Experiments show our technique achieves substantial reduction in latency and up to +3.1 increase in BLEU, with revision rate under 8% in Chinese-to-English and English-to-Chinese translation.", "Simultaneous translation, which starts translation before the speaker finishes, is extremely useful in many scenarios, such as international conferences, travels, and so on.", "In order to achieve low latency, it is often inevitable to generate target words with insufficient source information, which makes this task extremely challenging.", "Recently, there are many efforts towards balancing the translation latency and quality with mainly two types of approaches.", "On one hand, Ma et al. (2019a) propose very simple frameworks that decode following a fixed-latency policy such as wait-k .", "On the other hand, there are many attempts to learn an adaptive policy which enables the model to decide READ or WRITE action on the fly using various techniques such as reinforcement learning (Gu et al., 2017; Alinejad et al., 2018; Grissom II These authors contributed equally.", "et al., 2014), supervised learning over pseudo-oracles (Zheng et al., 2019a), imitation learning (Zheng et al., 2019b), model ensemble (Zheng et al., 2020) or monotonic attention (Ma et al., 2019d; Arivazhagan et al., 2019).", "Though the existing efforts improve the performance in both translation latency and quality with more powerful frameworks, it is still difficult to choose an appropriate policy to explore the optimal balance between latency and quality in practice, especially when the policy is trained and applied in different domains.", "Furthermore, all existing approaches are incapable of correcting the mistakes from previous steps.", "When the former steps commit errors, they will be propagated to the later steps, inducing more mistakes to the future.", "Inspired by our previous work on speculative beam search (Zheng et al., 2019c), we propose an opportunistic decoding technique with timely correction mechanism to address the above problems.", "As shown in Fig. 
1, our proposed method always decodes more words than the original policy at each step to catch up with the speaker and reduce latency.", "At the same time, it also employs a timely correction mechanism to review the extra outputs from previous steps with more source context, and revises these outputs with the current preference when there is a disagreement.", "Our algorithm can be used in both speech-to-text and speech-to-speech simultaneous translation (Oda et al., 2014; Bangalore et al., 2012; Yarmohammadi et al., 2013).", "In the former case, the audience will not be overwhelmed by the modifications since we only review and modify the last few output words with a relatively low revision rate.", "In the latter case, the revisable extra words can be used as the look-ahead window in incremental TTS (Ma et al., 2019b).", "By contrast, the alternative re-translation strategy (Arivazhagan et al., 2020) causes non-local revisions, which makes it impossible to use in incremental TTS.", "We also define, for the first time, two metrics for revision-enabled simultaneous translation: a more general latency metric, Revision-aware Average Lagging (RAL), as well as the revision rate.", "We demonstrate the effectiveness of our proposed technique using fixed (Ma et al., 2019a) and adaptive (Zheng et al., 2019a) policies in both Chinese-to-English and English-to-Chinese translation.", "Full-sentence NMT.", "Conventional full-sentence NMT processes the source sentence x = (x_1, ..., x_n) with an encoder, where x_i represents an input token.", "The decoder on the target side (greedily) selects the highest-scoring word y_t given the source representation h and previously generated target tokens y_{<t} = (y_1, ..., y_{t-1}), and the final hypothesis y = (y_1, ..., y_t) with y_t = <eos> has the highest probability: $p(y \mid x) = \prod_{t=1}^{|y|} p(y_t \mid x, y_{<t})$ (1).", "Simultaneous Translation.", "Without loss of generality, regardless of the actual design of the policy, simultaneous translation can be represented as: $p_g(y \mid x) = \prod_{t=1}^{|y|} p(y_t \mid x_{\le g(t)}, y_{<t})$ (2), where g(t) can represent any arbitrary fixed or adaptive policy.", "For simplicity, we assume the policy is given and do not distinguish between the two types of policies.", "Opportunistic Decoding.", "For simplicity, we first apply this method to fixed policies.", "We denote by y_t the original decoded word at time step t, i.e., the word that is decoded at step t by the original model.", "We denote the additional decoded words at time step t as $y_t^{\le w} = (y_t^1, ..., y_t^w)$, where w denotes the number of extra decoded words.", "In our setting, the decoding process is as follows: $p_g(y_t \circ y_t^{\le w} \mid x_{\le g(t)}) = p_g(y_t \mid x_{\le g(t)}) \prod_{i=1}^{w} p_g(y_t^i \mid x_{\le g(t)}, y_t \circ y_t^{<i})$ (3), where $\circ$ is the string concatenation operator.", "We treat the procedure for generating the extra decoded sequence as opportunistic decoding, which prefers to generate more tokens based on the current context.", "When we have enough information, this opportunistic decoding eliminates unnecessary latency and keeps the audience on track.", "With a certain chance, when opportunistic decoding is too aggressive and generates inappropriate tokens, we need to fix the inaccurate tokens immediately.",
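To make the opportunistic decoding step concrete, here is a minimal sketch under a wait-k fixed policy (Eq. 3 with greedy decoding); the `decode_next` interface and the function name are our own simplifications, not the paper's implementation.

```python
# Minimal sketch of one opportunistic decoding step under a wait-k policy.
# `decode_next(src_prefix, tgt_prefix)` is an assumed interface that greedily
# returns the next target word given the available source prefix.
def opportunistic_step(src, t, k, w, committed, decode_next):
    g_t = min(k + t - 1, len(src))   # wait-k policy: g(t) = min(k + t - 1, |x|)
    src_prefix = src[:g_t]
    y_t = decode_next(src_prefix, committed)   # irreversible word y_t
    extra = []                                 # revisable words y_t^{<=w}
    for _ in range(w):
        extra.append(decode_next(src_prefix, committed + [y_t] + extra))
    return y_t, extra
```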
"Timely Correction.", "In order to deliver correct information to the audience promptly and fix previous mistakes as soon as possible, we also need to review and modify the previous outputs.", "At step t+1, when the encoder obtains more information, from $x_{\le g(t)}$ to $x_{\le g(t+1)}$, the decoder is capable of generating more appropriate candidates and may revise and replace the previous outputs from opportunistic decoding.", "More precisely, $y_t^{\le w}$ and $y_{t+1} \circ y_{t+1}^{\le w-1}$ are two different hypotheses over the same time chunk.", "When there is a disagreement, our model always uses the hypothesis from the later step to replace the previous commits.", "Note that our model does not change any word in y_t from the previous step; it only revises the words in $y_t^{\le w}$.", "Modification for Adaptive Policies.", "For adaptive policies, the only difference is that, instead of committing a single word, the model is capable of generating multiple irreversible words.", "Thus our proposed method can easily be applied to adaptive policies.", "Correction with Beam Search.", "When the model commits more than one word at a time, we can use beam search to further improve the translation quality and reduce the revision rate (Murray and Chiang, 2018; Ma et al., 2019c).", "Figure 2: The decoder generates the target word y_4 = his and two extra words welcome to at step t = 4, when the input x_9 = zàntóng (agreement) is not yet available.",
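A minimal sketch of the timely correction rule above, merging the step-(t+1) hypothesis with the previously shown revisable words; the names are our own, and the counting of revisions is for illustration.

```python
# Minimal sketch of timely correction: at step t+1 the new hypothesis
# y_{t+1} . y_{t+1}^{<=w-1} replaces any disagreeing revisable words from step t.
def timely_correct(revisable_prev, y_next, extra_next):
    """revisable_prev: the w revisable words shown at step t (y_t^{<=w}).
    y_next, extra_next: the committed word and extra words from step t+1.
    Returns (revisions, new_display): number of words revised, and the
    hypothesis now shown over the same time chunk."""
    new_hypo = [y_next] + extra_next
    revisions = sum(1 for old, new in zip(revisable_prev, new_hypo) if old != new)
    return revisions, new_hypo

print(timely_correct(["welcome", "to"], "agreement", ["to"]))
# -> (1, ['agreement', 'to'])   # "welcome" is revised to "agreement"
```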
Revision-aware AL (RAL), which can be applied to any kind of translation scenarios, i.e., full-sentence translation, use re-translation as simultaneous translation, fixed and adaptive policy simultaneous translation.", "Note that for latency and revision rate calculation, we count the target side difference respect to the growth of source side.", "As it is shown in Fig. 3, there might be multiple changes for each output words during the translation, and we only start to calculate the latency for this word once it agrees with the final results.", "Therefore, it is necessary to locate the last change for each word.", "For a given source side time s , we denote the t th outputs on target side as f ( x (cid:54) s ) t .", "Then we are able to find the Last Revision (LR) for the t th word on target side as follows: LR ( t ) = argmax s< | x | (cid:0) f ( x (cid:54) ( s 1) ) t (cid:54) = f ( x (cid:54) s ) t (cid:1) From the audience point of view, once the former words are changed, the audience also needs to take the efforts to read the following as well.", "Then we also penalize the later words even there are no changes, which is shown with blue arrow in Fig. 3.", "We then re-formulate the LR ( t ) as follows (assume LR (0) = 0 ): 5 10 15 Revision-aware Average Lagging (zh en) 25.0 27.5 30.0 32.5 35.0 37.5 40.0 42.5 45.0 4 r e f BLEU w =5, b >1 w =3, b >1 w =1, b >1 w =5, b =1 w =3, b =1 w =1, b =1 w =0, b =1 29.6 0 5 10 15 Revision-aware Average Lagging (en zh) 10 12 14 16 18 20 22 24 1 r e f BLEU w =5, b >1 w =3, b >1 w =1, b >1 w =5, b =1 w =3, b =1 w =1, b =1 w =0, b =1 38.3 Figure 4: BLEU against RAL using waitk polocies.", "LR ( t ) = max { LR ( t 1) , LR ( t ) } (5) The above definition can be visualized as the thick black line in Fig. 3.", "Similar with original AL, our proposed RAL is defined as follows: RAL( x , y ) = 1 ( | x | ) ( | x | ) (cid:88) t =1 LR ( t ) t 1 r (6) where ( | x | ) denotes the cut-off step, and r = | y | / | x | is the target-to-source length ratio.", "Since each modification on the target side would cost extra effort for the audience to read, we penalize all the revisions during the translation.", "We define the revision rate as follows: (cid:16) | x | 1 (cid:88) s =1 dist (cid:16) f ( x (cid:54) s ) , f ( x (cid:54) s +1 ) (cid:17)(cid:17)(cid:46)(cid:16) | x | (cid:88) s =1 | f ( x (cid:54) s ) | (cid:17) where dist can be arbitrary distance measurement between two sequences.", "For simplicity, we design a modified Hamming Distance to measure the difference: dist ( a, b ) = hamming (cid:0) a, b | a | (cid:104) pad (cid:105) max( | a || b | , 0) (cid:1) where (cid:104) pad (cid:105) is a padding symbol in case b is shorter than a .", "Datasets and Implementation We evaluate our work on Chinese-to-English and English-to-Chinese simultaneous translation tasks.", "We use the NIST corpus (2M sentence pairs) as the training data.", "We first apply BPE (Sennrich et al., 2015) on all texts to reduce the vocabulary sizes.", "For evaluation, we use NIST 2006 and NIST 2008 as our dev and test sets with 4 English references.", "We re-implement waitk model (Ma et al., 2019a) and adaptive policy (Zheng et al., 2019a).", "We use Transformer (Vaswani et al., 2017) based wait-k model and pre-trained full-sentence model for learning adaptive policy.", "Performance on Waitk Policy We perform experiments using opportunistic decoding on wait-k policies with k { 1 , 3 , 5 , 7 , 9 } , opportunistic window w { 1 , 3 , 5 } and beam size b { 1 , 3 , 5 , 7 , 10 , 15 } .", "We select the 
"We select the best beam size for each policy and window pair on the dev set.", "We compare our proposed method with a baseline called re-translation, which uses a full-sentence NMT model to re-decode the whole target sentence once a new source word is observed.", "The final output sentences of this method are identical to the full-sentence translation output with the same model, but the latency is reduced.", "Fig. 4 (left) shows the Chinese-to-English results of our proposed algorithm.", "Since our greedy opportunistic decoding does not change the final output, there is no difference in BLEU compared with normal decoding, but the latency is reduced.", "However, by applying beam search, we can achieve a 3.1 BLEU improvement and a 2.4 latency reduction on the wait-7 policy.", "Fig. 4 (right) shows the English-to-Chinese results.", "Compared to the Chinese-to-English translation results in the previous section, there is comparatively less latency reduction from using beam search, because the output translations are slightly longer, which hurts the latency.", "As shown in Fig. 5 (right), the revision rate is still controlled under 8%.", "Fig. 5 shows the revision rate with different window sizes on wait-k policies.", "In general, with opportunistic window w <= 5, the revision rate of our proposed approach is under 8%, which is much lower than that of re-translation.", "Performance on Adaptive Policies.", "Fig. 6 shows the performance of the proposed algorithm on adaptive policies.", "We use thresholds {0.55, 0.53, 0.5, 0.47, 0.45}.", "We vary the beam size b in {1, 3, 5, 7, 10} and select the best one on the dev set.", "Compared with conventional beam search on consecutive writes, our decoding algorithm achieves much higher BLEU and lower latency.", "5.1 Revision Rate vs. Window Size.", "Figure 7: Revision rate against beam size (Chinese-to-English) with a window size of 3 and different wait-k policies (wait-1 through wait-9).", "We further investigate the revision rate with different beam sizes on wait-k policies.", "Fig. 7 shows that the revision rate is higher with lower wait-k policies.", "This makes sense because low-k policies are more aggressive and more prone to mistakes.", "Moreover, we find that the revision rate is not very sensitive to the beam size.", "We have proposed an opportunistic decoding technique with timely correction which improves both the latency and quality of simultaneous translation.", "We also defined two metrics for revision-enabled simultaneous translation for the first time.", "L. H. was supported in part by NSF IIS-1817231." ]
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "other" ]
[ "Given the claims of improved text generation quality across various pre-trained neural models, we consider the coherence evaluation of machine generated text to be one of the principal applications of coherence models that needs to be investigated.", "Prior work in neural coherence modeling has primarily focused on devising new architectures for solving the permuted document task.", "We instead use a basic model architecture and show significant improvements over state of the art within the same training regime.", "We then design a harder self-supervision objective by increasing the ratio of negative samples within a contrastive learning setup, and enhance the model further through automatic hard negative mining coupled with a large global negative queue encoded by a momentum encoder.", "We show empirically that increasing the density of negative samples improves the basic model, and using a global negative queue further improves and stabilizes the model while training with hard negative samples.", "We evaluate the coherence model on task-independent test sets that resemble real-world applications and show significant improvements in coherence evaluations of downstream tasks.", "1 1 Introduction Coherence is a property of a well-written text that makes it different from a random set of sentences: sentences in a coherent text are connected in systematic ways such that each sentence follows naturally from previous ones and leads into the following ones (Halliday and Hasan, 1976; Grosz and Sidner, 1986).", "Coherence models (Barzilay and Lapata, 2005) that can distinguish a coherent text from incoherent ones have a wide range of applications in language generation, summarization, and coherence assessment tasks such as essay scoring and sentence ordering.", "1 Our code and data are available at https://ntunlpsg.github.io/project/coherence-paradigm With recent advancements in neural methods, claims of fluency in summarization (Liu et al., 2017; Celikyilmaz et al., 2018), language modeling (Radford et al., 2019; Brown et al., 2020), response generation (Zhang et al., 2020; Hosseini-Asl et al., 2020) and human parity in machine translation (Hassan et al., 2018) have led to calls for finer-grained discourse-level evaluations (Lubli et al., 2018; Sharma et al., 2019; Popel et al., 2020), since traditional metrics such as BLEU and ROUGE are unable to measure text quality and readability (Paulus et al., 2018; Reiter, 2018).", "Coherence models that can evaluate machine-generated text have become the need of the hour.", "A majority of coherence models proposed optimize their learning objectives on the permuted document task using the Penn Treebank (WSJ) corpus.", "An original article is considered a positive' sample of a coherent document, while a permutation of its sentences is considered a negative' or incoherent sample (see Appendix A.1 for an ex-ample).", "Models are usually trained in a pairwise ranking fashion to distinguish the two.", "The basic entity-grid model proposed by Barzilay and Lapata (2005, 2008) was extended to incorporate entity-specific features (Elsner and Char-niak, 2011), multiple ranks (Feng and Hirst, 2012), and coherence relations (Lin et al., 2011; Feng et al., 2014).", "Their neural extensions have also been proposed (Nguyen and Joty, 2017; Mohiuddin et al., 2018).", "More recent state-of-the-art models like the Transferable Neural model (Xu et al., 2019) consider coherence at a local level by training a forward and backward model only on adjacent sentences, in addition 
to generative pre-training of the sentence encoders.", "The Unified Coherence model (Moon et al., 2019) uses a bi-linear layer and lightweight convolution-pooling in a Siamese framework to capture discourse relations and topic structures, along with an explicit language model loss to capture syntactic patterns.", "Mohiuddin et al. (2021) recently tested these state-of-the-art models by conducting coherence evaluations on the WSJ permuted document task, machine translation, summarization and next utterance ranking tasks.", "They found that while the models performed well on the permuted document task, when tested off-the-shelf, they generalized poorly to downstream evaluation tasks.", "They call for more comprehensive evaluations of coherence models.", "Pishdad et al. (2020) also reached a similar conclusion.", "They retrained several neural coherence models for tasks analogous to coherence modeling, such as detecting connective substitution and topic switching.", "They found that performance on the permuted document task is only partially indicative of coherence modeling capabilities.", "In light of these recent findings, our aim is to propose a coherence model that generalizes well to downstream tasks.", "We train our model purely through self-supervision, without tailoring the model architecture specifically to the permuted document task, and without any other form of supervision.", "Li and Jurafsky (2017) point out that coherence models are exposed to a limited number of incoherent samples in the pairwise setup, since only a small sample of all possible incoherent permutations of a document is used to train models.", "Learning with more negatives can better maximize the mutual information between their representations (van den Oord et al., 2018).", "By using a contrastive learning (Gutmann and Hyvärinen, 2010) setup, where each 'positive' document is compared with multiple 'negative' documents, we increase the proportion of negative samples that the model is exposed to, and show that the coherence model achieves significant improvements in performance.", "Wu et al. (2020) show that the difficulty of the negative samples used for contrastive training can strongly influence model success in visual representation learning.", "Guided by this principle, we train the model with automatically mined hard negatives, coupled with a large global negative queue encoded by a momentum encoder (He et al., 2019).", "In summary, our contributions are: a neural coherence model trained purely through well-designed self-supervision tasks that generalizes well to downstream applications; evaluation on multiple independent test sets that are more indicative of the real-world performance of the coherence model; and empirical results demonstrating that an increase in the density and quality of negative samples leads to better generalization for coherence models.", "To ensure that our coherence model is useful for evaluation in downstream applications, we use a selection of task-independent test sets that cover a variety of domains and genres, including machine-generated text from summarization systems and language models.", "Following Pishdad et al.
(2020), we also evaluate the models on a commonsense reasoning narrative dataset.", "We train (and validate) the coherence models on standard WSJ data, while using the rest as independent test sets to indicate the generalizability of the trained models.", "All evaluations on downstream tasks are conducted in a pairwise setting to enable a fair comparison.", "2.1 Training Data.", "WSJ: The Wall Street Journal (WSJ) corpus consists of news articles divided into 1240/138/1053 documents for training/development/testing in the standard setup.", "We exclude documents with < 4 sentences and truncate them to a maximum length of 600 tokens.", "To maximally utilize documents which are otherwise truncated due to GPU memory constraints, we partition documents with 20+ sentences into blocks of 10 sentences and consider each block as a separate positive document.", "This increases the number of 'coherent documents' that we can use to generate a larger training set.", "Moon et al. (2019) use 20 permutations of a document for training; since their setup is pairwise, this means the original positive document is repeated 20x.", "We regenerate the permuted documents similarly, sampling a larger set of permutations for our contrastive learning setup.", "We ensure that the generated permuted documents are not repeated: for example, our contrastive learning setup requires 5 negative samples per instance; because each positive document appears 20 times in the original dataset, 100 unique permutations would be generated and divided accordingly.", "This gives us 46,522 instances of positive and corresponding negative documents for training and 4,522 instances for development.", "We use the original pairwise test set used by Moon et al. (2019), with 20,411 pairs, for testing.", "SUMMEVAL: Fabbri et al. (2020) conduct a manual coherence evaluation of the summaries generated by 16 different summarization systems for 100 source articles based on the CNN/DailyMail (Hermann et al., 2015) dataset.", "Likert-style coherence ratings from 3 expert annotators are available for each summarized text.", "We adapt this to the pairwise setting by creating pairs of summaries from every system for each unique source article.", "The summary with the higher average coherence rating is designated as the positive document, while the summary with the lower rating is the negative document for that pair.", "This results in (16 choose 2) × 100 = 12,000 pairs for evaluation.", "LMVLM: To cover a wider variety of machine-generated text, we generated texts from various language models using prompts taken from the validation and test sets of the WritingPrompts dataset (Fan et al., 2018).", "Four language models were chosen for this purpose: GPT2-Small, GPT2-XL, CTRL and GPT3.", "The continuations produced by these models for each prompt were truncated at approximately 150 tokens and paired together.", "Using these texts, we conducted a user study on Amazon Mechanical Turk.", "Workers were instructed about the concept of coherence and shown examples of coherent and incoherent texts.", "Given the prompt, they were asked to choose the more coherent text out of two given language model outputs; they were also given the option to choose neither in case the texts were equally coherent/incoherent (see Appendix A.3 for more details such as the study interface).", "After removing the samples with low agreements and ties, a total of 1,046 pairs with judgments from 3 annotators each were collected.",
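The unique-permutation negative sampling described above can be sketched as follows; this is our own minimal implementation for illustration, not the authors' released code.

```python
import random

# Minimal sketch of sampling unique sentence permutations as negatives;
# not the authors' released code.
def sample_unique_permutations(sentences, n_neg, max_tries=1000):
    original = tuple(sentences)
    negatives, seen = [], {original}
    tries = 0
    while len(negatives) < n_neg and tries < max_tries:
        perm = sentences[:]
        random.shuffle(perm)
        key = tuple(perm)
        if key not in seen:      # never repeat a permutation (or the original)
            seen.add(key)
            negatives.append(perm)
        tries += 1
    return negatives

doc = ["S1.", "S2.", "S3.", "S4.", "S5."]
negs = sample_unique_permutations(doc, n_neg=5)
```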
"The Krippendorff's alpha coefficient (Krippendorff, 2011) between the annotators was 0.84.", "We calculate the agreement of the coherence model ranking with these judgments, designated LMVLM.", "INSTED: Shen et al. (2021) propose a sentence intrusion detection task in order to test the coherence modeling capabilities of pre-trained language models.", "Incoherent documents are created by substituting a sentence from a document with another sentence from a different document, ensuring that the replacement sentence is similar to the original document to make the task sufficiently hard.", "We adapt their task to the pairwise setting by pairing the original coherent and the corrupted incoherent documents, giving us 7,168 instances from their CNN test set (INSTED-CNN) and 3,666 instances from their Wikipedia test set (INSTED-WIKI) for evaluation.", "Shen et al. (2021) also create a handcrafted linguistic probe test set, where incoherence is manually inserted based on a range of linguistic phenomena; we use this test set for analysis (§4).", "STORYCLOZE: The STORYCLOZE dataset (created from ROCSTORIES (Sharma et al., 2018)) consists of a short narrative-style text with two possible endings, one of which is implausible.", "The test set labels are not public, so we use the validation set.", "We designate the text with the correct ending as the positive document and the text with the incorrect ending as the negative document, resulting in a total of 1,571 pairs for evaluation.", "Previous work on coherence modeling proposed elaborate architectures to capture various aspects of coherence (see §1).", "However, our key hypothesis is that large-scale pre-trained models are expressive enough to model coherence given the right self-supervision.", "Effective bi-directional encoding through large Transformer networks (Vaswani et al., 2017) can consider longer language context, while language modeling objectives enforce syntactic and local coherence patterns in the model.", "In our work, we adopt XLNet (Yang et al., 2019) as the backbone model.", "It is trained using a permuted language modeling objective, in which the expected log-likelihood of a sequence with respect to all permutations of the factorization order is maximized.", "This allows the modeling of bi-directional context, while maintaining the auto-regressive property and avoiding the pretrain-finetune discrepancy.", "In addition, XLNet also incorporates the segment recurrence (or memory) and relative encoding scheme of Transformer-XL (Dai et al., 2019), which makes it effective in modeling longer text sequences.", "This makes it suitable for our purpose of coherence modeling.", "Given a document D with n sentences (s_1, s_2, ..., s_n) as input, our model uses the representations obtained through XLNet (parameterized by θ) to assign a coherence score to the document.", "Specifically, for each sentence s_i with k tokens (w_1, w_2, ...,
w_k), XLNet maps each token w_t to its vector representation $v_t \in R^d$, where d is the dimension of the embedding.", "In addition, the complete input D is also mapped to a document representation $z \in R^d$ (i.e., the representation of the [CLS] token).", "We simply add a linear layer on top of the document representation z to obtain the final coherence score: f(D) = w·z + b, where w and b are the weight and bias of the linear layer, with Φ = {θ, w, b} being the entire parameter set of the model (see the upper part of Figure 1).", "Setup.", "Traditionally, coherence model training has been done in a pairwise ranking setup.", "In this setup, the model is trained to score the coherent or positive document higher than the incoherent or negative document, using a pairwise ranking loss (Collobert et al., 2011) defined as follows: $L = \max\big(0,\ \eta - f(D^+) + f(D^-)\big)$ (1), where $f(D^+)$ is the coherence score of the positive document, $f(D^-)$ is the coherence score of the negative document, and $\eta$ is the margin.", "Baselines.", "We compare our models against all three versions of the Local Coherence Discriminator or LCD model (Xu et al., 2019): (i) LCD-G, which uses GloVe (Pennington et al., 2014) representations, (ii) LCD-I, which uses InferSent (Conneau et al., 2017) representations, and (iii) LCD-L, which uses representations from an RNN-based language model trained on the training data.", "We also compare against the Unified Coherence model or UNC (Moon et al., 2019), which is the previous SOTA on the WSJ permuted document task.", "Results from evaluations of existing coherence models by Pishdad et al. (2020) and Mohiuddin et al. (2021) indicate that UNC and LCD are the best-performing models (see Appendix A.4 for a full comparison).", "The LCD implementation is available at https://github.com/BorealisAI/cross_domain_coherence, and the UNC implementation at https://github.com/taasnim/unified-coherence-model.", "We retrain their models with our training data for comparison.", "In addition, to ascertain the contribution of the pre-trained XLNet embeddings, we train our pairwise model without fine-tuning the representations, i.e., only the score-producing linear layer weights w and b are trained on the pairwise ranking task.", "Results.", "The results for the baseline models are given in Table 1 (see the top five rows).", "We see that despite accuracies of more than 90% on the WSJ permuted document task, the LCD models perform only a little above a random baseline of 50% on most of the independent test sets, with LCD-G being the best-generalizing model of the three.", "Similarly, despite a relatively high performance on the WSJ test set (94.11%), UNC's performance on the independent test sets is quite poor, even failing to do better than the random baseline of 50% in two out of five cases.", "Both the LCD and UNC models have slightly better success on the INSTED-CNN dataset, which is from the same domain (news) as the training data, with the UNC model reaching 67.21% accuracy.", "Our XLNet-Pairwise model trained without fine-tuning the representations (No FT) performs no better than the baseline models.", "This shows that both the LCD-G and the UNC models are in fact strong baselines despite using GloVe and ELMo (Peters et al., 2018) pre-trained representations, respectively.",
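A minimal sketch of the scorer f(D) = w·z + b and the pairwise objective (Eq. 1) described above, using the Hugging Face transformers XLNet encoder; this is our own condensed re-implementation for illustration, not the released code, and the margin value is illustrative.

```python
import torch
import torch.nn as nn
from transformers import XLNetModel

# Minimal sketch of the coherence scorer over XLNet's document ([CLS])
# representation; a condensed re-implementation, not the released code.
class CoherenceScorer(nn.Module):
    def __init__(self, name="xlnet-base-cased"):
        super().__init__()
        self.encoder = XLNetModel.from_pretrained(name)
        self.linear = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        z = hidden[:, -1]   # XLNet's tokenizer appends the [CLS] token last
        return self.linear(z).squeeze(-1)   # f(D) = w.z + b

def pairwise_loss(score_pos, score_neg, margin=0.1):   # Eq. 1; margin illustrative
    return torch.clamp(margin - score_pos + score_neg, min=0).mean()
```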
"Our fully-trained XLNet-Pairwise model not only outperforms the UNC model on the standard WSJ permuted document task, but also significantly outperforms all baseline models on the independent test sets, showing an absolute improvement of 15-20% on the SUMMEVAL, INSTED-CNN, INSTED-WIKI and STORYCLOZE datasets.", "On LMVLM, the UNC model has a better performance; we suspect that its explicit conditional language modeling loss might provide an additional advantage for this particular task.", "Overall, our results are consistent with the observations of Mohiuddin et al. (2021) that show poor generalizability in the previous SOTA model.", "Setup.", "In pairwise ranking, each positive sample is only compared to one negative at a time.", "Contrastive learning (Gutmann and Hyvärinen, 2010) makes this general, allowing a single positive sample to be compared to multiple negatives, which can be particularly useful in the permuted document task, where the number of possible incoherent samples per coherent document can be very large.", "The number of negatives considered and their quality can affect model performance (Arora et al., 2019).", "Wu et al. (2020) show that the contrastive loss maximizes a lower bound on the mutual information between representations.", "A larger number of negatives increases the tightness of the bound; learning with more negatives can better maximize the mutual information.", "We train our model with a margin-based contrastive loss defined as: $L = -\log\Big(\frac{e^{f(D^+)}}{e^{f(D^+)} + \sum_{j=1}^{N} e^{(f(D_j^-) + \eta)}}\Big)$ (2), where $f(D^+)$ is the coherence score of the positive document, $f(D_1^-), ..., f(D_N^-)$ are the scores of the N negative documents, and $\eta$ is the margin.", "Training.", "We use the same training data as the baseline models to train our contrastive model; the positive documents remain the same, while we use 5 negative documents per instance (instead of only 1 in the pairwise setup).", "Effectively, the model sees the same number of positive or coherent documents, but five times as many negative samples during training compared to the pairwise setting.", "Appendix A.5 gives the full set of hyperparameters.", "Results.", "From the results in Table 1, we see that the contrastive model (second to last row) further improves the results across all the independent test sets; the results on the LMVLM dataset also improve, surpassing the UNC model's performance.", "Although the improvement on the WSJ permuted document task is small, the improvement in the generalizability of the model is more significant.", "It has been shown that the difficulty of the negative samples used for contrastive training can strongly influence model success (Wu et al., 2020; Huang et al., 2020).", "We therefore automatically mine hard negatives during training.", "For the permuted document task, we can take advantage of the fact that the negative sample space can be huge; for a document with n sentences, the candidate pool of permutations has n! − 1 incoherent documents from which we can mine hard negatives.",
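Eq. 2 can be implemented compactly over a batch of coherence scores; below is a minimal sketch of our own, with an illustrative margin value.

```python
import torch

# Minimal sketch of the margin-based contrastive loss in Eq. 2 (ours).
# score_pos: tensor of shape (B,); score_neg: tensor of shape (B, N).
def contrastive_loss(score_pos, score_neg, margin=0.1):
    logits = torch.cat([score_pos.unsqueeze(1),        # f(D+)
                        score_neg + margin], dim=1)    # f(D-_j) + eta
    # -log softmax of the positive logit against positive + N negatives
    return -torch.log_softmax(logits, dim=1)[:, 0].mean()

pos = torch.tensor([2.0])
neg = torch.tensor([[1.0, 0.5, 0.2, 0.1, -0.3]])
print(contrastive_loss(pos, neg))
```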
"For the problem of dense text retrieval, Xiong et al. (2020) find global hard negatives by computing document encodings using a recent checkpoint to build an asynchronous index of the entire corpus, and sampling negative documents from the index.", "However, the huge candidate pool for permuted documents makes it infeasible to mine global negatives in our case.", "Instead, we perform local negative sample ranking.", "For each positive instance in the training data, we sample a larger number of permuted documents (h) per instance than we need for training (i.e., h > N).", "We score these negative documents using the model updated thus far and use the highest-ranking negatives for training.", "Specifically, the model is first trained with x instances (x is a hyperparameter) of data, using 5 negative samples randomly chosen out of the h.", "The updated model is then used to score all h negative samples each for another set of x instances from the training data.", "The scores of the h negative samples are ranked, and the top-scoring 5 negative samples for each instance are used to train the model for the next x gradient steps.", "This process is repeated throughout training; the model therefore iteratively mines harder and harder negative samples as it improves.", "See Algorithm 1 in Appendix A.2 for the pseudocode.", "In practice, however, we find that using hard negative samples directly leads to instability in model training (see §4.1).", "We therefore use hard negative training in combination with a momentum encoder, which we describe in the next subsection.", "While increasing the number of negative samples per instance has been shown to be effective for contrastive learning, resource constraints can limit the number of negatives that can be considered per instance.", "One solution is to consider other positive instances in the same training batch as negatives (Karpukhin et al., 2020; Chen et al., 2020).", "Figure 1: Our coherence model with the auxiliary momentum encoder (diagram labels: document inputs and slice, base and momentum encoders, negative queue, inner product, linear layer, coherence scores, loss).",
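The local hard-negative ranking loop can be sketched as follows; this is our own illustration following the description above, where `score` is the current model's coherence scorer.

```python
# Minimal sketch of iterative local hard-negative mining (ours).
# Every x instances, all h sampled negatives of the next x instances are
# scored with the current model, and the top-N per instance are kept.
def mine_hard_negatives(instances, score, x=200, h=50, n_keep=5):
    for start in range(0, len(instances), x):
        chunk = instances[start:start + x]
        for pos_doc, candidate_negs in chunk:      # len(candidate_negs) == h
            ranked = sorted(candidate_negs, key=score, reverse=True)
            hard_negs = ranked[:n_keep]            # highest-scoring = hardest
            yield pos_doc, hard_negs               # train on these next
        # (the model is updated on the yielded chunk before the next one,
        #  so later chunks are ranked by an improved scorer)
```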
re-encode the positive and negative samples through the momentum encoder; the negative samples thus encoded are used to build the queue.", "We train the model to promote the similarity between the positive representations from the momentum encoder and the positive representations from our base encoder over the similarity with the negative samples from the queue, Q .", "Specifically, we define a momentum loss L mom as: c + = ( z + ) ( z + m ) || z + || || z + m || ; c j = ( z + m ) q j || z + m || || q j || ; L mom = log (cid:16) e c + e c + + (cid:80) lj =1 e ( c j ) (cid:17) (3) where z + and z + m are the positive representations from the base encoder ( ) and the momentum encoder ( ) respectively, q 1 , . . . , q l indexed by j are the negative representations from in the queue, and is the margin.", "The momentum encoder is updated based on the base encoder as: + (1 ) (4) where [0 , 1) is the momentum coefficient; only is updated through backpropagation.", "Our full model is trained with a combination of the original contrastive objective (Eq. 2) and the momentum encoded contrastive similarity objective (Eq. 3): L = L + (1 ) L mom (5) where is a weighting hyperparameter.", "Note that the momentum encoder can be considered as a temporal ensemble model consisting of exponential-moving-average versions of the base model.", "Due to this, the gradients from the momentum loss (Eq. 3) also help in stabilising the overall training (4).", "Length Invariance.", "In the permuted document task, both the positive and the negative samples have the same number of sentences.", "This is not necessarily the case for downstream applications.", "To incorporate length invariance into our model, we encode a random contiguous slice of the positive document through the momentum encoder .", "5 The global negatives queue Q is constructed from the mined hard negative samples used for training.", "Our model is therefore trained to rely not only on comparative coherence cues from the traditional permuted document setup, but also to recognize more independent cues for coherence through the global queue, which is additionally enhanced by incorporating length invariance and automatically mined hard negative samples.", "Training.", "We train the model with the same training data, this time sampling h = 50 negatives 6 per instance for hard negative ranking, and setting the training steps (or instances) x = 200 .", "We use a queue size of l = 1000 and set our momentum coefficient = 0 .", "9999999 , with loss weighting parameter = 0 .", "85 .", "Due to GPU memory constraints (24GB, Quadro RTX 6000), we train our model with a batch size of 1.", "See Appendix A.5 for the full set of hyperparameters.", "Results.", "The results in Table 1 (last row) show that our momentum encoder model with hard negative mining outperforms all previous models across the independent testsets.", "This improvement comes despite a very similar performance on the WSJ test set; we believe that our model truly improves in generalizability without overfitting to the permuted document task.", "The improvements on the out-of-domain test sets, particularly on LMVLM and STORYCLOZE , support this conclusion.", "We only train our complete model ( i.e., base contrastive plus momentum model) by mining hard", "Minimum is 4 and maximum is full document.", "6 As previously described in 2, we ensure the sampled negative documents are unique even when the positive documents are repeated.", "This ensures that a much larger sample of the overall candidate pool is 
considered during training.", "Since we sample and rank 50 negative documents per positive instance, accounting for 20 repetitions of the positive documents, 20 50 = 1000 total negative documents are considered for hard negative mining.", "This is 10 times larger than the contrastive setup (100 unique negatives) and 50 times larger than the pairwise setup (only 20 unique negatives).", "negative samples (3.5), because we find that training the base contrastive model directly with hard negatives leads to instability during training.", "Figure 2a plots development set accuracies of our base model trained with and without hard negative mining, and our complete model trained with hard negative mining (evaluated every 1000 steps).", "As seen in the figure, the contrastive model displays significant volatility when trained with hard negatives only, while the complete model is quite stable.", "This is inline with the finding of Xuan et al. (2020) who show that training with the hardest negative samples leads to bad local minima.", "This can be explained with the gradient analysis of such negatives which have a larger gradient norm (Xiong et al., 2020), resulting in abrupt gradient steps.", "The momentum encoder being a temporal ensemble of the base models has a regularizing effect, addressing this issue and leading to stable and improved results (see 3.5).", "Number of Ranked Negatives.", "Figure 2b shows the results across the test sets for different num-bers of negative samples considered for ranking ( h ) during hard negative mining.", "We see that increasing the number of negatives considered improves results across the board, with results on out-of-domain test sets LMVLM and STORYCLOZE showing particular improvement.", "Momentum Coefficient.", "Figure 2c shows the variation in the model performance across the test sets for different values of the momentum coefficient .", "We see that apart from a slight drop on the INSTED-WIKI dataset at = 0 .", "9999999 , overall an increasing value leads to better generalization on the independent test sets, presumably due to a more consistent global negative queue.", "Queue Size.", "Figure 2d shows the variation in model performance across different test sets for various sizes of the global negative queue Q .", "We see that while increasing the queue size generally leads to an improvement in scores, at high queue sizes the improvement is limited to test sets from the same domain (WSJ, SUMMEVAL and INSTED-CNN), and the model's generalizability is affected.", "So far, we have reported the results of training our model on the permuted document task using documents from the WSJ corpus as was done by", "most prior work (Elsner and Charniak, 2011; Moon et al., 2019).", "We now test the effectiveness of other datasets, by varying the task itself and by using a different dataset for the permuted document task.", "Sentence Intrusion.", "As described in 2.3, Shen et al. 
"So far, we have reported the results of training our model on the permuted document task using documents from the WSJ corpus, as was done by most prior work (Elsner and Charniak, 2011; Moon et al., 2019).", "We now test the effectiveness of other datasets, by varying the task itself and by using a different dataset for the permuted document task.", "Sentence Intrusion.", "As described in §2.3, Shen et al. (2021) propose a sentence intrusion task to test the coherence modeling capabilities of pre-trained language models.", "We adapt their dataset to the pairwise setting by pairing the original coherent document (positive) with the corrupted (negative) document; setting aside 10% of the data for development gives us 25,852 positive-negative training pairs for INSTED-CNN and 41,135 pairs for INSTED-WIKI.", "We train our pairwise (§3.2) model on this task.", "From the results in Table 2 (first two rows), we see that the performance on the same domain/task (as the training) and the performance on the LMVLM dataset is high, but the models trained on this task generalize poorly to the other independent test sets.", "Permuted Document Task with INSTED.", "We train our model on the permuted document task using the INSTED datasets.", "We generate 52,607 and 66,679 positive-negative pairs for INSTED-CNN and INSTED-WIKI respectively by sampling permutations, similar to our training data (see §2.1), and train our pairwise model with this data.", "Specifically for machine generated texts, results in Table 2 show that the sentence intrusion task training does better on the LMVLM dataset.", "On the other hand, the permuted document task training does better on SUMMEVAL.", "This could be because the documents in SUMMEVAL are summaries of the same source article and therefore similar in content (detecting incoherence through permutations might help here), while the text generated by language models even for the same prompt tends to differ in content more significantly (detecting intruder sentences might help here).", "Table 2: Results on the WSJ permuted document test set and other independent test sets of the pairwise model trained on different datasets.
Train Dataset | Neg. Type | Model | WSJ | SUMMEVAL | LMVLM | INSTED-CNN | INSTED-WIKI | STORYCLOZE
INSTED-WIKI | Intrusion | Pairwise | 95.24 ± 0.37 | 53.03 ± 1.49 | 0.490 ± 0.01 | 94.07 ± 0.29 | 82.01 ± 0.24 | 64.21 ± 1.98
INSTED-CNN | Intrusion | Pairwise | 95.48 ± 0.47 | 57.85 ± 2.47 | 0.502 ± 0.01 | 97.83 ± 0.15 | 73.52 ± 1.17 | 71.75 ± 1.81
INSTED-WIKI | Permuted | Pairwise | 96.89 ± 0.23 | 64.53 ± 0.82 | 0.491 ± 0.01 | 84.17 ± 1.50 | 71.35 ± 0.88 | 69.09 ± 2.29
INSTED-CNN | Permuted | Pairwise | 97.03 ± 0.12 | 66.63 ± 0.97 | 0.483 ± 0.01 | 92.61 ± 0.62 | 69.88 ± 0.64 | 68.95 ± 1.02
WSJ | Permuted | Pairwise | 98.23 ± 0.20 | 64.83 ± 1.03 | 0.458 ± 0.02 | 91.96 ± 1.09 | 70.85 ± 1.85 | 71.84 ± 2.33", "Additionally, the performance of our WSJ model on the INSTED-CNN and INSTED-WIKI datasets is comparable to the performance of the respective in-domain pairwise models, while outperforming both the other models on the STORYCLOZE dataset.", "Overall, the model trained on the WSJ permuted document task generalizes well.",
"Shen et al. (2021) create 8 hand-crafted linguistic probe test sets by manually modifying words in coherent texts based on various linguistic phenomena, ensuring the incoherent text produced as a result remains syntactically correct.", "Except for the words targeted by the probe, the rest of the text remains identical.", "Each test set has 100 samples. (Footnote 7: except for Single Determiner Flip, which has 95.)", "We evaluate the best performing LCD-G, UNC and our full models on these test sets.", "The results are shown in Table 3 along with some examples from the dataset.", "The LCD-G model has mixed success across the test sets.", "The UNC model has the most success with the tense agreement test set and is moderately successful on the pronoun test sets.", "We see that our model has perfect accuracy on all pronoun-related test sets and near-perfect accuracy on the tense agreement test set.", "This shows that our model is indeed capturing the discourse-level phenomena that constitute coherence.", "Where our model falters is in cases which may require commonsense knowledge, such as identifying that 6.7 wins is not possible.", "Overall, our model is quite successful in detecting several kinds of incoherence.", "We show empirically that increasing the ratio and quality of negative samples improves the generalizability of the coherence model.", "We also test our model on a wide-ranging collection of independent test sets that resemble downstream applications, including machine generated text, on which our model significantly outperforms the previous SOTA model.", "Our work thus also sets a new evaluation standard for future research in coherence modeling.", "We open source our code base to encourage research in a new paradigm of coherence modeling.", "We would like to thank the Senior Area Chairs of ACL 2022 for evaluating our paper on its merits, and the reviewers and meta-reviewer of ARR for their reviews.", "We would also like to thank our colleagues Mathieu Ravaut and Han Cheol Moon for their valuable inputs.", "A description of the data pre-processing is provided in §2.1.", "Datasets that we created will be open-sourced.", "In the case of the WSJ dataset, the data is licensed for use only to members by the Linguistic Data Consortium.", "Consequently, we only release scripts to generate the data we use and not the data itself.", "We highlight, however, that the permuted document self-supervision task that we train on is independent of the dataset used, and the task can be reproduced on any other corpus; see also §4.3.", "All other datasets we use are licensed freely for academic use.", "We conduct a user study to collect pairwise coherence judgments on our language model output dataset.", "As part of our crowd-sourced user study on Amazon Mechanical Turk to collect these coherence judgements, we do not collect any personal information from the participants.", "Based on the average time spent to perform the tasks, participants were paid the equivalent of 16 USD per hour for their work.", "The annotation instructions and interface provided to the participants are included in Appendix A.3.", "One potential issue is that generating language model output from prompts may lead to malicious text generation by the models.", "We flagged the task to warn the workers that there may be potentially offensive content, and manually checked the final dataset post curation.", "All our experiments are conducted using data for the English language.", "However, as coherence and discourse relations in text are a
universal concept, and our training data is automatically generated, we expect the permuted document task to be easily extensible to other languages." ]
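Because the permuted-document objective is self-supervised and corpus-independent, generating training pairs for a new corpus or language is only a few lines; the sketch below (the function name and the number of permutations per document are illustrative) assumes documents already split into sentences.

```python
import random

def permuted_pairs(sentences, num_perms=3, seed=0):
    """Build (coherent, permuted) pairs for the permuted-document task."""
    if len(sentences) < 2:               # nothing to permute
        return []
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_perms):
        perm = sentences[:]
        while perm == sentences:         # re-shuffle until the order changes
            rng.shuffle(perm)
        pairs.append((sentences, perm))
    return pairs
```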
[ "objective", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "method", "objective", "objective", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "objective", "other", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method" ]
[ "Word translation or bilingual lexicon induction (BLI) is a key cross-lingual task, aiming to bridge the lexical gap between different languages.", "In this work, we propose a robust and effective two-stage contrastive learning framework for the BLI task.", "At Stage C1, we propose to refine standard cross-lingual linear maps between static word embeddings (WEs) via a contrastive learning objective; we also show how to integrate it into the self-learning procedure for even more refined cross-lingual maps.", "In Stage C2, we conduct BLI-oriented contrastive fine-tuning of mBERT, unlocking its word translation capability.", "We also show that static WEs induced from the C2-tuned' mBERT complement static WEs from Stage C1.", "Comprehensive experiments on standard BLI datasets for diverse languages and different experimental setups demonstrate substantial gains achieved by our framework.", "While the BLI method from Stage C1 already yields substantial gains over all state-of-the-art BLI methods in our comparison, even stronger improvements are met with the full two-stage framework: e.g., we report gains for 112/112 BLI setups, spanning 28 language pairs.", "Bilingual lexicon induction (BLI) or word translation is one of the seminal and long-standing tasks in multilingual NLP (Rapp, 1995; Gaussier et al., 2004; Heyman et al., 2017; Shi et al., 2021, inter alia ).", "Its main goal is learning translation correspondences across languages, with applications of BLI ranging from language learning and acquisition (Yuan et al., 2020; Akyurek and Andreas, 2021) to machine translation (Qi et al., 2018; Duan et al., 2020; Chronopoulou et al., 2021) and the development of language technology in low-resource languages and domains (Irvine and Callison-Burch, 2017; Heyman et al., 2018).", "A large body of recent BLI work has focused on the so-called mapping-based methods (Mikolov et al., 2013; Artetxe et al., mBERT/mT5 Static Word Embeddings f 1 ( y n i ) f 1 ( x m i ) f 1 ( x j ) f 2 ( y n i ) f 2 ( x j ) f 2 ( x m i ) Seed Dictionary C1 Alignment C2 Alignment (1 ) Procrustes Attract RepelNegative Sampling Figure 1: An illustration of the proposed two-stage BLI approach (see 2).", "2018; Ruder et al., 2019).", "1 Such methods are particularly suitable for low-resource languages and weakly supervised learning setups: they support BLI with only as much as few thousand word translation pairs (e.g., 1k or at most 5k) as the only bilingual supervision (Ruder et al., 2019).", "2 Unlike for many other tasks in multilingual NLP (Doddapaneni et al., 2021; Chau and Smith, 2021; Ansell et al., 2021), state-of-the-art (SotA) BLI results are still achieved via static word embeddings (WEs) (Vulic et al., 2020b; Liu et al., 2021b).", "A typical modus operandi of mapping-based approaches is to first train monolingual WEs independently on monolingual corpora and then map them to a shared cross-lingual space via linear (Mikolov et al., 2013; 1 They are also referred to as projection-based or alignment-based methods (Glava et al., 2019; Ruder et al., 2019).", "2 In the extreme, fully unsupervised mapping-based BLI methods can leverage monolingual data only without any bilingual supervision (Lample et al., 2018; Artetxe et al., 2018; Hoshen and Wolf, 2018; Mohiuddin and Joty, 2019; Ren et al., 2020, inter alia ).", "However, comparative empirical analyses (Vulic et al., 2019) show that, with all other components equal, using seed sets of only 500-1,000 translation pairs, always outperforms fully unsupervised BLI methods.", 
"Therefore, in this work we focus on this more pragmatic (weakly) supervised BLI setup (Artetxe et al., 2020); we assume the existence of at least 1,000 seed translations per each language pair.", "Glava et al., 2019) or non-linear mapping functions (Mohiuddin et al., 2020).", "In order to achieve even better results, many BLI methods also apply a self-learning loop where training dictionaries are iteratively (and gradually) refined, and improved mappings are then learned in each iteration (Artetxe et al., 2018; Karan et al., 2020).", "However, there is still ample room for improvement, especially for lower-resource languages and dissimilar language pairs (Vulic et al., 2019; Nasution et al., 2021).", "On the other hand, another line of recent research has demonstrated that a wealth of lexical semantic information is encoded in large multilingual pretrained language models (LMs) such as mBERT (Devlin et al., 2019), but 1) it is not straightforward to transform the LMs into multilingual lexical encoders (Liu et al., 2021b), 2) extract word-level information from them (Vulic et al., 2020b, 2021), and 3) word representations extracted from these LMs still cannot surpass static WEs in the BLI task (Vulic et al., 2020b; Zhang et al., 2021).", "Motivated by these insights, in this work we investigate the following research questions: (RQ1) Can we further improve (weakly supervised) mapping-based BLI methods based on static WEs?", "(RQ2)", "How can we extract more useful crosslingual word representations from pretrained multilingual LMs such as mBERT or mT5?", "Inspired by the wide success of contrastive learning techniques in sentence-level representation learning (Reimers and Gurevych, 2019; Carlsson et al., 2021; Gao et al., 2021), we propose a two-stage contrastive learning framework for effective word translation in (weakly) supervised setups; it leverages and combines multilingual knowledge from static WEs and pretrained multilingual LMs.", "Stage C1 operates solely on static WEs: in short, it is a mapping-based approach with self-learning, where in each step we additionally fine-tune linear maps with contrastive learning that operates on gradually refined positive examples (i.e., true translation pairs), and hard negative samples.", "Stage C2 fine-tunes a pretrained multilingual LM (e.g., mBERT), again with a contrastive learning objective, using positive examples as well as negative examples extracted from the output of C1.", "Finally, we extract word representations from the multilingual LM fine-tuned in Stage C2, and combine them with static cross-lingual WEs from Stage C1; the combined representations are then used for BLI.", "We run a comprehensive set of BLI experiments on the standard BLI benchmark (Glava et al., 2019), comprising 8 diverse languages, in several setups.", "Our results indicate large gains over state-of-the-art BLI models: e.g., +8 Precision@1 points on average, +10 points for many language pairs, gains for 107/112 BLI setups already after Stage C1 (cf., RQ1), and for all 112/112 BLI setups after Stage C2 (cf., RQ2 and RQ3).", "Moreover, our findings also extend to BLI for lower-resource languages from another BLI benchmark (Vulic et al., 2019).", "Finally, as hinted in recent work (Zhang et al., 2021), our findings validate that multilingual lexical knowledge in LMs, when exposed and extracted as in our contrastive learning framework, can complement the knowledge in static cross-lingual WEs (RQ3), and benefit BLI.", "We release the code and share the data at: https: 
//github.com/cambridgeltl/ContrastiveBLI.", "Preliminaries and Task Formulation.", "In BLI, we assume two vocabularies X = {w_1^x, ..., w_{|X|}^x} and Y = {w_1^y, ..., w_{|Y|}^y}, associated with two respective languages L_x and L_y.", "We also assume that each vocabulary word is assigned its (static) type-level word embedding (WE); that is, the respective WE matrices for the two vocabularies are X ∈ R^{|X| × d} and Y ∈ R^{|Y| × d}.", "Each WE is a d-dimensional row vector, with typical values d = 300 for static WEs (e.g., fastText) (Bojanowski et al., 2017), and d = 768 for mBERT. (Footnote 3: we also tried XLM (d = 1,280) and mT5-small (d = 512); mBERT is the best-performing pretrained LM in our preliminary investigation.)", "We also assume a set of seed translation pairs D_0 = {(w_{m_1}^x, w_{n_1}^y), ..., (w_{m_{|D_0|}}^x, w_{n_{|D_0|}}^y)} for training (Mikolov et al., 2013; Glavaš et al., 2019), where 1 ≤ m_i ≤ |X| and 1 ≤ n_i ≤ |Y|.", "Typical values for the seed dictionary size |D_0| are 5k pairs and 1k pairs (Vulić et al., 2019), often referred to as supervised (5k) and semi-supervised or weakly supervised (1k) settings (Artetxe et al., 2018).", "Given another test lexicon D_T = {(w_{t_1}^x, w_{g_1}^y), ..., (w_{t_{|D_T|}}^x, w_{g_{|D_T|}}^y)}, where D_0 ∩ D_T = ∅, for each L_x test word w_{t_i}^x in D_T the goal is to retrieve its correct translation from L_y's vocabulary Y, and evaluate it against the gold L_y translation w_{g_i}^y from the pair.", "Method in a Nutshell.", "We propose a novel two-stage contrastive learning (CL) method, with both stages C1 and C2 realised via contrastive learning objectives (see Figure 1).", "Stage C1 (§2.1) operates solely on static WEs, and can be seen as a contrastive extension of mapping-based BLI approaches with static WEs.", "In practice, we blend contrastive learning with the standard SotA mapping-based framework with self-learning, VecMap (Artetxe et al., 2018), with some modifications.", "Stage C1 operates solely on static WEs in exactly the same BLI setup as prior work, and thus it can be evaluated independently.", "In Stage C2 (§2.2), we propose to leverage pretrained multilingual LMs for BLI: we contrastively fine-tune them for BLI and extract static 'decontextualised' WEs from the tuned LMs.", "These LM-based WEs can be combined with WEs obtained in Stage C1 (§2.3).", "Stage C1 is based on the VecMap framework (Artetxe et al., 2018), which features 1) dual linear mapping, where two separate linear transformation matrices map the respective source and target WEs to a shared cross-lingual space; and 2) a self-learning procedure that, in each iteration i, refines the training dictionary and iteratively improves the mapping.", "We extend and refine VecMap's self-learning for supervised and semi-supervised settings via CL.", "Initial Advanced Mapping.", "After ℓ2-normalising word embeddings (Footnote 4: unlike VecMap, we do not mean-center WEs, as this yielded slightly better results in our preliminary experiments.), the two mapping matrices, denoted as W_x for the source language L_x and W_y for L_y, are computed via the Advanced Mapping (AM) procedure based on the training dictionary, as fully described in Appendix A.1; while VecMap leverages whitening, orthogonal mapping, re-weighting and de-whitening operations to derive mapped WEs, we compute W_x and W_y such that a one-off matrix multiplication produces the same result (see Appendix A.1 for the details).", "Contrastive Fine-Tuning.", "At each iteration i, after the initial AM step, the two mapping matrices W_x and W_y are then further contrastively fine-tuned via the InfoNCE loss (Oord et al., 2018), a standard and robust choice of a loss
function in CL research (Musgrave et al., 2020; Liu et al., 2021c,b).", "The core idea is to 'attract' aligned WEs of positive examples (i.e., true translation pairs) coming from the dictionary D_{i−1}, and 'repel' hard negative samples, that is, words which are semantically similar but are not correct translations.", "These hard negative samples are extracted as follows.", "Let us suppose that (w_{m_i}^x, w_{n_i}^y) is a translation pair in the current dictionary D_{i−1}, with its constituent words associated with static WEs x_{m_i}, y_{n_i} ∈ R^{1 × d}.", "We then retrieve the nearest neighbours of y_{n_i} W_y from X W_x and derive W̃_{m_i}^x ⊂ X (with w_{m_i}^x itself excluded), a set of hard negative samples of size N_neg.", "In a similar (symmetric) manner, we also derive the set of negatives W̃_{n_i}^y ⊂ Y (with w_{n_i}^y excluded).", "We use D̃ to denote the collection of all hard negative set pairs over all training pairs in the current iteration i.", "We then fine-tune W_x and W_y by optimising the following contrastive objective: s_{i,j} = exp( cos(x_i W_x, y_j W_y) / τ ) (1); p_i = s_{m_i,n_i} / ( Σ_{w_j^y ∈ {w_{n_i}^y} ∪ W̃_{n_i}^y} s_{m_i,j} + Σ_{w_j^x ∈ W̃_{m_i}^x} s_{j,n_i} ) (2); min_{W_x, W_y} E_{(w_{m_i}^x, w_{n_i}^y) ∈ D_CL} [ −log(p_i) ].", "τ denotes a standard temperature parameter.", "The objective, formulated here for a single positive example, spans all positive examples from the current dictionary, along with the respective sets of negative examples computed as described above.", "Self-Learning.", "The application of (a) initial mapping via AM and (b) contrastive fine-tuning can be repeated iteratively.", "Such self-learning loops typically yield more robust and better-performing BLI methods (Artetxe et al., 2018; Vulić et al., 2019).", "At each iteration i, a set of automatically extracted high-confidence translation pairs D_add is added to the seed dictionary D_0, and this dictionary D_i = D_0 ∪ D_add is then used in the next iteration i + 1.", "Our dictionary augmentation method slightly deviates from the one used by VecMap.", "We leverage the most frequent N_freq source and target vocabulary words, and conduct forward and backward dictionary induction (Artetxe et al., 2018).", "Unlike VecMap, we do not add stochasticity to the process, and simply select the top N_aug high-confidence word pairs from forward (i.e., source-to-target) induction and another N_aug pairs from the backward induction.", "In practice, we retrieve the 2 N_aug pairs with the highest Cross-domain Similarity Local Scaling (CSLS) scores (Lample et al., 2018) (Footnote 5: further details on the CSLS similarity and its relationship to cosine similarity are available in Appendix A.2.), remove duplicate pairs and those that contradict the ground truth in D_0, and then add the rest into D_add.", "For the initial AM step, we always use the augmented dictionary D_0 ∪ D_add; the same augmented dictionary is used for contrastive fine-tuning in weakly supervised setups. (Footnote 6: when starting with 5k pairs, we leverage only D_0 for contrastive fine-tuning, as D_add might deteriorate the quality of the 5k-pairs seed dictionary due to potentially noisy input.)", "We repeat the self-learning loop N_iter times: in each iteration, we optimise the contrastive loss N_CL times; that is, we go N_CL times over all the positive pairs from the training dictionary (at this iteration).", "N_iter and N_CL are tunable hyper-parameters.", "Self-learning in Stage C1 is summarised in Algorithm 1.",
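To make Eqs. (1)-(2) concrete, here is a minimal PyTorch sketch of the contrastive fine-tuning step over the mapped static WEs; the batching, tensor shapes, and function name are our assumptions, and the AM initialisation and dictionary refinement around it are omitted.

```python
import torch
import torch.nn.functional as F

def c1_contrastive_loss(X_pos, Y_pos, X_neg, Y_neg, W_x, W_y, tau=1.0):
    """InfoNCE-style loss of Eqs. (1)-(2). X_pos/Y_pos: (b, d) WEs of
    translation pairs from the training dictionary; X_neg/Y_neg:
    (b, n_neg, d) per-pair hard negatives; W_x/W_y: (d, d) trainable maps."""
    xp = F.normalize(X_pos @ W_x, dim=-1)
    yp = F.normalize(Y_pos @ W_y, dim=-1)
    xn = F.normalize(X_neg @ W_x, dim=-1)
    yn = F.normalize(Y_neg @ W_y, dim=-1)
    s_pos = (xp * yp).sum(-1) / tau                 # cosine of the true pair
    s_y = torch.einsum('bd,bnd->bn', xp, yn) / tau  # source vs. negative targets
    s_x = torch.einsum('bd,bnd->bn', yp, xn) / tau  # target vs. negative sources
    denom = s_pos.exp() + s_y.exp().sum(-1) + s_x.exp().sum(-1)
    return -(s_pos - denom.log()).mean()            # E[-log p_i]
```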
"2.2 Stage C2", "Previous work tried to prompt off-the-shelf multilingual LMs for word translation knowledge via masked natural language templates (Gonen et al., 2020), averaging over their contextual encodings in a large corpus (Vulić et al., 2020b; Zhang et al., 2021), or extracting type-level WEs from the LMs directly without context (Vulić et al., 2020a, 2021).", "However, even sophisticated templates and WE extraction strategies still typically result in BLI performance inferior to fastText (Vulić et al., 2021).", "(BLI-Oriented) Contrastive Fine-Tuning.", "Here, we propose to fine-tune off-the-shelf multilingual LMs relying on the supervised BLI signal: the aim is to expose type-level word translation knowledge directly from the LM, without any external corpora.", "In practice, we first prepare a dictionary of positive examples for contrastive fine-tuning: (a) D_CL = D_0 when |D_0| spans 5k pairs, or (b) when |D_0| = 1k, we add the N_aug = 4k automatically extracted highest-confidence pairs from Stage C1 (based on their CSLS scores, not present in D_0) to D_0 (i.e., D_CL spans 1k + 4k word pairs).", "We then extract N_neg hard negatives in the same way as in §2.1, relying on the shared cross-lingual space derived as the output of Stage C1.", "Our hypothesis is that a difficult task of discerning between true translation pairs and highly similar non-translations as hard negatives, formulated within a contrastive learning objective, will enable mBERT to expose its word translation knowledge, and complement the knowledge already available after Stage C1.", "Throughout this work, we assume the use of the pretrained mBERT base model with 12 Transformer layers and 768-dimensional embeddings.", "Each raw word input w is tokenised, via mBERT's dedicated tokeniser, into the following sequence: [CLS][sw_1] ... [sw_M][SEP], M ≥ 1, where [sw_1] ... [sw_M] refers to the sequence of M constituent subwords/WordPieces of w, and [CLS] and [SEP] are special tokens (Vulić et al., 2020b).",
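The word-encoding step can be sketched with the Hugging Face transformers API as below; the checkpoint name is an assumption (any mBERT base checkpoint matches the description above), and in practice the 200k-word vocabularies would be encoded in batches.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")  # checkpoint assumed
enc = AutoModel.from_pretrained("bert-base-multilingual-uncased")

@torch.no_grad()
def encode_word(word):
    """f(w): tokenise the bare word as [CLS] sw_1 ... sw_M [SEP] and take
    the last-layer [CLS] vector as its type-level representation."""
    batch = tok(word, return_tensors="pt")
    out = enc(**batch)
    return out.last_hidden_state[0, 0]  # 768-dim [CLS] embedding
```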
"The sequence is then passed through mBERT as the encoder, with its encoding function denoted f(·): it extracts the representation of the [CLS] token in the last Transformer layer as the representation of the input word w.", "The full set of mBERT's parameters θ then gets contrastively fine-tuned in Stage C2, again relying on the InfoNCE CL loss: s'_{i,j} = exp( cos(f(w_i^x), f(w_j^y)) / τ ) (4); p'_i = s'_{m_i,n_i} / ( Σ_{w_j^y ∈ {w_{n_i}^y} ∪ W̃_{n_i}^y} s'_{m_i,j} + Σ_{w_j^x ∈ W̃_{m_i}^x} s'_{j,n_i} ) (5); min_θ E_{(w_{m_i}^x, w_{n_i}^y) ∈ D_CL} [ −log(p'_i) ] (6).", "The type-level WE for each input word w is then obtained simply as f_{θ'}(w), where θ' refers to the parameters of the 'BLI-tuned' mBERT model.", "In order to combine the output WEs from Stage C1 and the mBERT-based WEs from Stage C2, we also need to map them into a 'shared' space: in other words, for each word w, its C1 WE and its C2 WE can be seen as two different views of the same data point.", "We thus learn an additional linear orthogonal mapping from the C1-induced cross-lingual WE space into the C2-induced cross-lingual WE space.", "It transforms ℓ2-normed 300-dimensional C1-induced cross-lingual WEs into 768-dimensional cross-lingual WEs.", "Learning of the linear map W ∈ R^{d_1 × d_2}, where in our case d_1 = 300 and d_2 = 768, is formulated as a Generalised Procrustes problem (Schönemann, 1966; Viklands, 2006) operating on all (i.e., both L_x and L_y) words from the seed translation dictionary D_0. (Footnote 7: technical details of the learning procedure are described in Appendix A.3.)", "It is important to note that in this case we do not use word translation pairs (w_{m_i}^x, w_{n_i}^y) directly to learn the mapping; rather, each word w_{m_i}^x and w_{n_i}^y is duplicated to create training pairs (w_{m_i}^x, w_{m_i}^x) and (w_{n_i}^y, w_{n_i}^y), where the left word/item in each pair is assigned its WE from C1, and the right word/item is assigned its WE after C2.", "Unless noted otherwise, a final representation of an input word w is then a linear combination of (a) its C1-based vector v_w mapped to a 768-dimensional representation via W, and (b) its 768-dimensional encoding f_{θ'}(w) from the BLI-tuned mBERT: (1 − λ) · (v_w W) / ‖v_w W‖_2 + λ · f_{θ'}(w) / ‖f_{θ'}(w)‖_2 (7), where λ is a tunable interpolation hyper-parameter.",
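A compact numpy sketch of the C1-to-C2 map and the interpolation of Eq. (7); reducing the generalised Procrustes procedure of Appendix A.3 to a single closed-form SVD solve over the duplicated dictionary words is our simplification.

```python
import numpy as np

def procrustes_map(A, B):
    """Fit W (d1 x d2) minimising ||A W - B||_F with orthonormality,
    where A holds C1 vectors and B the corresponding C2 vectors of the
    same (duplicated) dictionary words."""
    U, _, Vt = np.linalg.svd(A.T @ B, full_matrices=False)
    return U @ Vt

def combine(v_c1, v_c2, W, lam=0.2):
    """Eq. (7): (1 - lam) * normalised (v_w W) + lam * normalised f(w);
    lam = 0.2 is the default reported in the experimental setup."""
    a = v_c1 @ W
    a = a / np.linalg.norm(a)
    b = v_c2 / np.linalg.norm(v_c2)
    return (1.0 - lam) * a + lam * b
```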
"Monolingual WEs and BLI Setup.", "We largely follow the standard BLI setup from prior work (Artetxe et al., 2018; Joulin et al., 2018; Glavaš et al., 2019; Karan et al., 2020, inter alia).", "The main evaluation is based on the standard BLI dataset from Glavaš et al. (2019): it comprises 28 language pairs with a good balance of typologically similar and distant languages (Croatian: HR, English: EN, Finnish: FI, French: FR, German: DE, Italian: IT, Russian: RU, Turkish: TR).", "Again following prior work, we rely on monolingual fastText vectors trained on full Wikipedias for each language (Bojanowski et al., 2017), where vocabularies in each language are trimmed to the 200K most frequent words (i.e., |X| = 200k and |Y| = 200k).", "The same fastText WEs are used for our Stage C1 and in all baseline BLI models.", "mBERT in Stage C2 operates over the same vocabularies spanning 200k word types in each language.", "We use 1k translation pairs (semi-supervised BLI mode) or 5k pairs (supervised) as the seed dictionary D_0; test sets span 2k pairs (Glavaš et al., 2019).", "With 56 BLI directions in total (Footnote 8: both for L_i → L_j and L_j → L_i directions.), this yields a total of 112 BLI setups for each model in our comparison.", "The standard Precision@1 (P@1) BLI measure is reported, and we rely on CSLS (k = 10) to score word similarity (Lample et al., 2018). (Footnote 9: the same trends in results are observed with Mean Reciprocal Rank (MRR) as another BLI evaluation measure (Glavaš et al., 2019); we omit MRR scores for clarity. Moreover, similar relative trends, but with slightly lower absolute BLI scores, are observed when replacing CSLS with the simpler cosine similarity measure: the results are available in the Appendix.)", "Training Setup and Hyperparameters.", "Since standard BLI datasets typically lack a validation set (Ruder et al., 2019), following prior work (Glavaš et al., 2019; Karan et al., 2020) we conduct hyper-parameter tuning on a single, randomly selected language pair, EN-TR, and apply those hyperparameter values in all other BLI runs.", "In Stage C1, when |D_0| = 5k, the hyperparameter values are N_iter = 2, N_CL = 200, N_neg = 150, N_freq = 60k, N_aug = 10k.", "The SGD optimiser is used, with a learning rate of 1.5 and γ = 0.99.", "When |D_0| = 1k, the values are N_iter = 3, N_CL = 50, N_neg = 60, N_freq = 20k, and N_aug = 6k; SGD with a learning rate of 2.0 and γ = 1.0.", "τ = 1.0 and dropout is 0 in both cases, and the batch size for contrastive learning is always equal to the size of the current dictionary |D_CL| (i.e., |D_0| in the 5k case, or |D_0 ∪ D_add|, which varies over iterations, in the 1k case; see §2.1).", "In Stage C2, N_neg = 28 and the maximum sequence length is 6.", "We use AdamW (Loshchilov and Hutter, 2019) with a learning rate of 2e-5 and weight decay of 0.01.", "We fine-tune mBERT for 5 epochs, with a batch size of 100; the dropout rate is 0.1 and τ = 0.1.", "Unless noted otherwise, λ is fixed to 0.2.", "Baseline Models.", "Our BLI method is evaluated against four strong SotA BLI models from recent literature, all of them with publicly available implementations.", "Here, we provide brief summaries. (Footnote 10: for further technical details and descriptions of each BLI model, we refer to their respective publications. We used publicly available implementations of all the baseline models.)", "RCSLS (Joulin et al., 2018) optimises a relaxed CSLS loss, learns a non-orthogonal mapping, and has been established as a strong BLI model in empirical comparative analyses, as its objective function is directly 'BLI-oriented' (Glavaš et al., 2019).", "VecMap's core components (Artetxe et al., 2018) have been outlined in §2.1.", "LNMap (Mohiuddin et al., 2020) non-linearly maps the original static WEs into two latent semantic spaces learned via non-linear autoencoders (Footnote 11: this step is directed towards mitigating anisomorphism (Søgaard et al., 2018; Dubossarsky et al., 2020) between the original WE spaces, which should facilitate their alignment.), and then learns another non-linear mapping between the latent autoencoder-based spaces.", "FIPP (Sachidananda et al., 2021), in brief, first finds common (i.e., isomorphic) geometric structures in the monolingual WE spaces of both languages, and then aligns the Gram matrices of the WEs found in those common structures.",
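For reference, the CSLS scoring (k = 10) used above for retrieval and for mining high-confidence pairs can be vectorised as follows (Lample et al., 2018); rows of both input matrices are assumed L2-normalised.

```python
import torch

def csls_scores(src, tgt, k=10):
    """CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r(.) is the mean
    cosine similarity to the k nearest neighbours in the other space.
    src: (n, d) mapped source WEs; tgt: (m, d) mapped target WEs."""
    cos = src @ tgt.T                          # (n, m) cosine similarities
    r_src = cos.topk(k, dim=1).values.mean(1)  # r_T(x) for each source word
    r_tgt = cos.topk(k, dim=0).values.mean(0)  # r_S(y) for each target word
    return 2 * cos - r_src[:, None] - r_tgt[None, :]
```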
"For all baselines, we have verified that the hyperparameter values suggested in their respective repositories yield (near-)optimal BLI performance.", "Unless noted otherwise, we run VecMap, LNMap, and FIPP with their own self-learning procedures. (Footnote 12: RCSLS is packaged without self-learning; extending it to support self-learning is non-trivial and goes beyond the scope of this work.)", "Model Variants.", "We denote the full two-stage BLI model as C2 (Mod), where Mod refers to the actual model/method used to derive the shared cross-lingual space used by Stage C2.", "For instance, C2 (C1) refers to the model variant which relies on our Stage C1, while C2 (RCSLS) relies on RCSLS as the base method.", "We also evaluate the BLI performance of our Stage C1 method alone.", "Multilingual LMs.", "We adopt mBERT as the default pretrained multilingual LM in Stage C2.", "Our supplementary experiments also cover the 1280-dimensional XLM model (Lample and Conneau, 2019) (Footnote 13: we pick the XLM large model pretrained on 100 languages with the masked language modeling (MLM) objective.) and the 512-dimensional mT5-small (Xue et al., 2021).", "For clarity, we use C2 [LM] to denote C2 (C1) obtained from different LMs; when [LM] is not specified, mBERT is used.", "We adopt a smaller batch size of 50 for C2 [XLM] considering the limit of GPU memory, and train C2 [mT5] with a larger learning rate of 6e-4 for 6 epochs, since we found it much harder to train than C2 [mBERT].", "The main results are provided in Table 1, while the full results per each individual language pair, and also with cosine similarity as the word retrieval function, are provided in Appendix E.",
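The reported P@1 measure then reduces to checking whether the top-ranked target word equals the gold translation; a minimal sketch (a single gold index per source word is a simplifying assumption):

```python
import torch

def precision_at_1(scores, gold):
    """scores: (n, m) CSLS (or cosine) matrix over the target vocabulary;
    gold: (n,) indices of the gold translations."""
    return (scores.argmax(dim=1) == gold).float().mean().item()
```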
"The main findings are discussed in what follows.", "Stage C1 versus Baselines.", "First, we note that there is not a single strongest baseline among the four SotA BLI methods.", "For instance, RCSLS and VecMap are slightly better than LNMap and FIPP with 5k supervision pairs, while FIPP and VecMap come forth as the stronger baselines with 1k supervision.", "There are some score fluctuations over individual language pairs, but the average performance of all baseline models is within a relatively narrow interval: the average performance of all four baselines is within 3 P@1 points with 5k pairs (i.e., ranging from 38.22 to 41.22), and VecMap, FIPP, and LNMap are within 2 points with 1k pairs.", "Strikingly, contrastive learning in Stage C1 already yields substantial gains over all four SotA BLI models, which is typically much higher than the detected variations between the baselines.", "We mark that C1 improves over all baselines in 51/56 BLI setups (in the 5k case), and in all 56/56 BLI setups when D_0 spans 1k pairs.", "The average gains with the C1 variant are 5 P@1 points over the SotA baselines with 5k pairs, and 6 P@1 points with 1k pairs (ignoring RCSLS in the 1k scenario).", "Note that all the models in comparison, each currently considered SotA in the BLI task, use exactly the same monolingual WEs and leverage exactly the same amount of bilingual supervision.", "The gains achieved with our Stage C1 thus strongly indicate the potential and usefulness of word-level contrastive fine-tuning when learning linear cross-lingual maps with static WEs (see RQ1 from §1).", "Stage C1 + Stage C2.", "The scores improve further with the full two-stage procedure.", "The C2 (C1) BLI variant increases average P@1 by another 3.3 (5k) and 3 P@1 points (1k), and we observe gains for all language pairs in both translation directions, rendering Stage C2 universally useful.",
"These gains indicate that mBERT does contain word translation knowledge in its parameters.", "However, the model must be fine-tuned (i.e., transformed) to 'unlock' the knowledge from its parameters: this is done through a BLI-guided contrastive fine-tuning procedure (see §2.2).", "Our findings thus further confirm the 'rewiring hypothesis' from prior work (Vulić et al., 2021; Liu et al., 2021b; Gao et al., 2021), here validated for the BLI task (see RQ2 from §1), which states that task-relevant knowledge at sentence- and word-level can be 'rewired'/exposed from off-the-shelf LMs, even when leveraging very limited task supervision, e.g., with only 1k or 5k word translation pairs as in our experiments.", "Performance over Languages.", "The absolute BLI scores naturally depend on the actual source and target languages: e.g., the lowest absolute performance is observed for morphologically rich (HR, RU, FI, TR) and non-Indo-European languages (FI, TR).", "However, both the C1 and C2 (C1) model variants offer wide and substantial gains in performance for all language pairs, irrespective of the starting absolute score.", "This result further suggests wide applicability and robustness of our BLI method.", "Evaluation on Lower-Resource Languages.", "The robustness of our BLI method is further tested on another BLI evaluation set: PanLex-BLI (Vulić et al., 2019), which focuses on BLI evaluation for lower-resource languages; 1k training pairs and 2k test pairs are derived from PanLex (Kamholz et al., 2014).", "The results for a subset of six languages (Basque: EU, Bulgarian: BG, Catalan: CA, Estonian: ET, Hebrew: HE, Hungarian: HU) are presented in Table 2.", "Table 2: BLI scores on the PanLex-BLI sets ([1k] pairs).
Model | BG→CA | CA→HE | HE→BG
VecMap | 39.43 | 24.64 | 31.55
FIPP | 34.29 | 20.63 | 26.38
C1 | 41.88 | 30.56 | 33.49
mBERT | 1.64 | 1.28 | 0.88
mBERT (tuned) | 13.90 | 3.43 | 4.76
C2 (C1) | 44.28 | 33.99 | 37.78
Model | ET→HU | HU→EU | EU→ET
VecMap | 35.55 | 20.03 | 9.83
FIPP | 30.30 | 11.58 | 8.22
C1 | 40.35 | 20.09 | 13.00
mBERT | 15.40 | 16.97 | 23.70
mBERT (tuned) | 20.59 | 22.30 | 28.62
C2 (C1) | 44.64 | 28.26 | 21.35
C2 (C1, λ = 0.4) | – | 34.62 | 36.70", "Overall, the results further confirm the efficacy of C2 (C1), with gains observed even with typologically distant language pairs (e.g., HE→BG and EU→ET).", "Usefulness of Stage C2?", "The results in Table 1 have confirmed the effectiveness of our two-stage C2 (C1) BLI method (see RQ3 in §1).", "However, Stage C2 is in fact independent of our Stage C1, and thus can also be combined with other standard BLI methods.", "Therefore, we seek to validate whether combining exposed mBERT-based translation knowledge can also aid other BLI methods.", "In other words, instead of drawing positive and negative samples from Stage C1 (§2.2) and combining C2 WEs with WEs from C1 (§2.3), we replace C1 with our baseline models.", "The results of these C2 (RCSLS) and C2 (VecMap) BLI variants for a selection of language pairs are provided in Table 3.", "Table 3: Stage C2 with different 'support' methods: RCSLS, VecMap, and C1.
[5k] Pairs | DE→TR | TR→HR | HR→RU
RCSLS | 30.99 | 24.60 | 37.19
C2 (RCSLS) | 36.52 | 33.17 | 44.77
VecMap | 27.18 | 25.99 | 37.98
C2 (VecMap) | 34.95 | 34.29 | 44.98
C1 | 34.69 | 32.37 | 41.66
C2 (C1) | 38.86 | 36.32 | 46.40
[1k] Pairs | DE→TR | TR→HR | HR→RU
RCSLS | 18.21 | 13.84 | 24.72
C2 (RCSLS) | 25.40 | 22.52 | 33.88
VecMap | 23.37 | 20.50 | 36.09
C2 (VecMap) | 27.91 | 26.84 | 40.45
C1 | 32.03 | 27.00 | 39.40
C2 (C1) | 34.85 | 32.16 | 42.14",
"The gains achieved with all C2 (*) variants clearly indicate that Stage C2 produces WEs which aid all BLI methods.", "In fact, combining it with RCSLS and VecMap yields even larger relative gains over the base models than combining it with our Stage C1.", "Figure 2: BLI scores with different λ values: (left) |D_0| = 5k; (middle) |D_0| = 1k; (right) PanLex-BLI, |D_0| = 1k.", "However, since Stage C1 (as the base model) performs better than RCSLS and VecMap, the final absolute scores with C2 (C1) still outperform C2 (RCSLS) and C2 (VecMap).", "Different Multilingual LMs?", "Results on eight language pairs, shown in Table 4, indicate that C2 (C1) is also compatible with different LMs.", "The overall trend is that all three C2 [LM] variants derive some gains when compared to C1.", "C2 [mBERT] is the best-performing model and derives gains in all 112/112 BLI setups (also see Appendix E); C2 [mT5] outperforms C1 in all 16/16 cases, and the gains are observed for 14/16 cases with C2 [XLM].", "It is also worth noticing that C2 [XLM] can surpass C2 [mBERT] on several pairs.", "Combining C1 and C2?", "The usefulness of combining the representations from the two stages is measured through varying the value of λ for several BLI setups.", "The plots are shown in Figure 2, and indicate that Stage C1 is more beneficial to the performance, with slight gains achieved when allowing the 'influx' of mBERT knowledge (e.g., in the [0.0-0.3] interval).", "While mBERT-based WEs are not sufficient as standalone representations for BLI, they seem to be even more useful in the combined model for lower-resource languages on PanLex-BLI, with a steeper increase in performance, and peak scores achieved with larger λ values.", "Ablation Study, with results summarised in Table 5, displays several interesting trends.", "First, both CL and self-learning are key components in the 1k-setups: removing any of them yields substantial drops.", "In 5k-setups, self-learning becomes less important, and removing it yields only negligible drops, while CL remains a crucial component (see also Appendix F).", "Further, Table 5 complements the results from Figure 2 and again indicates that, while Stage C2 indeed boosts the word translation capacity of mBERT, using mBERT features alone is still not sufficient to achieve competitive BLI performance.", "Table 5: Ablation study.
[5k] Pairs | EN | DE | IT
C1 w/o CL | 41.58 | 39.30 | 42.67
C1 w/o SL | 50.99 | 45.07 | 48.39
C1 | 51.31 | 46.14 | 48.92
mBERT | 9.55 | 9.39 | 8.13
mBERT (tuned) | 15.87 | 18.66 | 20.18
C1 + mBERT | 51.55 | 46.25 | 48.91
C2 (C1) | 54.31 | 48.86 | 51.91
[1k] Pairs | EN | DE | IT
C1 w/o CL | 39.46 | 37.54 | 40.37
C1 w/o SL | 39.31 | 32.59 | 36.45
C1 | 47.16 | 43.94 | 46.55
mBERT | 9.55 | 9.39 | 8.13
mBERT (tuned) | 17.29 | 20.92 | 23.29
C1 + mBERT | 47.56 | 44.08 | 46.74
C2 (C1) | 49.84 | 46.61 | 49.22", "After all, pretrained LMs are contextualised encoders designed for (long) sequences rather than individual words or tokens.", "Finally, Table 5 shows the importance of fine-tuning mBERT before combining it with C1-based WEs (§2.3): directly adding WEs extracted from the off-the-shelf mBERT does not yield any benefits (see the scores for the C1+mBERT variant, where λ is also 0.2).",
"On the other hand, contrastive fine-tuning reshapes the subspaces towards a shared (cross-lingual) space, the effects of which are then also reflected in mBERT's improved BLI capability (see Table 5 again).", "To understand the role of CL in Stage C1, we visualise static WEs mapped by C1 without CL (i.e., AM+SL, see §2.1) and also by the complete Stage C1, respectively.", "Figure 4 shows that C1 without CL already learns a sensible cross-lingual space.", "However, we note that advanced mapping (AM) in C1 without CL learns a (near-)orthogonal map, which might result in mismatches, especially with dissimilar language pairs.", "With TR-HR, the plot reveals that there exists a gap between the C1-aligned WE spaces, although the final BLI performance still gets improved: this might be due to 'repelling' negatives from each other during CL.", "This work is related to three topics, each with a large body of work; we can thus provide only a condensed summary of the most relevant research.", "Mapping-Based BLI.", "These BLI methods are highly popular due to reduced bilingual supervision requirements; consequently, they are applicable to low-resource languages and domains, learning linear (Lample et al., 2018; Artetxe et al., 2018; Joulin et al., 2018; Patra et al., 2019; Jawanpuria et al., 2019; Sachidananda et al., 2021) and non-linear maps (Mohiuddin et al., 2020; Glavaš and Vulić, 2020; Ganesan et al., 2021), typically using self-learning in weakly supervised setups.", "Contrastive Learning in NLP aims to learn a semantic space such that embeddings of similar text inputs are close to each other, while 'repelling' dissimilar ones.", "It has shown promising performance on training generic sentence encoders (Giorgi et al., 2021; Carlsson et al., 2021; Liu et al., 2021a; Gao et al., 2021) and downstream tasks like summarisation (Liu and Liu, 2021) or NER (Das et al., 2021).", "Exposing Lexical Knowledge from Pretrained LMs.", "Extracting lexical features from off-the-shelf multilingual LMs typically yields subpar performance in lexical tasks (Vulić et al., 2020b).", "To unlock the lexical knowledge encoded in PLMs, Liu et al. (2021a) and Vulić et al. (2021) fine-tune LMs via contrastive learning with manually curated or automatically extracted phrase/word pairs to transform them into effective text encoders.", "Wang et al. (2021) and Liu et al. 
(2021c) apply similar techniques for phrase and word-in-context representation learning, respectively.", "The success of these methods suggests that LMs store a wealth of lexical knowledge: yet, as we confirm here for BLI, fine-tuning is typically needed to expose it.", "We have proposed a simple yet extremely effective and robust two-stage contrastive learning framework for improving bilingual lexicon induction (BLI).", "In Stage C1, we tune cross-lingual linear mappings between static word embeddings with a contrastive objective and achieve substantial gains in 107 out of 112 BLI setups on the standard BLI benchmark.", "In Stage C2, we further propose a contrastive fine-tuning procedure to harvest cross-lingual lexical knowledge from multilingual pretrained language models.", "The representations from this process, when combined with Stage C1 embeddings, have resulted in further boosts in BLI performance, with large gains in all 112 setups.", "We have also conducted a series of finer-grained evaluations, analyses and ablation studies.", "We thank the anonymous reviewers for their valuable feedback.", "This work is supported by the ERC PoC Grant MultiConvAI (no. 957356) and a research donation from Huawei.", "YL and FL are supported by Grace & Thomas C. H. Chan Cambridge International Scholarship.", "Our research aims to benefit the efforts in delivering truly multilingual language technology also to under-resourced languages and cultures via bridging the lexical gap between languages, groups and cultures.", "As a key task in cross-lingual NLP, bilingual lexicon induction or word translation has broad applications in, e.g., machine translation, language acquisition and potentially protecting endangered languages.", "Furthermore, compared with many previous studies, we stress the importance of diversity in the sense that our experiments cover various language families and include six lower-resource languages from the PanLex-BLI dataset.", "Hoping that our work can contribute to extending modern NLP techniques to lower-resource and under-represented languages, we focus on semi-supervised settings and achieve significant improvements with self-learning techniques.", "The two BLI datasets we use are both publicly available.", "To our best knowledge, the data (i.e., word translation pairs) do not contain any sensitive information and have no foreseeable risk." ]
[ "abstain", "objective", "objective", "method", "result", "objective", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "objective", "abstain", "method", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain" ]
[ "Text generation from a knowledge base aims to translate knowledge triples to natural-language descriptions.", "Most existing methods ignore the faithfulness between a generated text description and the original table, leading to generated information that goes beyond the content of the table.", "In this paper, for the first time, we propose a novel Transformer-based generation framework to achieve the goal.", "The core techniques in our method to enforce faithfulness include a new table-text optimal-transport matching loss and a table-text embedding similarity loss based on the Transformer model.", "Furthermore, to evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem.", "We also provide detailed analysis on each component of our model in our experiments.", "Automatic and human evaluations show that our framework can significantly outperform state-of-the-art by a large margin.", "Understanding structured knowledge, e.g. , information encoded in tables, and automatically generating natural-language descriptions is an important task in the area of Natural Language Generation.", "Table-to-text generation helps making knowledge elements and their connections in tables easier to comprehend by human.", "There have been a number of practical application scenarios in this field, for example, weather report generation, NBA news generation, biography generation and medical-record description generation (Liang et al., 2009; Barzilay and Lapata, 2005; Lebret et al., 2016a; Cawsey et al., 1997).", "Most existing methods for table-to-text generation are based on an encoder-decoder framework (Sutskever et al., 2014; Bahdanau et al., Zhenyi Wang was a research intern student at Tencent AI Lab in Bellevue, WA when doing this work.", "2015), most of which are RNN-based Sequence-to-Sequence (Seq2Seq) models (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019; Wang et al., 2018; Liu et al., 2019a).", "Though significant progress has been achieved, we advocate two key problems in existing methods.", "Firstly, because of the intrinsic shortage of RNN, RNN-based models are not able to capture long-term dependencies, which would lose important information reflected in a table.", "This drawback prevents them from being applied to larger tables, for example, a table describing a large Knowledge Base (Wang et al., 2018).", "Secondly, little work has focused on generating faithful text descriptions, which is defined, in this paper, as the level of matching between a generated text sequence and the corresponding table content .", "An unfaithful generation example is illustrated in Figure 1.", "The training objectives and evaluation metrics of existing methods encourage generating texts to be as similar as possible to reference texts.", "One problem with this is that the reference text often contains extra information that is not presented in the table because human beings have external knowledge beyond the input table when writing the text, or it even misses some important information in the table (Dhingra et al., 2019) due to the noise from the dataset collection process.", "As a result, unconstrained training with such mis-matching information usually leads to hallucinated words or phrases in generated texts, making them unfaithful to the table and thus harmful in practical uses.", "In this paper, we aim to overcome the above problems to automatically generate faithful texts from tables.", "In other words, we aim to produce the 
"In contrast to existing RNN-based models, we leverage the powerful attention-based Transformer model to capture long-term dependencies and generate more informative paragraph-level texts.", "To generate descriptions faithful to tables, two content-matching constraints are proposed.", "The first one is a latent-representation-level matching constraint encouraging the latent semantics of the whole text to be consistent with that of the whole table.", "The second one is an explicit entity-level matching scheme, which utilizes Optimal-Transport (OT) techniques to constrain key words of a table and the corresponding text to be as identical as possible.", "To evaluate faithfulness, we also propose a new PARENT-T metric evaluating the content matching between texts and tables, based on the recently proposed PARENT (Dhingra et al., 2019) metric.", "We train and evaluate our model on a large-scale knowledge base dataset (Wang et al., 2018).", "Automatic and human evaluations both show that our method achieves state-of-the-art performance, and generates paragraph-level descriptions that are much more informative and faithful to input tables.", "The task of text generation for a knowledge base is to take the structured table T = {(t_1, v_1), (t_2, v_2), ..., (t_m, v_m)} as input, and to output a natural-language description consisting of a sequence of words y = {y_1, y_2, ..., y_n} that is faithful to the input table.", "Here, t_i denotes the slot type for the i-th row, and v_i denotes the slot value for the i-th row in a table.", "Our model adopts the powerful Transformer model (Vaswani et al., 2017) to translate a table to a text sequence.", "Specifically, the Transformer is a Seq2Seq model, consisting of an encoder and a decoder.", "Our proposed encoder-to-decoder Transformer model learns to estimate the conditional probability of a text sequence given a source table input in an autoregressive way: P(y | T; θ) = ∏_{i=1}^{n} P(y_i | y_{<i}, T; θ) (1), where θ denotes the Transformer parameters and y_{<i} denotes the words decoded in previous steps.", "Existing models for table-to-text generation either only focus on generating text to match the reference text (Liu et al., 2018; Ma et al., 2019), or only require a generated text sequence to be able to cover the input table (Wang et al., 2018).", "However, as the only input information is the table, the generated text should be faithful to the input table as much as possible.", "Therefore, we propose two constraint losses, including a table-text disagreement constraint loss and a constrained content matching loss with optimal transport, to encourage the model to learn to match the generated text and the input table faithfully.", "Figure 2 illustrates the overall architecture of our model.", "In summary, our model loss contains three parts: 1) a maximum likelihood loss (green) that measures the matching between a model prediction and the reference text sequence; 2) a latent feature matching disagreement loss (orange) that measures the disagreement between a table encoding and the corresponding reference-text encoding; and 3) an optimal-transport loss (blue) matching the key words of an input table and the corresponding generated text.", "The entities of a table simply consist of Slot Type and Slot Value pairs.", "To apply the Transformer model, we first linearize input tables into sequences.", "Slot types and slot values are separated by special tokens < and >.",
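The linearization step can be sketched as follows; the exact handling of multi-word slot types and values is our assumption.

```python
def linearize_table(table):
    """Flatten (slot type, slot value) pairs into one token sequence,
    delimiting slot types with the special tokens '<' and '>'."""
    tokens = []
    for slot_type, slot_value in table:
        tokens += ["<", slot_type, ">"] + slot_value.split()
    return tokens

# Example (cf. Figure 1):
# linearize_table([("Name ID", "Willie Burden"), ("date of birth", "July 21 1951")])
# -> ['<', 'Name ID', '>', 'Willie', 'Burden', '<', 'date of birth', '>', 'July', '21', '1951']
```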
"We note that encoding a table in this way might lose some high-order structure information present in the original knowledge graph.", "However, our knowledge graph is relatively simple.", "According to our preliminary studies, naively combining features extracted with graph neural networks (Beck et al., 2018) does not seem helpful.", "As a result, we rely only on the sequence representation in this paper.", "Our base objective comes from the standard Transformer model and is defined as the negative log-likelihood loss $\mathcal{L}_{mle}$ of a target sentence $y$ given its input $T$, i.e., $\mathcal{L}_{mle} = -\log P(y \mid T; \theta)$. (2)", "One key element of our model is to enforce a generated text sequence to be consistent with (or faithful to) the table input.", "To achieve this, we propose to add constraints so that a generated text sequence only contains information from the table.", "Our first idea is inspired by related work in machine translation (Yang et al., 2019).", "Specifically, we propose to constrain a table embedding to be close to the corresponding target-sentence embedding.", "Since the embedding of a text sequence (or of the table) in our model is also represented as a sequence, we propose to match the mean embeddings of both sequences.", "In fact, the mean embedding has proved to be an effective representation for a whole sequence in machine translation (Yang et al., 2019; Wang et al., 2017).", "Let $V_{table}$ and $V_{text}$ be the mean embeddings of a table and of the target text in our Transformer-based model, respectively.", "A table-target sentence disagreement loss $\mathcal{L}_{disagree}$ is then defined as $\mathcal{L}_{disagree} = \|V_{table} - V_{text}\|_2$. (3)",
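The disagreement loss is straightforward to express in PyTorch; the sketch below assumes the table-side and text-side hidden states are already available as tensors, and the tensor names are illustrative.

```python
import torch

def disagreement_loss(table_hidden, text_hidden):
    """L2 distance between mean-pooled table and target-text embeddings (Eq. 3).

    table_hidden: (table_len, hidden_dim) states of the linearized table
    text_hidden:  (text_len, hidden_dim) states of the reference text
    """
    v_table = table_hidden.mean(dim=0)  # mean embedding of the whole table
    v_text = text_hidden.mean(dim=0)    # mean embedding of the whole text
    return torch.norm(v_table - v_text, p=2)

# Usage with random stand-in states of hidden size 512
loss = disagreement_loss(torch.randn(12, 512), torch.randn(30, 512))
```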
"2.4 Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "Our second strategy is to explicitly match the key words in a table and the corresponding generated text.", "In our case, key words are defined as nouns, which can be easily extracted with existing tools such as NLTK (Loper and Bird, 2002).", "To match key words, a mis-matching loss should be defined.", "Such a mis-matching loss can be non-differentiable, e.g., when the loss is defined as the number of matched entities.", "In order to still be able to learn by gradient descent, one can adopt the policy gradient algorithm to deal with the non-differentiability.", "However, policy gradient is known to exhibit high variance.", "To overcome this issue, we instead propose to perform optimization via optimal transport (OT), inspired by recent techniques in (Chen et al., 2019a).", "Optimal-Transport Distance In the context of text generation, a generated text sequence, $y = (y_1, \ldots, y_n)$, can be represented as a discrete distribution $\mu = \sum_{i=1}^{n} u_i \delta_{y_i}(\cdot)$, where $u_i \geq 0$ and $\sum_i u_i = 1$, and $\delta_x(\cdot)$ denotes a spike distribution located at $x$.", "Given two discrete distributions $\mu$ and $\nu$, written as $\mu = \sum_{i=1}^{n} u_i \delta_{x_i}$ and $\nu = \sum_{j=1}^{m} v_j \delta_{y_j}$, respectively, the OT distance between $\mu$ and $\nu$ is defined as the solution of the following maximum network-flow problem: $\mathcal{L}_{OT} = \min_{U \in \Pi(\mu, \nu)} \sum_{i=1}^{n} \sum_{j=1}^{m} U_{ij}\, d(x_i, y_j)$, (4) where $d(x, y)$ is the cost of moving $x$ to $y$ (matching $x$ and $y$).", "In this paper, we use the cosine distance between the two word-embedding vectors of $x$ and $y$, defined as $d(x, y) = 1 - \frac{x^{\top} y}{\|x\|_2 \|y\|_2}$.", "$\Pi(\mu, \nu)$ is the set of joint distributions whose two marginal distributions equal $\mu$ and $\nu$, respectively.", "Exact minimization over $U$ in the above problem is in general computationally intractable (Genevay et al., 2018).", "Therefore, we adopt the recently proposed Inexact Proximal point method for Optimal Transport (IPOT) (Xie et al., 2018) as an approximation.", "The details of the IPOT algorithm are shown in Appendix C.", "Constrained Content Matching via OT To apply the OT distance to our setting, we first need to specify the atoms of the discrete distributions.", "Since nouns are typically more informative, we propose to match the nouns in both the input table and the decoded target sequence.", "We use NLTK (Loper and Bird, 2002) to extract the nouns that are then used for computing the OT loss.", "In this way, the computational cost can also be significantly reduced compared to matching all words.", "The OT loss can be used as a metric to measure the goodness of the match between two sequences.", "To illustrate the motivation of applying the OT loss to our setting, we provide an example illustrated in Figure 3, where we try to match the table with two generated text sequences.", "In the left plot, the generated text sequence contains California brand Grateful Dead, which is not present in the input table.", "Similarly, the phrases Seattle, Washington and Skokie, Illinois in the table are not covered by the generated text.", "Consequently, the resulting OT loss will be high.", "By contrast, in the right plot, the table contains all information in the text, and all the phrases in the table are also covered well by the generated text, leading to a low OT loss.", "As a result, optimizing the OT loss in (4) enforces faithful matching between a table and its generated text.",
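As a rough illustration of the OT computation, the sketch below builds the cosine cost matrix of Eq. (4) and runs IPOT-style proximal Sinkhorn iterations; the step size and iteration counts are illustrative assumptions, and the paper's actual IPOT details live in its Appendix C.

```python
import torch

def ipot_loss(x_emb, y_emb, beta=0.5, n_iter=50, inner=1):
    """Approximate the OT loss in Eq. (4) with IPOT-style iterations.

    x_emb: (n, d) noun embeddings from the table; y_emb: (m, d) from the text.
    Hyper-parameters (beta, iteration counts) are illustrative assumptions.
    """
    n, m = x_emb.size(0), y_emb.size(0)
    x_n = torch.nn.functional.normalize(x_emb, dim=-1)
    y_n = torch.nn.functional.normalize(y_emb, dim=-1)
    cost = 1.0 - x_n @ y_n.t()              # cosine cost d(x_i, y_j), shape (n, m)

    sigma = torch.full((m,), 1.0 / m)
    T = torch.ones(n, m)                    # transport plan, refined iteratively
    G = torch.exp(-cost / beta)             # proximal kernel
    for _ in range(n_iter):
        Q = G * T
        for _ in range(inner):              # inner Sinkhorn-style projections
            delta = 1.0 / (n * (Q @ sigma))
            sigma = 1.0 / (m * (Q.t() @ delta))
        T = delta.unsqueeze(1) * Q * sigma.unsqueeze(0)
    # Treat the plan as a constant so gradients flow only through the cost matrix
    return (T.detach() * cost).sum()
```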
"Optimization via OT When optimizing the OT loss with the IPOT algorithm, the gradients of the OT loss are required to propagate back to the Transformer component.", "In other words, this requires gradients to flow back from a generated sentence.", "Note that a sentence is generated by sampling from a multinomial distribution, whose parameter is the Transformer decoder output, represented as a logit vector $S_t$ over the vocabulary at each decoding step.", "This sampling process is unfortunately non-differentiable.", "To enable back-propagation, we follow Chen et al. (2019a) and use the Soft-argmax trick to approximate each word with the corresponding softmax output.", "To further reduce the number of parameters and improve computational efficiency, we adopt the factorized embedding parameterization proposed recently (Lan et al., 2019).", "Specifically, we decompose a word-embedding matrix of size $V \times D$ into the product of two matrices of sizes $V \times H$ and $H \times D$, respectively.", "In this way, the number of embedding parameters can be significantly reduced as long as $H$ is much smaller than $D$.", "The overall training objective combines the three losses, $\mathcal{L} = \mathcal{L}_{mle} + \alpha\,\mathcal{L}_{disagree} + \beta\,\mathcal{L}_{OT}$, where $\alpha$ and $\beta$ control the relative importance of each component of the loss function.", "To enforce a generated sentence to stick to the words present in the table as much as possible, we follow (See et al., 2017) and employ a copy mechanism when generating an output sequence.", "Specifically, let $P_{vocab}$ be the output of the Transformer decoder.", "$P_{vocab}$ is a discrete distribution over the vocabulary words and denotes the probabilities of generating the next word.", "Standard methods typically generate the next word by directly sampling from $P_{vocab}$.", "In the copy mechanism, we instead generate the next word $y_i$ from the following discrete distribution: $P(y_i) = p_g P_{vocab}(y_i) + (1 - p_g) P_{att}(y_i)$, where $p_g = \sigma(W_1 h_i + b_1)$ is the probability of switching between sampling from $P_{vocab}$ and $P_{att}$, with learnable parameters $(W_1, b_1)$ and $h_i$ the hidden state from the Transformer decoder for the $i$-th word.", "$P_{att}$ is the attention-weight distribution returned by the encoder-decoder attention module in the Transformer.", "Specifically, when generating the current word $y_i$, the encoder-decoder attention module calculates the probability vector $P_{att}$ denoting the probabilities of attending to each word in the input table.", "Note that the probabilities of words not present in the table are set to zero.",
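A compact sketch of the copy-mechanism mixture is given below; the sigmoid gate and the scattering of attention mass onto source-token ids follow the description above, while the tensor names (and treating $W_1$ as a vector) are illustrative assumptions.

```python
import torch

def copy_distribution(p_vocab, att_weights, src_token_ids, h_i, w1, b1):
    """Mix the decoder's vocabulary distribution with copy probabilities.

    p_vocab:       (vocab,) generation distribution from the decoder
    att_weights:   (src_len,) encoder-decoder attention weights for this step
    src_token_ids: (src_len,) vocabulary ids of the linearized-table tokens
    h_i:           (hidden,) decoder hidden state for the current position
    w1, b1:        gate parameters; w1 is a (hidden,) vector, b1 a scalar
    """
    p_g = torch.sigmoid(w1 @ h_i + b1)  # gate between generating and copying
    # Attention mass lands only on table words; everything else stays zero
    p_att = torch.zeros_like(p_vocab)
    p_att.scatter_add_(0, src_token_ids, att_weights)
    return p_g * p_vocab + (1.0 - p_g) * p_att
```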
"We conduct experiments to verify the effectiveness and superiority of our proposed approach against related methods.", "Our model is evaluated on the large-scale knowledge-base Wikiperson dataset released by Wang et al. (2018).", "It contains 250,186, 30,487, and 29,982 table-text pairs for training, validation, and testing, respectively.", "Compared to the WikiBio dataset used in previous studies (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019), whose reference texts contain only one-sentence descriptions, this dataset contains multiple sentences for each table in order to cover as many facts encoded in the input structured knowledge base as possible.", "For automatic evaluation, we apply the widely used evaluation metrics, including the standard BLEU-4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE (Lin, 2004) scores, to evaluate generation quality.", "Since these metrics rely solely on the reference texts, they usually show poor correlations with human judgments when the references deviate too much from the table.", "Therefore, we also apply the PARENT (Dhingra et al., 2019) metric, which considers both the reference texts and the table content in evaluation.", "To evaluate the faithfulness of the generated texts, we further modify the PARENT metric to measure the level of matching between generated texts and the corresponding tables.", "We denote this new metric as PARENT-T.", "Please see Appendix A for details.", "Note that the precision in PARENT-T corresponds to the percentage of words in a text sequence that co-occur in the table, and the recall corresponds to the percentage of words in a table that co-occur in the text.",
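The precision and recall just described reduce to simple word-overlap ratios; the sketch below is a simplified stand-in for PARENT-T (the full metric in the paper's Appendix A is more involved), and the whitespace tokenization is an assumption.

```python
def parent_t_simplified(generated_text, table_values):
    """Word-overlap precision/recall between a generated text and table content.

    Precision: fraction of generated words that also occur in the table.
    Recall:    fraction of table words that also occur in the generated text.
    """
    text_words = generated_text.lower().split()
    table_words = " ".join(table_values).lower().split()
    table_set, text_set = set(table_words), set(text_words)

    precision = sum(w in table_set for w in text_words) / max(len(text_words), 1)
    recall = sum(w in text_set for w in table_words) / max(len(table_words), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```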
"We compare our model with several strong baselines, including: the vanilla Seq2Seq attention model (Bahdanau et al., 2015); the method of (Wang et al., 2018), the state-of-the-art model on the Wikiperson dataset; the method of (Liu et al., 2018), the state-of-the-art method on the WikiBio dataset; and the pointer-generator (See et al., 2017), a Seq2Seq model with attention, copying and coverage mechanisms.", "Our implementation is based on OpenNMT (Klein et al., 2017).", "We train our models end-to-end to minimize our objective function with/without the copy mechanism.", "The vocabulary is limited to the 50,000 most common words in the training dataset.", "The hidden units of the multi-head component and the feed-forward layer are set to 2048.", "The baseline embedding size is 512.", "Following (Lan et al., 2019), the embedding size with embedding factorization is set to 128.", "The number of heads is set to 8, and the number of Transformer blocks is 3.", "Beam size is set to 5.", "Label smoothing is set to 0.1.", "For the optimal-transport-based regularizer, we first train the model without OT for about 20,000 steps, then fine-tune the network with OT for about 10,000 steps.", "We use the Adam (Kingma and Ba, 2015) optimizer to train the models, with learning rate = 0.00001, batch size = 4096 (tokens), and momentum parameter $\beta_2 = 0.998$.", "Table 1: Comparison of our model and baselines.
                                             BLEU   METEOR  ROUGE  PARENT  PARENT-T
(Wang et al., 2018)                          16.20  19.01   40.10  51.03   54.22
Seq2Seq (Bahdanau et al., 2015)              22.24  19.50   39.49  43.41   44.55
Pointer-Generator (See et al., 2017)         19.32  19.88   40.68  49.52   52.62
Structure-Aware Seq2Seq (Liu et al., 2018)   22.76  20.27   39.32  46.47   48.47
Ours                                         24.56  22.37   42.40  53.06   56.10", "Tables 1 and 2 show the experimental results in terms of different evaluation metrics, compared with different baselines.", "Ours means our proposed model with the copy mechanism, embedding factorization, OT-matching with nouns, and latent similarity loss components.", "(Footnote: the result of the method by (Wang et al., 2018) differs from the score reported in their paper, as we use their publicly released code, https://github.com/EagleW/Describing_a_Knowledge_Base, and data that is three times larger than the original 106,216 table-text pairs used in the paper; we have confirmed the correctness of our results with the authors.)", "We can see that our model outperforms existing models on all of the automatic evaluation scores, indicating the high quality of the generated texts.", "The superiority of the PARENT-T scores (in terms of precision and recall) indicates that the generated text from our model is more faithful than that of the others.", "Example outputs from different models are shown in Table 5, with the input table shown in Figure 4.", "In this example, our model covers all the entities in the input, while all other models miss some entities.", "Furthermore, other models hallucinate some information that does not appear in the input, while our model generates almost no extra information beyond that in the input.", "These results indicate the faithfulness of our model.", "More examples are shown in Appendix E.", "3.6 Ablation Study", "We also conduct extensive ablation studies to better understand each component of our model, including the copy mechanism, embedding factorization, optimal-transport constraint loss, and latent similarity loss.", "Table 3 shows the results for the different evaluation metrics.", "Effect of copy mechanism The first and second rows in Table 3 demonstrate the impact of the copy mechanism.", "It is observed that the copy mechanism significantly improves performance on all of the automatic metrics, especially on the faithfulness reflected by the PARENT-T score.", "Effect of embedding factorization We compare our model with the one without embedding factorization.", "The comparisons are shown in the second and third rows of Table 3.", "We can see that with embedding factorization, around half of the parameters can be removed while comparable performance is maintained.", "Effect of table-text embedding similarity loss We also test the model after removing the table-text embedding similarity loss component.", "The third and fourth rows in Table 3 summarize the results.", "With the table-text embedding similarity loss, the BLEU and METEOR scores drop a little, but the PARENT and PARENT-T scores improve over the model without the loss.", "This is reasonable because the loss aims at improving the faithfulness of generated texts, reflected by the PARENT-T score.", "Effect of the OT constraint loss We further compare the performance of the model (a) without the OT loss, (b) using the whole table and text to compute OT, and (c) using the extracted nouns from both table and text to compute OT.", "Results are presented in the third, fifth, and sixth rows of Table 3, respectively.", "The model with the OT loss improves performance on almost all scores, especially on the PARENT-T score.", "Furthermore, using only the nouns to compute the OT loss yields even better results.", "These results demonstrate the effectiveness of the proposed OT loss in enforcing the model to be faithful to the original table.", "Following (Wang et al., 2018; Tian et al., 2019), we conduct an extensive human evaluation of the generated descriptions and compare the results to the state-of-the-art methods.", "We design our evaluation criteria based on (Wang et al., 2018; Tian et al., 2019), but our criteria differ from those of (Tian et al., 2019) in several aspects.",
"Specifically, for each group of generated texts, we ask the human raters to evaluate grammar, fluency, and faithfulness.", "The human evaluation metric for faithfulness is defined in terms of precision, recall and F1-score with respect to the knowledge-base table reconstructed from a generated text sequence.", "To ensure accurate human evaluation, the raters are trained with written instructions and text examples of the grading standard beforehand.", "During evaluation, we randomly sample 100 examples from the predictions of each model on the Wikiperson test set, and provide these examples to the raters for blind testing.", "More details about the human evaluation are provided in Appendix B.", "The human evaluation results in Table 4 clearly show the superiority of our proposed method.", "Table-to-text generation has been widely studied, and Seq2Seq models have achieved promising performance (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019; Wang et al., 2018; Liu et al., 2019a).", "Among Transformer-based methods, the Seq2Seq Transformer is used by Ma et al. (2019) for table-to-text generation in a low-resource scenario.", "Thus, instead of encoding an entire table as in our approach, only the predicted key facts are encoded in (Ma et al., 2019).", "Extended Transformers have been applied to game summaries (Gong et al., 2019) and E2E NLG tasks (Gehrmann et al., 2018).", "However, their goals focus on matching the reference text instead of being faithful to the input.", "Another line of work attempts to use external knowledge to improve the quality of generated text (Chen et al., 2019b).", "These methods allow generation from an expanded external knowledge base that may contain information not relevant to the input table.", "Comparatively, our setting requires the generated text to be faithful to the input table.", "Nie et al. (2018) further study fidelity in data-to-text generation, where several executable symbolic operations are applied to guide text generation.", "Neither of these models considers the matching between the input and the generated output.", "Regarding datasets, most previous methods are trained and evaluated on much simpler datasets like WikiBio (Lebret et al., 2016b), which contains only one sentence as a reference description.", "Instead, we focus on the more complicated structured knowledge-base dataset (Wang et al., 2018), which aims at generating multi-sentence texts.", "Wang et al. (2018) propose a model based on the pointer network that can copy facts directly from the input knowledge base.", "Our model uses a similar strategy but obtains much better performance.", "In terms of faithfulness, one related parallel work is Tian et al. (2019).", "However, our method is completely different from theirs.", "Specifically, Tian et al. (2019) develop a confidence-oriented decoder that assigns a confidence score to each target position to reduce unfaithful information in the generated text.", "Comparatively, our method enforces faithfulness by including the proposed table-text optimal-transport matching loss and table-text embedding similarity loss.", "Moreover, the faithfulness of Tian et al. (2019) only requires generated texts to be supported by either the table or the reference, whereas ours constrains generated texts to be faithful to the table alone.",
"Other related works are (Perez-Beltrachini and Lapata, 2018; Liu et al., 2019b).", "Perez-Beltrachini and Lapata (2018) propose a content-selection mechanism trained with multi-task learning and reinforcement learning.", "Liu et al. (2019b) propose a forced-attention and reinforcement-learning-based method.", "Their learning methods are completely different from our method, which simultaneously incorporates an optimal-transport matching loss and an embedding similarity loss.", "Moreover, the REINFORCE algorithm (Williams, 1992) and the policy gradient method used in (Perez-Beltrachini and Lapata, 2018; Liu et al., 2019b) exhibit high variance when training the model.", "Finally, the content-matching constraints between text and table are inspired by ideas in machine translation (Yang et al., 2019) and Seq2Seq models (Chen et al., 2019a).", "In this paper, we propose a novel Transformer-based table-to-text generation framework to address the faithful text-generation problem.", "To enforce faithful generation, we propose a new table-text optimal-transport matching loss and a table-text embedding similarity loss.", "To evaluate the faithfulness of the generated texts, we further propose a new automatic evaluation metric specialized to the table-to-text generation problem.", "Extensive experiments are conducted to verify the proposed method.", "Both automatic and human evaluations show that our framework can significantly outperform the state-of-the-art methods.", "We sincerely thank all the reviewers for providing valuable feedback.", "We thank Linfeng Song, Dian Yu, Wei-yun Ma, and Ruiyi Zhang for the helpful discussions." ]
[ "abstain", "abstain", "objective", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "other", "objective", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "objective", "abstain", "result", "other", "other" ]
[ "With the great success of pre-trained language models, the pretrain-finetune paradigm now becomes the undoubtedly dominant solution for natural language understanding (NLU) tasks.", "At the fine-tune stage, target task data is usually introduced in a completely random order and treated equally.", "However, examples in NLU tasks can vary greatly in difficulty, and similar to human learning procedure, language models can benefit from an easy-to-difficult curriculum.", "Based on this idea, we propose our Curriculum Learning approach.", "By reviewing the trainset in a crossed way, we are able to distinguish easy examples from difficult ones, and arrange a curriculum for language models.", "Without any manual model architecture design or use of external data, our Curriculum Learning approach obtains significant and universal performance improvements on a wide range of NLU tasks.", "Natural Language Understanding (NLU), which requires machines to understand and reason with human language, is a crucial yet challenging problem.", "Recently, language model (LM) pre-training has achieved remarkable success in NLU.", "Pre-trained LMs learn universal language representations from large-scale unlabeled data, and can be simply fine-tuned with a few adjustments to adapt to various NLU tasks, showing consistent and significant improvements in these tasks (Radford et al., 2018; Devlin et al., 2018).", "While lots of attention has been devoted to designing better pre-training strategies (Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019), it is also valuable to explore how to more effectively solve downstream NLU tasks in the fine-tuning stage.", "Most current approaches perform fine-tuning in a straightforward manner, i.e., all training examples are treated equally and presented in a completely random order during training.", "However, even in the same NLU task, the training examples could vary significantly in their difficulty levels, with some easily solvable by simple lexical clues while others requiring sophisticated reasoning.", "Table 1 shows some examples from the SST-2 sentiment classification task (Socher et al., 2013), which identifies sentiment polarities (positive or negative) of movie reviews.", "The easy cases can be solved directly by identifying sentiment words such as comfortable and unimaginative, while the hard ones further require reasoning with negations or verb qualifiers like supposedly and occasionally.", "Extensive research suggests that presenting training examples in a meaningful order, starting from easy ones and gradually moving on to hard ones, would benefit the learning process, not only for humans but also for machines (Skinner, 1958; Elman, 1993; Peterson, 2004; Krueger and Dayan, 2009).", "Such an organization of learning materials in human learning procedure is usually referred to as Curriculum .", "In this paper, we draw inspiration from similar ideas, and propose our approach for arranging a curriculum when learning NLU tasks.", "Curriculum Learning (CL) is first proposed by (Bengio et al., 2009) in machine learning area, where the definition of easy examples is established ahead, and an easy-to-difficult curriculum is arranged accordingly for the learning procedure.", "Recent developments have successfully applied CL in computer vision areas (Jiang et al., 2017; Guo et al., 2018; Hacohen and Weinshall, 2019).", "It is observed in these works that by excluding the negative impact of difficult or even noisy examples in early training stage, an appropriate CL 
"We argue that language models like the Transformer, which are hard to train (Popel and Bojar, 2018), should also benefit from CL in the context of learning NLU tasks; this idea still remains unexplored.", "The key challenge in designing a successful CL strategy lies in how to define easy/difficult examples.", "One straightforward way is to simply pre-define difficulty with devised rules, based on the particular target-task formulation or training-data structure (Guo et al., 2018; Platanios et al., 2019; Tay et al., 2019).", "For example, Bengio et al. (2009) utilized an easier version of a shape-recognition trainset, comprising less varied shapes, before the training on the complex one started.", "More recently, Tay et al. (2019) considered the paragraph length of a question-answering example as a reflection of its difficulty.", "However, such strategies are highly dependent on the target dataset itself and often fail to generalize to different tasks.", "To address this challenge, we propose our Cross Review method for evaluating difficulty.", "Specifically, we define easy examples as those well solved by the exact model that we are to employ in the task.", "For different tasks, we adopt their corresponding golden metrics to calculate a difficulty score for each example in the trainset.", "Then, based on these difficulty scores, we further design a re-arranging algorithm to construct the learning curriculum in an annealing style, which provides a soft transition from easy to difficult for the model.", "In general, our CL approach is not constrained to any particular task, and does not rely on human prior heuristics about the task or dataset.", "Experimental results show that our CL approach can greatly help language models learn in their fine-tuning stage.", "Without any task-tailored model-architecture design or use of external data, we are able to obtain significant and universal improvements on a wide range of downstream NLU tasks.", "Our contributions can be summarized as follows: we explore and demonstrate the effectiveness of CL in the context of fine-tuning LMs on NLU tasks.", "To the best of our knowledge, this is one of the first times that a CL strategy has been shown to be broadly promising for learning NLU tasks.", "We propose a novel CL framework that consists of a Difficulty Review method and a Curriculum Arrangement algorithm, which requires no human pre-design and generalizes well to many given tasks.", "We obtain universal performance gains on a wide range of NLU tasks, including Machine Reading Comprehension (MRC) and Natural Language Inference.", "The improvements are especially significant on tasks that are more challenging.", "We describe our CL approach using BERT (Devlin et al., 2018), the most influential pre-trained LM, which has achieved state-of-the-art results on a wide range of NLP tasks.", "BERT is pretrained using the Masked Language Model task and the Next Sentence Prediction task on large-scale corpora.", "It consists of a hierarchical stack of $l$ self-attention layers, which takes as input a sequence of no more than 512 tokens and outputs a contextual representation, an $H$-dimensional vector, for each token at position $i$, which we denote as $h_i^l \in \mathbb{R}^H$.", "In natural language understanding tasks, the input sequences usually start with the special token <CLS> and end with <SEP>; for sequences consisting of two segments, as in pairwise sentence tasks, another <SEP> is added in between for separation.",
"For target benchmarks, we employ a wide range of NLU tasks, including machine reading comprehension, sequence classification, pairwise text similarity, etc.", "Following (Devlin et al., 2018), we adapt BERT to NLU tasks in the most straightforward way: we simply add one necessary linear layer on top of the final hidden outputs, then fine-tune the entire model altogether.", "Specifically, we brief the configurations and corresponding metrics for the different tasks employed in our algorithms as follows.", "Machine Reading Comprehension In this work we consider the extractive MRC task.", "Given a passage $P$ and a corresponding question $Q$, the goal is to extract a continuous span $\langle p_{start}, p_{end} \rangle$ from $P$ as the answer $A$, where start and end are its boundaries.", "We pass the concatenation of the question and paragraph, [<CLS>, Q, <SEP>, P, <SEP>], to the pre-trained LM and use a linear classifier on top of it to predict the answer-span boundaries.", "For the $i$-th input token, the probabilities that it is the start or the end are calculated as $[logit_i^{start}, logit_i^{end}]^T = W_{MRC}\, h_i^l$, $p_i^{start} = \mathrm{softmax}(\{logit_i^{start}\})$, $p_i^{end} = \mathrm{softmax}(\{logit_i^{end}\})$, where $W_{MRC} \in \mathbb{R}^{2 \times H}$ is a trainable matrix.", "The training objective is the negative log-likelihood of the true start and end positions $y^{start}$ and $y^{end}$: $loss = -(\log(p^{start}_{y^{start}}) + \log(p^{end}_{y^{end}}))$.", "For unanswerable questions, the probability is calculated as $s_{un} = p^{start}_{cls} + p^{end}_{cls}$ using the <CLS> representation.", "We classify a question as unanswerable when $s_{un} > s_{i,j} = \max_{i \leq j}(p^{start}_i + p^{end}_j)$.", "F1 is used as the golden metric.", "Sequence Classification We consider the final contextual embedding of the <CLS> token, $h_0^l$, as the pooled representation of the whole input sequence $S$.", "The probability that the input sequence belongs to label $c$ is calculated by a linear output layer with parameter matrix $W_{SC} \in \mathbb{R}^{K \times H}$ followed by a softmax: $P(c \mid S) = \mathrm{softmax}(h_0^l W_{SC}^T)$, where $K$ is the number of classes.", "The log-likelihood is also used as the training objective for this task.", "Accuracy is used as the golden metric.", "Pairwise Text Similarity Similar to the sequence classification task, the final embedding of the <CLS> token, $h_0^l$, is used to represent the input text pair $(T_1, T_2)$.", "A parameter vector $W_{PTS} \in \mathbb{R}^H$ is introduced to compute the similarity score: $\mathrm{Similarity}(T_1, T_2) = h_0^l W_{PTS}^T$.", "The mean squared error $(\mathrm{Similarity}(T_1, T_2) - y)^2$ is used as the training objective, where $y$ is the similarity label as a continuous score.",
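A condensed PyTorch sketch of the MRC span head described above is given below; the module and variable names are illustrative assumptions, and the encoder is assumed to return the final-layer token states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanHead(nn.Module):
    """Linear start/end classifier on top of BERT token states (W_MRC above)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.w_mrc = nn.Linear(hidden_size, 2)  # maps h_i^l to [logit_start, logit_end]

    def forward(self, token_states, start_pos, end_pos):
        # token_states: (seq_len, H) final-layer representations h^l
        logits = self.w_mrc(token_states)                  # (seq_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        log_p_start = F.log_softmax(start_logits, dim=0)   # softmax over positions
        log_p_end = F.log_softmax(end_logits, dim=0)
        # Negative log-likelihood of the true span boundaries
        return -(log_p_start[start_pos] + log_p_end[end_pos])
```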
"We decompose our CL framework into two stages: Difficulty Evaluation and Curriculum Arrangement.", "For any target task, let $D$ be the example set used for training, and let $\theta$ be our language model, which is expected to fit $D$.", "In the first stage, the goal is to assign each example $d_j$ in $D$ a score $c_j$ that reflects its difficulty with respect to the model.", "We denote by $C$ the whole set of difficulty scores corresponding to the trainset $D$.", "In the second stage, based on these scores, $D$ is organized into a sequence of ordered learning stages $\{S_i : i = 1, 2, \ldots, N\}$ in an easy-to-difficult fashion, resulting in the final curriculum on which the model will be trained.", "We elaborate on these two stages in Sections 3.1 and 3.2, respectively.", "The difficulty of a textual example reflects itself in many ways, e.g., the length of the context, the usage of rare words, or the scale of the learning target.", "Although such heuristics seem reasonable to a human, the model itself may not see things the same way.", "So we argue that the difficulty score, as an intrinsic property of an example, should be decided by the model itself, and the best metric is the golden metric of the target task, which can be accuracy, F1 score, etc., as introduced in Section 2.", "To perform difficulty evaluation, we first scatter our trainset $D$ uniformly into $N$ shares $\{\tilde{D}_i : i = 1, 2, \ldots, N\}$, and train $N$ corresponding models $\{\tilde{\theta}_i : i = 1, 2, \ldots, N\}$ on them, each identical to $\theta$ (note that each model $\tilde{\theta}_i$ only sees $1/N$ of the entire trainset).", "We refer to these $N$ models as teachers, and to $\{\tilde{D}_i\}$ as meta-datasets, for they serve only to collect information (i.e., the extent of difficulty) about the original trainset $D$.", "This preparation of teachers can be formulated as $\tilde{\theta}_i = \arg\min_{\tilde{\theta}_i} \sum_{d_j \in \tilde{D}_i} L(d_j, \tilde{\theta}_i)$ for $i = 1, 2, \ldots, N$, where $L$ indicates the loss function.", "After every teacher has been trained on its meta-dataset, the evaluation of the trainset $D$ begins.", "Each example $d_j$ is included in one and only one meta-dataset, say $\tilde{D}_k$; we then perform inference on $d_j$ with all teachers except teacher $k$, because the inference should be isolated from the meta-dataset $\tilde{D}_k$ that teacher $k$ has already seen during training.", "After all inferences are finished, we calculate scores for $d_j$ with the target task's metric, resulting in $N-1$ scores from $N-1$ different teachers: $c_j^i = M(\tilde{\theta}_i(x_j), y_j)$, where $\tilde{\theta}_i(\cdot)$ represents the inference function, $x_j$ and $y_j$ are the input and label of example $d_j$ respectively, $M$ is the metric calculation formula, which can be F1, Accuracy or MSE for the different tasks introduced in Section 2, and $c_j^i$ is the score of $d_j$ from teacher $\tilde{\theta}_i$.", "Finally, we define the difficulty score of $d_j$ as the integration of all $N-1$ scores: $c_j = \sum_{i \in (1, \ldots, N),\, i \neq k} c_j^i$.", "With all scores calculated, we obtain the final difficulty score set $C$ as desired.", "We refer to our difficulty evaluation method as Cross Review (see Fig. 1).", "In the proposed method, the teacher models perform their inferences in a crossed way, which prevents a meta-dataset from contaminating the inference set.", "Besides, each example gets its score from multiple teachers, so the fluctuation of the evaluation results is greatly alleviated.", "In general, our Cross Review method addresses the difficulty evaluation problem with an elegant design.",
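The Cross Review procedure can be sketched in a few lines of Python; `train` and `score` stand in for task-specific fine-tuning and golden-metric computation, and are assumptions rather than released code.

```python
def cross_review(dataset, n_teachers, train, score):
    """Assign each example a difficulty score via the Cross Review method.

    dataset: list of (x, y) examples
    train:   callable fitting a fresh model on a list of examples
    score:   callable(model, x, y) -> golden-metric value (F1, accuracy, ...)
    """
    # Scatter the trainset uniformly into N meta-datasets (by example index)
    shares = [list(range(i, len(dataset), n_teachers)) for i in range(n_teachers)]
    teachers = [train([dataset[j] for j in idx]) for idx in shares]

    difficulty = [0.0] * len(dataset)
    for k, idx in enumerate(shares):
        for j in idx:
            x, y = dataset[j]
            # Score each example with every teacher except the one trained on it
            difficulty[j] = sum(score(teachers[i], x, y)
                                for i in range(n_teachers) if i != k)
    return difficulty  # higher score = easier under the task's golden metric
```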
, N } .", "Within each stage S i , the examples are still shuffled to keep local stochastics, and examples from different stages do not overlap in order to prevent overfitting.", "The sampling algorithm is built upon such prin-ciple: The proportion of difficult examples in each stage should start with 0, and gradually increase until it reachs how much it accounts for in the original dataset distribution.", "We first sort all examples by their difficulty score C , and divide them into N buckets: { C i : i = 1 , 2 , . . . , N } , so the examples are now collected into N different levels of difficulty, ranging from C 1 (the easiest) to CN (the hardest), with the proportion distribution as: num ( C 1 ) : num ( C 2 ) : : num ( CN ) For tasks with discrete metrics, such distribution is naturally formed by the difficulty score hierarchy, and directly reflects the intrinsic difficulty distribution of the dataset.", "For other tasks, we manually divide C uniformly 1 .", "Based on these buckets, we construct the learning curriculum one stage after another.", "For each learning stage S i , we sample examples from all antecedent buckets { C j : j = 1 , 2 , . . . , i } by the following proportion: 1 N num ( C 1 ) : 1 N num ( C 2 ) : : 1 N num ( C i ) and the final curriculum { S i : i = 1 , 2 , . . . , N } is formed as such.", "We refer to the arrangement algorithm as Annealing method for it provides a soft transition through multi learning stages.", "At each stage, the model is trained for one epoch.", "When the training reached SN , the model should be ready for the original distribution in trainset D , so we finally add another stage SN +1 which covers the entire trainset, and the model is trained on it until converges.", "In this section we briefly describe three popular NLU benchmarks on which we evaluate our CL approach: SQuAD 2.0 (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2016) and GLUE (Wang et al., 2018), their scale and metrics are detailed in Table", "2. 
"In this section we briefly describe three popular NLU benchmarks on which we evaluate our CL approach: SQuAD 2.0 (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2016) and GLUE (Wang et al., 2018); their scale and metrics are detailed in Table 2.", "Table 2: The number of training, development, and test examples and the metrics of the tasks used in this work.
          SQuAD2.0  NewsQA  MNLI-m    QNLI      QQP       RTE       SST-2     MRPC  CoLA      STS-B
Train     130.3k    92.5k   392.7k    104.7k    363.8k    2.5k      67.3k     3.7k  8.6k      5.7k
Dev       11.9k     5.2k    9.8k      5.5k      40.4k     277       872       408   1.0k      1.5k
Test      8.9k      5.1k    9.8k      5.5k      39.1k     3.0k      1.8k      1.7k  1.0k      1.4k
Metrics   F1/EM     F1/EM   Accuracy  Accuracy  Accuracy  Accuracy  Accuracy  F1    Matthews  Pearson", "SQuAD The Stanford Question Answering Dataset (SQuAD), constructed using Wikipedia articles, is a well-known extractive machine reading comprehension dataset with two versions: SQuAD 1.1 (Rajpurkar et al., 2016) and SQuAD 2.0 (Rajpurkar et al., 2018).", "The latest 2.0 version also introduced unanswerable questions, making it a more challenging and practical task.", "In this paper, we take SQuAD 2.0 as our testbed.", "NewsQA NewsQA (Trischler et al., 2016) is also an extractive MRC dataset but is much more challenging, with human performance at a 0.694 F1 score.", "NewsQA is collected from CNN news articles with two sets of crowdworkers: the questioners are provided with the article's headline only, and the answerers are supposed to find the answer in the full article.", "We ignore examples flagged as lacking annotator agreement, for better evaluation, following (Fisch et al., 2019).", "GLUE The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of nine diverse sentence or sentence-pair language understanding tasks, including sentiment analysis, textual entailment, sentence similarity, etc.", "(The benchmark consists of: Multi-Genre NLI (MNLI) (Williams et al., 2018), Quora Question Pairs (QQP) (Shankar Iyer, 2016), Question NLI (QNLI) (Rajpurkar et al., 2016), Stanford Sentiment Treebank (SST) (Socher et al., 2013), Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), Recognizing Textual Entailment (RTE) (Bentivogli et al., 2009), and Winograd NLI (WNLI) (Levesque et al., 2012).)", "It is considered a well-designed benchmark that can evaluate the generalization and robustness of NLU algorithms.", "The labels for the GLUE test sets are hidden, and users must upload their predictions to obtain evaluation results; submissions are limited to protect the test sets from overfitting.", "We use BERT Large (Devlin et al., 2018) as our pre-trained language model to demonstrate the effectiveness of our CL approach.", "For MRC, we also test on the BERT Base model for more comprehensive results.", "Besides reported results from the literature, we also provide our re-implementation on all datasets, which forms a more competitive baseline for comparison.", "The only difference between our re-implementation and our CL approach is the arrangement of the curriculum, i.e., the order of training examples.", "To obtain a more comparable and stable difficulty score, we binarize the review results before summing them together, where possible.", "With accuracy as the metric, the score $c_j^i$ is already binary at the instance level; with F1 as the metric, we count any review result $c_j^i > 0$ as correct.", "For other continuous metrics (MSE in this paper), we sum $c_j^i$ directly.", "We empirically choose $N = 10$ as the number of meta-datasets for most tasks (this is also the number of difficulty levels and the number of stages); for three datasets with rather limited scale (RTE, MRPC, and STS-B), we change it to $N = 3$.",
"The scale of all datasets employed in this work is provided in Table 2.", "Intuitively, we could get better results by searching for the best $N$; we leave this to future work due to limited computation resources.", "We implement our approach based on the PyTorch implementation of BERT (Wolf et al., 2019).", "We use the Adam (Kingma and Ba, 2014) optimizer with epsilon equal to 1e-8.", "The learning rates warm up over the first 5% of steps and then decay linearly to 0 for all experiments.", "To construct our re-implementation, on both SQuAD 2.0 and NewsQA we perform a hyperparameter search with batch size in { 16, 32 } and learning rate in { 1e-5, 2e-5, 3e-5, 4e-5 } for the Base model, and batch size in { 32, 48, 64 } and learning rate in { 5e-5, 6e-5, 7e-5 } for the Large model.", "We reuse the best parameter setting from SQuAD 2.0 on NewsQA.", "We set the max input sequence length to 512 for the NewsQA task because its paragraphs are much longer.", "On GLUE, we run the experiments on the Large model with batch size in { 16, 32 } and learning rate in { 1e-5, 2e-5, 3e-5 }.", "The results for the MRC tasks are presented in Table 3.", "In all experiments, our CL approach outperforms its baseline by a considerable margin.", "On SQuAD 2.0, we obtain +1.30 EM/+1.15 F1 improvements using the Base model and +0.31 EM/+0.57 F1 using the Large model, compared to our competitive re-implemented baseline.", "Note that the performance gain is more significant with the Base model.", "On NewsQA, we also get +0.02 EM/+0.47 F1 and +0.10 EM/+0.30 F1 improvements for the Base and Large models, respectively.", "We summarize our GLUE results in Table 4.", "Table 4: Results on the GLUE benchmark; * indicates our re-implementation; baselines on the dev sets are obtained from (Liu et al., 2019), baselines on the test sets are obtained from the leaderboard (https://gluebenchmark.com/leaderboard) submitted by (Devlin et al., 2018); they may have used different hyperparameters.
                 MNLI-m  QNLI  QQP   RTE   SST-2  MRPC  CoLA  STS-B  Avg
results on dev
BERT Large       86.6    92.3  91.3  70.4  93.2   88.0  60.6  90.0   84.1
BERT Large*      86.6    92.5  91.5  74.4  93.8   91.7  63.5  90.2   85.5
BERT Large+CL    86.6    92.8  91.8  76.2  94.2   91.9  66.8  90.6   86.4
results on test
BERT Large       86.7    91.1  89.3  70.1  94.9   89.3  60.5  87.6   83.7
BERT Large*      86.3    92.2  89.5  70.2  94.4   89.3  60.5  87.3   83.7
BERT Large+CL    86.7    92.5  89.5  70.7  94.6   89.6  61.5  87.8   84.1",
"Results on the dev sets show that our CL method consistently outperforms its competitive baseline on all 8 tasks, which proves that our CL approach is not only robustly effective but also generalizable across a wide range of NLU tasks.", "Because the model architecture and hyper-parameter settings are identical, all the performance gains can be attributed to our CL approach alone.", "Specifically, we observe that our CL approach does better on more challenging tasks.", "For CoLA and RTE, the margin is up to +3.3 and +1.8 in the respective metrics, which is relatively larger than on less challenging tasks where the model performance has already reached a plateau.", "Such results are understandable: when learning harder tasks, the model can be overwhelmed by very difficult examples in the early stages, and a well-arranged curriculum can thus be more helpful.", "And for tasks where the baselines are already approaching human performance, like SST-2, our CL approach is still able to provide another +0.4 improvement, which demonstrates the robustness of our approach.", "Overall, our CL approach obtains a +0.9 average score gain on the GLUE benchmark compared to our re-implemented baseline.", "Results on the test sets further demonstrate the effectiveness of our approach.", "We obtain a +0.4 average score gain compared to both our re-implementation and the baseline on the leaderboard.", "In this section, we delve into our approach through a series of interesting questions, including: (i) what is the best CL design strategy for NLU tasks, (ii) can Cross Review really distinguish easy examples from difficult ones, and (iii) what is the best choice of $N$.", "We choose the SQuAD 2.0 task for most experiments for generality, and all experiments are performed with the BERT Base model.", "Comparison with Heuristic CL Methods To demonstrate our advantage over manually designed CL methods, we compare our approach with several heuristic curriculum designs in Table 5.", "Table 5: Comparisons with heuristic CL designs (written in italics) on SQuAD 2.0.
Method                                    EM     F1     ΔF1
No Curriculum (reported)                  -      76.30
No Curriculum (our re-implementation)     73.66  76.78
Rarity + Annealing                        73.75  76.90  +0.12
Answer + Annealing                        74.02  77.15  +0.37
Question + Annealing                      74.35  77.37  +0.59
Paragraph + Annealing                     74.45  77.54  +0.76
Cross-Review + Naive order                74.31  77.29  +0.51
Cross-Review + Annealing                  74.96  77.93  +1.15",
"For the Difficulty Review methods, we adopt word rarity, answer length, question length, and paragraph length as difficulty metrics, similar to (Tay et al., 2019; Platanios et al., 2019).", "We calculate word rarity as the average word frequency of the question, where the frequencies are counted over all questions in the trainset.", "We define difficult examples as those with lower word frequencies, or with longer answer, question, or paragraph lengths.", "We first sort all examples using these metrics, and divide them evenly to obtain 10 example buckets with corresponding levels of difficulty; the Curriculum Arrangement strategy remains unchanged as Annealing.",
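As a concrete sketch of the word-rarity heuristic just described (the function name and whitespace tokenization are illustrative assumptions), one might compute it as:

```python
from collections import Counter

def word_rarity_scores(questions):
    """Average word frequency per question; a lower average means rarer words,
    which this heuristic treats as more difficult."""
    freq = Counter(w for q in questions for w in q.lower().split())
    scores = []
    for q in questions:
        words = q.lower().split()
        scores.append(sum(freq[w] for w in words) / max(len(words), 1))
    return scores

# Sorting examples by ascending score puts the rarest-worded (hardest) first
scores = word_rarity_scores(["what is the capital of france", "who wrote hamlet"])
```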
"For the Curriculum Arrangement method, we try a Naive order for comparison.", "We directly implement the curriculum as $\{C_i\}$ (instead of $\{S_i\}$) without any sampling algorithm, except that $S_{N+1}$ is still retained for fair comparison.", "In the meantime, the Difficulty Evaluation method remains unchanged as Cross Review.", "The results show that these intuitive designs indeed work well, with improvements ranging from +0.12 to +0.76 in F1 score.", "But they are all outperformed by our Cross Review + Annealing approach.", "Case study: Easy vs. Difficult In our Cross Review method, the dataset is divided into $N$ buckets $\{C_i\}$ with different levels of difficulty.", "Here we further explore what the easy/difficult examples in various tasks actually look like.", "Earlier in the introduction (see Table 1), we provided a straightforward illustration of easy cases versus hard cases in the SST-2 dataset.", "Among the ten different levels of difficulty, those cases were sampled from the easiest bucket ($C_1$) and the most difficult bucket ($C_{10}$), respectively.", "The results are very clear and intuitive.", "We further choose SQuAD 2.0 as a more complex task on which to perform an in-depth analysis.", "Under the $N = 10$ setting, we reveal the statistical distinctions among all buckets $\{C_i\}$ in Fig. 2.", "With three monotonically increasing curves, it is very clear that difficult examples tend to entail longer paragraphs, longer questions, and longer answers.", "Such conclusions conform to our intuition that longer text usually involves more complex reasoning patterns and context dependency.", "These challenging examples are now successfully excluded from the early stages thanks to our CL approach.", "Another interesting result is that the percentage of unanswerable examples drops consistently from 40% to 20% along the difficulty axis.", "We assume that simply doing classification is easier than extracting the exact answer boundaries.", "On Different Settings of N One argument that needs to be specified in advance in our approach is $N$, which decides the number of meta-datasets, the number of learning stages, and also the granularity of our difficulty score.", "Assume the metric is between 0 and 1, which fits almost all cases; then the difficulty score $c_j$ ranges from 0 (when all teacher models fail) to $N - 1$ (when all teacher models succeed), so all examples can be distinguished into $N$ different levels.", "As $N$ becomes larger, the granularity becomes finer.", "To examine the impact of different settings, we perform an ablation study on the SQuAD 2.0 task over a wide range of choices, from 2 to 20 (see Fig. 3).", "Figure 3: F1 score on SQuAD 2.0 with respect to N.", "It is obvious that under all settings our approach outperforms the baseline by at least +0.5 F1 score (even including $N = 2$, where the difficulty evaluation results may be affected by the fluctuation of single-teacher review).",
via two steps: evaluating the difficulty first, then sampling the examples into batches accordingly.", "For different target tasks, the evaluation methods also vary greatly.", "(Guo et al., 2018) first examined the examples in their feature space, and define difficulty by the distribution density, which successfully distinguished noisy images.", "(Wang et al., 2019) incorporated category information into difficulty metric to address imbalanced data classification.", "In language tasks, (Platanios et al., 2019) and (Tay et al., 2019) propose to consider the length of context as extent of difficulty.", "Another line of works see curriculum construction as an optimization problem (Kumar et al., 2010; Graves et al., 2017; Fan et al., 2018), which usually involves sophisticated design and is quite different from our approach.", "In this work we proposed a novel Curriculum Learning approach which does not rely on human heuristics and is simple to implement.", "With the help of such a curriculum, language models can significantly and universally perform better on a wide range of downstream NLU tasks.", "In the future, we look forward to extend CL strategy to the pretraining stage, and guide deep models like transformer from a language beginner to a language expert.", "We thank all anonymous reviewers for their valuable comments.", "This work is supported by the National Natural Science Foundation of China, Grant No.U19A2057, No.61876223, the National Science Fund for Distinguished Young Scholars No.61525206, the Fundamental Research Funds for the Central Universities, Grant No.WK3480000008, and the grant of Tianjin New Generation Artificial Intelligence Major Program No.19ZXZNGX00110." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "result", "result", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "objective", "other", "other", "abstain", "other", "abstain", "abstain", "other", "other", "other", "other", "abstain", "objective", "abstain", "objective", "other", "other" ]
[ "We explore deception detection in interview dialogues.", "We analyze a set of linguistic features in both truthful and deceptive responses to interview questions.", "We also study the perception of deception, identifying characteristics of statements that are perceived as truthful or deceptive by interviewers.", "Our analysis show significant differences between truthful and deceptive question responses, as well as variations in deception patterns across gender and native language.", "This analysis motivated our selection of features for machine learning experiments aimed at classifying globally deceptive speech.", "Our best classification performance is 72.74 F1-Score (about 27% better than human performance), which is achieved using a combination of linguistic features and individual traits.", "Deception detection is a critical problem studied by psychologists, criminologists, and computer scientists.", "In recent years the NLP and speech communities have increased their interest in deception detection.", "Language cues are inexpensive and easy to collect, and research examining text-based and speech-based cues to deception has been quite promising.", "Prior work has examined deceptive language in several domains, including fake reviews, mock crime scenes, and opinions about topics such as abortion or the death penalty.", "In this work we explore the domain of interview dialogues, which are similar to many real-world deception conditions.", "Previous work has presented the results of classification experiments using linguistic features, attempting to identify which features contribute most to classification accuracy.", "However, studies often do not include an empirical analysis of features.", "We might know that a particular feature set (e.g. 
LIWC categories) is useful for deception classification, but we lack insight about the nature of the deceptive and truthful language that makes the feature set useful, and whether the differences in language use are statistically significant.", "In this work we conduct an empirical analysis of feature sets and report on the different characteristics of truthful and deceptive language.", "In addition, previous work has focused on the characteristics of deceptive language, and not on the characteristics of perceived deceptive language.", "We are also interested in human perception of deception; that is, what are the characteristics of language that listeners perceive as truthful or deceptive?", "We examine a unique dataset that includes information about both the deceiver and the interviewer, along with interviewer judgments of deception.", "Along with an analysis of deceptive and truthful speech, we analyze the believed and disbelieved speech, according to reported interviewer judgments.", "Finally, previous work has focused on general inferences about deception; here we include analysis of gender and native language, to study their effect on deceptive behavior, and also their effect on perception of deception.", "This work contributes to the critical problem of automatic deception detection, and increases our scientific understanding of deception, deception perception, and speaker differences in deceptive behavior.", "The paper is organized as follows: In Section 2 we review related work in language-based cues to deception.", "Section 3 describes the dataset used for this work, and Section 4 details the different feature sets we employ.", "In Section 5, we report on the results of our empirical study of indicators of deception and perceived deception, as well as gender and native language differences.", "Section 6 presents our machine learning classification results using the deception indicator feature sets.", "We conclude in Section 7 with a discussion and ideas for future work.", "Language-based cues to deception have been analyzed in many genres.", "Ott et al. 
(2011) compared approaches to automatically detecting deceptive opinion spam, using a crowdsourced dataset of fake hotel reviews.", "Several studies use a fake opinion paradigm for collecting data, instructing subjects to write or record deceptive and truthful opinions about controversial topics such as the death penalty or abortion, or about a person that they like/dislike (Newman et al., 2003; Mihalcea and Strapparava, 2009).", "Other research has focused on real-world data obtained from court testimonies and depositions (Fornaciari and Poesio, 2013; Bachenko et al., 2008; Perez-Rosas et al., 2015).", "Real-world deceptive situations are high-stakes, where there is much to be gained or lost if deception succeeds or fails; it is hypothesized that these conditions are more likely to elicit strong cues to deception.", "However, working with such data requires extensive research to annotate each utterance for veracity, so such datasets are often quite small and not always reliable.", "Linguistic features such as n-grams and language complexity have been analyzed as cues to deception (Perez-Rosas and Mihalcea, 2015; Yancheva and Rudzicz, 2013).", "Syntactic features such as part of speech tags have also been found to be useful for structured data (Ott et al., 2011; Feng et al., 2012).", "Statement Analysis (Adams, 1996) is a text-based deception detection approach that combines lexical and syntactic features.", "An especially useful resource for text-based deception detection is the Linguistic Inquiry and Word Count (LIWC) (Pennebaker and King, 1999), which groups words into psychologically motivated categories.", "In addition to lexical features, some studies have examined acoustic-prosodic cues to deception (Rockwell et al., 1997; Enos, 2009; Mendels et al., 2017).", "Benus et al. (2006) studied pause behavior in deceptive speech.", "This work is very promising, but it is more difficult to obtain large, cleanly recorded speech corpora with deception annotations than to obtain text corpora.", "An excellent meta-study of verbal cues to deception can be found in (DePaulo et al., 2003).", "For this work, we examined the Columbia X-Cultural Deception (CXD) Corpus (Levitan et al., 2015a), a collection of within-subject deceptive and non-deceptive speech from native speakers of Standard American English (SAE) and Mandarin Chinese (MC), all speaking in English.", "The corpus contains dialogues between 340 subjects.", "A variation of a fake resume paradigm was used to collect the data.", "Previously unacquainted pairs of subjects played a lying game with each other.", "Each subject filled out a 24-item biographical questionnaire and was instructed to create false answers for a random half of the questions.", "They also reported demographic information including gender and native language, and completed the NEO-FFI personality inventory (Costa and McCrae, 1989).", "The lying game was recorded in a sound booth.", "For the first half of the game, one subject assumed the role of the interviewer, while the other answered the biographical questions, lying for half and telling the truth for the other; questions chosen in each category were balanced across the corpus.", "For the second half of the game, the subjects' roles were reversed, and the interviewer became the interviewee.", "During the game, the interviewer was allowed to ask the 24 questions in any order s/he chose; the interviewer was also encouraged to ask follow-up questions to aid them in determining the truth of the interviewee's answers.", 
"Interviewers recorded their judgments for each of the 24 questions, providing information about human perception of deception.", "The entire corpus was orthographically transcribed using the Amazon Mechanical Turk (AMT) 1 crowd-sourcing platform, and the speech was segmented into inter-pausal units (IPUs), defined as pause-free segments of speech separated by a minimum pause length of 50 ms. The speech was also segmented into turn units, where a turn is defined as a maximal sequence of IPUs from a single speaker without any interlocutor speech that is not a backchan-nel .", "There are two forms of deception annotations in the corpus: local and global.", "Interviewees labeled their responses with local annotations by pressing a T or F key for each utterance as they spoke.", "These keypresses were automatically aligned with speaker IPUs and turns.", "Global la-1 https://www.mturk.com/mturk/ 1942 bels were provided by the biographical questionnaire, where each of the 24 questions was labeled as truthful or deceptive.", "Consider the following dialogue: Interviewer: What is your mother's job?", "Interviewee: My mother is a doctor (F).", "She has always worked very late hours and I felt neglected as a child (T).", "Is the interviewee response true or false?", "We differentiate between global and local deception.", "Globally, the response to the question is deceptive.", "However, it contains local instances of both truth and deception.", "In this work we focus on dialogue-based deception, using global deception labels.", "Previous work with the CXD corpus has focused on IPU-level and turn-level analysis and classification of local deception, mostly with acoustic-prosodic features (Levitan et al., 2015b; Mendels et al., 2017).", "Here we are interested in exploring global deception at the dialogue-level for the first time in this corpus.", "We define response-segments as sets of turns that are related to a single question (of the 24 interview questions).", "In order to annotate these segments, we first used a question detection and identification system (Maredia et al., 2017) that uses word embeddings to match semantically similar variations of questions to a target question list.", "This was necessary because interviewers asked the 24 questions using different wording from the original list of questions.", "On this corpus, (Maredia et al., 2017) obtained an F1-score of .95%.", "After tagging interviewer turns with this system, we labeled the set of interviewee turns between two interviewer questions q1 and q2 as corresponding to question q1.", "The intuition behind this was that those turns were responses to follow up questions related to q1, and while the question detection and identification system discussed above did not identify follow up questions, we found that most of the follow up questions after an interviewer question q1 would be related to q1 in our hand annotation.", "We evaluated this global segmentation on a hand-annotated test set of 17 interviews (about 10% of the corpus) consisting of 2,671 interviewee turns, 408 interviewer questions, and 977 follow up questions.", "Our global segmentation approach resulted in 77.8% accuracy on our hand-labeled test set (errors were mostly due to turns that were unrelated to any question).", "We performed our analysis and classification on two segmentations of the data using this tagging method: (1) first turn: we analyzed only the single interviewee turn directly following the original question, and (2) multiple turns we analyzed the entire 
segment of interviewee turns that were responding to the original interviewer question and subsequent follow-up questions.", "In our classification experiments, we explore whether a deceptive answer is better classified by the interviewee's initial response or by all of the follow-up conversation between interviewer and interviewee.", "LIWC Previous work has found that deceivers tend to use different word usage patterns when they are lying (Newman et al., 2003).", "We used LIWC (Pennebaker et al., 2001) to extract semantic features from each utterance.", "LIWC is a text analysis program that computes features consisting of normalized word counts for 93 semantic classes.", "LIWC dimensions have been used in many studies to predict outcomes including personality (Pennebaker and King, 1999), deception (Newman et al., 2003), and health (Pennebaker et al., 1997).", "We extracted a total of 93 features using LIWC 2015 (a full description of the features is found at https://s3-us-west-2.amazonaws.com/downloads.liwc.net/LIWC2015_OperatorManual.pdf), including standard linguistic dimensions (e.g. percentage of words that are pronouns, articles), markers of psychological processes (e.g. affect, social, cognitive), punctuation categories (e.g. periods, commas), and formality measures (e.g. fillers, swear words).", "Linguistic We extracted 23 linguistic features (a detailed explanation of these linguistic features and how they were computed is found at http://www.cs.columbia.edu/speech/cxd/features.html), which we adopted from previous deception studies such as (Enos, 2009; Bachenko et al., 2008).", "Included in this list are binary and numeric features capturing hedge words, filled pauses, laughter, complexity, contractions, and denials.", "We include Dictionary of Affect Language (DAL) (Whissell et al., 1986) scores that measure the emotional meaning of texts, and a specificity score which measures level of detail (Li and Nenkova, 2015).", "The full list of features is: 'hasAbsolutelyReally', 'hasContraction', 'hasI', 'hasWe', 'hasYes', 'hasNAposT' (turns 
that contain words with the contraction n't), 'hasNo', 'hasNot', 'isJustYes', 'isJustNo', 'noYesOrNo', 'specificDenial', 'thirdPersonPronouns', 'hasFalseStart', 'hasFilledPause', 'numFilledPauses', 'hasCuePhrase', 'numCuePhrases', 'hasHedgePhrase', 'numHedgePhrases', 'hasLaugh', 'complexity', 'numLaugh', 'DAL-wc', 'DAL-pleasant', 'DAL-activate', 'DAL-imagery', 'specScores' (specificity score).", "Response Length Previous work has found that response length, in seconds, is shorter in deceptive speech, and that the difference in number of words in a segment of speech is insignificant between deceptive and truthful speech (DePaulo et al., 2003).", "For our question-level analysis, we used four different measures for response length: the total number of seconds of an interviewee response-segment, the total number of words in an interviewee response-segment, the average response time of a turn in an interviewee response-segment, and the average number of words per turn in an interviewee response-segment.", "Individual Traits We analyzed gender and native language of the speakers to determine if these traits were related to ability to deceive and to detect deception.", "We also analyzed linguistic cues to deception across gender and native language, and used gender and native language information in our classification experiments.", "All speakers were either male or female, and their native language was either Standard American English or Mandarin Chinese.", "In addition, we used the NEO-FFI (5 factor) personality inventory scores as features in classification experiments, but not for the statistical analysis in this paper.", "Follow-up Questions Follow-up questions are questions that an interviewer asks after they ask a question from the original prescribed set of questions.", "We hypothesized that if an interviewer asked more follow-up questions, they were more likely to identify deceptive responses, because asking follow-up questions indicated interviewer doubt of the interviewee's truthfulness.", "For each interviewee response-segment, we counted the number of follow-up questions interviewees were asked by the interviewer.", "In order to analyze the differences between deceptive and truthful speech, we extracted the above features from each question response-segment,", "and calculated a series of paired t-tests between the features of truthful speech and deceptive speech.", "All tests for significance correct for family-wise Type I error by controlling the false discovery rate (FDR) at alpha = 0.05.", "The k-th smallest p-value is considered significant if it is less than (k / n) * alpha.", "Table 1 shows the features that were statistically significant indicators of truth and deception in interviewee response-segments consisting of multiple turns.", "Below, we highlight some interesting findings.", "In contrast to DePaulo et al. (2003), we found that the total duration of an interviewee response-segment was longer for deceptive speech than for truthful speech.", "Additionally, while DePaulo et al. (2003) showed that the number of words in a segment of speech was not significantly different between deceptive and truthful speech, we found that deceptive response-segments had more words than truthful response-segments.", "Furthermore, we found that longer average response time per turn and more words per sentence were significant indicators of deception.", "These results show that when interviewees are trying to deceive, not only is their aggregate response 
longer in duration and number of words, but their individual responses to each follow-up question are also longer.", "Consistent with DePaulo et al. (2003), we found that a higher number of filled pauses in an interviewee response-segment was a significant indicator of deception.", "Deceivers are hypothesized to experience an increase in cognitive load (Vrij et al., 1996), and this can result in difficulties in speech planning, which can be signaled by filled pauses.", "Although Benus et al. (2006) found that, in general, the use of pauses correlates more with truthful than with deceptive speech, we found that filled pauses such as um were correlated with deceptive speech.", "The LIWC cogproc (cognitive processes) dimension, which includes words such as cause, know, ought, was significantly more frequent in truthful speech, also supporting the theory that cognitive load is increased while practicing deception.", "We found that increased DAL-imagery scores, which compute words often used in speech to create vivid descriptions, were indicators of deception.", "We also found that the LIWC language summary variables of authenticity and adjectives [Table 1 appears here: Statistically significant indicators of truth and deception in interviewee response-segments consisting of multiple turns related to a single question]", "were indicators of deception: in an effort to sound more truthful and authentic, interviewees may have provided a level of detail that is uncharacteristic of truthful speech.", "Similarly, the specificity metric was indicative of deception: deceptive responses contained more detailed language.", "Words in the LIWC clout category (a category describing words that indicate power or influence) were more prevalent in deceptive responses, suggesting that subjects sounded more confident while lying.", "Interrogatives were an indicator of deception.", "In the context of the interviewer-interviewee paradigm, these are interviewee questions to the interviewer.", "Perhaps this was a technique used to stall so that they had more time to develop an answer (e.g. 
Can you repeat the question?), or to deflect the interviewer's attention from their deception and put the interviewer on the spot.", "We observed that hedge words and phrases, which speakers use to distance themselves from a proposition, were more frequent in deceptive speech.", "This is consistent with Statement Analysis (Adams, 1996), which posits that hedge words are used in deceptive statements to intentionally create vagueness that obscures facts.", "Consistent with this finding, certainty in language (words such as always or never) was a strong indicator of truthfulness.", "It is also interesting to note the features that were not significant indicators of truth or deception.", "For example, there was no significant difference in laughter frequency or apostrophes (used for contractions in this corpus) between truthful and deceptive responses.", "When we compared indicators of truth vs. deception across multiple turns to indicators of truth vs. deception in just the first turns of interviewee response-segments, we found that, generally, indicators in first turns are a subset of indicators across multiple turns.", "In some cases there were interesting differences.", "For example, although tone (emotional tone: higher numbers indicate more positive, and lower indicate negative) was not a significant indicator of deception for the entire interviewee response-segment, negative tone was a moderate indicator of deception in first turns.", "This suggests that the tone of interviewees, when they have just started their lie, is different from when they are given the opportunity to expand on that lie.", "The findings from our analysis of first turns suggest that there might be enough information in the first response alone to distinguish between deceptive and truthful speech; we test this in our classification experiments in Section 6.", "In addition to analyzing the linguistic differences between truthful and deceptive speech, we were interested in studying the characteristics of speech that is believed or disbelieved.", "Since the CXD corpus includes interviewer judgments of deception for each question asked, we have the unique opportunity to study human perception of deception on a large scale.", "Table 2 shows the features that were statistically significant indicators of truth and deception in interviewee responses consisting of multiple turns that were perceived as true or false by interviewers.", "Here we highlight some interesting findings.", "There were many features that were prevalent in speech that interviewers perceived as deceptive, which were in fact cues to deception.", "For example, speech containing more words in a response-segment and more words per sentence was generally perceived as deceptive by interviewers, and indeed, this perception was correct.", "Disbelieved answers had a greater frequency of filled pauses and hedge words, and greater specificity, all of which were increased in deceptive speech.", "There were also several features that were indicators of deception, but were not found in higher rates in statements that were perceived as false.", "For example, the LIWC dimensions clout and certain were not significantly different in believed vs. 
disbelieved interviewee responses, but clout was increased in deceptive speech and certain language was increased in truthful speech.", "There were also features that were significantly different between believed and disbelieved statements, but were not indicators of deception.", "For example, statements that were perceived as false by interviewers had a greater proportion of specificDenials (e.g. I did not) than those that were perceived as true; this was not a valid cue to deception.", "Number of turns was increased in dialogue segments where the interviewer did not ultimately believe the interviewee response.", "That is, more follow-up questions were asked when an interviewer did not believe their interlocutor's response, which is an intuitive behavior.", "When we compared indicators of speech that was perceived as deceptive across multiple turns to indicators of speech that was perceived as deceptive in just the first turns, we found that, generally, indicators in first turns are a subset of indicators across multiple turns.", "On average, human accuracy at judging truth and deception in the CXD corpus was 56.75%, and accuracy at judging deceptive statements only was 47.93%.", "The average F1-score for humans was 46.", "Thus, although some cues were correctly perceived by interviewers, humans were generally poor at deception perception.", "Nonetheless, characterizing the nature of speech that is believed or not believed is useful for applications where we would ultimately like to synthesize speech that is trustworthy.", "Having discovered many differences between deceptive and truthful language across all speakers, we were interested in analyzing differences in deceptive language across groups of speakers.", "Using gender and native language (English or Mandarin Chinese) as group traits, we conducted two types of analysis.", "First, we directly compared deception performance measures (ability to deceive as interviewee, and ability to detect deception as interviewer) between speakers with different traits, to assess the effect of individual characteristics on deception abilities.", "In addition, we compared the features of deceptive and truthful language [Table 3 appears here: Gender-specific and language-specific indicators of deception and truth] in sub-", "sets of the corpus, considering only people with a particular trait, in order to determine group-specific patterns of deceptive language.", "As before, tests for significance correct for family-wise Type I error by controlling the false discovery rate (FDR) at alpha = 0.05.", "The k-th smallest p-value is considered significant if it is less than (k / n) * alpha.", "There were no significant differences in deception ability between male and female speakers.", "However, there were many differences in language between male and female speakers.", "Further, some features were only discriminative between deception and truth for a specific gender.", "Table 3 shows linguistic features that were significantly different between truthful and deceptive speech, but only for one gender.", "In some cases the feature was found in different proportions in males and females, and in other cases there was no 
significant difference.", "For example, family words were indicative of deception only in female speakers, and these words were also used more frequently by female speakers than male speakers.", "The LIWC category of compare was also indicative of deception for females only, and this feature was generally found more frequently in female speech.", "Article usage was only significantly different between truthful and deceptive speech in females (more articles were found in deceptive speech), but articles were used more frequently in male speech.", "On the other hand, the LIWC category of posemo (positive emotion) was increased in truthful speech for male speakers only, and there was no significant difference of posemo frequency across gender.", "Interviewees were more successful at deceiving native Chinese speakers than at deceiving native English speakers ( t (170) = 2 .", "13 , p = 0 .", "033 ).", "This was true regardless of interviewee gender and native language, and slightly stronger for female interviewers ( t (170) = 2 .", "22 , p = 0 .", "027 ).", "When considering only female interviewers, interviewees were more successful at deceiving nonnative speakers than native speakers, but this difference was not significant when considering only male interviewers.", "As with gender, there were several features that were discriminative between deception and truth for only native speakers of English, or only native speakers of Mandarin.", "Table 3 shows LIWC categories and their relation to deception, broken down by native language.", "For example, power words were found more frequently in deception statements, when considering native English speakers only.", "In general, power words were used more by native Mandarin speakers than by native English speakers.", "LIWC categories of compare , relative , and swear were more prevalent in deceptive speech, only for English speakers.", "On the other hand, feel and perception dimensions were only indicators of deception for native Mandarin speakers, although there was no significant difference in the use of these word categories across native language.", "Informal and netspeak word dimensions tended to be more frequent in truthful speech for native Chinese speakers only (approaching significance), and these word categories were generally more frequent in native Mandarin speech.", "Finally, filler words tended to be more frequent in deceptive speech (approaching significance) only for native Mandarin speakers, and these were used more frequently by native Mandarin speakers than native English speakers.", "Overall, our findings suggest that deceptive behavior in general, and deceptive language in particular, are affected by a person's individual characteristics, including gender and native language.", "When building a deception classification system, it is important to account for this variation across speaker groups.", "Motivated by our analysis showing many significant differences in the language of truthful and deceptive responses to interview questions, we trained machine learning classifiers to automatically distinguish between truthful and deceptive text, using the feature sets described in section 4.", "We compared classification performance for the two segmentation methods described in section 3.2: first turn and multiple turns.", "This allowed us to explore the role of context in automatic deception detection.", "When classifying interviewee response-segments, should the immediate response only be used for classification, or is inclusion of 
surrounding turns helpful?", "This has implications not only for deception classification, but for practitioners as well.", "Should human interviewers make use of responses to follow-up questions when determining response veracity, or should the initial response receive the most consideration?", "We compared the performance of 3 classification algorithms: Random Forest, Logistic Regression, and SVM (sklearn implementation).", "In total, there were 7,792 question segments for both single turn and multiple turns segmentations.", "We divided this into 66% train and 33% test, and used the same fixed test set in experiments for both segmentations in order to directly compare results.", "The random baseline performance is 50, since the dataset is balanced for truthful and deceptive statements.", "Another baseline is human performance, which is 46.0 F1 in this corpus.", "The Random Forest classifier was consistently the best performing, and we only report those results due to space constraints.", "Table 4 displays the classification performance for each feature set individually, as well as feature combinations, for both single turn and multiple turn segmentations.", "It also shows the human baseline performance, obtained from the interviewers' judgments of deception in the corpus, which were made after asking each question along with related follow-up questions (i.e. multiple turn segmentation).", "The best performance (72.74 F1-score) was obtained using LIWC features extracted from multiple turns.", "This is a 22.74% absolute increase over the random baseline of 50, and a 26.74% absolute increase over the human baseline of 46.", "The performance of classifiers trained on multiple turns was consistently better than those trained on single turns, for all feature sets.", "For multiple turns, LIWC features were better than the lexical feature set, and combining lexical with LIWC features did not improve over the performance of LIWC features alone.", "Adding individual traits information was also not beneficial.", "However, when considering the first turn only, the best results (70.87 F1-score) were obtained using a combination of LIWC+lexical+individual features.", "Using the first turns segmentation, lexical features were slightly better than LIWC features, and interestingly, adding individual traits helped both feature sets.", "A combination of LIWC and lexical features was better than each on its own.", "These results suggest that contextual information, in the form of follow-up questions, is beneficial for deception classification.", "It seems that individual traits, including gender, native language, and personality scores, are helpful in deception classification under the condition where contextual information is not available.", "When the contextual information is available, the additional lexical content is more useful than individual traits.", "In this paper we presented a study of deceptive language in interview dialogues.", "Our analysis of linguistic characteristics of deceptive and truthful speech provides insight into the nature of deceptive language.", "We also analyzed the linguistic characteristics of speech that is perceived as deceptive and truthful, which is important for understanding the nature of trustworthy speech.", "We explored variation across gender and native language in linguistic cues to deception, highlighting cues that are specific to particular groups of speakers.", "We built classifiers that use combinations of linguistic features and individual traits 
to automatically identify deceptive speech.", "We compared the performance of using cues from the single first turn of an interviewee response-segment with using cues from the full context of multiple interviewee turns, achieving performance as high as 72.74% F1-score (about 27% better than human detection performance).", "This work contributes to the critical problem of automatic deception detection, and increases our scientific understanding of deception, deception perception, and individual differences in deceptive behavior.", "In future work, we plan to conduct a similar analysis in additional deception corpora in other domains, in order to identify consistent domain-independent deception indicators.", "In addition, we plan to conduct cross-corpus machine learning experiments, to evaluate the robustness of these and other feature sets in deception detection.", "We would also like to explore additional feature combinations, such as adding acoustic-prosodic features.", "Finally, we plan to conduct an empirical analysis of deception behavior across personality types.", "Thank you to Bingyan Hu for her assistance with feature extraction.", "We thank the anonymous reviewers for their helpful comments." ]
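The significance testing in the deception study above (each of the n p-values from the paired t-tests is compared against (k / n) * alpha, with alpha = 0.05) is the Benjamini-Hochberg step-up procedure for controlling the false discovery rate. A minimal Python sketch follows; the function name and the example p-values are illustrative assumptions, not taken from the paper.

import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of p-values judged significant under FDR control."""
    p = np.asarray(p_values, dtype=float)
    n = len(p)
    order = np.argsort(p)                           # indices of p-values, smallest first
    thresholds = (np.arange(1, n + 1) / n) * alpha  # the k-th threshold is (k / n) * alpha
    below = p[order] <= thresholds
    significant = np.zeros(n, dtype=bool)
    passing = np.nonzero(below)[0]
    if passing.size:                                # step-up rule: accept everything up to
        significant[order[: passing.max() + 1]] = True  # the largest k passing its threshold
    return significant

# Example: five p-values from paired t-tests over candidate deception features.
print(benjamini_hochberg([0.001, 0.018, 0.028, 0.038, 0.27]))
# -> [ True  True  True  True False]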
[ "objective", "method", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "method", "objective", "method", "method", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "method", "objective", "objective", "method", "method", "objective", "method", "other", "other" ]
[ "Hongfei Xu 1 , 2 Josef van Genabith 1 , 2 Deyi Xiong 3 Qiuhui Liu 4 1 Saarland University / Saarland, Germany 2 German Research Center for Artificial Intelligence / Saarland, Germany 3 Tianjin University / Tianjin, China 4 China Mobile Online Services / Henan, China [email protected], Josef.Van [email protected], [email protected], [email protected]", "Abstract The choice of hyper-parameters affects the performance of neural models.", "While much previous research (Sutskever et al., 2013; Duchi et al., 2011; Kingma and Ba, 2015) focuses on accelerating convergence and reducing the effects of the learning rate, comparatively few papers concentrate on the effect of batch size.", "In this paper, we analyze how increasing batch size affects gradient direction, and propose to evaluate the stability of gradients with their angle change.", "Based on our observations, the angle change of gradient direction first tends to stabilize (i.e. gradually decrease) while accumulating mini-batches, and then starts to fluc-tuate.", "We propose to automatically and dynamically determine batch sizes by accumulating gradients of mini-batches and performing an optimization step at just the time when the direction of gradients starts to fluctuate.", "To improve the efficiency of our approach for large models, we propose a sampling approach to select gradients of parameters sensitive to the batch size.", "Our approach dynamically determines proper and efficient batch sizes during training.", "In our experiments on the WMT 14 English to German and English to French tasks, our approach improves the Transformer with a fixed 25 k batch size by +0 .", "73 and +0 .", "82 BLEU respectively.", "The performance of neural models is likely to be affected by the choice of hyper-parameters.", "While much previous research (Sutskever et al., 2013; Duchi et al., 2011; Kingma and Ba, 2015) focuses on accelerating convergence and reducing the effects of the learning rate, comparatively few papers concentrate on the effect of batch size.", "Specifically, it has been shown that the performance of the Transformer model (Vaswani et al., 2017) for Neural Machine Translation (NMT) (Bah-danau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) relies heavily on the batch size (Popel and Bojar, 2018; Ott et al., 2018; Abdou et al., 2017; Zhang et al., 2019a).", "The influence of batch size on performance raises the question, how to dynamically find proper and efficient batch sizes during training?", "In this paper, we investigate the relationship between the batch size and gradients, and propose a dynamic batch size approach by monitoring gradient direction changes.", "Our contributions are as follows: We observe the effects on gradients with increasing batch size, and find that a large batch size stabilizes the direction of gradients; We propose to automatically determine dynamic batch sizes in training by monitoring the gradient direction change while accumulating gradients of small batches; To measure gradient direction change efficiently with large models, we propose an approach to dynamically select those gradients of parameters/layers which are sensitive to the batch size; In machine translation experiments, our approach improves the training efficiency and the performance of the Transformer model.", "Gradients indicate the direction and size of parameter updates to minimize the loss function in training.", "To reveal the effects of the batch size in optimization, we evaluate its influence on the direction change of 
gradients.", "To investigate the influence of batch size on gradient direction, we gradually accumulate gradients of small mini-batches as the gradients of a large batch that consists of those mini-batches, and observe how the direction of gradients varies.", "Let d ji : ( x ji , y ji ) stands for the large batch concatenated from the i th mini-batch to the j th mini-batch, where x ji and y ji are inputs and targets.", "Then the gradients g ji of model parameters on d ji are: g ji = L ( , x j i , y j i ) (1) In gradient accumulation, the gradients g k 0 are the sum of g k 1 0 and g kk : g k 0 = g k 1 0 + g kk (2) To measure the change of gradient direction during accumulation, we regard the two gradients g k 1 0 and g k 0 as 2 vectors, and compute the angle a ( g k 1 0 , g k 0 ) between them: a ( g k 1 0 , g k 0 ) = arccos( g k 1 0 g k 0 | g k 1 0 || g k 0 | ) (3) where indicates inner-product of vectors.", "We use the angle of 2 vectors rather than cosine similarity because: The angle indicates the change between gradient directions; When the angle is small, a significant change in the angle only results in a subtle difference in cosine similarity.", "1 We observe the gradient direction varying during accumulating gradients of a Transformer model training on the WMT 14 English-German task following the setting of Vaswani et al. (2017) with a batch size of around 50 k target tokens.", "To achieve the gradient of the large batch size, we gradually 1 cos (5 ) 0 .", "9961 , cos (10 ) 0 .", "9848 .", "accumulate gradients of mini-batches with around 4 k target tokens.", "Table 1 shows a typical example:", "(i) gradient change is high at the beginning,", "(ii) gradient change reduces with increasing batch size and", "(iii) eventually it will start fluctuating (here at k=10).", "2 Intuitively, the less the direction of accumulated gradients is moved by the gradients of a new mini-batch, the more certainty there is about the gradient direction.", "Thus we propose that the magnitude of the angle fluctuation relates to the certainty of the model parameter optimization direction, and may therefore serve as a measure of optimization difficulty.", "Table 1 shows that the optimization direction is less stable with a small batch than with a large batch.", "But after the direction of gradients has stabilized, accumulating more mini-batches seems useless as the gradient direction starts to fluctuate.", "Thus, we suggest to compute dynamic and efficient batch sizes by accumulating gradients of mini-batches, while evaluating the gradient direction change with each new mini-batch, and stop accumulating more mini-batches and perform an optimization step when the gradient direction fluc-tuates.", "In practice, we only monitor a ( g k 1 0 , g k 0 ) for efficiency.", "We record the minimum angle change a min while accumulating gradients, and suppose the gradient direction starts to fluctuate, stop accumulating more mini-batches when a ( g k 1 0 , g k 0 ) > a min .", "In this way we can achieve a dynamic batch size (the size of d k 0 ), where is a pre-specified hyper-parameter.", "In practice, a model may have a large amount of parameters, and the cost of computing the cosine similarity between two corresponding gradient vectors are relatively high.", "To tackle this issue, we propose to divide model parameters into groups, and monitor gradient direction change only on a selected group in each optimization step.", "For a multi-layer model, i.e. 
the Transformer, a group may consist of parameters of 1 layer or several layers.", "To select the parameter group which is sensitive to the batch size, we record the angles of gradient direction change $a(g_0^0, g_0^1), \ldots, a(g_0^{k-1}, g_0^k)$ in the gradient accumulation, and define $a_{max}$ and $a_{min}$ as the maximum and minimum direction change: $a_{max} = \max(a(g_0^0, g_0^1), \ldots, a(g_0^{k-1}, g_0^k))$ (4), $a_{min} = \min(a(g_0^0, g_0^1), \ldots, a(g_0^{k-1}, g_0^k))$ (5). We then use $\Delta a$ to measure the uncertainty reduction in the optimization direction: $\Delta a = a_{max} - a_{min}$ (6). Intuitively, the optimization direction of the parameter group which results in a larger $\Delta a$ profits more from the batch size, and the group with a larger $\Delta a$ should be more frequently sampled.", "We average the recent history of $\Delta a_k$ of the $k$-th parameter group into $\bar{a}_k$.", "Inspired by Gumbel (1954), Maddison et al. (2014) and Zhang et al. (2019b), we first add Gumbel noise to each $\bar{a}_k$ to prevent the selection from falling into a fixed group: $\hat{a}_k = \bar{a}_k - \log(-\log u)$ (7), where $u \sim U(0, 1)$ is a uniform distribution.", "$\bar{a}_k$ is positive, but after adding Gumbel noise, there is a small possibility that it turns negative.", "In our case, negative values only occur very few times.", "We do not normalize the scores with a softmax because it would heavily sharpen the distribution when the gap between values is large, and it makes it almost impossible to select and evaluate the other groups in addition to the one with the highest $\bar{a}_k$.", "We implemented our approaches based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model.", "We applied our approach to the training of the Transformer, and to compare with Vaswani et al. (2017), we conducted our experiments on the WMT 14 English to German and English to French news translation tasks on 2 GTX 1080Ti GPUs.", "Hyper-parameters were tuned on the development set (newstest 2012 and 2013).", "We followed all settings of Vaswani et al. (2017) except for the batch size.", "We used a beam size of 4 for decoding, and evaluated case-sensitive tokenized BLEU (computed with https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl) with a significance test (Koehn, 2004).", "We used a $\lambda$ of 1.1 to determine the fluctuation of gradient direction by default.", "We regarded each encoder/decoder layer as a parameter group, and used a value of 3 for the group-selection hyper-parameter (Equation 8).", "We compared the results of our dynamic batch size approach to two fixed batch size baselines, the 25k", "batch size is the empirical value of Vaswani et al. (2017), while Zhang et al. (2019a) investigate a 50k batch size.", "For example, the result of softmax over [22, 31, 60] is [3.13e-17, 2.54e-13, 1.00]; the last element takes almost all probability mass.", "But we later find that if $\Delta a$ is normalized ($\Delta a = (a_{max} - a_{min}) / a_{max}$) in Equation 6, the softmax works comparably well, which avoids using the hyper-parameter in Equation 8.", "[Figure 1: Distribution of Dynamic Batch Sizes.]", "Results are shown in Table 2 with the statistics of batch sizes of our approach shown in Table 3 and the detailed distribution of batch sizes for the En-De task shown in Figure", "1. 
Tables 2 and 3 show that our approach outperforms both the fixed 25k and 50k batch size settings with an average batch size around 26k, and our approach is slightly faster than the 25k setting despite the additional cost for monitoring gradient direction change.", "Figure 1 shows the interesting fact that the most frequently used automated batch sizes were close to the fixed value (25k) of Vaswani et al. (2017).", "In order to observe how the minimum gradient direction change varies during training, we averaged the minimum angle for every 2.5k training steps.", "It is hard to accumulate exactly 25k target tokens in a batch, and in fact, the fixed 25k setting results in an average batch size of 26729.79.", "Figure 2 shows that the minimum direction change of gradients was small at the beginning, and gradually increased with training.", "Given that a small angle change indicates that there is more certainty in the gradient direction, this observation is consistent with the fact that finding the optimization direction is harder and harder with training.", "We studied the effects of different $\lambda$ values on the En-De task, and results are shown in Table 4.", "Table 4 shows that with increasing $\lambda$, the average batch size and the time cost increase along with the performance.", "A wide range of $\lambda$ values works relatively well, indicating that its selection is robust, and 1.1 seems to be a good trade-off between the cost and the performance in our experiments.", "It is also worth noting that $\lambda = 1$ outperforms the 25k baseline while being 1.42 times faster (Table 2).", "Popel and Bojar (2018) demonstrate that the batch size affects the performance of the Transformer, and a large batch size tends to benefit performance, but they use fixed batch sizes during training.", "Abdou et al. (2017) propose to use a linearly increasing batch size from 65 to 100, which slightly outperforms their baseline.", "Smith et al. (2018) show that the same learning curve on both training and test sets can be obtained by increasing the batch size during training instead of decaying the learning rate.", "For fast convergence, Balles et al. (2017) propose to approximately estimate the mean value of the batch size for the next batch by maximizing the expected gain with a sample gradient variance ($\|g\|^2$) computed on the current batch, while our approach compares the gradient direction change ($a(g_0^{k-1}, g_0^k)$) during the accumulation of mini-batches in the assembling of a large batch.", "We observed that the minimum batch size does not change significantly with increasing $\lambda$, so we omit it for space.", "For $\lambda = 1.2$ on the En-Fr task, the corresponding values are: 44294.16, 185972, 40.35 and 54h12m.", "We suggest our approach is complementary to Sutskever et al. (2013); Duchi et al. 
(2011); Kingma and Ba (2015), as their approaches decide the magnitude of the move in the optimization direction, while our approach provides a reliable gradient direction.", "In this paper, we analyze the effects of accumulated batches on the gradient direction, and propose to achieve efficient automated batch sizes by monitoring change in gradient accumulation and performing an optimization step when the accumulated gradient direction is almost stable.", "To improve the efficiency of our approach with large models, we propose a sampling approach to select gradients of parameters sensitive to the batch size.", "Our approach improves the Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU on the WMT 14 English to German and English to French tasks respectively while preserving efficiency.", "We thank anonymous reviewers for their insightful comments.", "Hongfei Xu acknowledges the support of China Scholarship Council ([2018]3101, 201807040056).", "Deyi Xiong is supported by the National Natural Science Foundation of China (Grant No. 61861130364), the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\R1\180122).", "Hongfei Xu and Josef van Genabith are supported by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IW17001 (Deeplee)." ]
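The stopping rule of the batch-size paper above (accumulate mini-batch gradients, track the angle $a(g_0^{k-1}, g_0^k)$ of the accumulated gradient direction, and optimize once the angle exceeds $\lambda \, a_{min}$) can be sketched as a PyTorch-style training loop. This is a simplified sketch under stated assumptions: model, optimizer, loader and loss_fn are generic placeholders; gradients are flattened over all parameters, whereas the paper monitors only a sampled parameter group for efficiency; and the mini-batch that triggers the fluctuation is included in the update.

import math
import torch

def angle_deg(u: torch.Tensor, v: torch.Tensor) -> float:
    # angle between two gradient vectors, in degrees (Equation 3)
    cos = torch.dot(u, v) / (u.norm() * v.norm() + 1e-12)
    return math.degrees(math.acos(cos.clamp(-1.0, 1.0).item()))

def flat_grad(model) -> torch.Tensor:
    # concatenate all parameter gradients into one vector
    return torch.cat([p.grad.detach().reshape(-1)
                      for p in model.parameters() if p.grad is not None])

def train_with_dynamic_batches(model, optimizer, loader, loss_fn, lam=1.1):
    optimizer.zero_grad()
    prev, a_min = None, float("inf")
    for x, y in loader:
        loss_fn(model(x), y).backward()   # gradients accumulate in .grad
        acc = flat_grad(model)            # g_0^k after adding this mini-batch
        if prev is not None:
            a = angle_deg(prev, acc)      # a(g_0^{k-1}, g_0^k)
            if a > lam * a_min:           # direction starts to fluctuate:
                optimizer.step()          # perform an optimization step and
                optimizer.zero_grad()     # start accumulating a new batch
                prev, a_min = None, float("inf")
                continue
            a_min = min(a_min, a)
        prev = acc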
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "other", "abstain", "result", "objective", "abstain", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "method", "objective", "objective", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "The increasing popularity of voice-based personal assistants provides new opportunities for conversational recommendation.", "One particularly interesting area is movie recommendation, which can benefit from an open-ended interaction with the user, through a natural conversation.", "We explore one promising direction for conversational recommendation: mapping a conversational user, for whom there is limited or no data available, to most similar external reviewers, whose preferences are known, by representing the conversation as a user's interest vector, and adapting collaborative filtering techniques to estimate the current user's preferences for new movies.", "We call our proposed method ConvExtr (Conver-sational Collaborative Filtering using External Data), which 1) infers a user's sentiment towards an entity from the conversation context, and 2) transforms the ratings of \"similar\" external reviewers to predict the current user's preferences.", "We implement these steps by adapting contextual sentiment prediction techniques, and domain adaptation, respectively.", "To evaluate our method, we develop and make available a finely annotated dataset of movie recommendation conversations, which we call MovieSent .", "Our results demonstrate that ConvExtr can improve the accuracy of predicting users' ratings for new movies by exploiting conversation content and external data.", "With the increasing popularity of voice-assistants, there has been a lot of research on making well-established user experiences, like recommendations, conversational (Allan et al., 2012; Culpepper et al., 2018; Radlinski et al., 2019).", "One such area is movie recommendation.", "Movie recommendation in general has been an actively researched area in conversational recommendation systems (Bennett et al., 2007; Khatri et al., 2018) and has been explored with a variety of approaches, most popular of which include collaborative filtering (CF) (Katarya and Verma, 2017; He et al., 2019), content-based filtering (Elahi et al., 2017), incorporating user reviews (Zhao et al., 2017; Dubey et al., 2018).", "Several attempts at both conversational recommendation (Christakopoulou et al., 2016; Sun and Zhang, 2018; Torbati et al., 2021), and specifically conversational movie recommendations (Dalton et al., 2018) have been made.", "However, establishing the new user's preferences through a conversation, in order to make an effective recommendation, remains an open question, and there exists little conversational data for such a task.", "In our initial exploration, we examine if current well-researched approaches and algorithms can be adapted and used to help solve this problem and to establish a baseline of such approaches for future improvement and reference.", "We explore a new approach to conversational recommendation by incorporating preferences of other, external users with established preferences, via shared discussed entities, and the user's sentiment towards them, which also addresses the resulting \"cold start\" problem.", "In this setting, users do not ask for recommendations directly, but rather have a more natural conversation with a Wizard, and receive recommendations based on this discussion.", "Previous approaches such as the \"hierarchy of recommendation goals\" (Kang et al., 2017) and \"narrative-driven recommendation\" (Bogers and Koolen, 2017; Eberhard et al., 2019) are not applicable under these conditions.", "Instead, we propose a novel knowledge-aware conversational recommendation approach, which combines 
conversational context with external knowledge, such as movie reviews, to predict users' ratings of unseen movies.", "To this end, we extended and refined an existing conversational dataset (Radlinski et al., 2019) to make it amenable to perform experiments in conversational recommendation.", "A short snippet of one of the available conversations can be seen in the middle left box of Figure", "1. The original dataset was created via an MTurk experiment, where two workers (one playing the Wizard and another playing the User) were asked to discuss several movies.", "The Wizard is \"coached\" to ask the most informative questions, prompting the User to express their opinions towards a mentioned movie.", "We extended this dataset to include entity annotations linked to RottenTomatoes, and fine-grained user sentiment labels (details in Section 3.1).", "We are making this dataset (MovieSent) publicly available.", "Next, Section 2 states the problem formally and describes our approach; Section 3 provides details of the models evaluated, and construction of the MovieSent dataset; Section 4 outlines results and states the conclusions of our work; Section 5 discusses future work.", "Given a prefix of k turns of conversation and mentions of m movies, we aim to predict the rating for the next movie m + 1 to be mentioned in the conversation.", "In this paper's experimental setting, the value of m is set to 2, which approximates the average number of movies mentioned in a conversation with a voice assistant, but could be extended.", "In this setting, we would estimate the user's preferences based on the first two movies mentioned in the conversation, and predict their rating for a 3rd (yet unseen) movie.", "This tradeoff between the length of preference elicitation and the accuracy of recommendation could be explored in future work.", "Our approach is illustrated in Figure", "1. 
Given a conversation about movies, we estimate user sentiment towards the mentioned movies and use it as input to a CF model to predict the rating for an unseen movie.", "The CF model uses a large set of external critics' ratings and reviews, which should include critics similar to the current user.", "The final model uses 3 main inputs: (1) CF predictions for the unseen movie; (2) similarity between the conversation user and critics; (3) similarity between the conversation and the movies' metadata.", "We first represent the conversation text with a BERT-based sentence embedding model (Xiao, 2018), which gives us one vector per conversation.", "Then we infer the fine-grained user sentiment for the movies discussed in the conversation using a Random Forest model, trained on the labeled dataset MovieSent (described in Section 3.1).", "Since this is not the focus of our work, we evaluated the prediction performance on a development set against manually annotated sentiment labels, resulting in RMSE of 0.88 (mean over 10 tries, with std 0.06), which was sufficient for the current work.", "The next step expands on a CF model (described in Section 2.2), constructed from an external reviews corpus, and predicts the score for the unseen movie.", "To make this prediction, we identify reviewers similar to the current conversation user, via the similarity of their reviews to the current conversation text.", "We then calculate BERT-based sentence embeddings for all reviews of those critics and represent each critic as a centroid of their review vectors.", "Finally, we use the similarity between the conversation and critics' representations to transform the critics' scores to predict the conversational users' ratings.", "Collaborative filtering (CF) has been shown to be an effective approach for recommendation.", "We experimented with item-based CF algorithms, including variants of K-Nearest Neighbors (KNN)-based algorithms with Mean Squared Difference, Cosine, and Pearson similarity metrics, as well as Singular Value Decomposition (SVD) and SVD++ algorithms, available as part of the surprise Python package (https://surprise.readthedocs.io/en/stable/).", "We report the results using the SVD++ model for Collaborative Filtering (Vozalis and Margaritis, 2007), which exhibited the best performance in development experiments (other models are omitted due to space limitations).", "To use CF in our setting, after inferring the user's sentiment towards the mentioned movies, the sentiment scores are converted to ratings and provided to the CF model to estimate the user's sentiment towards a new movie.", "Domain Adaptation: From Critic Ratings to User Preferences: Our research indicates that Critics, who are paid professionals, significantly differ from Conversational Users.", "Therefore we need other features to be able to adapt the score from Critics space to Users space.", "[Figure 1: Overview of the ConvExtr system for conversational elicitation and prediction of movie preferences.] To achieve this, we computed the dot product and the earth-mover's dis-", "tance between conversation vector and weighted critics vector as features.", "Other features are created by using either raw movie metadata (year of release, runtime, number of critics' and users' reviews, RottenTomatoes average critics score, RottenTomatoes average users score, all used as numbers) or the dot product between vectors of BERT sequence embedded movie features (title, description, actors, genre) and the conversation vector, which gives us 10 metadata-based features.", "We hypothesize that including these features would let the model 
"We experimented with different model implementations, including pure CF without domain adaptation and a GBRT (gradient-boosted regression trees) model trained to translate the critics' preferences to user scores.", "Baselines: First, we establish natural baselines: AverageCritics, the critics' score from RottenTomatoes, which is the popularity proxy in a cold-start problem; and AverageAudience, the audience score from RottenTomatoes, another popularity proxy, which is potentially closer to Conversational Users than Critics.", "Evaluation Metrics: To evaluate the sentiment prediction performance and the overall system performance, we use the standard RMSE and MAE metrics.", "The conversational Movie Sentiment Elicitation Dataset ( MovieSent ) that we created is an extension of the dataset released by Radlinski et al. (2019), which consists of Preference Elicitation conversations between \"coached\" crowd workers playing the roles of Wizards and Apprentices.", "However, the movies mentioned in the dataset were not linked to a unique identifier, which required additional manual annotation to benefit from external knowledge.", "Hence, we manually labeled all movies in the dataset with their RottenTomatoes ID.", "Then, we asked human annotators to label each user response with a sentiment score on a [-3; +3] scale, as well as a \"None\" score.", "The labeling was done by 8 independent judges with a 20% overlap (at most 2 people labeled the same sample).", "Inter-rater reliability for the judges' agreement on the labels was calculated using Cohen's kappa (Cohen, 1960) for binary labels, which is standard for this task, and it was 0.90 on 238 samples, indicating substantial inter-rater agreement.", "Reliability for the numerical sentiment was measured using a weighted kappa (Cohen, 1968); it was 0.77 on 163 samples.", "An example can be found in Figure 2.", "Reviews dataset construction: As mentioned in Section 1, most of the existing movie rating datasets are not suitable for our task, therefore we had to create a new dataset, to be released publicly.", "The basis of our CF system was Critics' ratings from an external source, specifically the popular website RottenTomatoes.", "To construct the corpus, for each movie in MovieSent , we retrieved unique IDs for Critics who left reviews on that movie's page.", "We then retrieved all the reviews those critics have ever left for any movie and normalized the numerical ratings to a discrete scale from 1 to 5.", "Figure 2: Example of utterances labeled with sentiment scores.", "Table 1: Statistics of the reviews dataset and the MovieSent annotated conversational movie sentiment dataset.", "We used the resulting sparse matrix of critics-to-movies rating scores as input to the CF algorithm, described in Section 2.2.", "The statistics of the created datasets are reported in Table 1.",
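The two agreement figures above can be reproduced with scikit-learn's Cohen's kappa. A small sketch with toy label arrays; the paper does not state the weighting scheme for the 1968 weighted kappa, so the linear weighting below is an assumption.

```python
from sklearn.metrics import cohen_kappa_score

# Toy stand-ins: one list per judge over the overlapping samples.
judge_a_binary = [1, 1, 0, 1, 0, 1]      # sentiment present vs. "None"
judge_b_binary = [1, 1, 0, 1, 1, 1]
judge_a_scores = [3, 2, -1, 0, 1, 2]     # fine-grained scores on the [-3, +3] scale
judge_b_scores = [3, 1, -1, 0, 2, 2]

# Plain kappa (Cohen, 1960) for the binary labels (reported: 0.90 on 238 samples).
print(cohen_kappa_score(judge_a_binary, judge_b_binary))

# Weighted kappa (Cohen, 1968) for the numerical sentiment (reported: 0.77).
print(cohen_kappa_score(judge_a_scores, judge_b_scores, weights="linear"))
```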
"Conversation Representation for User Sentiment Inference: While not the main focus of our work, our method requires estimating the user's sentiment towards the mentioned movies.", "We experimented with different representations, finally picking the concatenation of the previous Wizard utterance and the current user utterance, resulting in an RMSE of 0.96 and an MAE of 0.72 for the predicted sentiment against human annotations.", "We use this sentiment prediction setup for all experiments.", "To conduct an informative evaluation of our methods, we restrict the set of conversations in MovieSent to include only those which had at least 3 movies with IDs mentioned in separate utterances, each of which had reviews in the corpus described above.", "The resulting conversational dataset contained 238 conversations out of the initial 489.", "All experiments were conducted using 5-fold cross-validation, with 48 conversations on average in each split.", "Results for all discussed models are reported in Table 2.", "Table 2: Main results: RMSE and MAE errors (lower is better) for predicting user preferences (best in bold; significance from the AvgAudience baseline marked with \"*\"). Baseline methods: AverageCritics (RMSE 1.34, MAE 0.99); AverageAudience (1.24, 0.95). ConvExtr (our method): KNN, no adaptation (1.20, 0.94); SVD, no adaptation (1.18*, 0.95); SVD++, no adaptation (1.14*, 0.92); GBRT (1.09*, 0.84). Best possible: (0.84, 0.64).", "Our method (last row) uses a natural conversation both to estimate the user's sentiment for a movie and to retrieve relevant Critics to estimate the rating of this User for an unseen movie.", "Both baselines performed similarly, with RMSE of around 1.3.", "The improvements that our models were able to achieve were significant, with the best model, using GBRT, achieving RMSE of 0.96 and MAE of 0.72 (+25% improvement on both metrics).", "Finally, to gain intuition on the performance of ConvExtr , we simulated the best performance possible with CF (using the SVD++ model), if the conversational user were one of the critics in the Reviews dataset.", "The resulting predictions can be considered an upper bound difficult to reach, as conversational preference elicitation can at best approximate the true user's preferences.", "We explored the problem of Conversational Movie Recommendation, to take advantage of the vast amounts of external user-generated content on the Web, such as movie reviews, to improve recommendation quality.", "As a first critical step in that direction, we focused on estimating a user's preference for an unseen movie based on estimated sentiment towards other movies mentioned in the conversation.", "Specifically, we applied sentiment analysis models to infer a User's sentiment towards movies mentioned in a conversation.", "These sentiment scores were used as a proxy for a user's explicit ratings, which could be used by traditional Collaborative Filtering algorithms, applied to extensive external data of movie reviews and ratings.", "Our second insight was that the actual conversation content could provide additional benefit in representing the user's interests, for improved recommendation quality.", "Our results demonstrated that incorporating conversation content to select a more similar group of users for Collaborative Filtering improves the recommendation performance, compared to using the inferred ratings alone.", "To advance research in this area, and for full transparency and reproducibility, the labeled conversation dataset and the code to retrieve the external review data are available on GitHub (https://github.com/sergey-volokhin/conversational-movies).",
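A minimal sketch of the 5-fold evaluation loop with the two reported metrics; the rating arrays and the stand-in predictions are hypothetical placeholders for the per-conversation model outputs.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Hypothetical per-conversation targets: the true rating of the third movie.
y_true = np.random.uniform(1, 5, size=238)
y_pred = np.clip(y_true + np.random.normal(0, 1, size=238), 1, 5)  # stand-in predictions

rmses, maes = [], []
for _, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(y_true):
    rmses.append(np.sqrt(mean_squared_error(y_true[test_idx], y_pred[test_idx])))
    maes.append(mean_absolute_error(y_true[test_idx], y_pred[test_idx]))

print(np.mean(rmses), np.mean(maes))   # averaged over the 5 folds
```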
"Together, our contributed dataset and experiments, and the resulting insights, offer a promising direction for improving conversational recommendation systems through augmentation with external data.", "This work was only a first step, with many potential areas for further improvement.", "The baseline comparison could be improved: adapting and using more sophisticated methods, like Neural Matrix Factorization (He et al., 2017) or Wide and Deep Learning models (Cheng et al., 2016), would make for a better baseline, and it might also improve the performance of our approach, as the CF score is among the most important features in our model.", "Another direction is trying content-based approaches, which were not used in the current paper due to the scarcity of data for each conversational user.", "We also observed that our model was biased against predicting lower ratings, since the conversations tend to focus on movies that a user liked.", "In future work we plan to explore correcting for this positive bias, and other extensions to predict user sentiment from a conversation more robustly.", "While our initial attempts to represent the conversation content improved the prediction accuracy, a fruitful direction of research is improving the representation of the conversation content for recommendations (i.e., for retrieving similar reviewers).", "In Section 2.1 we mention k-turn conversations and mentions of m movies.", "So a natural direction for research would be the analysis of recommendation accuracy depending on the length of the conversation (number of turns), on the number of mentioned movies (do more movies equal better quality?), and on the ratio of sentiment-bearing statements in a conversation (how many are factual/neutral?).", "As additional conversational data becomes available, our approach could be extended to include other sources of user preferences, such as Twitter/Reddit-based conversations, and actual past conversations of other users with a recommender bot.", "This work was partially supported by a grant from Amazon Alexa towards the study of conversational search and recommendation." ]
[ "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "objective", "abstain", "other", "other", "other", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other" ]
[ "Multimodal machine translation (MMT) aims to improve neural machine translation (NMT) with additional visual information, but most existing MMT methods require paired input of source sentence and image, which makes them suffer from shortage of sentence-image pairs.", "In this paper, we propose a phrase-level retrieval-based method for MMT to get visual information for the source input from existing sentence-image data sets so that MMT can break the limitation of paired sentence-image input.", "Our method performs retrieval at the phrase level and hence learns visual information from pairs of source phrase and grounded region, which can mitigate data sparsity.", "Furthermore, our method employs the conditional variational auto-encoder to learn visual representations which can filter redundant visual information and only retain visual information related to the phrase.", "Experiments show that the proposed method significantly outperforms strong baselines on multiple MMT datasets, especially when the textual context is limited.", "Multimodal machine translation (MMT) introduces visual information into neural machine translation (NMT), which assumes that additional visual modality could improve NMT by grounding the language into a visual space (Lee et al., 2018).", "However, most existing MMT methods require additional input of images to provide visual representations, which should match with the source sentence.", "Unfortunately, in practice it is difficult to get this kind of pairwise input of text and images which hinders the applications of MMT.", "What is worse, to train an MMT model, the training data still involves the target sentence besides the source Corresponding author: Yang Feng.", "sentence and the image, which is costly to collect.", "As a result, the MMT model is usually trained on a small Multi30K (Elliott et al., 2016) data set, which limits the performance of MMT.", "Therefore, it is necessary to utilize the separate image data set to obtain visual representations to break the constraints of pairwise input.", "Towards this end, some researchers (Zhang et al., 2020; Wu et al., 2021) propose to integrate a retrieval module into NMT, which retrieve images related to the source sentence from existing sentence-image pairs as complementary input, and then use a pre-trained convolutional neural network (CNN) to encode the images.", "However, such sentence-level retrieval usually suffers from sparsity as it is difficult to get the images that properly match with the source sentence.", "Besides, visual features outputted by the CNN contain richer information ( e.g. 
(e.g., color, size, shape, texture, and background) than the source text; thus, encoding them in a bundle without any filtering will introduce noise into the model.", "To solve these problems, we propose a novel retrieval-based method for MMT to learn phrase-level visual representations for the source sentence, which can mitigate the aforementioned problems of sparse retrieval and redundant visual representations.", "For the sparsity problem, our method retrieves the image at the phrase level and only refers to the grounded region in the image related to the phrase.", "For the redundancy problem, our method employs the conditional variational auto-encoder to force the learned representations to properly reconstruct the source phrase, so that the learned representations only retain the information related to the source phrase.", "Experiments on Multi30K (Elliott et al., 2016) show that the proposed method gains significant improvements over strong baselines.", "When the textual context is limited, it achieves up to 85% gain over the text-only baseline on the BLEU score.", "Further analysis demonstrates that the proposed method can obtain visual information that is more related to translation quality.", "We use phrase-level visual representations to improve NMT.", "In this section, we will introduce our proposed phrase-guided visual representation.", "We first build a phrase-level image set, and then introduce a latent-variable model to learn a phrase-guided visual representation for each image region.", "Our phrase-level image set is built from the training set of Multi30K, which contains about 29K bilingual sentence-image pairs.", "We only use the images e and source descriptions x from them, which is denoted as D = {(x^i, e^i)}_{i=1}^{N}.", "We extract <noun phrase, image region> pairs from <sentence, image> pairs in D to build our phrase-level image set, which is denoted as D_p.", "For each sentence x^i, we use the open-source library spaCy (https://spacy.io) to identify the noun phrases, which are denoted as P^i = (p^i_1, p^i_2, ..., p^i_{t_i}), where t_i is the number of noun phrases in x^i.", "For each noun phrase p^i_j, we detect the corresponding region r^i_j from the paired image e^i using the visual grounding toolkit (Yang et al., 2019).", "Then (p^i_j, r^i_j) is added to our phrase-level image set D_p.", "Figure 1 illustrates an example.", "Finally, we obtain the phrase-level image set D_p = {(p^i, r^i)}_{i=1}^{T}, where T = Σ_{i=1}^{N} t_i.", "It contains about 102K pairs in total.", "For an image region r, we can obtain the visual features v with a pre-trained ResNet-101 Faster R-CNN (He et al., 2016; Ren et al., 2015), which contain rich visual information (e.g., color, size, shape, texture, and background).",
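The <noun phrase, image region> extraction can be sketched with spaCy's noun-chunk iterator. The model name `en_core_web_sm` and the helper `ground_phrase` are assumptions: the paper only says spaCy identifies the phrases, and `ground_phrase` merely stands in for the visual grounding toolkit of Yang et al. (2019).

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_pairs(sentence, image, ground_phrase):
    """Build <noun phrase, image region> pairs for one <sentence, image> pair.

    `ground_phrase(image, phrase)` is a hypothetical wrapper around the
    one-stage visual grounding model, returning the matched region.
    """
    pairs = []
    for chunk in nlp(sentence).noun_chunks:   # e.g. "a black dog", "a rope toy"
        region = ground_phrase(image, chunk.text)
        pairs.append((chunk.text, region))
    return pairs
```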
"However, we should not pay much attention to the visual information not mentioned in the corresponding phrase, which will introduce too much noise and may even be harmful to NMT.", "Therefore, we further introduce a continuous latent variable to explicitly model the semantic information of image regions under the guidance of phrases.", "We adopt the framework of the conditional variational auto-encoder (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) to maximize the conditional marginal log-likelihood log p(p|v) = log ∫ p(p|z, v) p(z|v) dz by maximizing the evidence lower bound (ELBO): L_cvae = E_{z∼q(z|p,v)}[log p(p|z, v)] − KL[q(z|p,v) ‖ p(z|v)], (1) where p(z|v) is the prior, q(z|p,v) is an approximate posterior and p(p|z,v) is the decoder.", "Figure 1: Example of extracting <noun phrase, image region> pairs from existing <sentence, image> pairs (e.g., \"a black dog\" and \"a rope toy\" from \"a black dog jumping to catch a rope toy\").", "The prior p is modeled as a Gaussian distribution: p(z|v) = N(z; μ_p(v), σ_p(v)^2 I), (2) μ_p(v) = Linear(v), (3) σ_p(v) = Linear(v), (4) where Linear(·) denotes a linear transformation.", "The approximate posterior q is also modeled as a Gaussian distribution: q(z|p,v) = N(z; μ_q(p,v), σ_q(p,v)^2 I), (5) μ_q(p,v) = Linear([RNN(p), v]), (6) σ_q(p,v) = Linear([RNN(p), v]), (7) where RNN(·) denotes a single-layer unidirectional recurrent neural network (RNN).", "The final hidden state of the RNN is used to compute the mean and variance vectors.", "To be able to update the parameters using back-propagation, we use the reparameterization trick (Kingma and Welling, 2014) to sample z from q: z = μ_q + σ_q ⊙ ε, ε ∼ N(0, I). (8)", "The decoder p(p|z, v) is also implemented by a single-layer unidirectional RNN.", "The initial hidden state of the decoder RNN is defined as: s = Linear([z, v]), (9) and then the decoder will reconstruct the phrase p based on s.", "Figure 2: Overview of our proposed method (source encoder, phrase-level visual retrieval module, multimodal aggregation module, and target decoder).", "We refer to s as the phrase-guided visual representation, since it pays more attention to the semantic information mentioned in the phrase and filters out irrelevant information.", "We will describe how to incorporate it into NMT in the next section.", "In this section, we will introduce our retrieval-based MMT method.", "Specifically, we obtain visual context through our proposed phrase-level visual retrieval, and then learn a universal visual representation for each phrase in the source sentence, which is used to improve NMT.", "Figure 2 shows the overview of our proposed method, which is composed of four modules: source encoder, phrase-level visual retrieval module, multimodal aggregation module, and target decoder.", "The source encoder and target decoder are the same as the encoder and decoder of the conventional text-only Transformer (Vaswani et al., 2017).",
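A PyTorch sketch of the latent-variable model above (eqs. 1-9). It is an illustration under stated assumptions: a GRU stands in for the paper's single-layer unidirectional RNN, the 2048-dimensional region feature matches a typical Faster R-CNN output, and the phrase-reconstruction decoder (the first ELBO term) is omitted for brevity.

```python
import torch
import torch.nn as nn

class PhraseCVAE(nn.Module):
    """Prior p(z|v), posterior q(z|p,v), and reparameterized sampling (eqs. 2-9)."""

    def __init__(self, visual_dim=2048, phrase_dim=512, latent_dim=64):
        super().__init__()
        self.prior_mu = nn.Linear(visual_dim, latent_dim)
        self.prior_logvar = nn.Linear(visual_dim, latent_dim)
        self.post_mu = nn.Linear(phrase_dim + visual_dim, latent_dim)
        self.post_logvar = nn.Linear(phrase_dim + visual_dim, latent_dim)
        self.phrase_rnn = nn.GRU(phrase_dim, phrase_dim, batch_first=True)
        self.to_state = nn.Linear(latent_dim + visual_dim, phrase_dim)  # eq. (9)

    def forward(self, phrase_emb, v):
        # Posterior q(z|p,v): encode the phrase with an RNN, use the final state.
        _, h = self.phrase_rnn(phrase_emb)
        pv = torch.cat([h[-1], v], dim=-1)
        mu_q, logvar_q = self.post_mu(pv), self.post_logvar(pv)
        # Reparameterization trick (eq. 8).
        z = mu_q + torch.exp(0.5 * logvar_q) * torch.randn_like(mu_q)
        # Prior p(z|v), needed for the KL term of the ELBO (eq. 1).
        mu_p, logvar_p = self.prior_mu(v), self.prior_logvar(v)
        kl = 0.5 * torch.sum(
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0, dim=-1)
        s = self.to_state(torch.cat([z, v], dim=-1))  # phrase-guided representation
        return s, kl
```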
"Therefore, we will introduce the phrase-level visual retrieval module and the multimodal aggregation module in detail in the rest of this section.", "We denote the input source sentence as x = (x_1, x_2, ..., x_n), the ground-truth target sentence as y = (y_1, y_2, ..., y_m) and the generated translation as ŷ = (ŷ_1, ŷ_2, ..., ŷ_m).", "The input source sentence x will be encoded with the source encoder to obtain the source sentence representation, which is denoted as H = (h_1, h_2, ..., h_n).", "To obtain the visual context of the source sentence without input paired images, we design a phrase-level visual retrieval module.", "Specifically, for the input sentence x = (x_1, x_2, ..., x_n), we identify the noun phrases P = (p_1, p_2, ..., p_t) in x.", "Each phrase p_i = (x_{l_i}, x_{l_i+1}, ..., x_{l_i+d_i-1}) is a continuous list of tokens, where l_i is the index of the first token and d_i is the length of p_i.", "For each noun phrase p_i, we will retrieve several relevant <noun phrase, image region> pairs from our phrase-level image set D_p according to the semantic similarity between phrases, and then use the image regions as visual context.", "We design a phrase encoder to compute the phrase embedding, which is used to measure the semantic similarity between phrases.", "Phrase Encoder: Our phrase encoder Enc_p(·) is based on a pre-trained BERT (Devlin et al., 2019).", "For a phrase p = (p_1, p_2, ..., p_l), we first use BERT to encode it into contextual embeddings: c_1, c_2, ..., c_l = BERT(p_1, p_2, ..., p_l), (10) then the phrase embedding is the average embedding of all tokens: Enc_p(p) = (1/l) Σ_{i=1}^{l} c_i. (11)", "Visual Retrieval: For a given phrase p, we retrieve the top-K relevant <noun phrase, image region> pairs from D_p.", "For (p^i, r^i) ∈ D_p, the relevance score with the given phrase p can be defined as the cosine similarity between their phrase embeddings: RS(p, (p^i, r^i)) = Enc_p(p) · Enc_p(p^i) / (‖Enc_p(p)‖ ‖Enc_p(p^i)‖), (12) then we retrieve the top-K relevant pairs for p: {(p^{i_k}, r^{i_k})}_{k=1}^{K} = top-K_{i=1..T}(RS(p, (p^i, r^i))). (13)", "Universal Visual Representation: For every pair (p^{i_k}, r^{i_k}), we can obtain the phrase-guided visual representation s^{i_k} through our latent-variable model as described in Section 2.2.", "Finally, the phrase-level universal visual representation of p is defined as the weighted sum of all {s^{i_k}}: u = (1/K) Σ_{k=1}^{K} RS(p, (p^{i_k}, r^{i_k})) s^{i_k}. (14)", "Our universal visual representation considers multi-view visual information from several image regions, which avoids the bias caused by a single image region.", "Finally, for all phrases P = (p_1, p_2, ..., p_t) in x, we obtain the corresponding universal visual representations U = (u_1, u_2, ..., u_t).",
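A sketch of the phrase encoder and the retrieval steps (eqs. 10-14). It assumes a Hugging Face BERT model and tokenizer and an in-memory matrix of pre-computed phrase embeddings; in practice the index over the roughly 102K pairs would be built offline.

```python
import numpy as np
import torch

def encode_phrase(bert, tokenizer, phrase):
    """Enc_p(p): average of BERT contextual token embeddings (eqs. 10-11)."""
    inputs = tokenizer(phrase, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state[0]   # (length, dim)
    return hidden.mean(dim=0).numpy()

def retrieve_top_k(query_vec, index_vecs, k=5):
    """Top-K pairs by relevance score RS, i.e. cosine similarity (eqs. 12-13)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(-scores)[:k]
    return top, scores[top]

def universal_representation(scores, guided_feats):
    """u: relevance-weighted average of phrase-guided features s^{i_k} (eq. 14)."""
    return (scores[:, None] * guided_feats).sum(axis=0) / len(scores)
```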
"Inspired by the recent success of modality fusion in multimodal machine translation (Yin et al., 2020; Zhang et al., 2020; Fang et al., 2022), we design a simple multimodal aggregation module to fuse the source sentence representation H and the phrase-level universal visual representation U.", "At first, we perform a phrase-level aggregation.", "For each phrase p_i = (x_{l_i}, x_{l_i+1}, ..., x_{l_i+d_i-1}), we fuse the universal visual representation u_i and the textual representation of the corresponding tokens (h_{l_i}, h_{l_i+1}, ..., h_{l_i+d_i-1}): m_i = LayerNorm(u_i + Σ_{j=l_i}^{l_i+d_i-1} o_{ij} ⊙ h_j), (15) o_{ij} = sigmoid(W_1 u_i + W_2 h_j), (16) where ⊙ denotes the element-wise product.", "Now we obtain the multimodal phrase representation M = (m_1, m_2, ..., m_t).", "Afterwards, we apply a multi-head attention mechanism to append M to the source sentence representation: S = MultiHead(H, M, M). (17)", "We then fuse S and H with a gate mechanism: S' = H + λ ⊙ S, (18) λ = sigmoid(W_3 H + W_4 S). (19)", "Finally, S' is fed into our target decoder for predicting the translation.", "The translation model is trained with a cross-entropy loss: L_trans = −Σ_{i=1}^{m} log p(y_i | y_{<i}, x). (20)", "We conduct experiments on the following datasets:", "Multi30K: The Multi30K dataset contains bilingual parallel sentence pairs with image annotations, where each image is paired with one English description and its translations in German and French.", "Training, validation and test sets contain 29,000, 1,014, and 1,000 instances, respectively.", "We also report the results on the WMT17 test set and the ambiguous MSCOCO test set, which contain 1,000 and 461 instances respectively.", "WMT16 EN-DE: The WMT16 EN-DE dataset contains about 4.5M sentence pairs.", "We choose newstest2013 for validation and newstest2014 for test.", "WMT16 EN-RO: The WMT16 EN-RO dataset contains about 0.6M sentence pairs.", "We choose newsdev2016 for validation and newstest2016 for test.", "For all the above datasets, all sentences are tokenized and segmented into subword units using byte-pair encoding (BPE) (Sennrich et al., 2016).", "The vocabulary is shared for source and target languages, with a size of 10K for Multi30K, and 40K for WMT16 EN-DE and WMT16 EN-RO.", "Model Implementation: For the latent-variable model, the image region is encoded with a pre-trained ResNet-101 Faster R-CNN (He et al., 2016; Ren et al., 2015).", "Both the phrase encoder and decoder are implemented using a single-layer unidirectional RNN with 512 hidden states.", "The size of the latent variable is set to 64.", "The batch size is 1024, and the learning rate is 5e-5.", "We train the model up to 200 epochs with the Adam optimizer (Kingma and Ba, 2015).", "We adopt KL cost annealing and word dropout tricks to alleviate the posterior collapse problem, following Bowman et al. (2016).", "The annealing step is set to 20000 and the word dropout is set to 0.1.", "Note that the phrases are segmented using the same BPE vocabulary as that for each source language.", "For the translation model, we use the Transformer (Vaswani et al., 2017) as our baseline.", "Both encoder and decoder contain 6 layers.", "The number of attention heads is set to 4.", "The dropout is set to 0.3, and the value of label smoothing is set to 0.1.", "For the visual retrieval module, we retrieve the top-5 image regions for each phrase.", "We use the Adam optimizer (Kingma and Ba, 2015) to tune the parameters.", "The learning rate is varied under a warm-up strategy with 2,000 steps.", "We train the model up to 8,000, 20,000, and 250,000 steps for Multi30K, WMT16 EN-RO, and WMT16 EN-DE, respectively.", "We average the checkpoints of the last 5 epochs for evaluation.", "We use beam search with a beam size of 4.",
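A PyTorch sketch of the multimodal aggregation module defined at the start of this passage (eqs. 15-19); the span bookkeeping, dimensions, and batch-first layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalAggregation(nn.Module):
    """Phrase-level gating (eqs. 15-16), attention (eq. 17), gated fusion (eqs. 18-19)."""

    def __init__(self, d=512, heads=4):
        super().__init__()
        self.w1, self.w2 = nn.Linear(d, d), nn.Linear(d, d)
        self.w3, self.w4 = nn.Linear(d, d), nn.Linear(d, d)
        self.norm = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, H, U, spans):
        # H: (batch, n, d) token states; U: (batch, t, d) visual vectors;
        # spans[i] = (l_i, l_i + d_i), the token span covered by phrase i.
        M = []
        for i, (start, end) in enumerate(spans):
            u, h = U[:, i], H[:, start:end]
            o = torch.sigmoid(self.w1(u).unsqueeze(1) + self.w2(h))   # eq. (16)
            M.append(self.norm(u + (o * h).sum(dim=1)))               # eq. (15)
        M = torch.stack(M, dim=1)                                     # (batch, t, d)
        S, _ = self.attn(H, M, M)                                     # eq. (17)
        lam = torch.sigmoid(self.w3(H) + self.w4(S))                  # eq. (19)
        return H + lam * S                                            # eq. (18)
```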
"Different from previous work, we use sacreBLEU (https://github.com/mjpost/sacrebleu) (Post, 2018) to compute the BLEU (Papineni et al., 2002) scores and the statistical significance of translation results with paired bootstrap resampling (Koehn, 2004), for future standard comparison across papers.", "Specifically, we measure case-insensitive detokenized BLEU for Multi30K (sacreBLEU signature: nrefs:1 | bs:1000 | seed:12345 | case:lc | eff:no | tok:13a | smooth:exp | version:2.0.0), because the official pre-processing script of the Multi30K dataset lowercases the corpus (see https://github.com/multi30k/dataset/blob/master/scripts/task1-tokenize.sh), and case-sensitive detokenized BLEU for the WMT datasets (sacreBLEU signature: nrefs:1 | bs:1000 | seed:12345 | case:mixed | eff:no | tok:13a | smooth:exp | version:2.0.0).", "All models are trained and evaluated using 2 RTX3090 GPUs.", "We implement the translation model based on fairseq (https://github.com/pytorch/fairseq) (Ott et al., 2019).", "We train the latent-variable model and the translation model individually.", "Our baseline is the text-only Transformer (Vaswani et al., 2017).", "Besides, we implement Imagination (Elliott and Kádár, 2017) and UVR-NMT (Zhang et al., 2020) based on the Transformer, and compare our method with them.", "The details of these methods can be found in Section 6.", "We use the same configuration for all baseline systems as for our model.", "Table 1 shows the results on Multi30K.", "Our proposed method significantly outperforms the Transformer (Vaswani et al., 2017) baseline, demonstrating that our proposed phrase-level universal visual representation can be helpful to NMT.", "Our method also surpasses Imagination (Elliott and Kádár, 2017) and UVR-NMT (Zhang et al., 2020).", "We attribute this mainly to the following reasons.", "First, our phrase-level visual retrieval can obtain strongly correlated image regions instead of weakly correlated whole images.", "Second, our phrase-level universal visual representation considers visual information from multiple image regions and pays more attention to the semantic information mentioned in the phrases.", "Last, our phrase-level aggregation module makes it easier for the translation model to exploit the visual information.", "In Section 2.2, we introduced a latent-variable model to learn a phrase-guided visual representation for each image region.", "To understand how it improves the model performance compared with the original visual features, we visualize the representations by reducing their dimension with Principal Component Analysis (PCA).", "Specifically, for all <noun phrase, image region> pairs in D_p, we cluster the image regions by the head (https://en.wikipedia.org/wiki/Head_(linguistics)) of their noun phrases.", "We select the top-8 clusters according to their size, and randomly sample 1000 image regions for each cluster.", "As shown in Figure 3, the original visual features of different clusters are mixed together, indicating that they contain too much irrelevant information.", "In contrast, our proposed phrase-guided visual representations, which pay more attention to the semantic information, form several clusters according to their heads.", "Combined with our visual retrieval module, we found that as the number of retrieved image regions K increases, the BLEU score keeps decreasing when we use the original visual features, while increasing when we use our proposed phrase-guided visual representations, as shown in Figure 4.",
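The PCA visualization above can be sketched as follows; the random feature matrices and the head nouns are stand-ins for the actual region features and clusters.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-ins: one feature matrix per head-noun cluster (top-8 clusters,
# 1000 sampled regions each), holding either the original Faster R-CNN
# features or the phrase-guided representations s.
clusters = {head: np.random.randn(1000, 512)
            for head in ["man", "dog", "woman", "shirt",
                         "people", "ball", "street", "table"]}

all_feats = np.concatenate(list(clusters.values()))
proj = PCA(n_components=2).fit(all_feats)

for head, feats in clusters.items():
    pts = proj.transform(feats)      # 2-D points; scatter-plot these per cluster
    print(head, pts.mean(axis=0))    # cluster centroids as a quick summary
```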
"Figure 4: BLEU scores with different numbers of retrieved image regions K (for K = 1 to 5, original features: 39.90, 39.82, 39.79, 39.74, 39.65; phrase-guided representations: 39.72, 39.75, 39.86, 40.12, 40.30).", "We believe the decrease of the BLEU score is due to the irrelevant information in the original visual features; directly summing them together will introduce too much noise.", "Our method filters out that irrelevant information, and multiple image regions can avoid the bias caused by a single one, which leads to the increase of the BLEU score.", "However, we don't observe further improvements when using more image regions.", "We further conduct experiments under the source-degradation setting, to verify the effectiveness of our method when the source textual context is limited.", "Following Wu et al. (2021), we mask the visually grounded tokens in the source sentence, which affects around 43% of the tokens in Multi30K.", "As shown in Table 2, our method achieves almost 85% improvement over the text-only Transformer baseline.", "It means our proposed phrase-level universal visual representation can fill in the missing information effectively.", "To prove the effectiveness of phrase-level retrieval, we implement a sentence-level variant of our method.", "In this variant, we switch the latent-variable model, retrieval module and aggregation module from the phrase level to the sentence level.", "In this way, we retrieve several images as visual context to help the translation.", "Figure 5: Phrase-level vs. sentence-level retrieval for the input \"a person is driving a black car\" (#136 in Test2017): the phrase queries \"a person\" and \"a black car\" retrieve closely related image regions, while sentence-level retrieval returns weakly related images, e.g., \"a person is driving a red and black race car\", \"a person is walking with a white bag\", \"a person is riding a bike on a dirt road\", \"a person is riding a bike in a tunnel\", and \"a person is walking by an old building\".", "As shown in Table 3, the sentence-level variant Ours-sentence performs worse than Ours, especially in the source-degradation setting.", "We believe it is because phrase-level retrieval can obtain more relevant image regions as visual context, which contain less noise and can be integrated into textual representations more precisely.", "In contrast, sentence-level retrieval leads to images with much irrelevant information, and makes it difficult for the model to capture the fine-grained semantic correspondences between images and descriptions.", "To understand this difference more intuitively, we give an example in Figure 5.", "As we can see, for the input sentence, phrase-level retrieval can obtain closely related image regions for the noun phrases \"a person\" and \"a black car\", while the results of sentence-level retrieval are actually only weakly related to the input sentence.", "Finally, we conduct experiments on the WMT16 EN-DE and WMT16 EN-RO datasets.", "As shown in Table 4, we observe that both Zhang et al. (2020) and our method only achieve marginal improvements compared with the text-only Transformer baseline.", "We consider that there are two main reasons.", "On the one hand, most tokens in such news text are not naturally related to specific visual contents.", "We found that the percentage of visually grounded tokens in the training set of WMT16 EN-DE is only 7% (vs.
43% in Multi30K), so the contribution of visual information is indeed limited.", "On the other hand, the news text is far from the descriptive text in Multi30K.", "In this way, the retrieved image regions are actually weakly correlated with the source phrase.", "We did some analysis to verify our hypotheses.", "As described in Section 3.1, we retrieve the top-K pairs for each phrase according to the relevance scores.", "We define the average relevance score (ARS) as follows: ARS(k) = E_{p∼D_val} RS(p, (p^{i_k}, r^{i_k})), (21) which means the average relevance score over all phrases in the validation set.", "As shown in Figure 6, the ARS on the WMT news datasets is much lower than that on Multi30K, which proves that the gap between news text and descriptive text does exist.", "Multimodal machine translation (MMT) aims to enhance NMT (Vaswani et al., 2017; Zhang et al., 2019; Li et al., 2021) with additional visual context.", "Since the release of the Multi30K (Elliott et al., 2016) dataset, researchers have proposed many MMT methods.", "Early methods (Huang et al., 2016; Calixto and Liu, 2017; Caglayan et al., 2016; Calixto et al., 2016; Caglayan et al., 2017; Libovický and Helcl, 2017; Delbrouck and Dupont, 2017b,a; Zhou et al., 2018; Calixto et al., 2017; Helcl et al., 2018; Caglayan et al., 2018) are mainly based on the RNN-based encoder-decoder architecture with attention (Bahdanau et al., 2015).", "Recent methods based on the Transformer (Vaswani et al., 2017) achieve better performance.", "Yao and Wan (2020); Yin et al. (2020); Liu et al. (2021) design multimodal encoders to fuse the textual and visual information during encoding.", "Ive et al. (2019); Lin et al. (2020) enhance the decoder with deliberation networks (Xia et al., 2017) or capsule networks (Sabour et al., 2017) to better utilize visual information during decoding.", "Caglayan et al. (2021) propose a cross-lingual visual pre-training method, fine-tuned for MMT.", "It is worth noting that some previous works (Ive et al., 2019; Lin et al., 2020; Yin et al., 2020; Wang and Xiong, 2021; Nishihara et al., 2020; Zhao et al., 2021) adopt regional visual information like us, which shows effectiveness compared with global visual features.", "The major difference between our method and theirs is that our method is a retrieval-based method, which breaks the reliance on bilingual sentence-image pairs; therefore, our method is still applicable when the input is text only (without paired images), which is unfortunately not possible with those previous methods.", "In addition to focusing on model design, Yang et al. (2020); Nishihara et al. (2020); Wang and Xiong (2021) propose auxiliary losses to allow the model to make better use of visual information.", "Caglayan et al. (2019); Wu et al. (2021) conduct systematic analyses to probe the contribution of the visual modality.", "Caglayan et al. (2020); Ive et al. (2021) focus on improving simultaneous machine translation with visual context.", "All of the above methods require a specific image as input to provide visual context, which heavily restricts their applicability.", "To break this bottleneck, Hitschler et al. (2016) propose target-side image retrieval to help the translation.", "Elliott and Kádár (2017) propose a multitask learning framework, Imagination, which decomposes multimodal translation into learning translation and learning visually grounded representations.", "Calixto et al. (2019) introduce a latent variable and estimate a joint distribution over translations and images.", "Long et al.
(2020) predict the translation with a visual representation generated by a generative adversarial network (GAN) (Goodfellow et al., 2014).", "The most closely related work to our method is UVR-NMT (Zhang et al., 2020), which breaks the reliance on bilingual sentence-image pairs.", "Like some retrieval-enhanced MT methods (Feng et al., 2017; Gu et al., 2017), they build a topic-image lookup table from Multi30K, and then retrieve images related to the source sentence as visual context based on the topic words.", "The central differences between Zhang et al. (2020) and our method are as follows: First, their method depends on the weak correlation between words and images, which leads to much noise in the retrieved images, while our approach relies on the strong correlation between noun phrases and image regions.", "Second, our phrase-level retrieval can obtain more related visual context than their sentence-level retrieval (Section 5.4).", "Last, their method directly uses visual features extracted by ResNet (He et al., 2016), which may introduce too much noise.", "We adopt a latent-variable model to filter out irrelevant information and obtain a better representation.", "In this paper, we propose a retrieval-based MMT method, which learns a phrase-level universal visual representation to improve NMT.", "Our method not only outperforms the baseline systems and most existing MMT systems, but also breaks the restrictions on input that have hindered the development of MMT in recent years.", "Experiments and analysis demonstrate the effectiveness of our proposed method.", "In the future, we will explore how to apply our method to other tasks.", "We thank all the anonymous reviewers for their insightful and valuable comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE0192900)." ]
[ "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "other", "abstain", "objective", "result", "objective", "objective", "other", "other" ]
[ "Auto-regressive text generation models usually focus on local fluency, and may cause inconsistent semantic meaning in long text generation.", "Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply.", "We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues.", "Specifically, we propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.", "Extensive experiments demonstrate that the proposed method leads to improved performance.", "Text generation is an important area of investigation within machine learning.", "Recent work has shown excellent performance on a number of tasks, by combining reinforcement learning (RL) and generative models.", "Example applications include image captioning (Ren et al., 2017; Rennie et al., 2016), text summarization (Li et al., 2018b; Paulus et al., 2017; Rush et al., 2015), and adversarial text generation (Guo et al., 2017; Lin et al., 2017; Yu et al., 2017; Zhang et al., 2017; Zhu et al., 2018).", "The sequence-to-sequence framework (Seq2Seq) (Sutskever et al., 2014) is a popular technique for text generation.", "However, models from such a setup are typically trained to predict the next token given previous ground-truth tokens as input, causing what is termed exposure bias (Ranzato et al., 2016).", "By contrast, sequence-level training with RL provides an effective means of solving this challenge, by treating text generation as a sequential decision-making problem.", "By directly optimizing an evaluation score (cumulative rewards) (Ranzato et al., 2016), state-of-the-art results have been obtained in many text-generation tasks (Paulus et al., 2017; Rennie et al., 2016).", "However, one problem in such a framework is that rewards in RL training are particularly sparse, since a scalar reward is typically only available after an entire sequence has been generated.", "Furthermore, the recurrent models focus more on local fluency, and may cause inconsistent semantic meanings for long text generation.", "For RL-based text generation, most existing works rely on a model-free framework, which has been criticized for its high variance and poor sample efficiency (Sutton and Barto, 1998).", "On the other hand, while model-based RL methods do not suffer from these issues, they are usually difficult to train in complex environments.", "Further, a learned policy is usually restricted by the capacity of an environment model.", "Recent developments on model-based RL (Gu et al., 2016; Kurutach et al., 2018; Nagabandi et al., 2017) combine the advantages of these two approaches, and have achieved improved performance by learning a model-free policy, assisted by an environment model.", "In addition, model-based RL has been employed recently to solve problems with extremely sparse rewards, with curiosity-driven methods (Pathak et al., 2017).", "In this paper, we propose a model-based imitation-learning method to overcome the aforementioned issues in text-generation tasks.", "Our main idea is to employ an explicit guider network to model the generation environment in the feature space of sentence tokens, used to emit intermediate rewards by matching the predicted features from the guider network and features from generated sentences.", "The guider network is trained to encode global structural 
information of training sentences, and thus is useful to guide next-token prediction in the generative process.", "Within the proposed framework, to assist the guider network, we also develop a new type of self-attention mechanism to provide high-level planning-ahead information and maintain consistent semantic meaning.", "Text Generation Model: Text generation models learn to generate a sentence Y = (y_1, ..., y_T) of length T, possibly conditioned on some context X.", "Here each y_t is a token from vocabulary A.", "Starting from the initial state s_0, a recurrent neural network (RNN) produces a sequence of states (s_1, ..., s_T) given an input sentence-feature representation (e(y_1), ..., e(y_T)), where e(·) denotes a word-embedding function mapping a token to its d-dimensional feature representation.", "The states are recursively updated with a function known as the cell: s_t = h(s_{t-1}, e(y_t)).", "One typically assigns the following probability to an observation y at location t: p(y | Y_{<t}) = [softmax(g(s_t))]_y.", "Together (g, h) specifies a probabilistic model π_θ, i.e., log π_θ(Y) = Σ_t log p(y_t | Y_{<t}). (1)", "To train the model π_θ, one typically uses maximum likelihood estimation (MLE), via minimizing the cross-entropy loss, i.e., J_MLE(θ) = −E[log π_θ(Y)].", "In order to generate sentence Y^s from a (trained) model, one iteratively applies the following operations: y^s_{t+1} ∼ Multi(1, softmax(g(s_t))), (2) s_t = h(s_{t-1}, e(y^s_t)), (3) where Multi(1, ·) denotes one draw from a multinomial distribution.", "Model-Based Imitation Learning: Text generation can be considered as an RL problem with a large number of discrete actions, deterministic transitions, and deterministic terminal rewards.", "It can be formulated as a Markov decision process (MDP) M = ⟨S, A, P, r, γ⟩, where S is the state space, A is the action space, P is the deterministic environment dynamics, r(s, y) is a reward function, and γ ∈ (0, 1) is the discrete-time discount factor.", "The policy π_θ, parameterized by θ, maps each state s ∈ S to a probability distribution over A.", "The objective is to maximize the expected reward: J(θ) = Σ_{t=1} E_{P, π_θ}[γ^{t-1} r(s_t, y_t)]. (4)", "In model-based imitation learning (Baram et al., 2017; Cheng et al., 2019), a model is built to make predictions for the future state s_{t+Δt} conditioned on the current state (with Δt > 1; the model predicts future states based on the collected trajectories), which can be used for action selection (e.g., next-token generation).",
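A toy PyTorch sketch of the sampling recursion in equations (2)-(3); the vocabulary size, dimensions, and start token are placeholders.

```python
import torch
import torch.nn as nn

vocab, dim = 5000, 512
embed = nn.Embedding(vocab, dim)       # e(.)
cell = nn.LSTMCell(dim, dim)           # the cell h(.)
out = nn.Linear(dim, vocab)            # g(.)

y = torch.zeros(1, dtype=torch.long)   # placeholder start token
s = (torch.zeros(1, dim), torch.zeros(1, dim))
tokens = []
for _ in range(20):
    s = cell(embed(y), s)                                   # s_t = h(s_{t-1}, e(y_t))
    probs = torch.softmax(out(s[0]), dim=-1)                # softmax(g(s_t))
    y = torch.multinomial(probs, num_samples=1).squeeze(1)  # one draw from Multi(1, .)
    tokens.append(y.item())
```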
"This model is typically a discrete-time system, taking the current state-action pair (s_t, y_t) as input, and outputting an estimate of the future state s_{t+Δt} at time t + Δt.", "At each step t, y_t is chosen based on the model, and the model will re-plan with the updated information from the dynamics.", "This control scheme is different from a standard model-based method, and is referred to as model-predictive control (MPC) (Nagabandi et al., 2017).", "Note that in our setting, the state in RL typically corresponds to the currently generated sentence Y_{1,...,t} instead of the RNN state of the generator (decoder).", "The model is illustrated in Figure 1, with an autoencoder (AE) structure for sentence feature extraction and generation.", "The encoder is shared for sentences from both training data and generated data, as explained in detail below.", "Overall, text generation can be formulated as an imitation-learning problem.", "At each timestep t, the agent, also called a generator (which corresponds to the LSTM decoder), takes the current LSTM state as input, denoted as s_t.", "The policy π_θ(·|s_t), parameterized by θ, is a conditional generator, generating the next token (action) given s_t, the observation representing the current generated sentence.", "The objective of text generation is to maximize the total reward as in (4).", "We detail the components of our proposed model in the following subsections.", "The guider network, implemented as an RNN with LSTM units, is adopted to model the environment dynamics to assist text generation.", "The idea is to train a guider network such that its predicted sentence features at each time step are used to assist next-word prediction and construct intermediate rewards, which in turn are used to optimize the sentence generator.", "Denote the guider network as G_φ(s^G_{t-1}, f_t), with parameters φ and input arguments (s^G_{t-1}, f_t) at time t, to explicitly write out the dependency on the guider-network latent state s^G_{t-1} from the previous time step.", "Here f_t is the input to the LSTM guider, which represents the feature of the current generated sentence, extracted
Since the training of the guider network is based on real data (detailed in the next paragraph), the predicted feature contains global-structure information of the training sentences. To utilize such information to predict the next word, we combine the predicted feature with the output of the decoder by constructing an attention-like mechanism. Specifi-cally, we first apply a linear transformation on the predicted feature G ( s Gt 1 , f t ) , forming a weight vector w t (cid:44) (cid:0) G ( s Gt 1 , f t ) (cid:1) . The weight w t is applied to the output O t of the LSTM decoder by an element-wise multiplication operation. The result is then fed into a softmax layer to generate the next token y t . Formally, the generative process", "O t = g ( s t 1 ) , w t = ( G ( s Gt 1 , f t )) , (5) y t Multi (1 , softmax ( O t w t )) , (6) s Gt = h G ( s Gt 1 , f t ) , s t = h ( s t 1 , e ( y t )) . (7)", "Guider Network Training Given a sentence of feature representations ( f 1 , f 2 , . . . f T ) for a training sentence, we seek to update the guider network such that it is able to predict f t + c given f t , where c > 0 is the number of steps that are looked ahead. We implement this by forcing the predicted feature, G ( s Gt , f t ) , to match both the sentence feature f t + c (first term in (8)) and the corresponding feature-changing direction (second term in (8)). This is formalized by maximizing an objective function of the following form at time t :", "where D cos ( , ) denotes the cosine similarity 2 . By maximizing (8), an ideal guider network should be able to predict the true next words conditioned on the current word in a sentence. As a result, the prediction is used to construct an intermediate reward, used to update the generator (the LSTM decoder), as described further below.", "As in many RL-based text-generation methods, such as SeqGAN (Yu et al., 2017) and LeakGAN (Guo et al., 2017), the generator is updated based on policy-gradient methods. As a result, collecting rewards in the generation process is critical.", "Though SeqGAN (Yu et al., 2017) has proposed to use rollout to get rewards for each generated word, the variance of the rewards is typically too high to be useful practically. In addition, the computational cost may be too high for practical use. We below describe how to use the proposed guider network to define intermediate rewards, leading to a definition of feature-matching reward.", "Feature-Matching Rewards We first define an intermediate reward to generate a particular word. The idea is to match the ground-truth features from the CNN encoder in Figure 1 with those generated from the guider network. Equation (8) indicates that the further the generated feature is from the true feature, the smaller the reward should be. To this end, for each time t , we define the intermediate reward for generating the current word as:", "r gt = 1 2 c c (cid:88) i =1 ( D cos ( f t , f t )+ D cos ( f t f t i , f t f t i )) ,", "where f t = G ( s Gt c 1 , f t c ) is the predicted feature. Intuitively, f t f t i measures the difference between the generated sentences in feature space; the reward is high if it matches the predicted feature transition f t f t i from the guider network. At the last step of text generation, i.e. , t = T , the corresponding reward measures the quality of the whole generated sentence, thus it is called a final reward. 
"The final reward is defined differently from the intermediate reward, discussed below for both the unconditional- and conditional-generation cases.", "Note that a token generated at time t will influence not only the rewards received at that time but also the rewards at subsequent time steps.", "Thus we propose to define the cumulative reward, Σ_{i=t}^{T} γ^{i−t} r^g_i with a discount factor γ, as a feature-matching reward.", "Intuitively, this encourages the generator to focus on achieving higher long-term rewards.", "Finally, in order to apply policy gradient to update the generator, we combine the feature-matching reward with the problem-specific final reward, to form a Q-value reward specified below.", "Similar to SeqGAN, the final reward is defined as the output of a discriminator, evaluating the quality of the whole generated sentence, i.e., the smaller the output, the less likely the generation is a true sentence.", "As a result, we combine the adversarial reward r_f ∈ [0, 1] from the discriminator (Yu et al., 2017) with the feature-matching rewards to form the final Q-value reward Q_t.", "Generator Optimization: The generator is initialized by pre-training on sentences with an autoencoder structure, based on MLE training.", "After that, the final Q-value reward Q_t is used as the reward for each time t, with standard policy-gradient optimization methods to update the generator.", "Specifically, the policy gradient is ∇_θ J = E_{(s_{t−1}, y_t)}[Q_t ∇_θ log p(y_t | s_{t−1}; θ, φ)], where p(y_t | s_{t−1}; θ, φ) is the probability of generating y_t given s_{t−1} in the generator.", "Algorithm 1 describes the proposed model-based imitation-learning framework for text generation.", "Model-based or Model-free: Text generation seeks to generate the next word (action) given the current (sub-)sentence (state).", "The generator is considered as an agent that learns a policy to predict the next word given its current state.", "In previous work (Ranzato et al., 2016), a metric reward is given and the generator is trained to only maximize the metric reward by trial, thus this is model-free learning.", "In the proposed method, the guider network models the environment dynamics, and is trained by maximizing the cosine similarity between the prediction and the ground truth on real text.", "For generator training, the generator maximizes the reward determined by the metric and the guider network, and thus this is model-free learning with model-based boosting (Gu et al., 2016).",
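A sketch of the generator update. The discounted reward-to-go used here to form Q_t from per-step rewards is a standard construction and an assumption, not necessarily the paper's exact Algorithm 1; `log_probs` and `rewards` are stand-ins collected during a sampled generation.

```python
import torch

def policy_gradient_loss(log_probs, rewards, gamma=0.25):
    """REINFORCE-style loss: -sum_t Q_t * log p(y_t | s_{t-1}).

    log_probs -- (T,) log-probabilities of the sampled tokens
    rewards   -- (T,) per-step rewards (feature-matching plus final reward)
    """
    T = rewards.shape[0]
    Q = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):           # discounted reward-to-go
        running = rewards[t] + gamma * running
        Q[t] = running
    return -(Q.detach() * log_probs).sum()
```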
"The model-predictive control scheme is included in our method, where the guider network is used to help next-word selection at each time step.", "As illustrated in Figure 2, our framework naturally provides a way to do style transfer, where the guider network plays the role of style selection, and the generator only focuses on maintaining content without considering the styles.", "To make the guider network focus on the guidance of styles, we assign the label l as the initial state s^G_0 of the guider network.", "Specifically, at each step t, we feed the current sentence representation f_t and the label l into the guider network: O_t = g(s_{t-1}), w_t = ψ(G_φ(s^G_{t-1}, [f_t, l])), (9) y_t ∼ Multi(1, softmax(O_t ⊙ w_t)).", "For the generator, we put an adversarial regularizer on the encoded latent s_0(X) and penalize it if it contains the sentiment information, by maximizing the entropy, i.e., max −Σ_l p(l | s_0(X)) log p(l | s_0(X)), where p is a pre-trained classifier.", "Intuitively, the generator gives candidate words represented by O_t, while the guider makes a choice implicitly by w_t based on the sentiment information.", "The sentiment information is contained in w_t, while the content of the original sentence is represented by O_t.", "To achieve style transfer, one feeds the original sentence X with the target style label l to get the transferred sentence Y with style l.", "Following previous work (Hu et al., 2017; Yang et al., 2018; Cheng et al., 2020), we adopt a classifier as the discriminator and the soft-argmax approach (Kusner and Hernández-Lobato, 2016) for the update of the generator, instead of policy gradient (Sutton and Barto, 1998).", "We first review related works that combine RL and GAN for text generation.", "As one of the most representative models in this direction, SeqGAN (Yu et al., 2017) adopts Monte-Carlo search to calculate rewards.", "However, such a method introduces high variance in policy optimization.", "A number of works have subsequently been proposed to improve the reward-generation process.", "For example, RankGAN (Lin et al., 2017) proposes to replace the reward from the GAN discriminator with a ranking-based reward, MaliGAN (Che et al., 2017) modifies the GAN objective and proposes techniques to reduce gradient variance, MaskGAN (Fedus et al., 2018) uses a filling technique to define a Q-value reward for sentence completion, RelGAN (Nie et al., 2019) uses a relational-memory-based generator for long-distance dependency modeling, FM-GAN (Chen et al., 2018) uses a feature-mover distance to match features of real and generated sentences, inspired by optimal transport (Chen et al., 2019; Zhang et al., 2018), and LeakGAN (Guo et al., 2017) tries to address the sparse-reward issue in long-text generation with hierarchical RL, by utilizing leaked information from the GAN discriminator.", "One problem of LeakGAN is that it tends to overfit the training data, yielding generated sentences that are often not diverse.", "By contrast, by relying on a model-based imitation-learning approach, our method learns global-structure information, which yields more diverse sentences, and can be extended to conditional text generation.", "Zhang et al. (2020) designed a differentiable nested Wasserstein distance for semantic matching, which could be applied for further improvement.", "RL techniques can also be used in other ways for text generation (Bachman and Precup, 2015).", "For example, Ranzato et al.
"We first review related works that combine RL and GAN for text generation.", "As one of the most representative models in this direction, SeqGAN (Yu et al., 2017) adopts Monte-Carlo search to calculate rewards.", "However, such a method introduces high variance into policy optimization.", "A number of works were subsequently proposed to improve the reward-generation process.", "For example, RankGAN (Lin et al., 2017) replaces the reward from the GAN discriminator with a ranking-based reward; MaliGAN (Che et al., 2017) modifies the GAN objective and proposes techniques to reduce gradient variance; MaskGAN (Fedus et al., 2018) uses a filling technique to define a $Q$-value reward for sentence completion; RelGAN (Nie et al., 2019) uses a relational-memory-based generator for long-distance dependency modeling; FM-GAN (Chen et al., 2018) uses a feature-mover distance to match features of real and generated sentences, inspired by optimal transport (Chen et al., 2019; Zhang et al., 2018); and LeakGAN (Guo et al., 2017) tries to address the sparse-reward issue in long-text generation with hierarchical RL, utilizing information leaked from a GAN discriminator.", "One problem of LeakGAN is that it tends to overfit the training data, yielding generated sentences that are often not diverse.", "By contrast, by relying on a model-based imitation learning approach, our method learns global-structure information, generates more diverse sentences, and can be extended to conditional text generation.", "Zhang et al. (2020) designed a differentiable nested Wasserstein distance for semantic matching, which could be applied for further improvement.", "RL techniques can also be used in other ways for text generation (Bachman and Precup, 2015).", "For example, Ranzato et al. (2016) trained a Seq2Seq model by directly optimizing BLEU/ROUGE scores with the REINFORCE algorithm.", "To reduce the variance of vanilla REINFORCE, Bahdanau et al. (2017) adopted the actor-critic framework for sequence prediction.", "Furthermore, Rennie et al. (2016) used a greedy decoding scheme to construct the baseline for the REINFORCE method.", "Note that all these methods can only obtain a reward after a whole sentence is generated.", "Planning techniques in RL have also been explored to improve text generation (Gulcehre et al., 2017; Serdyuk et al., 2018).", "Zhang et al. (2020) introduced a self-imitation scheme to exploit historical high-quality sentences for enhanced exploration.", "Compared to these related works, the proposed guider network provides both a planning mechanism and intermediate rewards.", "We test the proposed framework on unconditional and conditional text generation tasks, and analyze the results to understand the performance gains brought by the guider network.", "We also perform an ablation study of the improvements brought by each part of our proposed method, and consider non-parallel style transfer.", "All experiments are conducted on a single Tesla P100 GPU and implemented with TensorFlow and Theano.", "Details of the datasets, the experimental setup and the model architectures are provided in the Appendix.", "Encoder as the feature extractor. For unconditional generation, the feature extractor that generates inputs for the guider network shares the CNN part of the encoder.", "We stop gradients from the guider network to the encoder CNN during training.", "For conditional generation, we use a pretrained feature extractor, trained in the same way as for unconditional generation.", "Training procedure. As with many imitation-learning models (Bahdanau et al., 2017; Rennie et al., 2016; Sutskever et al., 2014), we first train the encoder-decoder part on the off-policy data with an MLE loss.", "Then we use RL training to fine-tune the trained generator.", "We adaptively transfer the training from the MLE loss to the RL loss, similar to (Paulus et al., 2017; Ranzato et al., 2016).", "Initial states. We use the same initial state for both the generator and the guider networks.", "For conditional generation, the initial state is the encoded latent code of the conditional information for both training and testing.", "For unconditional generation, the initial state is the encoded latent code of a target sentence during training, and random noise during testing.", "We focus on adversarial text generation and compare our approach with a number of related works (Guo et al., 2017; Lin et al., 2017; Yu et al., 2017; Zhang et al., 2017; Zhu et al., 2018).", "In this setting, a discriminator in the GAN framework is added to the model in Figure 1 to guide the generator to generate high-quality sentences.", "This is implemented by defining the final reward to be the output of the discriminator.", "All baseline experiments are implemented on the Texygen platform (Zhu et al., 2018).", "We adopt the BLEU score referenced by the test set (test-BLEU, where a higher value implies better quality) and by the generated samples themselves (self-BLEU, where a lower value implies better diversity) (Zhu et al., 2018): test-BLEU evaluates the reality of generated samples, and self-BLEU measures their diversity.", "A good generator should achieve both a high test-BLEU score and a low self-BLEU score.", "In practice, we use $\Delta t = c = 4$ and $\gamma = 0.25$.",
"We call the proposed method guider-matching GAN (GMGAN) for unconditional text generation.", "Short Text Generation: COCO Image Captions. We use the COCO Image Captions dataset, in which most sentences have a length of about 10 words.", "Since we consider unconditional text generation, only the image captions are used as training data.", "After preprocessing, we use 120,000 randomly sampled sentences as the training set and 10,000 as the test set.", "The BLEU scores of the different methods are listed in Table 1. We observe that GMGAN performs significantly better than the baseline models.", "Specifically, besides achieving higher test-BLEU scores, the proposed method also generates samples with very good diversity in terms of self-BLEU scores.", "LeakGAN represents the state of the art in adversarial text generation; however, its diversity measure is relatively poor (Zhu et al., 2018).", "We suspect that the high test-BLEU scores achieved by LeakGAN are due to mode collapse onto some good samples, which results in high self-BLEU scores.", "The other baselines achieve lower self-BLEU scores because they cannot generate reasonable sentences.", "Long Text Generation: EMNLP2017 WMT. Following Zhu et al. (2018), we use the News section of the EMNLP2017 WMT dataset as our training data.", "The dataset consists of 646,459 words and 397,726 sentences.", "After preprocessing, the training set contains 278,686 sentences with a vocabulary of 5,728 words.", "The BLEU scores of the different methods are provided in Table 2. Compared with the other methods, LeakGAN and GMGAN achieve comparable test-BLEU scores, demonstrating high-quality generated sentences.", "Again, LeakGAN tends to overfit the training data, leading to much higher (worse) self-BLEU scores.", "Our proposed GMGAN shows good diversity in long text generation, with lower self-BLEU scores.", "The other baselines obtain both low self-BLEU and low test-BLEU scores, which corresponds to more random generations."
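The test-BLEU and self-BLEU metrics used above can be computed, for example, with NLTK. The sketch below follows the usual Texygen-style convention (every generation is scored against the whole test set for test-BLEU, and against the other generations for self-BLEU); the function names are ours, and the smoothing choice is an assumption of this illustration.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def test_bleu(test_refs, hyps, n=4):
    # test_refs: tokenized test-set sentences; hyps: tokenized generated sentences
    w = tuple(1.0 / n for _ in range(n))
    sm = SmoothingFunction().method1
    return sum(sentence_bleu(test_refs, h, weights=w, smoothing_function=sm)
               for h in hyps) / len(hyps)

def self_bleu(hyps, n=4):
    # each generation is scored against all the other generations
    w = tuple(1.0 / n for _ in range(n))
    sm = SmoothingFunction().method1
    return sum(sentence_bleu(hyps[:i] + hyps[i + 1:], h, weights=w, smoothing_function=sm)
               for i, h in enumerate(hyps)) / len(hyps)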
"Human Evaluation. Relying on the above metrics alone is not sufficient to evaluate the proposed method (Caccia et al., 2018).", "Following previous work (Guo et al., 2017), we perform human evaluations using Amazon Mechanical Turk, evaluating text quality based on readability and meaningfulness (whether sentences make sense) on the EMNLP2017 WMT News dataset.", "We ask the workers to rate each sentence on a scale from 1 to 5, with 1 as the worst score and 5 as the best; the detailed criteria are listed in Table 3 (e.g., the best score of 5 requires a sentence that is consistent, informative, and grammatically correct).", "We require all the workers to be native English speakers with an approval rate higher than 90% and at least 100 completed assignments.", "We randomly sample 100 sentences generated by each model.", "Ten native English speakers on Amazon Mechanical Turk are asked to rate each sentence.", "The average human rating scores are shown in Table 4, indicating that GMGAN achieves higher human scores than the other methods.", "As examples, Table 5 shows some samples generated by GMGAN and the baselines.", "The performance on the two datasets indicates that the sentences generated by GMGAN have higher global consistency and better readability than those of SeqGAN and LeakGAN.", "More generated examples are provided in the Appendix.", "Ablation Study. We conduct ablation studies on long text generation to investigate the improvement brought by each part of our proposed method.", "We first test the benefit of using the guider network.", "Among the methods compared, Guider is the standard MLE model equipped with the guider network.", "We further compare RL training with (i) only final rewards, (ii) only feature-matching rewards, and (iii) both rewards combined, namely GMGAN.", "The results are shown in Table 6. We observe that the guider network plays an important role in improving the performance.", "RL training with only the final rewards given by a discriminator typically damages generation quality, whereas the feature-matching reward produces sentences with much better diversity thanks to its capacity for exploration.", "Case Study of Guider-Matching Rewards. Figure 3(a) illustrates the feature-matching rewards during generation.", "Figure 3(a) shows an example of a failed generation in the training stage, where two sentences are joined by the word 'was'.", "It is grammatically wrong to select 'was' at this point, so the guider network gives a small reward.", "We can see that the rewards become lower as the number of time steps grows, which is consistent with exposure bias.", "Figure 3(b) shows a successful generation, where the rewards given by the guider are relatively high (larger than 0.5).", "These observations validate that (i) exposure bias exists in MLE training, (ii) RL training with exploration can help reduce the effects of exposure bias, and (iii) our proposed feature-matching rewards provide meaningful guidance for maintaining sentence structure and fluency.", "We test the proposed framework on the non-parallel text-style-transfer task, where the goal is to transfer a sentence in one style (e.g., positive) to a similar sentence with a different style (e.g., negative).", "Pair-wise information has to be inferred from the training data, which makes the task more challenging.", "For a fair comparison, we use the same data and split method as in (Shen et al., 2017).", "Specifically, there are 444,000, 63,500, and 127,000 sentences with either positive or negative sentiment in the training, validation and test sets, respectively.", "To measure whether the original sentences (in the test set) have been transferred to the desired sentiment, we follow the settings of (Shen et al., 2017) and employ a pretrained CNN classifier, which achieves an accuracy of 97.4% on the validation set, to evaluate the transferred sentences.", "We also report the BLEU scores with respect to the original sentences (BLEU) and to human references (BLEU-ref) (Li et al., 2018a), to evaluate the content preservation of the transferred sentences.", "Results are summarized in Table 7.
Our proposed model exhibits higher transfer accuracy and better content preservation, indicating the guider network provides good sentiment guidance to better preserve the content information.", "We have proposed a model-based imitation-learning framework for adversarial text generation, by introducing a guider network to model the generation environment.", "The guider network provides a plan-ahead mechanism for next-word selection.", "Furthermore, this framework can alleviate the sparse-reward issue, as the intermediate rewards are used to optimize the generator.", "Our proposed models are validated on both unconditional and conditional text generation, including adversarial text generation and non-parallel style transfer.", "We achieve improved performance in terms of generation quality and diversity for unconditional and conditional generation tasks.", "Acknowledgement The authors would like to thank the anonymous reviewers for their insightful comments.", "The research was supported in part by DARPA, DOE, NIH, NSF and ONR." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain" ]
[ "Knowledge distillation is a critical technique to transfer knowledge between models, typically from a large model (the teacher) to a more fine-grained one (the student).", "The objective function of knowledge distillation is typically the cross-entropy between the teacher and the student's output distributions.", "However, for structured prediction problems, the output space is exponential in size; therefore, the cross-entropy objective becomes intractable to compute and optimize directly.", "In this paper, we derive a factorized form of the knowledge distillation objective for structured prediction, which is tractable for many typical choices of the teacher and student models.", "In particular, we show the tractability and empirical effectiveness of structural knowledge distillation between sequence labeling and dependency parsing models under four different scenarios: 1) the teacher and student share the same factorization form of the output structure scoring function; 2) the student factorization produces more fine-grained substructures than the teacher factorization; 3) the teacher factorization produces more fine-grained substructures than the student factorization; 4) the factorization forms from the teacher and the student are incompatible.", "1 1 Introduction Deeper and larger neural networks have led to sig-nificant improvement in accuracy in various tasks, but they are also more computationally expensive and unfit for resource-constrained scenarios such Yong Jiang and Kewei Tu are the corresponding authors.", "as online serving.", "An interesting and viable solution to this problem is knowledge distillation (KD) (Bucilua et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015), which can be used to transfer the knowledge of a large model (the teacher) to a smaller model (the student).", "In the field of natural language processing (NLP), for example, KD has been successfully applied to compress massive pretrained language models such as BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) into much smaller and faster models without sig-nificant loss in accuracy (Tang et al., 2019; Sanh et al., 2019; Tsai et al., 2019; Mukherjee and Hassan Awadallah, 2020).", "A typical approach to KD is letting the student mimic the teacher model's output probability distributions on the training data by using the cross-entropy objective.", "For structured prediction problems, however, the output space is exponentially large, making the cross-entropy objective intractable to compute and optimize directly.", "Take sequence labeling for example.", "If the size of the label set is L , then there are L n possible label sequences for a sentence of n words and it is infeasible to compute the cross-entropy by enumerating the label sequences.", "Previous approaches to structural KD either choose to perform KD on local decisions or substructures instead of on the full output structure, or resort to Top-K approximation of the objective (Kim and Rush, 2016; Kuncoro et al., 2016; Wang et al., 2020a).", "In this paper, we derive a factorized form of the structural KD objective based on the fact that almost all the structured prediction models factorize the scoring function of the output structure into scores of substructures.", "If the student's substructure space is polynomial in size and the teacher's marginal distributions over these substructures can be tractably estimated, then we can tractably compute and optimize the factorized form of the structural KD objective.", "As will be shown in the 
"As will be shown in the paper, many widely used structured prediction models satisfy these assumptions and hence are amenable to tractable KD.", "In particular, we show the feasibility and empirical effectiveness of structural KD with different combinations of teacher and student models, including those with incompatible factorization forms.", "We apply this technique to structural KD between sequence labeling and dependency parsing models under four different scenarios.", "1. The teacher and student share the same factorization form of the output structure scoring function.", "2. The student factorization produces more fine-grained substructures than the teacher factorization.", "3. The teacher factorization produces more fine-grained substructures than the student factorization.", "4. The factorization forms of the teacher and the student are incompatible.", "In all the cases, we empirically show that our structural KD approaches can improve the student models.", "In the few cases where previous KD approaches are applicable, we show that our approaches outperform them.", "With unlabeled data, our approaches can further improve the student models' performance.", "In a zero-shot cross-lingual transfer case, we show that with sufficient unlabeled data, student models trained with our approaches can even outperform the teacher models.", "Structured prediction aims to predict a structured output such as a sequence, a tree or a graph.", "In this paper, we focus on structured prediction problems with a discrete output space, which include most structured prediction tasks in NLP (e.g., chunking, named entity recognition, and dependency parsing) and many structured prediction tasks in computer vision (e.g., image segmentation).", "We further assume that the scoring function of the output structure can be factorized into scores of a polynomial number of substructures.", "Consequently, we can calculate the conditional probability of the output structure $y$ given an input $x$ as follows: $P(y \mid x) = \frac{\exp(\text{Score}(y, x))}{\sum_{y' \in Y(x)} \exp(\text{Score}(y', x))} = \frac{\prod_{u \in y} \exp(\text{Score}(u, x))}{Z(x)}$ (1), where $Y(x)$ represents all possible output structures given the input $x$, $\text{Score}(y, x)$ is the scoring function that evaluates the quality of the output $y$, $Z(x)$ is the partition function, and $u \in y$ denotes that $u$ is a substructure of $y$.", "We define the substructure space $U(x) = \bigcup_{y \in Y(x)} \{u \mid u \in y\}$ as the set of substructures of all possible output structures given input $x$.", "Take sequence labeling for example.", "Given a sentence $x$, the output space $Y(x)$ contains all possible label sequences of $x$.", "In the linear-chain CRF, a popular model for sequence labeling, the scoring function is the sum of all the transition and emission scores, $\text{Score}(y, x) = \sum_i \left[S_t((y_{i-1}, y_i), x) + S_e(y_i, x)\right]$, where $i$ ranges over all the positions in sentence $x$, and the substructure space $U(x)$ contains all possible position-specific labels $\{y_i\}$ and label pairs $\{(y_{i-1}, y_i)\}$.", "Knowledge distillation is a technique that trains a small student model by encouraging it to imitate the output probability distribution of a large teacher model.", "The typical KD objective function is the cross-entropy between the output distributions predicted by the teacher model and the student model: $L_{KD} = -\sum_{y \in Y(x)} P_t(y \mid x) \log P_s(y \mid x)$ (2), where $P_t$ and $P_s$ are the teacher's and the student's distributions, respectively."
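To make the intractability of Eq. (2) concrete, a brute-force implementation for a linear-chain CRF must enumerate all $L^n$ label sequences. The toy sketch below (illustrative names, tiny sizes) is feasible only at this scale.

import itertools
import math

def crf_score(y, transition, emission):
    # Score(y, x) as the sum of transition and emission scores along the
    # sequence; position 0 contributes its emission score only
    s = emission[0][y[0]]
    for i in range(1, len(y)):
        s += transition[y[i - 1]][y[i]] + emission[i][y[i]]
    return s

def exact_kd_loss(teacher_p, transition, emission, n, num_labels):
    # Eq. (2) by brute force: enumerates all num_labels ** n sequences,
    # which is exactly what makes the objective intractable for real n
    seqs = list(itertools.product(range(num_labels), repeat=n))
    log_z = math.log(sum(math.exp(crf_score(y, transition, emission)) for y in seqs))
    return -sum(teacher_p[y] * (crf_score(y, transition, emission) - log_z)
                for y in seqs)

# toy usage: n = 3 positions, 2 labels, uniform teacher distribution
n, L = 3, 2
transition = [[0.1, 0.2], [0.3, 0.4]]
emission = [[0.5, 0.1], [0.2, 0.6], [0.3, 0.3]]
teacher_p = {y: 1.0 / (L ** n) for y in itertools.product(range(L), repeat=n)}
print(exact_kd_loss(teacher_p, transition, emission, n, L))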
"During training, the student jointly learns from the gold targets and the distributions predicted by the teacher by optimizing the following objective function: $L_{student} = \lambda L_{KD} + (1 - \lambda) L_{target}$, where $\lambda$ is an interpolation coefficient between the target loss $L_{target}$ and the structural KD loss $L_{KD}$.", "Following Clark et al. (2019) and Wang et al. (2020a), one may apply teacher annealing during training by decreasing $\lambda$ linearly from 1 to 0.", "Because KD does not require gold labels, unlabeled data can also be used in the KD loss.", "When performing knowledge distillation for structured prediction, a major challenge is that the structured output space is exponential in size, leading to intractable computation of the KD objective in Eq. 2.", "However, if the scoring function of the student model can be factorized into scores of substructures (Eq. 1), then we can derive the following factorized form of the structural KD objective: $L_{KD} = -\sum_{y \in Y(x)} P_t(y \mid x) \log P_s(y \mid x) = -\sum_{y \in Y(x)} P_t(y \mid x) \sum_{u \in y} \text{Score}_s(u, x) + \log Z_s(x) = -\sum_{y \in Y(x)} P_t(y \mid x) \sum_{u \in U_s(x)} \mathbb{1}_{u \in y} \, \text{Score}_s(u, x) + \log Z_s(x) = -\sum_{u \in U_s(x)} \sum_{y \in Y(x)} P_t(y \mid x) \, \mathbb{1}_{u \in y} \, \text{Score}_s(u, x) + \log Z_s(x) = -\sum_{u \in U_s(x)} P_t(u \mid x) \, \text{Score}_s(u, x) + \log Z_s(x)$ (3), where $\mathbb{1}_{condition}$ is 1 if the condition is true and 0 otherwise.", "From Eq. 3, we see that if $U_s(x)$ is polynomial in size and $P_t(u \mid x)$ can be tractably estimated, then the structural KD objective can be tractably computed and optimized.", "In the rest of this section, we show that this is indeed the case for some of the most widely used models in sequence labeling and dependency parsing, two representative structured prediction tasks in NLP.", "Based on the difference in score factorization between the teacher and student models, we divide our discussion into four scenarios.", "Case 1a: Linear-Chain CRF → Linear-Chain CRF. In this case, both the teacher and the student are linear-chain CRF models.", "An example application is to compress a state-of-the-art CRF model for named entity recognition (NER) that is based on large pretrained contextualized embeddings into a smaller CRF model with static embeddings that is more suitable for fast online serving.", "For a CRF student model as described in Section 2.1, if we absorb the emission score $S_e(y_i, x)$ into the transition score $S_t((y_{i-1}, y_i), x)$ at each position $i$, then the substructure space $U_s(x)$ contains every pair of adjacent labels $\{(y_{i-1}, y_i)\}$ for $i = 1, \ldots, n$, with $n$ being the sequence length, and the substructure score is defined as $\text{Score}((y_{i-1}, y_i), x) = S_t((y_{i-1}, y_i), x) + S_e(y_i, x)$.", "The substructure marginal $P_t((y_{i-1}, y_i) \mid x)$ of the teacher model can be computed by $P_t((y_{i-1}, y_i) \mid x) \propto \alpha(y_{i-1}) \, \beta(y_i) \, \exp(\text{Score}((y_{i-1}, y_i), x))$ (4), where $\alpha(y_{i-1})$ and $\beta(y_i)$ are forward and backward scores that can be tractably calculated with the classical forward-backward algorithm."
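The tractable route pairs Eq. (3) with Eq. (4): forward-backward yields the teacher's pairwise marginals, which then weight the student's substructure scores. A minimal PyTorch sketch, with tensor layouts assumed for illustration (function names are ours):

import torch

def pairwise_marginals(unary0, log_phi):
    # unary0: (L,) log emission scores at position 0
    # log_phi: (n-1, L, L); log_phi[i][j][k] = transition(j -> k) + emission of
    # label k at position i + 1, all in log space
    m = log_phi.shape[0]
    alpha = [unary0]
    for i in range(m):
        alpha.append(torch.logsumexp(alpha[-1].unsqueeze(1) + log_phi[i], dim=0))
    beta = [torch.zeros_like(unary0) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        beta[i] = torch.logsumexp(log_phi[i] + beta[i + 1].unsqueeze(0), dim=1)
    log_z = torch.logsumexp(alpha[-1], dim=0)
    # Eq. (4): P((y_i, y_{i+1}) | x) proportional to alpha(y_i) * phi * beta(y_{i+1})
    return torch.stack([(alpha[i].unsqueeze(1) + log_phi[i]
                         + beta[i + 1].unsqueeze(0) - log_z).exp()
                        for i in range(m)])

def structural_kd_loss(teacher_marginals, student_scores, student_log_z):
    # Eq. (3): -sum_u P_t(u | x) * Score_s(u, x) + log Z_s(x)
    return -(teacher_marginals * student_scores).sum() + student_log_z

In a full training loop, this loss would be interpolated with the target loss via the coefficient lambda described above.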
"Comparing with the Posterior KD and Top-K KD of linear-chain CRFs proposed by Wang et al. (2020a), our approach calculates and optimizes the KD objective exactly, while their two KD approaches perform KD either heuristically or approximately.", "At the formulation level, our approach is based on the marginal distributions of two adjacent labels, while Posterior KD is based on the marginal distributions of a single label.", "Case 1b: Dependency Parsing as Sequence Labeling. In this case, we use the biaffine parser proposed by Dozat et al. (2017) as the teacher and the sequence labeling approach proposed by Strzyz et al. (2019) as the student for the dependency parsing task.", "The biaffine parser is one of the state-of-the-art models, while the sequence labeling parser provides a good speed-accuracy tradeoff.", "There is a big gap in accuracy between the two models, and therefore KD can be used to improve the accuracy of the sequence labeling parser.", "Here we follow the head-selection formulation of dependency parsing, without the tree constraint.", "The dependency parse tree $y$ is represented by $\langle y_1, \ldots, y_n \rangle$, where $n$ is the sentence length and $y_i = (h_i, l_i)$ denotes the dependency head of the $i$-th token of the input sentence, with $h_i$ being the index of the head token and $l_i$ being the dependency label.", "The biaffine parser predicts the dependency head of each token independently.", "It separately models the probability distribution of the head index, $P_t(h_i \mid x)$, and the probability distribution of the label, $P_t(l_i \mid x)$.", "The sequence labeling parser is a MaxEnt model that also predicts the head of each token independently.", "It computes $\text{Score}((h_i, l_i), x)$ for each token and applies a softmax function to produce the distribution $P_s((h_i, l_i) \mid x)$.", "Therefore, these two models share the same factorization, in which each substructure is a dependency arc specified by $y_i$.", "$U_s(x)$ thus contains all possible dependency arcs among the tokens of the input sentence $x$.", "The substructure marginal predicted by the teacher can be easily derived as $P_t((h_i, l_i) \mid x) = P_t(h_i \mid x) \, P_t(l_i \mid x)$ (5).", "Note that in this case, the sequence labeling parser uses a MaxEnt decoder, which is locally normalized for each substructure.", "Therefore, the structural KD objective in Eq. 3 can be reduced to a simple form that does not require calculating the student partition function $Z_s(x)$.", "Case 2a: Linear-Chain CRF → MaxEnt. In this case, we use a linear-chain CRF model as the teacher and a MaxEnt model as the student.", "Previous work (Yang et al., 2018; Wang et al., 2020a) shows that a linear-chain CRF decoder often leads to better performance than a MaxEnt decoder on many sequence labeling tasks.", "Still, the simplicity and efficiency of the MaxEnt model are desirable.", "Therefore, it makes sense to perform KD from a linear-chain CRF to a MaxEnt model.", "As mentioned in Case 1a, the substructures of a linear-chain CRF model are pairs of consecutive labels $\{(y_{i-1}, y_i)\}$.", "In contrast, a MaxEnt model predicts the label probability distribution $P_s(y_i \mid x)$ of each token independently, and hence the substructure space $U_s(x)$ consists of the individual labels $\{y_i\}$.", "To calculate the substructure marginal of the teacher, $P_t(y_i \mid x)$, we can again utilize the forward-backward algorithm: $P_t(y_i \mid x) \propto \alpha(y_i) \, \beta(y_i)$ (7), where $\alpha(y_i)$ and $\beta(y_i)$ are forward and backward scores.",
"Case 2b: Second-Order Dependency Parsing → Dependency Parsing as Sequence Labeling. The biaffine parser is a first-order dependency parser, which scores each dependency arc in a parse tree independently.", "A second-order dependency parser additionally scores pairs of dependency arcs that share a token.", "The substructures in second-order parsing are therefore all the dependency arc pairs with a shared token.", "It has been found that second-order extensions of the biaffine parser often achieve higher parsing accuracy (Wang et al., 2019; Zhang et al., 2020; Wang et al., 2020d; Wang and Tu, 2020).", "Therefore, we may take a second-order dependency parser as the teacher to improve a sequence labeling parser.", "Here we consider the second-order dependency parser of Wang and Tu (2020).", "It employs mean-field variational inference to estimate the probabilities of arc existence, $P_t(h_i \mid x)$, and uses a first-order biaffine model to estimate the probabilities of arc labels, $P_t(l_i \mid x)$.", "Therefore, the substructure marginal can be calculated in the same way as in Eq. 5.", "Case 3: MaxEnt → Linear-Chain CRF. Here we consider KD in the opposite direction of Case 2a.", "An example application is zero-shot cross-lingual NER.", "Previous work (Pires et al., 2019; Wu and Dredze, 2019) has shown that multilingual BERT (M-BERT) has strong zero-shot cross-lingual transferability on NER tasks.", "Many such models employ a MaxEnt decoder.", "In scenarios requiring fast speed and low computation cost, however, we may want to distill knowledge from such models into a model with much cheaper static monolingual embeddings, while compensating for the performance loss with a linear-chain CRF decoder.", "As described in Case 1a, the substructures of a linear-chain CRF model are pairs of consecutive labels $\{(y_{i-1}, y_i)\}$.", "Because of the label independence and local normalization in the MaxEnt model, the substructure marginal of the MaxEnt teacher is calculated by $P_t((y_{i-1}, y_i) \mid x) = P_t(y_{i-1} \mid x) \, P_t(y_i \mid x)$ (8)."
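Eq. (8) is a one-liner on top of the earlier sketch: given the MaxEnt teacher's per-token distributions, the pairwise targets for the CRF student are outer products. Illustrative code, with the same assumed tensor layout as before:

import torch

def maxent_pairwise_targets(token_probs):
    # token_probs: (n, L) label distributions P_t(y_i | x) from the MaxEnt teacher
    # Eq. (8): P_t((y_{i-1}, y_i) | x) = P_t(y_{i-1} | x) * P_t(y_i | x)
    return token_probs[:-1].unsqueeze(2) * token_probs[1:].unsqueeze(1)  # (n-1, L, L)

These targets can then be plugged into the structural_kd_loss sketch above, together with the CRF student's pairwise substructure scores and its partition function.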
"3.4 Factorization Forms of the Teacher and the Student are Incompatible. Case 4: NER as Parsing → MaxEnt. Very recently, Yu et al. (2020) proposed to solve the NER task as graph-based dependency parsing and achieved state-of-the-art performance.", "They represent each named entity with a dependency arc from the first token to the last token of the named entity, and represent the entity type with the arc label.", "However, for the flat NER task (i.e., where there is no overlap between entity spans), the time complexity of this method is higher than that of commonly used sequence labeling NER methods.", "In this case, we take a parsing-based NER model as our teacher and a MaxEnt model with the BIOES label scheme as our student.", "The two models adopt very different representations of NER output structures.", "The parsing-based teacher model represents an NER output of a sentence with a set of labeled dependency arcs and defines its score as the sum of arc scores.", "The MaxEnt model represents an NER output of a sentence with a sequence of BIOES labels and defines its score as the sum of token-wise label scores.", "Therefore, the factorization forms of these two models are incompatible.", "Computing the substructure marginal of the teacher, $P_t(y_i \mid x)$, where $y_i \in \{B_l, I_l, E_l, S_l, O \mid l \in L\}$ and $L$ is the set of entity types, is much more complicated than in the previous cases.", "Take $y_i = B_l$ for example.", "$P_t(y_i = B_l \mid x)$ represents the probability of the $i$-th word being the beginning of a multi-word entity of type $l$.", "In the parsing-based teacher model, this probability is proportional to the sum of the exponentiated scores of all the output structures that contain a dependency arc of label $l$ with the $i$-th word as its head and with a length larger than 1.", "It is intractable to compute such marginal probabilities by enumerating all the output structures, but we can tractably compute them using dynamic programming.", "See the supplementary material for a detailed description of our dynamic programming method.", "We evaluate the approaches described in Section 3 on NER (Case 1a, 2a, 3, 4) and dependency parsing (Case 1b, 2b).", "Datasets. We use the CoNLL 2002/2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) for Case 1a, 2a and 4, and the WikiAnn datasets (Pan et al., 2017) for Case 1a, 2a, 3, and 4.", "The CoNLL datasets contain corpora of four Indo-European languages.", "We use the same four languages from the WikiAnn datasets.", "For cross-lingual transfer in Case 3, we use the four Indo-European languages as the source for the teacher model and additionally select four languages from different language families as the targets for the student models (the four source languages are Dutch, English, German and Spanish; the four target languages for Case 3 are Basque, Hebrew, Persian and Tamil; we use ISO 639-1 language codes, https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes, to represent each language).", "We use the standard training/development/test split for the CoNLL datasets.", "For WikiAnn, we follow the sampling of Wang et al. (2020a), with 12,000 sentences for English and 5,000 sentences for each of the other languages.", "We split these datasets 3:1:1 into training/development/test sets.", "For Case 1b and 2b, we use Penn Treebank (PTB) 3.0 and follow the same pre-processing pipeline as in Ma et al. (2018).",
"For unlabeled data, we sample sentences belonging to the same languages as the labeled data from the WikiAnn datasets for Case 1a, 2a and 4, and we sample sentences from the target languages of the WikiAnn datasets for Case 3.", "We use the BLLIP corpus (Brown Laboratory for Linguistic Information Processing 1987-89 WSJ Corpus Release 1) as the unlabeled data for Case 1b and 2b.", "Models. For the student models in all the cases, we use fastText (Bojanowski et al., 2017) word embeddings and character embeddings as the word representation.", "For Case 1a, 2a and 4, we concatenate multilingual BERT, Flair (Akbik et al., 2018), fastText embeddings and character embeddings (Santos and Zadrozny, 2014) as the word representations for stronger monolingual teacher models (Wang et al., 2020c).", "For Case 3, we use M-BERT embeddings for the teacher.", "Also for Case 3, we fine-tune the teacher model on the training sets of the four Indo-European languages from the WikiAnn dataset and train the student models on the four additional languages.", "For the teacher models in Case 1b and 2b, we simply use the same embeddings as the student, because there is already a huge performance gap between the teacher and the student in these settings and hence we do not need strong embeddings for the teacher to demonstrate the utility of KD.", "Baselines. We compare our Structural KD (Struct. KD) with training without KD (w/o KD) as well as existing KD approaches.", "In Case 1a, the Pos. KD baseline is the Posterior KD approach for linear-chain CRFs proposed by Wang et al. (2020a).", "They also propose Top-K KD but have shown that it is inferior to Pos. KD.", "Table 1: Averaged F1 scores for NER and labeled attachment scores (LAS) for dependency parsing on labeled datasets (CoN: CoNLL datasets; Wiki: WikiAnn); the columns are Case 1a (CoN, Wiki), Case 1b (PTB), Case 2a (CoN, Wiki), Case 2b (PTB) and Case 4 (CoN, Wiki); Teacher: 89.15, 88.52, 95.96, 89.15, 88.52, 96.04, 88.57, 88.38; w/o KD: 84.70, 83.31, 89.85, 83.87, 80.86, 89.85, 83.87, 80.86; Pos. KD: 85.27, 83.73 (Case 1a only); Struct. KD: 85.35, 84.12, 91.83, 84.50, 82.23, 91.78, 84.28, 81.45.", "For the experiments using unlabeled data in all the cases, in addition to the labeled data, we use the teacher's predictions on the unlabeled data as pseudo labeled data to train the student models.", "This can be seen as the Top-1 KD method (we do not predict pseudo labels for the labeled data, because we find that the teacher models' predictions on the labeled training data have approximately 100% accuracy in most of the cases).", "In Case 2a and 3, where we perform KD between CRF and MaxEnt models, we run a reference baseline that replaces the CRF teacher or student model with a MaxEnt model and performs token-level KD (Token KD) between MaxEnt models, which optimizes the cross-entropy between the teacher and student label distributions at each position.", "Training. For the MaxEnt and linear-chain CRF models, we use the same hyper-parameters as in Akbik et al. (2018).", "For dependency parsing, we use the same hyper-parameters as in Wang and Tu (2020) for the teacher models and Strzyz et al. (2019) for the student models.",
"For M-BERT fine-tuning in Case 3, we mix the training data of the four source datasets and train the teacher model with the AdamW optimizer (Loshchilov and Hutter, 2018) with a learning rate of $5 \times 10^{-5}$ for 10 epochs.", "We tune the KD temperature over $\{1, 2, 3, 4, 5\}$ and the loss interpolation annealing rate over $\{0.5, 1.0, 1.5\}$.", "For all experiments, we train the models for 5 runs with a fixed random seed for each run.", "Table 1 shows the experimental results with labeled data only, and Table 2 shows the experimental results with 3,000 additional unlabeled sentences.", "The results show that our structural KD approaches outperform the baselines in all the cases.", "Table 3 compares Struct. KD with Token KD, the reference baseline based on MaxEnt models.", "For Case 2a, which involves a MaxEnt student, Struct. KD with a CRF teacher achieves better results than Token KD with a MaxEnt teacher.", "For Case 3, which involves a MaxEnt teacher, Struct. KD with a CRF student achieves better results than Token KD with a MaxEnt student.", "These results are to be expected, because Struct. KD makes it possible to apply exact knowledge distillation with a more capable teacher or student.", "In all the experiments, we run the Almost Stochastic Dominance test proposed by Dror et al. (2019) with a significance level of 0.05 and find that the advantages of our structural KD approaches are significant.", "Please refer to the Appendix for more detailed results.", "There has been a recent increase of interest in training multilingual NER models (Tsai et al., 2019; Mukherjee and Hassan Awadallah, 2020) because of the strong generalizability of M-BERT across multiple languages.", "Existing work has explored knowledge distillation approaches to train fast and effective multilingual NER models with the help of monolingual teachers (Wang et al., 2020a).", "To show the effectiveness of structural KD in the multilingual NER setting, we compare our approaches with those reported by Wang et al. (2020a).", "Specifically, the monolingual teachers are always CRF models, and the multilingual student is either a CRF model (Case 1a) or a MaxEnt model (Case 2a).", "Wang et al. (2020a) report results of the Top-WK KD (a weighted version of Top-K KD) and Pos. KD approaches for Case 1a, and of the reference baseline Token KD (with a MaxEnt teacher) for Case 2a.", "We follow their experimental settings when running our approach.", "The experimental results in Table 4 show the effectiveness of Struct. KD in both cases.", "In Case 1a, our approach is on average stronger than both Top-WK KD and Pos. KD, as well as the mixture of the two approaches.", "In Case 2a, Struct. KD not only outperforms Token KD, but also makes the MaxEnt student competitive with the CRF student without KD (87.32 vs. 87.36).", "We compare our approaches with the baselines under different amounts of unlabeled data for Case 1a, 1b and 3, the cases that apply in-domain unlabeled data for NER and dependency parsing, and cross-lingual unlabeled data for NER.", "We experiment with more unlabeled data for Case 1b than for the other two cases because the labeled training data of PTB is more than 10 times larger than the labeled NER training data in Case 1a and 3.", "Results are shown in Figure 1.",
"The experimental results show that our approaches consistently outperform the baselines, though the performance gaps between them become smaller as the amount of unlabeled data increases.", "Comparing the performance of the students with that of the teachers, we can see that in Case 1a and 1b, the gap between the teacher and the student remains large even with the largest amount of unlabeled data.", "This is unsurprising considering the difference in model capacity between the teacher and the student.", "In Case 3, however, we find that when using 30,000 unlabeled sentences, the CRF student models can even outperform the MaxEnt teacher model, which shows the effectiveness of CRF models on NER.", "A frequently used KD technique is dividing the logits of the probability distributions of both the teacher and the student by a temperature in the KD objective (Hinton et al., 2015).", "Using a higher temperature produces softer probability distributions and often results in higher KD accuracy.", "In structural KD, there are two ways to apply the temperature to the teacher model: either globally, to the logit of $P_t(y \mid x)$ (i.e., $\text{Score}_t(y, x)$) of the full structure $y$, or locally, to the logit of $P_t(u \mid x)$ of each student substructure $u$.", "We empirically compare these two approaches in Case 1a with the same setting as in Section 4.1.", "Table 5 shows that the local approach results in better accuracy for all the languages.", "Therefore, we use the local approach by default in all the experiments."
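The two ways of applying the temperature can be sketched as follows, building on the pairwise-marginal layout used earlier; the value of T and the per-position normalization are illustrative choices of this sketch, not prescriptions from the paper.

import torch

def local_temperature(pair_marginals, T=2.0):
    # soften each substructure marginal P_t(u | x) by temperature T:
    # re-normalize exp(log P / T) over the label-pair space at every position
    logits = torch.log(pair_marginals.clamp_min(1e-12)) / T
    n = logits.shape[0]
    flat = torch.softmax(logits.reshape(n, -1), dim=-1)
    return flat.view_as(pair_marginals)

def global_temperature(unary0, log_phi, T=2.0):
    # the global alternative divides the full-structure logit Score_t(y, x) by T,
    # i.e., every factor is scaled before running forward-backward
    return unary0 / T, log_phi / T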
"In Case 2a and Case 4, we use the same MaxEnt student model but different types of teacher models.", "Our structural KD approaches in both cases compute the marginal distribution $P_t(y_i \mid x)$ of the teacher at each position $i$ following the substructures of the MaxEnt student, which is then used to train the student substructure scores.", "We can evaluate the quality of the marginal distributions by taking their modes as label predictions and evaluating their accuracy.", "In Table 6, we compare the accuracy of the CRF teacher and its marginal distributions from Case 2a, the NER-as-parsing teacher and its marginal distributions from Case 4, and the MaxEnt teacher, which is the KD baseline in Case 2a.", "First, we observe that for both the CRF and the NER-as-parsing teachers, predicting labels from the marginal distributions leads to lower accuracy.", "This is to be expected, because such predictions do not take into account the correlations between adjacent labels.", "While predictions from the marginal distributions of the CRF teacher still outperform MaxEnt, those of the NER-as-parsing teacher clearly underperform MaxEnt.", "This provides an explanation as to why Struct. KD in Case 4 has equal or even lower accuracy than the Token KD baseline in Case 2a in Table 3.", "6 Related Work. 6.1 Structured Prediction. In this paper, we use sequence labeling and dependency parsing as two example structured prediction tasks.", "In sequence labeling, much work has applied the linear-chain CRF and achieved state-of-the-art performance on various tasks (Ma and Hovy, 2016; Akbik et al., 2018; Liu et al., 2019b; Yu et al., 2020; Wei et al., 2020; Wang et al., 2021a,b).", "Meanwhile, much other work has used the MaxEnt layer instead of the CRF for sequence labeling (Devlin et al., 2019; Conneau et al., 2020; Wang et al., 2020b), because MaxEnt makes it easier to fine-tune pretrained contextual embeddings during training.", "Another advantage of MaxEnt in comparison with the CRF is its speed.", "Yang et al. (2018) showed that models equipped with the CRF are about two times slower than models with the MaxEnt layer in sequence labeling.", "In dependency parsing, recent work shows that second-order CRF parsers achieve significantly higher accuracy than first-order parsers (Wang et al., 2019; Zhang et al., 2020).", "However, the inference speed of second-order parsers is much slower.", "Zhang et al. (2020) showed that second-order parsing is four times slower than the simple head-selection first-order approach (Dozat and Manning, 2017).", "Such a speed-accuracy tradeoff, as seen in sequence labeling and dependency parsing, also occurs in many other structured prediction tasks.", "This makes KD an interesting and very useful technique that can be used to circumvent this tradeoff to some extent.", "KD has been applied to many structured prediction tasks in the fields of NLP, speech recognition and computer vision, with applications such as neural machine translation (Kim and Rush, 2016; Tan et al., 2019), sequence labeling (Tu and Gimpel, 2019; Wang et al., 2020a), connectionist temporal classification (Huang et al., 2018), and image semantic segmentation (Liu et al., 2019a).", "In KD for structured prediction tasks, how to handle the exponential number of structured outputs is a main challenge.", "To address this difficult problem, recent work resorts to approximations of the KD objective.", "Kim and Rush (2016) proposed sequence-level distillation through predicting the K-best sequences of the teacher in neural machine translation.", "Kuncoro et al. (2016) proposed to use multiple greedy parsers as teachers and generate the probability distribution at each position through voting.", "Very recently, Wang et al. (2020a) proposed structure-level knowledge distillation for linear-chain CRF models in multilingual sequence labeling.", "During the distillation process, the teacher models predict the Top-K label sequences as global structure information, or the posterior label distribution at each position as local structural information, which is then used to train the student.", "Besides approximate approaches, an alternative is to use models that make local decisions and perform KD on these local decisions.", "Anderson and Gómez-Rodríguez (2020) formulated dependency parsing as a head-selection problem and distilled the distribution of the head node at each position.", "Tsai et al. (2019) proposed MiniBERT by distilling the output distributions of the MaxEnt classifier of M-BERT models.", "Besides the output distribution, Mukherjee and Hassan Awadallah (2020) further distilled the hidden representations of teachers.", "In this paper, we propose structural knowledge distillation, which transfers knowledge between structured prediction models.", "We derive a factorized form of the structural KD objective and make it tractable to compute and optimize for many typical choices of teacher and student models.", "We apply our approach to four KD scenarios with six cases for sequence labeling and dependency parsing.", "Empirical results show that our approach outperforms baselines without KD as well as previous KD approaches.", "With sufficient unlabeled data, our approach can even boost the students to outperform the teachers in zero-shot cross-lingual transfer.", "This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through the Alibaba Innovative Research Program." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "method", "objective", "objective", "objective", "objective", "result", "result", "result", "result", "other", "method", "method", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "abstain", "method", "other", "other", "method", "other", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "result", "other" ]
[ "Generating some appealing questions in open-domain conversations is an effective way to improve human-machine interactions and lead the topic to a broader or deeper direction.", "To avoid dull or deviated questions, some researchers tried to utilize answer, the future information, to guide question generation.", "However, they separate a post-question-answer (PQA) triple into two parts: post-question (PQ) and question-answer (QA) pairs, which may hurt the overall coherence.", "Besides, the QA relationship is modeled as a one-to-one mapping that is not reasonable in open-domain conversations.", "To tackle these problems, we propose a generative triple-wise model with hierarchical variations for open-domain conversational question generation (CQG).", "Latent variables in three hierarchies are used to represent the shared background of a triple and one-to-many semantic mappings in both PQ and QA pairs.", "Experimental results on a large-scale CQG dataset show that our method significantly improves the quality of questions in terms of fluency, coherence and diversity over competitive baselines.", "Questioning in open-domain dialogue systems is indispensable since a good system should have the ability to well interact with users by not only responding but also asking (Li et al., 2017).", "Besides, raising questions is a proactive way to guide users to go deeper and further into conversations (Yu et al., 2016).", "Therefore, the ultimate goal of open-domain conversational question generation (CQG) is to enhance the interactiveness and maintain the continuity of a conversation (Wang et al., 2018).", "CQG differs fundamentally from traditional question generation (TQG) (Zhou et al., 2019; Kim et al., 2019; Li et al., 2019) that generates a question given a sentence/paragraph/passage and a spec-ified answer within it.", "While in CQG, an answer always follows the to-be-generated question, and is unavailable during inference (Wang et al., 2019).", "At the same time, each utterance in open-domain scenario is casual and can be followed by several appropriate sentences, i.e., one-to-many mapping (Gao et al., 2019; Chen et al., 2019).", "At first, the input information of CQG was mainly a given post (Wang et al., 2018; Hu et al., 2018), and the generated questions were usually dull or deviated (Q3 and Q4 in Table 1).", "Based on the observation that an answer has strong relevance to its question and post, Wang et al. 
"Based on the observation that an answer has strong relevance to its question and post, Wang et al. (2019) tried to integrate the answer into the question generation process.", "They applied a reinforcement learning framework that first generates a question given the post, and then uses a pre-trained matching model to estimate the relevance score (reward) between the answer and the generated question.", "This method separates a post-question-answer (PQA) triple into post-question (PQ) and question-answer (QA) pairs, rather than considering the triple as a whole and modeling the overall coherence.", "Furthermore, the training process of the matching model only utilizes the one-to-one relation of each QA pair and neglects the one-to-many mapping feature.", "An open-domain PQA often takes place under a background that can be inferred from all the utterances in the triple and that helps enhance the overall coherence.", "When it comes to the semantic relationship in each triple, the content of a specific question is under the control of its post and answer (Lee et al., 2020).", "Meanwhile, either a post or an answer can correspond to several meaningful questions.", "As shown in Table 1, the triple is about a person's eating activity (the background of the entire conversation).", "There are one-to-many mappings in both PQ and QA pairs that construct different meaningful combinations, such as P-Q1.1-A1, P-Q1.2-A1, P-Q2.1-A2 and P-Q2.2-A2.", "An answer connects tightly to both its post and question, and in turn helps decide the expression of a question.", "On these grounds, we propose a generative triple-wise model (GTM) for CQG.", "Specifically, we first introduce a triple-level variable to capture the shared background among PQA.", "Then, two separate variables conditioned on the triple-level variable are used to represent the latent spaces for the question and the answer, and the question variable is also made dependent on the answer variable.", "During training, the latent variables are constrained to reconstruct both the original question and answer according to the hierarchical structure we define, making sure the triple-wise relationship flows through the latent variables without any loss.", "For the question generation process, we sample the triple-level and answer variables given a post, then obtain the question variable conditioned on them, and finally generate a question based on the post, triple-level and question variables.", "Experimental results on a large-scale CQG dataset show that GTM can generate more fluent, coherent, and intriguing questions for open-domain conversations.", "The main contribution is threefold: to generate coherent and informative questions in the CQG task, we propose a generative triple-wise model that models the semantic relationship of a triple at three levels: PQA, PQ, and QA.", "Our variational hierarchical structure can not only utilize the future information (the answer), but also capture the one-to-many mappings in PQ and QA, which matches the open-domain scenario well.", "Experimental results on a large-scale CQG corpus show that our method significantly outperforms state-of-the-art baselines in both automatic and human evaluations.", "Given a post as the input, the goal of CQG is to generate the corresponding question.", "Following the work of Zhao et al. (2017) and Wang et al. (2019), we leverage the question type qt to control the generated question and take advantage of the answer information a to improve coherence.",
"In the training set, each conversation is represented as $\{p, q, qt, a\}$, consisting of a post $p = \{p_i\}_{i=1}^{|p|}$, a question $q = \{q_i\}_{i=1}^{|q|}$ with its question type $qt$, and an answer $a = \{a_i\}_{i=1}^{|a|}$.", "The graphical model of GTM for the training process is shown in Figure 1; $\theta$, $\varphi$, and $\psi$ denote the parameters of the generation, prior, and recognition networks, respectively.", "We integrate answer generation to assist question generation with hierarchical latent variables.", "(Figure 2 shows the overall architecture: three Bi-GRU encoders for the post, question and answer; prior and recognition networks for the latent variables; an MLP for question type prediction; and two GRU decoders for the answer and the question.)", "Firstly, a triple-level variable $z_t$ is imported to capture the shared background and is inferred from the PQA utterances.", "Then the answer latent variable $z_a$ and the question latent variable $z_q$ are sampled from Gaussian distributions conditioned on both the post and $z_t$.", "To ensure that the question is controlled by the answer, $z_q$ is also made dependent on $z_a$.", "We use a bidirectional GRU (Cho et al., 2014) as the encoder to capture the semantic representation of each utterance.", "Take the post p as an example.", "Each word in p is first encoded into its embedding vector.", "The GRU then computes forward hidden states $\{\overrightarrow{h}_i\}_{i=1}^{|p|}$ and backward hidden states $\{\overleftarrow{h}_i\}_{i=1}^{|p|}$: $\overrightarrow{h}_i = \text{GRU}(e_{p_i}, \overrightarrow{h}_{i-1})$, $\overleftarrow{h}_i = \text{GRU}(e_{p_i}, \overleftarrow{h}_{i+1})$, where $e_{p_i}$ is the embedding vector of word $p_i$.", "We finally get the post representation by concatenating the last hidden states of the two directions: $h^{enc}_p = [\overrightarrow{h}_{|p|}; \overleftarrow{h}_1]$.", "Similarly, we can obtain the representations of the question q and the answer a, denoted as $h^{enc}_q$ and $h^{enc}_a$, respectively.", "The question type qt is represented by a real-valued, low-dimensional vector $v_{qt}$, which is updated during training and is regarded as a linguistic feature that benefits the training of the latent variables (Zhao et al., 2017).", "We use the actual question type qt during training to provide the information of interrogative words, which is the most important feature for distinguishing question types."
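A minimal PyTorch sketch of the utterance encoder described above; the sizes follow the hyper-parameters reported later, while the class name is ours.

import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    # bidirectional GRU over word embeddings; the utterance vector is the
    # concatenation of the last forward and last backward hidden states
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, last = self.gru(self.emb(tokens))    # last: (2, batch, hid_dim)
        return torch.cat([last[0], last[1]], dim=-1)  # (batch, 2 * hid_dim)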
"The triple-level variable $z_t$ is inferred from the PQA utterances and is in turn responsible for generating the whole triple.", "Inspired by Park et al. (2018), we use a standard Gaussian distribution as the prior distribution of $z_t$: $p(z_t) = \mathcal{N}(z \mid 0, I)$, where $I$ is the identity matrix.", "For the inference of $z_t$ on the training set, we consider the three utterance representations $h^{enc}_p$, $h^{enc}_q$ and $h^{enc}_a$ as a sequence, and use a bidirectional GRU that takes one representation as the input of each time step.", "The triple representation $h_t$ is obtained by concatenating the last hidden states of both directions.", "Then, $z_t$ is sampled from $q_\psi(z_t \mid p, q, a) = \mathcal{N}(z \mid \mu_t, \sigma_t I)$, with $\mu_t = \text{MLP}_t(h_t)$ and $\sigma_t = \text{softplus}(\text{MLP}_t(h_t))$, where $\text{MLP}(\cdot)$ is a feed-forward network and the softplus function, a smooth approximation to ReLU, is used to ensure positiveness (Park et al., 2018; Serban et al., 2017).", "After obtaining $z_t$, we use a GRU $f$ to get a vector $h^{ctx}_p$ that connects p with q and a; $h^{ctx}_p$ is then transformed into $h^{ctx}_q$ and $h^{ctx}_a$, which are used in the prior and recognition networks for $z_q$ and $z_a$: $h^{ctx}_p = f(z_t, h^{enc}_p)$, $h^{ctx}_q = \text{MLP}_{tr1}(h^{ctx}_p)$, $h^{ctx}_a = \text{MLP}_{tr2}(h^{ctx}_p)$.", "To model the one-to-many mappings in PQ and QA pairs under the control of $z_t$, we design two utterance-level variables, $z_q$ and $z_a$, to represent the latent spaces of the question and the answer.", "We define the prior and posterior distributions of $z_a$ as $p(z_a \mid p, z_t) = \mathcal{N}(z \mid \mu_a, \sigma_a I)$ and $q(z_a \mid p, z_t, a) = \mathcal{N}(z \mid \mu'_a, \sigma'_a I)$, where the parameters of the two Gaussian distributions are calculated as $\mu_a = \text{MLP}_a([h^{ctx}_a; z_t])$, $\sigma_a = \text{softplus}(\text{MLP}_a([h^{ctx}_a; z_t]))$, $\mu'_a = \text{MLP}_a([h^{ctx}_a; z_t; h^{enc}_a])$, and $\sigma'_a = \text{softplus}(\text{MLP}_a([h^{ctx}_a; z_t; h^{enc}_a]))$.", "To make sure that the content of the question is also decided by the answer, and to improve their relatedness, we import $z_a$ into the $z_q$ space.", "The prior and posterior distributions of $z_q$ are computed as $p(z_q \mid p, z_t, z_a) = \mathcal{N}(z \mid \mu_q, \sigma_q I)$ and $q(z_q \mid p, z_t, q, qt, z_a) = \mathcal{N}(z \mid \mu'_q, \sigma'_q I)$, where $\mu_q = \text{MLP}_q([h^{ctx}_q; z_t; z_a])$, $\sigma_q = \text{softplus}(\text{MLP}_q([h^{ctx}_q; z_t; z_a]))$, $\mu'_q = \text{MLP}_q([h^{ctx}_q; z_t; h^{enc}_q; v_{qt}; z_a])$, and $\sigma'_q = \text{softplus}(\text{MLP}_q([h^{ctx}_q; z_t; h^{enc}_q; v_{qt}; z_a]))$.", "Following the work of Zhao et al. (2017) and Wang et al. (2019), a question type prediction network $\text{MLP}_{qt}$ is introduced to approximate $p(qt \mid z_q, z_t, p)$ during training and to produce the question type $qt'$ during inference."
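All the prior and recognition networks above share one pattern: an MLP producing the mean, a softplus-activated MLP producing the standard deviation, and reparameterized sampling. A compact sketch, with layer shapes assumed for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianNet(nn.Module):
    # maps a conditioning vector to mu and sigma of a diagonal Gaussian
    # and draws a reparameterized sample
    def __init__(self, in_dim, z_dim=100, hid_dim=300):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                nn.Linear(hid_dim, z_dim))
        self.sigma = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                   nn.Linear(hid_dim, z_dim))

    def forward(self, cond):
        mu = self.mu(cond)
        sigma = F.softplus(self.sigma(cond))   # softplus keeps sigma positive
        z = mu + sigma * torch.randn_like(mu)  # reparameterization trick
        return z, mu, sigma

For example, the prior network of z_a would receive cond = [h_ctx_a; z_t], while its recognition network would additionally concatenate h_enc_a.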
"For each time step $j$, it calculates the context vector $c_j$ following Bahdanau et al. (2015) and computes the probability distribution $p(q \mid z_q, z_t, p, qt)$ over all words in the vocabulary: $s_j = \mathrm{GRU}(e_{j-1}, s_{j-1}, c_j)$, $\tilde{s}_j = \mathrm{MLP}([e_{j-1}; c_j; s_j])$, and $p(q_j \mid q_{<j}, z_q, z_t, p, qt) = \mathrm{softmax}(W_o \tilde{s}_j)$, where $e_{j-1}$ is the embedding vector of the $(j-1)$-th question word.", "Similarly, the answer decoder receives the concatenation of $z_a$, $z_t$, and $h^{ctx}_a$ as its initial state to approximate the probability $p(a \mid z_a, z_t, p)$.", "Importantly, our model GTM is trained to maximize the log-likelihood of the joint probability $p(p, q, a, qt)$.", "However, this objective is not directly tractable.", "Inspired by Serban et al. (2017) and Park et al. (2018), we convert it into the following evidence-lower-bound objective, which is maximized during training: $\mathcal{L}_{GTM} = -\mathrm{KL}(q(z_t \mid p, q, a) \,\|\, p(z_t)) - \mathrm{KL}(q(z_a \mid p, z_t, a) \,\|\, p(z_a \mid p, z_t)) - \mathrm{KL}(q(z_q \mid p, z_t, q, qt, z_a) \,\|\, p(z_q \mid p, z_t, z_a)) + \mathbb{E}_{z_a, z_t \sim q}[\log p(a \mid z_a, z_t, p)] + \mathbb{E}_{z_q, z_t \sim q}[\log p(q \mid z_q, z_t, p, qt)] + \mathbb{E}_{z_q, z_t \sim q}[\log p(qt \mid z_q, z_t, p)]$.", "The objective consists of two parts: the variational lower bound (the first five terms) and the question type prediction term (the last term).", "The variational lower bound comprises the reconstruction terms and the KL divergence terms for the three hierarchical latent variables.", "The gradients with respect to the prior and recognition networks can be estimated with the reparameterization trick (Kingma and Welling, 2014).", "During inference, the latent variables obtained from the prior networks and the predicted question type $qt'$ are fed to the question decoder, corresponding to the red dashed arrows in Figure 2.", "The inference process is as follows: (1) sample the triple-level latent variable $z_t \sim q(z_t \mid p)$ (following Park et al. (2018), inferring $z_t$ from the post with the posterior distribution works better than sampling it from the prior, i.e., a standard Gaussian); (2) sample the answer latent variable $z_a \sim p(z_a \mid p, z_t)$; (3) sample the question latent variable $z_q \sim p(z_q \mid p, z_t, z_a)$; (4) predict the question type $qt \sim p(qt \mid z_q, z_t, p)$; (5) generate the question $q \sim p(q \mid z_q, z_t, p, qt)$.", "In this section, we conduct experiments to evaluate our proposed method.", "We first introduce the empirical settings, including the dataset, hyper-parameters, baselines, and evaluation measures.", "Then we present our results under both automatic and human evaluation.", "Finally, we show some cases generated by the different models and further analyze our method.", "We apply our model to a large-scale CQG corpus extracted from Reddit by Wang et al. (2019).", "It contains over 1.2 million PQA triples, divided into training/validation/test sets of 1,164,345/30,000/30,000 triples.", "The dataset has been tokenized into words using the NLTK tokenizer (Bird et al., 2009).", "The average number of words in the post/question/answer is 18.84/19.03/19.30, respectively.",
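For concreteness, the objective $\mathcal{L}_{GTM}$ above can be assembled from the closed-form KL divergence between diagonal Gaussians plus the three reconstruction terms; a sketch (the tensor arguments stand in for outputs of the networks above and are hypothetical):

```python
import torch

def gaussian_kl(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, sig_q^2 I) || N(mu_p, sig_p^2 I) ), summed over latent dims."""
    return (torch.log(sig_p / sig_q)
            + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2)
            - 0.5).sum(dim=-1)

def gtm_loss(kl_t, kl_a, kl_q, nll_a, nll_q, nll_qt, kl_weight=1.0):
    """Negative of L_GTM (to be minimized). nll_* are the reconstruction
    negative log-likelihoods of the two decoders and the type predictor;
    kl_weight is the KL annealing multiplier from the training details below."""
    return (kl_weight * (kl_t + kl_a + kl_q) + nll_a + nll_q + nll_qt).mean()
```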
"Following Fan et al. (2018) and Wang et al. (2019), we categorize the questions in the training and validation sets into 9 types based on their interrogative words: what, when, where, who, why, how, can (could), do (did, does), and is (am, are, was, were).", "3.2 Hyper-parameter Settings", "We keep the top 40,000 most frequent words as the vocabulary, and the sentence padding length is set to 30.", "The dimensions of the GRU layers, word embeddings, and latent variables are 300, 300, and 100, respectively.", "The prior networks and MLPs have one hidden layer of size 300 with tanh non-linearity, while the recognition networks for both the triple-level and utterance-level variables have 2 hidden layers.", "We apply a dropout ratio of 0.2 during training.", "The mini-batch size is 64.", "For optimization, we use Adam (Kingma and Ba, 2015) with a learning rate of 1e-4.", "To alleviate the degeneration problem of the variational framework (Park et al., 2018), we apply KL annealing, word drop (Bowman et al., 2016), and a bag-of-words (BOW) loss (Zhao et al., 2017); the total BOW loss is computed as the sum of the BOW losses between each latent variable and $q$/$a$ (see Park et al. (2018) for details).", "The KL multiplier gradually increases from 0 to 1, and the word drop probability is 0.25.", "We implement our model in PyTorch and train it on Titan Xp GPUs.", "We compare our method with four groups of representative models: (1) S2S-Attn: a simple Seq2Seq model with an attention mechanism (Shang et al., 2015).", "(2) CVAE & kgCVAE: the CVAE model integrates an extra BOW loss to generate diverse questions, and kgCVAE is a knowledge-guided CVAE that utilizes linguistic cues (question types in our experiments) to learn meaningful latent variables (Zhao et al., 2017).", "(3) STD & HTD: STD uses a soft typed decoder that estimates a distribution over word types, while HTD uses a hard typed decoder that specifies the type of each word explicitly with Gumbel-softmax (Wang et al., 2018).", "(4) RL-CVAE: a reinforcement learning method that treats the coherence score (computed by a one-to-one matching network) of a generated question-answer pair as the reward (Wang et al., 2019).", "RL-CVAE is the first work to utilize the future information, i.e., the answer, and is also the state-of-the-art model for CQG; for the methods with open-source code we run the original code, and otherwise we re-implement them from the corresponding paper.", "Additionally, we conduct an ablation study to better analyze our method: (5) GTM-$z_t$: GTM without the triple-level latent variable, i.e., $z_t$ is not included in the prior and posterior distributions of $z_q$ and $z_a$.", "(6) GTM-a: the variant of GTM that does not take the answer into account.", "That is, the answer decoder and $z_a$ are removed from the loss function and from the prior and posterior distributions of $z_q$.", "Besides, $z_t$ here does not capture the semantics of the answer.", "(7) GTM-$z_q$/$z_a$: the GTM variant in which the distributions of $z_q$ are not conditioned on $z_a$, i.e., the fact that the content of the question is also controlled by the answer is not modelled explicitly by latent variables.", "In our model, we use an MLP to predict question types during inference, which differs from conditional training (CT) methods (Li et al., 2016b; Zhou et al., 2018; Shen and Feng, 2020) that provide the controllable feature, i.e., the question type, in advance for inference.",
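Two of the anti-degeneration tricks in the hyper-parameter settings above, KL annealing and word drop, are simple to make concrete; a sketch under our own naming (the 0-to-1 multiplier and the 0.25 drop probability follow the text, while the warm-up length is an assumed placeholder):

```python
import torch

def kl_multiplier(step, warmup_steps=10000):
    """Linear KL annealing: the KL weight rises from 0 to 1 over warmup_steps."""
    return min(1.0, step / warmup_steps)

def word_drop(token_ids, unk_id, p=0.25):
    """Randomly replace decoder-input tokens with UNK so the decoder cannot
    rely purely on teacher forcing and must use the latent variables."""
    mask = torch.rand(token_ids.shape, device=token_ids.device) < p
    return token_ids.masked_fill(mask, unk_id)
```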
"Therefore, we do not consider CT-based models as comparable baselines.", "To evaluate our results more thoroughly, we use both quantitative metrics and human judgements in our experiments.", "For automatic evaluation, we mainly choose four kinds of metrics: (1) BLEU scores: BLEU (Papineni et al., 2002) calculates the n-gram overlap of generated questions against the ground-truth questions.", "We use BLEU-1 and BLEU-2 here and normalize them to a 0-to-1 scale.", "(2) Embedding metrics: Average, Greedy and Extrema are embedding-based metrics that measure the semantic similarity between the words in the generated questions and in the ground-truth questions (Serban et al., 2017; Liu et al., 2016).", "We use word2vec embeddings trained on the Google News Corpus for this part.", "Please refer to Serban et al. (2017) for more details.", "(3) Dist-1 & Dist-2: following Li et al. (2016a), we apply Distinct to report the degree of diversity.", "Dist-1/2 is defined as the ratio of unique uni/bi-grams over all uni/bi-grams in the generated questions.", "(4) RUBER scores: the Referenced metric and Unreferenced metric Blended Evaluation Routine (Tao et al., 2018) has shown a high correlation with human annotation in open-domain conversation evaluation.", "There are two versions: RubG, based on geometric averaging, and RubA, based on arithmetic averaging.", "RUBER evaluates the semantic coherence of PQ pairs (Wang et al., 2019), while Dist-1/2 evaluates the diversity of the questions.", "Inspired by Wang et al. (2019), Shen et al. (2019), and Wang et al. (2018), we use the following three criteria for human evaluation: (1) Fluency measures whether the generated question is reasonable in logic and grammatically correct.", "(2) Coherence denotes whether the generated question is semantically consistent with the given post.", "Incoherent questions include dull cases.", "(3) Willingness measures whether a user would be willing to answer the question.", "This criterion judges how likely the generated questions are to elicit further interactions.", "We randomly sample 500 examples from the test set and generate questions with the models mentioned above.", "Then we present each post and the corresponding 10 generated questions, in random order, to three human annotators, asking them to assess whether each question satisfies the criteria defined above.", "All annotators are postgraduate students and are not involved in other parts of our experiments.", "We now report our experimental results under both automatic and human evaluation.", "The automatic results are shown in Table 2.",
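Of the automatic metrics listed above, Dist-1/Dist-2 is the most self-contained; a sketch of the standard computation, pooling n-grams over all generated questions:

```python
def distinct_n(questions, n):
    """Dist-n: unique n-grams divided by total n-grams in the generated set."""
    total, unique = 0, set()
    for q in questions:
        toks = q.split()
        ngrams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# e.g. distinct_n(["what did you eat", "what did you do"], 2) == 4 / 6
```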
"The top part lists the results of all the baseline models; GTM outperforms the other methods on all metrics (significance tests (Koehn, 2004), p-value < 0.05), which indicates that our proposed model improves the overall quality of the generated questions.", "Specifically, Dist-2 and RubA improve by 2.43% and 1.90%, respectively, over the state-of-the-art RL-CVAE model.", "First, the higher embedding metrics and BLEU scores show that questions generated by our model are similar to the ground truths in both topic and content.", "Second, taking the answer into account and using it to decide the expression of the question improves the consistency of PQ pairs, as evaluated by the RUBER scores.", "Third, the higher distinct values illustrate that modeling the one-to-many mappings in PQ and QA pairs makes the generated questions more diverse.", "The bottom part of Table 2 shows the results of our ablation study, which demonstrates that exploiting the answer information, modeling the shared background of the entire triple, and considering one-to-many mappings in both PQ and QA pairs all enhance the performance of our hierarchical variational model in terms of relevance, coherence and diversity.", "As shown in Table 3, GTM alleviates the problem of generating dull and deviated questions compared with the other models (significance tests (Koehn, 2004), p-value < 0.05).", "Both our proposed model and the state-of-the-art RL-CVAE utilize the answer information, and their results support the claim that answers assist the question generation process.", "Besides, GTM produces more relevant and intriguing questions, which indicates the effectiveness of modeling the shared background and the one-to-many mappings for the CQG task.", "The inter-annotator agreement is calculated with Fleiss' kappa (Fleiss and Cohen, 1973).", "Fleiss' kappa for Fluency, Coherence and Willingness is 0.493, 0.446 and 0.512, respectively, indicating Moderate Agreement for all three criteria.", "The automatic metrics above are designed to compare generated questions with ground-truth ones (RUBER also takes the post into consideration) but ignore the answers in the evaluation process.", "To measure the semantic coherence between generated questions and answers, we apply two methods (Wang et al., 2019): (1) Cosine Similarity: we use the pre-trained Infersent model (Conneau et al., 2017) to obtain sentence embeddings and calculate the cosine similarity between the embeddings of the generated questions and the answers; the Infersent model is trained to predict the meaning of sentences based on natural language inference, and the cosine similarity computed with it is more consistent with human judgements, performing better than pre-trained Transformer/BERT models in our experiments.", "(2) Matching Score: we use the GRU-MatchPyramid model (Wang et al., 2019), which adds the MatchPyramid network (Pang et al., 2016) on top of a bidirectional GRU, to calculate the semantic coherence.", "As shown in Table 4, the questions generated by GTM are more coherent with the answers.", "[Table 3 (human evaluation; Fluency / Coherence / Willingness): S2S-Attn 0.482 / 0.216 / 0.186; CVAE 0.462 / 0.484 / 0.428; kgCVAE 0.474 / 0.536 / 0.476; STD 0.488 / 0.356 / 0.286; HTD 0.526 / 0.504 / 0.414; RL-CVAE 0.534 / 0.578 / 0.508; GTM-$z_t$ 0.538 / 0.580 / 0.516; GTM-a 0.532 / 0.570 / 0.512; GTM-$z_q$/$z_a$ 0.542 / 0.586 / 0.520; GTM 0.548 / 0.608 / 0.526.]", "Owing to the design of the triple-level latent variable that captures the shared background, the one-to-many mappings in PQ and QA pairs, and the relationship modeling between $z_q$ and $z_a$, GTM improves the relevance within QA pairs.",
"In Table 5, we list the generated results for two posts from the test set to compare the performance of the different models.", "[Table 5 (case study): posts and answers with the questions generated by each model.]", "In the first case, both the post and the answer mention two topics, donation and song, so the question should take their relation into account.", "Besides, the answer here begins with 'because', so 'why' and 'what (reason)' questions are reasonable.", "In the second case, the post only talks about a pen, while the answer refers to ink, which means there is a topic transition the question needs to cover.", "The second case shows the effect of an answer that not only decides the expression of the question but also improves the coherence of the entire triple.", "Questions generated by GTM are more relevant to both the posts and the answers, and could attract people to answer them.", "However, the other baselines may generate dull or deviated responses; even the RL-CVAE model, which considers the answer information, only copies topic words from the answer (e.g., the question in case two) and fails to ensure PQA coherence.", "Variational models suffer from the notorious degeneration problem, where the decoders ignore the latent variables and reduce to vanilla Seq2Seq models (Zhao et al., 2017; Park et al., 2018; Wang et al., 2019).", "Generally, the KL divergence measures the amount of information encoded in a latent variable.", "In the extreme case where the KL divergence of a latent variable $z$ equals zero, the model completely ignores $z$, i.e., it degenerates.", "Figure 3 shows that the total KL divergence of the GTM model stays around 2 after 18 epochs, indicating that the degeneration problem does not occur in our model and that the latent variables play their corresponding roles.", "Research on open-domain dialogue systems has developed rapidly (Majumder et al., 2020; Zhan et al., 2021; Shen et al., 2021), and our work mainly touches two fields: open-domain conversational question generation (CQG) and context modeling in dialogue systems.", "We introduce these two fields below and point out the main differences between our method and previous ones.", "Traditional question generation (TQG) has been widely studied and can be found in reading comprehension (Zhou et al., 2019; Kim et al., 2019), sentence transformation (Vanderwende, 2008), question answering (Li et al., 2019; Nema et al., 2019), visual question generation (Fan et al., 2018) and task-oriented dialogues (Li et al., 2017).", "In such tasks, finding information via a generated question is the major goal, and the answer is usually part of the input.", "Different from TQG, CQG aims to enhance the interactiveness and persistence of conversations (Wang et al., 2018).", "Meanwhile, the answer is future information, which means it is unavailable during inference.", "Wang et al. (2018) first studied CQG; they used soft and hard typed decoders to capture the distribution of different word types in a question.", "Hu et al. (2018) added a target aspect to the input and proposed an extended Seq2Seq model to generate aspect-specific questions.",
"Wang et al. (2019) devised two methods, based on reinforcement learning and on a generative adversarial network (GAN), to further enhance the semantic coherence between posts and questions under the guidance of answers.", "Existing methods mainly focus on the historical context in multi-turn conversations, and hierarchical models occupy a vital position in this field.", "Serban et al. (2016) proposed the hierarchical recurrent encoder-decoder (HRED) model, with a context RNN to integrate historical information from utterance RNNs.", "To capture utterance-level variations, Serban et al. (2017) proposed the Variational HRED (VHRED) model, which augments HRED with CVAEs.", "After that, VHCR (Park et al., 2018) added a conversation-level latent variable on top of VHRED, while CSRR (Shen et al., 2019) used latent variables at three hierarchies to model the complex dependencies among utterances.", "To detect relevant utterances in the context, Tian et al. (2017) and Zhang et al. (2018) applied cosine similarity and attention mechanisms, respectively.", "HRAN (Xing et al., 2018) combined attention results at both the word level and the utterance level.", "Besides, future information has also been considered for context modeling.", "Shen et al. (2018) separated the context into history and future parts and assumed that each part, conditioned on a latent variable, follows a Gaussian distribution.", "Feng et al. (2020) used future utterances in the discriminator of a GAN, which is similar to Wang et al. (2019).", "The differences between our method and those discussed in Sections 4.1 and 4.2 are: (1) rather than dividing PQA triples into two parts, i.e., PQ (history and current utterances) and QA (current and future utterances) pairs, we model the coherence of the entire triple by utilizing a latent variable that captures the shared background of the triple.", "(2) Instead of treating the relationship between question and answer as a text matching task, which lacks consideration of diversity, we incorporate utterance-level latent variables to model the one-to-many mappings in both PQ and QA pairs.", "We propose GTM, a generative triple-wise model for generating appropriate questions in open-domain conversations.", "GTM models the entire background of a triple and the one-to-many mappings in PQ and QA pairs simultaneously, with latent variables at three hierarchies.", "It is trained end-to-end in a single stage, without the pre-training required by the previous state-of-the-art model that also takes the answer into consideration.", "Experimental results on a large-scale CQG dataset show that GTM generates fluent, coherent, and informative as well as intriguing questions.", "We would like to thank all the reviewers for their insightful and valuable comments and suggestions." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other" ]
[ "This work examines the rhetorical techniques that speakers employ during political campaigns.", "We introduce a new corpus of speeches from campaign events in the months leading up to the 2016 U.S. presidential election and develop new models for predicting moments of audience applause.", "In contrast to existing datasets, we tackle the challenge of working with transcripts that derive from uncorrected closed captioning, using associated audio recordings to automatically extract and align labels for instances of audience applause.", "In prediction experiments, we find that lexical features carry the most information, but that a variety of features are predictive, including prosody, long-term contextual dependencies, and theoretically motivated features designed to capture rhetorical techniques.", "Every public speech involving a large audience can be seen as a game of coordination (Asch, 1951): at each moment, each individual mem-ber of the audience must decide in a split second whether to applaud at what has just been said.", "Applause is a potentially risky action: if an individual spontaneously claps but no one joins in, they suffer some negative social cost; the game is to judge from their own private information and content of the speech whether the rest of the audience will applaud at the same time they do.", "Because of this cost, audiences respond to several interacting factors in a speaker's behavior:", "a.) the content of the message;", "b.) their delivery (so that changes in pitch, duration and gaze signal salient moments for which applause may be licensed); and", "c.) the verbal design of the messagethose rhetorical strategies that speakers use to signal that applause is welcome (Atkinson, 1984; Heritage and Greatbatch, 1986).", "In this work, we attempt to model all three of these dimensions in developing a computational model for applause.", "While past work has focused on these elements in isolation (Guerini et al., 2015; Liu et al., 2017) or for related problems such as laughter detection (Purandare and Litman, 2006; Chen and Lee, 2017; Bertero and Fung, 2016), we find that developing a holistic model encompassing all three aspects yields the most robust predictor of applause.", "We focus on political speeches, and in particular those at campaign rallies, which lend themselves well to analysis of rhetorical strategies for several reasons.", "First, the speakers at these events prioritize maintaining the crowd's attention (Strangert, 2005).", "Motivated to drum up excitement and fervor among their supporters that they hope will carry beyond the event and into the voting booth, speakers pull out their strongest rhetorical tactics.", "Second, campaign speeches usually consist of a series of self-contained messages that can be fully expressed within a few utterances (Heritage and Greatbatch, 1986), yielding a well-defined observation of a complete rhetorical strategy.", "Lastly, these speeches are delivered by a single speaker to a partisan crowd, and clapping, cheering, and other responses are invited and expected.", "We focus in particular in this work on opera-tionalizating the verbal design of the speech; in so doing, one contribution we make is operationalizing the concepts of tension and release .", "Writers and performers often communicate with their audience on a fundamental level by building up tension, and then, at the proper time, delivering a satisfying release.", "These simple but pervasive concepts structure our experience of different modes of communication 
used throughout everyday life, including music (Madsen and Fredrickson, 1993), literature (Rabkin, 1973) and film (Carroll, 1996).", "Tension in music can be built up by harmonic movement away from a tonal center; release then comes with a return to that established tonic (Hindemith, 1937).", "One form of tension in literature is realized as suspense (Barthes and Duisit, 1975; Vorderer et al., 1996; Algee-Hewitt, 2016), in which a reader's knowledge of events is uncertain (either because those events take place in the narrative future or because they are withheld from narration) and is released when that knowledge is revealed.", "In film, sudden changes in camera perspective create graphic tension, which is then released as the shot returns to a stable position (Bordwell, 2013).", "Often, it is the confluence of multiple sources of tension that marks the climax of a narrative (Hume, 2017).", "We draw on each of these strands of work in operationalizing tension and release as a rhetorical strategy.", "In this work, we make the following contributions: We collect a new dataset of text and audio from 310 speeches from campaign events leading up to the 2016 U.S. presidential election, with associated tags for over 19,000 instances of audience applause.", "We introduce new textual and acoustic features inspired by tension and release, combine and compare them with features used in previous work, and deploy those features in a logistic regression model and in an LSTM to predict when applause is likely to occur.", "Code, data, and trained models are openly available to the public at https://github.com/jrgillick/Applause/.", "Heritage and Greatbatch (1986) conduct an extensive analysis of nearly 500 speeches from British political party conferences, manually associating each of over 2,000 instances of applause with coded message types (e.g., External Attacks or Statements of Approval), rhetorical devices (e.g., Contrast/Antithesis or Headline-Punchline), and performance factors (e.g., speech stress or body language).", "They find most of these factors to be positively correlated with applause; one especially striking result is that over two thirds of the observed instances of applause can be explained by a set of seven rhetorical devices (including contrast, pursuit, position taking, and the 3-part list).", "Though each device is different, a common feature of most of these techniques is that they are not always carried out within a single sentence or utterance; they often depend on the relationship between a series of utterances or phrases.", "We argue in this work that some of these relationships can be characterized, and subsequently operationalized within models, as tension and release.", "Recent work by Guerini et al. (2015) and Liu et al. (2017) approaches the task of applause prediction by looking at textual features of the individual sentences that immediately precede audience applause.", "Both follow the methodology proposed by Danescu-Niculescu-Mizil et al. (2012) in constructing a dataset for binary classification, composed of sentences that generated applause, each paired with a single nearby sentence from the same document that did not lead to applause.", "Guerini et al.
(2015) examine a set of features designed to capture aspects of euphony, i.e., the inherent pleasantness of the sounds of words that might make an utterance memorable or persuasive, such as rhyme, alliteration, homogeneity, and plosives.", "On the CORPS dataset (Guerini et al., 2013), which consists of the text of several thousand political speeches dating from 1917 to 2011, they define persuasive sentences as those that preceded annotations of either applause or laughter.", "Liu et al. (2017), working with a corpus of TED talks, use logistic regression to predict applause from sentences using a combination of features: euphony (again from Guerini et al. (2015)), linguistic style markers derived from membership in LIWC categories, markers of emotional expression derived from membership in the NRC Emotion Lexicon, mentions of names, rhetorical questions (string matching for '?'), expressions of gratitude (matching a handcrafted list of word stems including thank and grateful), and expressions seeking applause (matching the pattern applau).", "Liu et al. (2017) also report that adding the same features for earlier sentences, beyond the final sentence that preceded the applause, caused prediction accuracy to go down.", "Chen and Lee (2017) and Bertero and Fung (2016) run similar binary classification experiments but predict laughter as opposed to applause.", "Bertero and Fung (2016) analyze punchlines from the TV sitcom The Big Bang Theory and report 70% accuracy using an LSTM.", "They touch briefly on the notion of tension and release in humor, as punchlines typically depend on a previous line as a setup in order to be funny.", "In this work, we focus on a new dataset of campaign speeches from the 2016 U.S. presidential race, which we obtain from the public-domain broadcasts of C-SPAN.", "We downloaded about 500 speeches by presidential candidates, vice-presidential candidates, or former presidents, collecting audio files and transcripts that were tagged in the categories Campaign 2016 and Speech and that took place between 12/01/2015 and 12/01/2016.", "We then excluded events that took place outside of a traditional campaign speech setting (e.g.
town hall events) or events that contained multiple speakers without speaker identification tied to the transcript, which yielded a final set of 310 speeches from 16 speakers.", "Because different types of events have different social norms around when and whether applause is appropriate (Atkinson, 1984; Heritage and Greatbatch, 1986), we control for these factors to some degree by restricting our dataset to events in similar settings and within a single year.", "As a point of comparison, the C-SPAN dataset contains 62 instances of applause per speech on average, whereas the CORPS data (Guerini et al., 2013) contains 13.", "Since our C-SPAN data originates in video, we have access to the audio of each speech event, which we employ both for feature extraction and for automatically identifying when applause occurs.", "Following Clement and McLaughlin (2016), we train an acoustic model to distinguish applause from speech, using a set of poetry readings from the PennSound archive.", "We used logistic regression on the standard set of MFCC features and found results on the PennSound data similar to the reported classification accuracy of 99.4%.", "In a manual inspection of 100 applause segments from 5 different speeches in the C-SPAN corpus, our applause detector achieved 92% precision, 90% recall, and a 91% F1 score.", "Due to variation in the nature of crowd applause (we sometimes observe isolated clapping and cheering, mixed laughter and applause, or applause interrupting the speaker), some ambiguity is inherent in the labels.", "We also measure applause by first running the speeches through the audio source separation algorithm of Chandna et al. (2017), which was trained to separate voice from music, and then measuring the RMSE loudness of the separated non-vocal track.", "We found that the separation worked well, qualitatively matching the results of the applause detection classifier.", "To match the identified segments of applause in the audio files with the relevant text from the transcriptions, we ran forced alignment using the Kaldi Toolkit (Povey et al., 2011).", "Since the C-SPAN transcripts are sourced from uncorrected closed captioning, the text contains a number of misspellings and paraphrases, which we handled by discarding the 12% of words for which forced alignment failed.", "Though these transcriptions are not as accurate as those found in professionally transcribed datasets, previous work has shown that it is possible to achieve good accuracy on downstream tasks even with high transcription error rates (Peskin et al., 1993; Novotney and Callison-Burch, 2010).", "Moreover, the caliber of transcripts derived from closed captioning is representative of the data that would be available in real time for practical use at future speech events.", "To estimate the accuracy of the closed captions, we manually transcribed selections from 5 speeches in the C-SPAN data, totaling about 25 minutes and 2,250 words, finding 30.9% WER relative to the reference transcriptions in our sample.", "Many of the errors are due to omitted words and phrases in the closed captions, which may occur as a result of transcribers' inability to keep up with the pace of fast speeches; in this sample, the closed-caption texts contained 17% fewer words than our gold-standard transcriptions.", "After finding the alignments, we segmented out a list of utterances by defining a minimum period of silence between words.",
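As a rough illustration of the applause detector described above (logistic regression over standard MFCC features), the following sketch uses librosa and scikit-learn; pooling the MFCCs by their mean and standard deviation over each segment is our own simplification, not necessarily the exact feature set used:

```python
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def segment_features(path, sr=16000, n_mfcc=13):
    """Mean and std of MFCCs over one labeled audio segment."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# X: stacked features for labeled segments; y: 0 = speech, 1 = applause
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# prob = clf.predict_proba(segment_features(path).reshape(1, -1))[:, 1]
```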
"Since many of the transcripts do not have punctuation, we find that dividing the text into utterances yields qualitatively more coherent units than sentence boundary detection.", "[Table 1 (speakers and applause in the C-SPAN corpus; speeches / utterances / applauded utterances / percentage): Donald Trump 86 / 27,493 / 7,357 / 0.27; Hillary Clinton 72 / 12,825 / 3,933 / 0.31; Bernie Sanders 40 / 10,994 / 3,529 / 0.32; Ted Cruz 23 / 5,873 / 1,041 / 0.18; Marco Rubio 20 / 4,407 / 797 / 0.18; John Kasich 17 / 4,023 / 319 / 0.08; Barack Obama 10 / 3,888 / 920 / 0.24; Bill Clinton 8 / 2,087 / 292 / 0.14; Joe Biden 7 / 1,847 / 270 / 0.15; Mike Pence 6 / 1,302 / 246 / 0.19; Carly Fiorina 5 / 1,222 / 129 / 0.11; Jeb Bush 5 / 1,482 / 191 / 0.13; Rand Paul 4 / 939 / 134 / 0.14; Gary Johnson 3 / 354 / 56 / 0.16; Chris Christie 3 / 1,868 / 42 / 0.022; Rick Santorum 1 / 245 / 17 / 0.07; Total 310 / 80,849 / 19,273 / 0.24.]", "Dividing into utterances is also conducive to building a dataset for binary classification, since every pause by the speaker yields an opportunity for applause.", "We chose a pause length of 0.7 seconds, but in future work we might be able to improve our models by adapting this threshold to the rate of speech in order to maintain consistent phrase sizes across different speakers.", "Given this set of utterances, we paired each utterance with a positive or negative label, determined by whether applause occurred within 1.5 seconds of the end of the utterance.", "All of these preprocessing choices were made during the corpus preparation phase, prior to any experimental evaluation.", "In our models, we draw features from previous work on applause and humor prediction and supplement them with a new set of features inspired by the ideas of tension and release and by the rhetorical strategies of Heritage and Greatbatch (1986).", "LIWC.", "Features for membership in 73 LIWC categories proved to be the most effective for applause prediction in TED talks (Liu et al., 2017).", "Euphony.", "We adopt the 4 euphony features defined by Guerini et al. (2015): rhyme, alliteration, homogeneity, and plosives.", "Lexical.", "Guerini et al. (2015) find n-grams to be highly predictive of both applause and laughter.", "We operationalize these features with bigrams, including in our model all bigrams that appear at least 5 times in the corpus.", "Embeddings.", "Bertero and Fung (2016) use sentence embeddings learned from a CNN encoder as input to an LSTM.", "We adopt this feature for use in our neural models, encoding phrases using the Skip-Thought model of Kiros et al.
(2015).", "Acoustic.", "Purandare and Litman (2006) use a set of features intended to capture elements of prosody in a model for humor prediction in television dialogue.", "These features include the mean, max, min, range, and standard deviation of an utterance's pitch (F0) and energy (RMS), along with features for internal silence and tempo.", "We compute the F0 statistics with Reaper (Talkin, 2015) and the energy statistics with Librosa (McFee et al., 2015).", "Repeated Words.", "Rhetorical strategies such as the 3-part list and contrast rely on repetition to drive home important points.", "We capture this phenomenon by computing the proportion of words in each utterance that also appear in the immediately preceding phrase.", "Longest Common Subsequence.", "Repeating an entire phrase, especially one with a politically charged topic, serves to build tension through the notion of theme and variation, as often realized in music (Cope, 2005); an example of this phenomenon in our data can be found in the following passage: 'We will not allow the party of Lincoln and Reagan to fall into the hands of a con artist.", "We will not allow the next president of the United States to be a socialist like Bernie Sanders.", "And we will not allow the next president of the United States to be someone under FBI investigation like Hillary Clinton.'", "We calculate this theme and variation by measuring the longest common subsequence between adjacent phrases.", "Delta features (local approximations to derivatives) are commonly used in speech recognition and audio classification systems (Povey et al., 2011).", "In a discourse, either highly similar or drastically different neighboring pairs of utterances may indicate dramatic moments.", "We operationalize these features by explicitly adding a delta measurement for every feature in our model, capturing the difference between every feature at time t and the same feature at time t-1.", "For K-dimensional vector embeddings, we calculate deltas as their cosine distance.", "Rhetorical Structure Theory (RST) provides a foundation for describing the ways in which functional components of a text combine to form a coherent whole (Thompson and Mann, 1987).", "At the core of RST is a categorization system consisting of relations between elementary discourse units (EDUs).", "Relations between units are typically hierarchical (a nucleus and a satellite), but can also be defined between equally significant units (two nuclei).", "A typical RST tree can be seen below, where the sentence 'He won't win, but I'll vote for him anyway, he said' is decomposed into three elementary discourse units (EDUs); those discourse units form the leaves of a tree, with intermediate structure between subphrases and labeled edges along each branch.", "Some of the rhetorical strategies defined by Heritage and Greatbatch (1986), such as Contrast, map directly to RST relations, while others do not have a clear one-to-one mapping but are qualitatively similar in their descriptions.", "While RST has been used with success for classification problems in the past (Ji and Smith, 2017; Bhatia et al., 2015), it has not yet been employed in existing models for applause prediction.", "In our work, we parse the rhetorical structure of the extracted sequence of phrases using the RST parser of Ji and Eisenstein (2014).", "From the structure of this RST tree, we extract two classes of features.",
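The two repetition features above (Repeated Words and Longest Common Subsequence) reduce to a few lines of code; a sketch operating on token lists (function names are ours):

```python
def repeated_word_ratio(curr, prev):
    """Proportion of words in the current utterance that also appear
    in the immediately preceding phrase."""
    prev_set = set(prev)
    return sum(w in prev_set for w in curr) / len(curr) if curr else 0.0

def lcs_length(a, b):
    """Length of the longest common subsequence of two adjacent phrases,
    capturing 'theme and variation' repetition."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]
```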
"RST label.", "First, we operationalize the rhetorical category of each individual elementary discourse unit.", "While the span of text within a single EDU is implicated in several rhetorical relations throughout the tree (e.g., 'He won't win' bears a CONTRAST relationship with 'but I'll vote for him anyway' and is part of an ATTRIBUTION relationship with 'he said'), each EDU bears exactly one leaf relationship with the rest of the tree: here, 'He won't win' is a nucleus of a CONTRAST relationship, 'but I'll vote for him anyway' is also a nucleus of a CONTRAST relationship, and 'he said' is the satellite of an ATTRIBUTION relationship.", "We featurize a sentence as the set of all such typed relationships that its EDUs hold; each typed relationship is the conjunction of the label (e.g., CONTRAST, ATTRIBUTION) and the directionality (nucleus, satellite).", "Rhetorical phrase closures.", "To further operationalize the notion of the predictability of applause, we measure the number of rhetorical phrases that a given discourse segment brings to closure.", "We can illustrate this with Figure 1, which presents a sample RST tree with only the spans annotated (i.e., without RST labels or nucleus/satellite directed edges).", "This tree spans 10 elementary discourse units; each non-terminal node is annotated with the span of the subtree rooted at that node (so the root spans all ten EDUs, while its left child spans only the first five).", "The final discourse unit (EDU 10) is the final EDU in three rhetorical phrases (those spanning EDUs 9-10, 6-10, and the entire discourse 1-10).", "We might hypothesize that the greater the number of discourse phrases a given discourse unit closes, the stronger the signal it provides that applause is licensed (and hence the greater its empirical likelihood of being followed by applause).", "For a sentence with multiple discourse units, we featurize this value as the maximum number of rhetorical phrases closed by any unit it contains.", "We present two experiments to uncover the degree to which we are able to predict applause from different operationalizations of a politician's campaign speech: one in which we have access to a politician's previous speeches and can learn the specific nuances and stock phrases they use to solicit applause, and another in which we seek to uncover the broader rhetorical strategies common to multiple speakers.", "We refer to the following sets of features when we summarize results:", "Guerini.", "Euphony features from Guerini et al. (2015).", "Liu.", "LIWC features and additional matchers for handcrafted regular expressions from Liu et al. (2017).", "Audio.", "All acoustic features described in Section 4.1 above.", "Combined.", "Guerini, Liu, and Audio.", "Tension.", "Combination of the RST (Section 4.2.3), repetition (Section 4.2.1), and delta (Section 4.2.2) features.", "N-gram.", "Bigram features.", "Skip-Thought.", "4800-dimensional Skip-Thought embeddings.", "Access to a politician's previous speeches provides a great deal of evidence for understanding their rhetorical strategies for soliciting applause; speakers often give variations of the same speech at different campaign events, and rely on a fixed set of stock phrases (e.g., 'Yes, We Can', 'Make America Great Again') and general strategies to solicit reactions (Lu, 1999; Miller, 1939; Petrow and Sullivan, 2007).", "To model this, we attempt to predict a speaker's likelihood of applause using only information from their own speeches.", "We use logistic regression with L2 regularization for this experiment, with hyperparameters chosen through cross-validation on the training data.",
"We run 10-fold cross-validation for each speaker, and leave-one-out cross-validation for speakers with fewer than 10 speeches (we exclude Rick Santorum from this experiment because we have only one speech from him), with whole speeches divided across folds so that no utterances from the same speech ever appear in both the training and test sets.", "Reported results aggregate the predictions across all speakers to calculate the final accuracies.", "We choose utterances (or sequences of utterances) that directly precede applause as positive examples, pairing each one with a negative example randomly chosen from the same speech.", "Since we use different amounts of data for each speaker, we are not able to compare accuracies across speakers, but we can see that some speakers are significantly easier to model: for example, our best model reaches 0.719 accuracy on Bernie Sanders but only 0.660 on Donald Trump.", "Table 2 summarizes the results, comparing across different combinations of features as well as across a scope of a single phrase or multiple phrases.", "All feature combinations are scoped over a single utterance unless otherwise noted.", "[Table 2 (intra-speaker predictive accuracy, logistic regression; Mean Accuracy / Mean F1 / Max F1 / Min F1): Guerini 0.566 / 0.533 / 0.659 (Bernie Sanders) / 0.422 (Donald Trump); Liu 0.601 / 0.594 / 0.649 (Bernie Sanders) / 0.499 (Jeb Bush); Audio 0.598 / 0.574 / 0.634 (Hillary Clinton) / 0.516 (Donald Trump); Combined 0.646 / 0.640 / 0.685 (Bernie Sanders) / 0.598 (Marco Rubio); N-gram 0.637 / 0.578 / 0.672 (Bernie Sanders) / 0.478 (Barack Obama); Combined+Tension 0.639 / 0.635 / 0.682 (Bernie Sanders) / 0.585 (Jeb Bush); Combined (3-Phrase) 0.645 / 0.640 / 0.671 (Bernie Sanders) / 0.587 (Bill Clinton); Combined+Tension (3-Phrase) 0.626 / 0.624 / 0.665 (Bernie Sanders) / 0.602 (Marco Rubio); Combined+N-gram 0.673 / 0.661 / 0.711 (Bernie Sanders) / 0.600 (Marco Rubio); Combined+Tension+N-gram 0.671 / 0.658 / 0.711 (Bernie Sanders) / 0.599 (Marco Rubio). The 95% confidence interval for Mean Accuracy and Mean F1 is within 0.005, and for Max F1 and Min F1 (one speaker at a time) within 0.05.]", "5.2 Inter-speaker validation", "At the same time, many of the strategies identified by Heritage and Greatbatch (1986) are generalized rhetorical devices used to solicit applause; we should therefore expect a model trained on a fixed set of speakers to generalize to speakers not in the training data.", "To test this more realistic scenario, we performed K-fold cross-validation over all of the speakers in our dataset, holding out one speaker in turn for each fold (so that the same speaker did not appear in the training and test partitions).", "In this experiment, we use both logistic regression and neural models (sharing training data between speakers has the added benefit of providing enough data to reasonably train a neural model).", "All logistic regression models were trained in the same way as in the intra-speaker case.", "Our feed-forward and LSTM models use a hidden state of size 100 for models including phrase embeddings (4800 dimensions) and a hidden state of size 25 for models without phrase embeddings.", "All LSTM models use a standard formulation of attention (Bahdanau et al., 2014), and all neural models are trained with dropout (Srivastava et al., 2014) and the Adam optimizer (Kingma and Ba, 2014).", "We implemented the models using Keras (Chollet et al., 2015) and TensorFlow (Abadi et al., 2016).", "Table 3 summarizes these results, and Table 4 shows the coefficients for the most significant features.",
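A sketch of the paired positive/negative example construction used in these experiments (one non-applauded utterance sampled from the same speech for each applauded utterance); the dictionary representation of an utterance is hypothetical:

```python
import random

def paired_examples(speech_utterances):
    """Build a balanced binary-classification set from one speech.
    Each utterance dict is assumed to carry an 'applause' flag."""
    positives = [u for u in speech_utterances if u["applause"]]
    negatives = [u for u in speech_utterances if not u["applause"]]
    pairs = []
    for pos in positives:
        pairs.append((pos, 1))
        pairs.append((random.choice(negatives), 0))  # same-speech negative
    return pairs
```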
"6 Analysis", "Each of the feature classes we operationalize offers some ability to recognize what Heritage and Greatbatch (1986) term the projectability of applause: the ability of an audience to see an applaudable moment on the horizon.", "[Table 3 (inter-speaker predictive accuracy; Acc. / F1). Logistic regression models: Guerini 0.557 / 0.534; Liu 0.577 / 0.541; Audio 0.573 / 0.548; Combined 0.615 / 0.601; N-gram 0.594 / 0.578; Combined+Tension 0.617 / 0.605; Combined (3-Phrase) 0.614 / 0.601; Combined+Tension (3-Phrase) 0.615 / 0.600; Combined+N-gram 0.633 / 0.598; Combined+Tension+N-gram 0.630 / 0.594. Neural models: Feed-Forward:Skip-Thought 0.577 / 0.562; Feed-Forward:Combined+Tension 0.620 / 0.620; LSTM:Skip-Thought (3-Phrase) 0.585 / 0.583; LSTM:Combined+Tension (3-Phrase) 0.626 / 0.616; LSTM:Combined+Tension (5-Phrase) 0.628 / 0.625; LSTM:Combined+Tension (8-Phrase) 0.629 / 0.621. The 95% confidence interval for each measurement of accuracy is within 0.005.]", "Audio.", "Perhaps not surprising in retrospect is the ability of the acoustic features (only summary statistics of pitch and energy) to predict applause: higher pitch and energy and a broader pitch range are all predictive of applause; while past work has focused on textual indicators of applause, these results suggest that how a message is delivered is equally important.", "Lexical.", "The use of explicit n-grams improves performance significantly in the intra-speaker setting, where they capture stock phrases employed by the same speaker at different events.", "N-grams are also predictive across different speakers, though the performance gains are not as high in the inter-speaker setting.", "The strongest bigrams predictive of applause include moral declaratives like 'should not' (e.g., 'and billionaires should not be able to buy elections' [Bernie Sanders]), 'right to' ('you have a right to be angry' [Marco Rubio]), and 'should be' ('They should be ashamed of that kind of behavior' [Hillary Clinton]); call-outs to the audience, such as 'this room' ('Love the people in this room' [Donald Trump]) and 'listening to' ('our campaign is listening to our Latino brothers and sisters' [Bernie Sanders]); and politically charged topics such as political revolution, equal pay, immigration reform, planned parenthood, campaign contributors and police officers.", "[Table 4 (most significant positive and negative features for the Combined+Tension regression model in the inter-speaker setting; feature: coefficient): Expression of Gratitude 0.472; LIWC FOCUSFUTURE 0.340; Homogeneity (Guerini) 0.301; Mean Energy (Audio) 0.293; LIWC BODY 0.203; Min Energy (Audio) 0.165; Max Pitch (Audio) 0.157; LIWC TENTATIVE -0.161; LIWC THEY -0.172; LIWC VERB -0.216; LIWC FUNCTION -0.228; Pitch Standard Deviation (Audio) -0.249; LIWC SHEHE -0.275; LIWC FOCUSPAST -0.342.]", "LIWC.", "Among the broader lexical category features, we see the LIWC FOCUSFUTURE category strongly indicative of applause; this category includes auxiliaries like will, going, gonna (including contractions such as I'll) and future-oriented verbs like anticipate; also important are the categories BODY (including heart, hands, brain) and REWARD (including succeed, optimism, great).", "Rhetorical.",
"While RST features were not as predictive of applause as other (likely correlated) features, we still see a strong alignment between the RST features most associated with applause and the rhetorical devices outlined by Heritage and Greatbatch (1986): in particular, a clear relationship between applause and the RST categories of ANTITHESIS (a contrastive relation between two discourse units with a clear nucleus and satellite, rather than two equal nuclei) and PURPOSE (a relation in which one discourse unit must take place in order for another to be realized).", "As expected, phrases that close more discourse units tend to be more predictive of applause.", "Contextual.", "Though lexical features from the final utterance significantly outweigh the effects of previous context in the intra-speaker setting, in the inter-speaker case we leveraged gains from long-term context in the LSTM to reach a level of performance similar to that attained with the lexical features, but without any access to the lexical cues provided by the n-grams.", "This result suggests that the improved performance in the intra-speaker setting may be largely due to the presence of specific words and catch-phrases; the other stylistic features generalize more easily to new speakers.", "7 Please clap", "As a further measure of out-of-sample validity, we can analyze the predictions we make for the single example where a speaker wears his communicative intent on his sleeve.", "On February 2, 2016, presidential candidate Jeb Bush spoke to a crowd in New Hampshire a week before their state primary.", "His speech ended with the following: 'So here's my pledge to you. [I] will be a commander-in-chief who will have the back of the military, I won't trash talk, I won't be a divider-in-chief or an agitator-in-chief, I won't be out there blowharding, talking a big game without backing it up; I think the next President needs to be a lot quieter, but send a signal that we're prepared to act in the national security interests of this country, to get back in the business of creating a more peaceful world . . . Please clap.' [Jeb Bush, Feb 2, 2016]", "(Video of this speech can be found at https://www.youtube.com/watch?v=DdCYMvaUcrA.)", "Bush's admonition to the audience ('please clap') earned criticism in news coverage at the time (Benen, 2016), but it also presents us with a rare insight into a speaker's true rhetorical intention; in this case, Bush was soliciting applause and was vocal about not being able to do so.", "Does our model recover this true intention?", "Indeed it does: while the opening 'So here's my pledge to you' is predicted not to solicit applause (with an applause probability of 24.8%), the segment that ends with 'peaceful world' is strongly predicted to have been followed by applause (with an applause probability of 94.5%).", "The strongest features are again lexical ('this country', 'commander in chief'), a LIWC focus on the future (elicited by 'will'), and an RST PURPOSE relation (evoked by 'to get back in the business of creating a more peaceful world').", "8 Conclusion", "We present in this work a new dataset for the analysis of political rhetoric, derived from the public campaign speeches of politicians during the 2016 United States presidential election, along with empirical results assessing the performance of different operationalizations of rhetoric, derived from the theoretical work of Heritage and Greatbatch (1986) and others, in order to measure and predict the occurrence of applause.",
"We introduce several new features designed to capture elements of tension and release in public performance, including rhetorical contrast, closure, repetition, and movement across speech segments; while each of these features in isolation is able to predict applause to a varying degree and comports with our prior understanding of its utility, we find that lexicalized features are among the strongest sources of information in determining applause; while audiences react to many dimensions of a speaker's style, the words they use, as slogans, stock phrases, and indicators of more complex rhetorical functions like moral valuations and imperatives, matter most.", "As detailed in previous work (Liu et al., 2017; Haider et al., 2017; Clement and McLaughlin, 2016), understanding and identifying climactic moments in speeches can be useful for a variety of reasons, including learning to give better talks, automatically summarizing videos and transcripts, and analyzing social dynamics within crowds.", "One additional interesting application of this work is to surface occasions where a speaker uses typical applause-seeking devices but does not receive applause (the 'Please clap' moments); we leave to future work identifying the reverse, when speakers receive applause without invoking common techniques (for example, to identify instances of claques paid to clap).", "9 Acknowledgments", "Many thanks to the anonymous reviewers for their helpful feedback.", "The research reported in this article was supported by a UC Berkeley Fellowship for Graduate Study to J.G. and by resources provided by NVIDIA.", "References", "Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.", "Mark Algee-Hewitt. 2016. The machinery of suspense. http://markalgeehewitt.org/index.php/main-page/projects/the-machinery-of-suspense/", "S. E. Asch. 1951. Effects of group pressure on the modification and distortion of judgments. In H. Guetzkow, editor, Groups, Leadership and Men. Carnegie Press.", "J. Maxwell Atkinson. 1984. Public speaking and audience responses: some techniques for inviting applause. In Structures of Social Action.", "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "Roland Barthes and Lionel Duisit. 1975. An introduction to the structural analysis of narrative. New Literary History 6(2):237-272. http://www.jstor.org/stable/468419", "Steve Benen. 2016. Jeb Bush urges audience, 'Please clap'." ]
[ "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "While traditional systems for Open Information Extraction were statistical and rule-based, recently neural models have been introduced for the task.", "Our work builds upon CopyAttention, a sequence generation OpenIE model (Cui et al., 2018).", "Our analysis reveals that CopyAttention produces a constant number of extractions per sentence, and its extracted tuples often express redundant information.", "We present IMOJIE, an extension to CopyAttention, which produces the next extraction conditioned on all previously extracted tuples.", "This approach overcomes both shortcomings of CopyAttention, resulting in a variable number of diverse extractions per sentence.", "We train IMOJIE on training data bootstrapped from extractions of several non-neural systems, which have been automatically filtered to reduce redundancy and noise.", "IMOJIE outperforms CopyAttention by about 18 F1 pts, and a BERT-based strong baseline by 2 F1 pts, establishing a new state of the art for the task.", "Extracting structured information from unstructured text has been a key research area within NLP.", "The paradigm of Open Information Extraction (OpenIE) (Banko et al., 2007) uses an open vocabulary to convert natural text to semi-structured representations, by extracting a set of (subject, relation, object) tuples.", "OpenIE has found wide use in many downstream NLP tasks (Mausam, 2016) like multi-document question answering and summarization (Fan et al., 2019), event schema induction (Balasubramanian et al., 2013) and word embedding generation (Stanovsky et al., 2015).", "Traditional OpenIE systems are statistical or rule-based.", "They are largely unsupervised in nature, or bootstrapped from extractions made by earlier systems.", "They often consist of several components like POS tagging, and syntactic parsing.", "To bypass error accumulation in such pipelines, end-to-end neural systems have been proposed recently.", "Recent neural OpenIE methods belong to two categories: sequence labeling , e.g., RnnOIE (Stanovsky et al., 2018) and sequence generation , e.g., CopyAttention (Cui et al., 2018).", "In princi-ple, generation is more powerful because it can introduce auxiliary words or change word order.", "However, our analysis of CopyAttention reveals that it suffers from two drawbacks.", "First, it does not naturally adapt the number of extractions to the length or complexity of the input sentence.", "Second, it is susceptible to stuttering : extraction of multiple triples bearing redundant information.", "These limitations arise because its decoder has no explicit mechanism to remember what parts of the sentence have already been consumed' or what triples have already been generated.", "Its decoder uses a fixed-size beam for inference.", "However, beam search can only ensure that the extractions are not exact duplicates.", "In response, we design the first neural OpenIE system that uses sequential decoding of tuples conditioned on previous tuples.", "We achieve this by adding every generated extraction so far to the encoder.", "This iterative process stops when the EndOfExtractions tag is generated by the decoder, allowing it to produce a variable number of extractions.", "We name our system I terative M em O ry J oint Open I nformation E xtraction ( IMOJIE ).", "CopyAttention uses a bootstrapping strategy, where the extractions from OpenIE-4 (Christensen et al., 2011; Pal and Mausam, 2016) are used as training data.", "However, we believe that training on extractions of multiple systems is preferable.", "For 
"For example, OpenIE-4 benefits from high precision compared to ClausIE (Del Corro and Gemulla, 2013), which offers high recall.", "By aggregating extractions from both, IMOJIE could potentially combine the high precision of OpenIE-4 with the high recall of ClausIE. (Table 1 example sentence: He was appointed Commander of the Order of the British Empire in the 1948 Queen's Birthday Honours and was knighted in the 1953 Coronation Honours.)", "However, simply concatenating extractions from multiple systems does not work well, as it leads to redundancy as well as exaggerated noise in the dataset.", "We devise an unsupervised Score-and-Filter mechanism to automatically select a subset of these extractions that are non-redundant and expected to be of high quality.", "Our approach scores all extractions with a scoring model, followed by filtering to reduce redundancy.", "We compare IMOJIE against several neural and non-neural systems, including our extension of CopyAttention that uses BERT (Devlin et al., 2019) instead of an LSTM at encoding time, which forms a very strong baseline.", "On the recently proposed CaRB metric, which penalizes redundant extractions (Bhardwaj et al., 2019), IMOJIE outperforms CopyAttention by about 18 pts in F1 and our strong BERT baseline by 2 pts, establishing a new state of the art for OpenIE.", "We release IMOJIE and all related resources for further research (https://github.com/dair-iitd/imojie).", "In summary, our contributions are: (1) We propose IMOJIE, a neural OpenIE system that generates the next extraction fully conditioned on the extractions produced so far; IMOJIE produces a variable number of diverse extractions for a sentence.", "(2) We present an unsupervised aggregation scheme to bootstrap training data by combining extractions from multiple OpenIE systems.", "(3) IMOJIE trained on this data establishes a new SoTA in OpenIE, beating previous systems and also our strong BERT baseline.", "Open Information Extraction (OpenIE) involves extracting (arg1 phrase, relation phrase, arg2 phrase) assertions from a sentence.", "Traditional open extractors are rule-based or statistical, e.g., TextRunner (Banko et al., 2007), ReVerb (Fader et al., 2011; Etzioni et al., 2011), OLLIE (Mausam et al., 2012), Stanford-IE (Angeli et al., 2015), ClausIE (Del Corro and Gemulla, 2013), OpenIE-4 (Christensen et al., 2011; Pal and Mausam, 2016), OpenIE-5 (Saha et al., 2017, 2018), PropS (Stanovsky et al., 2016), and MinIE (Gashteovski et al., 2017).", "These use syntactic or semantic parsers combined with rules to extract tuples from sentences.", "Recently, to reduce error accumulation in these pipeline systems, neural OpenIE models have been proposed.", "They belong to one of two paradigms: sequence labeling or sequence generation.", "Sequence Labeling involves tagging each word in the input sentence as belonging to the subject, predicate, object or other.", "The final extraction is obtained by collecting labeled spans into different fields and constructing a tuple.", "RnnOIE (Stanovsky et al., 2018) is a labeling system that first identifies the relation words and then uses sequence labelling to get their arguments.", "It is trained on the OIE2016 dataset, which postprocesses SRL data for OpenIE (Stanovsky and Dagan, 2016).", "SenseOIE (Roy et al., 2019) improves upon RnnOIE by using the extractions of multiple OpenIE systems as features in a sequence labeling setting.", "However, their training requires manually annotated gold extractions, which is not scalable for the task.", "This restricts SenseOIE to train on a dataset of 3,000 sentences.", "In contrast, our proposed Score-and-Filter mechanism is unsupervised and can scale unboundedly.",
"Jiang et al. (2019) is another labeling system that better calibrates extractions across sentences.", "SpanOIE (Zhan and Zhao, 2020) uses a span selection model, a variant of the sequence labelling paradigm.", "Firstly, the predicate module finds the predicate spans in a sentence.", "Subsequently, the argument module outputs the arguments for this predicate.", "However, SpanOIE cannot extract nominal relations.", "Moreover, it bootstraps its training data over a single OpenIE system only.", "In contrast, IMOJIE overcomes both of these limitations.", "Sequence Generation uses a Seq2Seq model to generate output extractions one word at a time.", "The generated sequence contains field demarcators, which are used to convert the generated flat sequence to a tuple.", "CopyAttention (Cui et al., 2018) is a neural generator trained over bootstrapped data generated from OpenIE-4 extractions on a large corpus.", "During inference, it uses beam search to get the predicted extractions.", "It uses a fixed-size beam, limiting it to output a constant number of extractions per sentence.", "Moreover, our analysis shows that CopyAttention extractions severely lack in diversity, as illustrated in Table 1.", "Sun et al. (2018) propose the Logician model, a restricted sequence generation model for extracting tuples from Chinese text.", "Logician relies on coverage attention and gated-dependency attention, a language-specific heuristic for Chinese.", "Using coverage attention, the model also tackles generation of multiple extractions while being globally-aware.", "We compare against Logician's coverage attention as one of the approaches for increasing diversity.", "Sequence-labeling based models lack the ability to change the sentence structure or introduce new auxiliary words while uttering predictions.", "For example, they cannot extract (Trump, is the President of, US) from 'US President Trump', since 'is' and 'of' are not in the original sentence.", "On the other hand, sequence-generation models are more general and, in principle, need not suffer from these limitations.", "Evaluation: All neural models have shown improvements over the traditional systems using the OIE2016 benchmark.", "However, recent work shows that the OIE2016 dataset is quite noisy, and that its evaluation does not penalize highly redundant extractions (Lechelle et al., 2018).", "In our work, we use the latest CaRB benchmark, which crowd-sources a new evaluation dataset, and also provides a modified evaluation framework to downscore near-redundant extractions (Bhardwaj et al., 2019).", "We now describe IMOJIE, our generative approach that can output a variable number of diverse extractions per sentence.", "The architecture of our model is illustrated in Figure 1.",
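
A minimal sketch of the post-processing mentioned above, in which a flat decoded sequence with field demarcators is converted back into tuples. The <rel>/<obj> marker strings and the [SEP] separator are illustrative assumptions based on the tokens described in this paper; real systems may serialize extractions differently.

    # Recover (subject, relation, object) tuples from a flat decoded
    # sequence with field demarcators; marker strings are assumptions.
    def parse_flat_sequence(decoded: str):
        tuples = []
        for ext in decoded.split("[SEP]"):  # one extraction per segment
            if "<rel>" not in ext or "<obj>" not in ext:
                continue  # skip malformed segments
            subj, rest = ext.split("<rel>", 1)
            rel, obj = rest.split("<obj>", 1)
            tuples.append((subj.strip(), rel.strip(), obj.strip()))
        return tuples

    print(parse_flat_sequence("I <rel> ate <obj> an apple [SEP] I <rel> ate <obj> an orange"))
    # [('I', 'ate', 'an apple'), ('I', 'ate', 'an orange')]
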
"At a high level, the next extraction from a sentence is best determined in the context of all other tuples extracted from it so far.", "Hence, IMOJIE uses a decoding strategy that generates extractions in a sequential fashion, one after another, each one being aware of all the ones generated prior to it.", "This kind of sequential decoding is made possible by the use of an iterative memory.", "Each of the generated extractions is added to the memory so that the next iteration of decoding has access to all of the previous extractions.", "We simulate this iterative memory with the help of a BERT encoder, whose input includes the [CLS] token, the original sentence, and the extractions generated so far. (Figure 2: Ranking-Filtering subsystem for combining extractions from multiple OpenIE systems in an unsupervised fashion.)", "IMOJIE uses an LSTM decoder, which is initialized with the embedding of the [CLS] token.", "The contextualized embeddings of all the word tokens are used for the Copy (Gu et al., 2016) and Attention (Bahdanau et al., 2015) modules.", "The decoder generates the tuple one word at a time, producing ⟨rel⟩ and ⟨obj⟩ tokens to indicate the start of the relation and object respectively.", "The iterative process continues until the EndOfExtractions token is generated.", "The overall process can be summarized as:", "1. Pass the sentence through the Seq2Seq architecture to generate the first extraction.", "2. Concatenate the generated extraction with the existing input and pass it again through the Seq2Seq architecture to generate the next extraction.", "3. Repeat Step 2 until the EndOfExtractions token is generated (a sketch of this loop follows below).", "IMOJIE is trained using a cross-entropy loss between the generated output and the gold output.", "To train generative neural models for the task of OpenIE, we need a set of sentence-extraction pairs.", "It is ideal to curate such a training dataset via human annotation, but that is impractical, considering the scale of training data required for a neural model.", "We follow Cui et al. (2018) and use bootstrapping: extractions from a pre-existing OpenIE system serve as 'silver'-labeled (as distinct from 'gold'-labeled) instances to train the neural model.", "We first order all extractions in decreasing order of the confidences output by the original system.", "We then construct training data in IMOJIE's input-output format, assuming that this is the order in which it should produce its extractions.", "Different OpenIE systems have diverse quality characteristics.", "For example, the human-estimated (precision, recall) of OpenIE-4 is (61, 43) while that of ClausIE is (40, 50).", "Thus, by using their combined extractions as the bootstrapping dataset, we might potentially benefit from the high precision of OpenIE-4 and high recall of ClausIE.", "However, simply pooling all extractions would not work, because of the following serious hurdles.", "No calibration: Confidence scores assigned by different systems are not calibrated to a comparable scale.", "Redundant extractions: Beyond exact duplicates, multiple systems produce similar extractions with low marginal utility.", "Wrong extractions: Pooling inevitably pollutes the silver data and can amplify incorrect instances, forcing the downstream OpenIE system to learn poor-quality extractions.", "We solve these problems using a Score-and-Filter framework, shown in Figure 2.",
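
A minimal sketch of the iterative decoding loop in Steps 1-3 above. The generate callable stands in for one pass of the trained Seq2Seq model; its signature and the string-level [SEP] concatenation are illustrative assumptions.

    # Sketch of IMOJIE-style iterative decoding (Steps 1-3 above).
    def iterative_decode(sentence, generate, max_extractions=20):
        extractions = []
        model_input = sentence
        for _ in range(max_extractions):
            output = generate(model_input)  # one tuple, or the stop token
            if output == "EndOfExtractions":
                break
            extractions.append(output)
            # The next pass conditions on everything generated so far.
            model_input = model_input + " [SEP] " + output
        return extractions
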
"Scoring: All systems are applied on a given sentence, and the pooled set of extractions is scored such that good (correct, informative) extractions generally achieve higher values compared to bad (incorrect) and redundant ones.", "In principle, this score may be estimated by the generation score from IMOJIE, trained on a single system.", "In practice, such a system is likely to consider extractions similar to its bootstrapping training data as good, while disregarding extractions of other systems, even though those extractions may also be of high quality.", "To mitigate this bias, we use an IMOJIE model pre-trained on a random bootstrapping dataset.", "The random bootstrapping dataset is generated by picking extractions for each sentence randomly from any one of the bootstrapping systems being aggregated.", "We assign a score to each extraction in the pool based on the confidence value given to it by this IMOJIE (Random) model.", "Filtering: We now filter this set of extractions for redundancy.", "Given the set of ranked extractions in the pool, we wish to select the subset of extractions that have the best confidence scores (assigned by the random-bootstrap model), while having minimum similarity to the other selected extractions.", "We model this goal as the selection of an optimal subgraph from a suitably designed complete weighted graph.", "Each node in the graph corresponds to one extraction in the pool.", "Every pair of nodes (u, v) is connected by an edge.", "Every edge has an associated weight R(u, v) signifying the similarity between the two corresponding extractions.", "Each node u is assigned a score f(u) equal to the confidence given by the random-bootstrap model.", "Given this graph G = (V, E) of all pooled extractions of a sentence, we aim at selecting a subgraph G' = (V', E') with V' ⊆ V, such that the most significant extractions are selected, whereas the extractions redundant with respect to already-selected ones are discarded.", "Our objective is max_{G' ⊆ G} Σ_{i=1}^{|V'|} f(u_i) − Σ_{j=1}^{|V'|−1} Σ_{k=j+1}^{|V'|} R(u_j, u_k) (1), where u_i represents node i ∈ V'.", "We compute R(u, v) as the ROUGE2 score between the serialized triples represented by nodes u and v.", "We can intuitively understand the first term as the aggregated sum of significance of all selected triples and the second term as the redundancy among these triples.", "If G has n nodes, we can pose the above objective as: max_{x ∈ {0,1}^n} x^T f − x^T R x (2), where f ∈ R^n represents the node scores, i.e., f[i] = f(u_i), and R ∈ R^{n×n} is a symmetric matrix with entries R_{j,k} = ROUGE2(u_j, u_k).", "x is the decision vector, with x[i] indicating whether a particular node u_i is included in V' or not.", "This is an instance of Quadratic Boolean Programming and is NP-hard, but in our application n is modest enough that this is not a concern.", "We use the QPBO (Quadratic Pseudo Boolean Optimizer) solver (Rother et al., 2007) to find the optimal x and recover V'.",
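
A minimal sketch of objective (2). Because n is modest, this sketch simply enumerates all binary vectors instead of calling the QPBO solver used in the paper; the confidence vector f and similarity matrix R are assumed to be precomputed.

    # Brute-force maximizer for Eq. (2): maximize x^T f - x^T R x over
    # binary x. A stand-in for the QPBO solver, viable only for small n.
    from itertools import product
    import numpy as np

    def select_extractions(f, R):
        n = len(f)
        best_x, best_val = None, float("-inf")
        for bits in product([0, 1], repeat=n):
            x = np.array(bits)
            val = x @ f - x @ R @ x  # significance minus redundancy
            if val > best_val:
                best_x, best_val = x, val
        return best_x  # indicator vector of the selected subset V'

    f = np.array([0.9, 0.8, 0.3])            # confidences from scoring
    R = np.array([[0.0, 0.7, 0.1],           # pairwise ROUGE2 similarity
                  [0.7, 0.0, 0.1],
                  [0.1, 0.1, 0.0]])
    print(select_extractions(f, R))          # -> [1 0 1]
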
"We obtain our training sentences by scraping Wikipedia, because Wikipedia is a comprehensive source of informative text from diverse domains, rich in entities and relations.", "Using sentences from Wikipedia ensures that our model is not biased towards data from any single domain.", "We run OpenIE-4, ClausIE and RnnOIE on these sentences to generate a set of OpenIE tuples for every sentence, which are then ranked and filtered using our Score-and-Filter technique.", "These tuples are further processed to generate training instances in IMOJIE's input-output format.", "Each sentence contributes multiple (input, output) pairs for the IMOJIE model.", "The first training instance contains the sentence itself as input and the first tuple as output.", "For example, (I ate an apple and an orange., I; ate; an apple).", "The next training instance contains the sentence concatenated with the previous tuple as input and the next tuple as output: (I ate an apple and an orange. [SEP] I; ate; an apple, I; ate; an orange).", "The final training instance generated from this sentence includes all the extractions appended to the sentence as input and the EndOfExtractions token as the output.", "Every sentence gives the seq2seq learner one training instance more than the number of tuples.", "While forming these training instances, the tuples are considered in decreasing order of their confidence scores.", "If some OpenIE system does not provide confidence scores for extracted tuples, then the output order of the tuples may be used (see the sketch below).",
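
A minimal sketch of this training-instance construction; the semicolon serialization follows the example above, while the exact separator strings are assumptions.

    # Build IMOJIE training instances from one sentence and its ordered
    # bootstrapped tuples, mirroring the example above.
    def build_instances(sentence, tuples):
        instances, context = [], sentence
        for t in tuples:  # tuples already sorted by confidence
            serialized = "; ".join(t)  # e.g. "I; ate; an apple"
            instances.append((context, serialized))
            context = context + " [SEP] " + serialized
        instances.append((context, "EndOfExtractions"))
        return instances  # len(tuples) + 1 instances per sentence

    pairs = build_instances(
        "I ate an apple and an orange.",
        [("I", "ate", "an apple"), ("I", "ate", "an orange")],
    )
    for inp, out in pairs:
        print(inp, "->", out)
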
"We use the CaRB data and evaluation framework (Bhardwaj et al., 2019) to evaluate the systems at different confidence thresholds, yielding a precision-recall curve.", "We identify three important summary metrics from the P-R curve.", "Optimal F1: We find the point in the P-R curve corresponding to the largest F1 value and report that.", "This is the operating point for getting extractions with the best precision-recall trade-off.", "AUC: This is the area under the P-R curve.", "This metric is useful when the downstream application can use the confidence value of the extraction.", "Last F1: This is the F1 score computed at the point of zero confidence.", "This is of importance when we cannot compute the optimal threshold, due to lack of any gold extractions for the domain.", "Last F1 is an important measure for such applications.", "We compare IMOJIE against several non-neural baselines, including Stanford-IE, OpenIE-4, OpenIE-5, ClausIE, PropS, MinIE, and OLLIE.", "We also compare against the sequence labeling baselines of RnnOIE, SenseOIE, and the span selection baseline of SpanOIE.", "Probably the most closely related baseline to us is the neural generation baseline of CopyAttention.", "To increase CopyAttention's diversity, we compare against an English version of Logician, which adds coverage attention to a single-decoder model that emits all extractions one after another.", "We also compare against CopyAttention augmented with diverse beam search (Vijayakumar et al., 2018), which adds a diversity term to the loss function so that new beams have smaller redundancy with respect to all previous beams.", "We implement IMOJIE in the AllenNLP framework (Gardner et al., 2018; https://github.com/allenai/allennlp) using PyTorch 1.2.", "We use the BERT-small model for faster training.", "Other hyper-parameters include the learning rate for BERT, set to 2×10^-5, and the learning rate, hidden dimension, and word embedding dimension of the decoder LSTM, set to (10^-3, 256, 100), respectively.", "Since the model or code of CopyAttention (Cui et al., 2018) was not available, we implemented it ourselves.", "Our implementation closely matches their reported scores, achieving (F1, AUC) of (56.4, 47.7) on the OIE2016 benchmark.", "How well do the neural systems perform as compared to the rule-based systems?", "Using CaRB evaluation, we find that, contrary to previous papers, neural OpenIE systems are not necessarily better than prior non-neural systems (Table 3).", "Among the systems under consideration, the best non-neural system reached Last F1 of 51.5, whereas the best existing neural model could only reach 49.2.", "Deeper analysis reveals that CopyAttention produces redundant extractions conveying nearly the same information, which CaRB effectively penalizes.", "RnnOIE performs much better, but suffers from its inability to generate auxiliary verbs and implied prepositions.", "For example, it can only generate (Trump; President; US) instead of (Trump; is President of; US) from the sentence 'US President Trump...'.", "Moreover, it is trained only on a limited number of pseudo-gold extractions, generated by Michael et al. (2018), which does not take advantage of bootstrapping techniques.", "In comparison with existing neural and non-neural systems, IMOJIE trained on aggregated bootstrapped data performs the best.", "It outperforms OpenIE-4, the best existing OpenIE system, by 1.9 F1 pts, 3.8 pts of AUC, and 1.8 pts of Last F1.", "Qualitatively, we find that it makes fewer mistakes than OpenIE-4, probably because OpenIE-4 accumulates errors from upstream parsing modules (see Table 2).", "IMOJIE outperforms CopyAttention by large margins: about 18 Optimal F1 pts and 13 AUC pts.", "Qualitatively, it outputs non-redundant extractions through the use of its iterative memory (see Table 1), and a variable number of extractions owing to the EndOfExtractions token.", "It also outperforms CopyAttention with BERT, which is a very strong baseline, by 1.9 Optimal F1 pts, 0.5 AUC pts and 3.7 Last F1 pts.", "IMOJIE consistently outperforms CopyAttention with BERT over different bootstrapping datasets (see Table 8).", "Figure 3 shows that the precision-recall curve of IMOJIE is consistently above that of existing OpenIE systems, emphasizing that IMOJIE is consistently better than them across the different confidence thresholds.", "We do find that CopyAttention+BERT outputs slightly higher recall at a significant loss of precision (due to its beam search with constant size), which gives it some benefit in the overall AUC.", "CaRB evaluation of SpanOIE (https://github.com/zhanjunlang/SpanOIE) results in (precision, recall, F1) of (58.9, 40.3, 47.9).", "SpanOIE sources its training data only from OpenIE-4.", "In order to be fair, we compare it against IMOJIE trained only on data from OpenIE-4, which evaluates to (60.4, 46.3, 52.4).", "Hence, IMOJIE outperforms SpanOIE, both in precision and recall.", "Attention is typically used to make the model focus on words which are considered important for the task.", "But the IMOJIE model successfully uses attention to forget certain words, those which are already covered.", "Consider the sentence 'He served as the first prime minister of Australia and became a founding justice of the High Court of Australia.'", "Given the previous extraction (He; served; as the first prime minister of Australia), BERT's attention layers figure out that the words 'prime' and 'minister' have already been covered, and thus push the decoder to prioritize 'founding' and 'justice'.", "Appendix D analyzes the attention patterns of the model when generating the intermediate extraction in the above example and shows that IMOJIE gives less attention to already covered words.", "What is the extent of redundancy in IMOJIE when compared to earlier OpenIE systems?", "We also investigate other approaches to reduce redundancy in CopyAttention, such as Logician's coverage attention (with both an LSTM and a BERT encoder) as well as diverse beam search.",
"Table 4 reports that both these approaches indeed make significant improvements on top of CopyAttention scores.", "In particular, qualitative analysis of diverse beam search output reveals that the model gives out different words in different tuples in an effort to be diverse, without considering their correctness.", "Moreover, since this model uses beam search, it still outputs a fixed number of tuples.", "Unfortunately, IMOJIE (w/o BERT) is behind the CopyAttention baseline by 12.1 pts in AUC and 4.4 pts in Last F1.", "We hypothesize that this is because the LSTM encoder is unable to learn how to capture inter-fact dependencies adequately: the input sequences are too long for effectively training LSTMs.", "This explains our use of Transformers (BERT) instead of the LSTM encoder to obtain the final form of IMOJIE.", "With a better encoder, IMOJIE is able to perform up to its potential, giving an improvement of (17.8, 12.7, 19.6) pts in (Optimal F1, AUC, Last F1) over existing seq2seq OpenIE systems.", "We further measure two quantifiable metrics of redundancy.", "Mean Number of Occurrences (MNO): the average number of tuples that every output word appears in.", "Intersection Over Union (IOU): the cardinality of the intersection over the cardinality of the union of words in two tuples, averaged over all pairs of tuples.", "These measures were calculated after removing stop words from tuples.", "Higher values of these measures suggest higher redundancy among the extractions.", "IMOJIE is significantly better than CopyAttention+BERT, the strongest baseline, on both these measures (Table 7).", "Interestingly, IMOJIE has a lower redundancy than even the gold triples; this is due to imperfect recall.",
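
A minimal sketch of the two redundancy measures defined above, with tuples represented as sets of words; stop-word removal is elided, and at least two tuples are assumed.

    # MNO and IOU redundancy measures over one sentence's extractions.
    from itertools import combinations

    def mno(tuples):
        # Average number of tuples each distinct output word appears in.
        words = set().union(*tuples)
        return sum(sum(w in t for t in tuples) for w in words) / len(words)

    def iou(tuples):
        # Mean intersection-over-union of words over all tuple pairs.
        pairs = list(combinations(tuples, 2))
        return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

    exts = [{"I", "ate", "apple"}, {"I", "ate", "orange"}]
    print(mno(exts), iou(exts))  # 1.5 0.5
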
set.", "Sentence Filtering : We use an IMOJIE model (bootstrapped over OpenIE-4), to score all the tuples.", "Then, a Multilayer Perceptron (MLP) predicts a confidence threshold to perform the filtering.", "Only extractions with scores greater than this threshold will be considered.", "The input features of the MLP include the length of sentence, IMOJIE (OpenIE-4) scores, and GPT (Radford et al., 2018) scores of each extraction.", "This MLP is trained over sentences from CaRB's dev set and the gold optimal confidence threshold calculated by CaRB.", "We observe that the Extraction, Sentence Filtering are better than no filtering by by 7.5, 11.2 pts in Last F1, but worse at Opt.", "F1 and AUC.", "We hypothesise that this is because the training data for the MLP (640 sentences in CaRB's dev set), is not sufficient and the features given to it are not suffi-ciently discriminative.", "Thereby, we see the value of our unsupervised Score-and-Filter that improves the performance of IMOJIE by (3.8, 15.9) pts in System Bootstrapping System OpenIE-4 OpenIE-5 ClausIE RnnOIE Base 50.7, 29, 50.7 47.4, 25.1, 47.4 45.1, 22.4, 45.1 49.2, 26.5, 49.2 CopyAttention+BERT 51.6, 32.8, 49.6 48.7, 29.4 , 48.0 47.4, 30.2, 43.6 47.9, 30.6, 41.1 IMOJIE 53.2 , 33.1 , 52.4 48.8 , 27.9, 48.7 49.2 , 31.4 , 45.5 51.3 , 31.1 , 50.8 Table 8: Evaluating models trained with different bootstrapping systems.", "(Optimal F1, Last F1).", "The 1.2 pt decrease in AUC is due to the fact that the IMOJIE (no filtering) produces many low-precision extractions, that inflates the AUC.", "Table 5 suggests that the model trained on all three aggregated datasets perform better than models trained on any of the single/doubly-aggregated datasets.", "Directly applying the Score-and-Filter method on the test-extractions of RnnOIE+OpenIE-4+ClausIE gives (Optimal F1, AUC, Last F1) of (50.1, 32.4, 49.8).", "This shows that training the model on the aggregated dataset is important.", "Computational Cost : The training times for Copy-Attention+BERT, IMOJIE (OpenIE-4) and IMOJIE (including the time taken for Score-and-Filter) are 5 hrs, 13 hrs and 30 hrs respectively.", "This shows that the performance improvements come with an increased computational cost, and we leave it to future work to improve the computational efficiency of these models.", "We randomly selected 50 sentences from the CaRB validation set.", "We consider only sentences where at least one of its extractions shows the error.", "We identified four major phenomena contributing to errors in the IMOJIE model: (1) Missing information: 66% of the sentences have at least one of the relations or arguments or both missing in predicted extractions, which are present in gold extractions.", "This leads to incomplete information.", "(2) Incorrect demarcation: Extractions in 60% of the sentences have the separator between relation and argument identified at the wrong place.", "(3) Missing conjunction splitting: In 32% of the sentences, our system fails to separate out extractions by splitting a conjunction.", "E.g., in the sentence US 258 and NC 122 parallel the river north . . . , IMOJIE predicts just one extraction (US 258 and NC 122; parallel; . . . ) as opposed to two separate extractions (US 258; parallel; . . . ) and (NC 122; parallel; . . . 
"(4) Grammatically incorrect extractions: 38% of sentences have a grammatically incorrect extraction (when serialized into a sentence).", "Additionally, we observe 12% of sentences still suffering from redundant extractions and 4% miscellaneous errors.", "We propose IMOJIE for the task of OpenIE.", "IMOJIE significantly improves upon the existing OpenIE systems in all three metrics (Optimal F1, AUC, and Last F1), establishing a new state-of-the-art system.", "Unlike existing neural OpenIE systems, IMOJIE produces non-redundant as well as a variable number of OpenIE tuples depending on the sentence, by iteratively generating them conditioned on the previous tuples.", "Additionally, we also contribute a novel technique to combine multiple OpenIE datasets to create a high-quality dataset in a completely unsupervised manner.", "We release the training data, code, and the pretrained models (https://github.com/dair-iitd/imojie).", "IMOJIE presents a novel way of using attention for text generation.", "Bahdanau et al. (2015) showed that attending over the input words is important for text generation.", "See et al. (2017) showed that using a coverage loss to track the attention over the decoded words improves the quality of the generated output.", "We add to this narrative by showing that deep inter-attention between the input and the partially-decoded words (achieved by adding the previous output to the input) creates a better representation for iterative generation of triples.", "This general observation may be of independent interest beyond OpenIE, such as in text summarization.", "Mausam is supported by an IBM AI Horizons Network grant, an IBM SUR award, grants by Google, Bloomberg and 1MG, and a Visvesvaraya faculty award by Govt. of India.", "We thank the IIT Delhi HPC facility for compute resources.", "Soumen is supported by grants from IBM and Amazon.", "We would like to thank Arpita Roy for sharing the extractions of SenseOIE with us.", "References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. " ]
[ "abstain", "method", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other" ]
[ "Open-domain questions are likely to be open-ended and ambiguous, leading to multiple valid answers.", "Existing approaches typically adopt the rerank-then-read framework, where a reader reads top-ranking evidence to predict answers.", "According to our empirical analysis, this framework faces three problems: first , to leverage a large reader under a memory constraint, the reranker should select only a few relevant passages to cover diverse answers, while balancing relevance and diversity is non-trivial; second , the small reading budget prevents the reader from accessing valuable retrieved evidence filtered out by the reranker; third , when using a generative reader to predict answers all at once based on all selected evidence, whether a valid answer will be predicted also pathologically depends on the evidence of some other valid answer(s).", "To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework, which separates the reasoning process of each answer so that we can make better use of retrieved evidence while also leveraging large models under the same memory constraint.", "Our framework achieves state-of-the-art results on two multi-answer datasets, and predicts significantly more gold answers than a rerank-then-read system that uses an oracle reranker.", "Open-domain question answering (Voorhees, 1999; Chen et al., 2017) is a long-standing task where a question answering system goes through a large-scale corpus to answer information-seeking questions.", "Previous work typically assumes that there is only one well-defined answer for each question, or only requires systems to predict one correct answer, which largely simplifies the task.", "However, humans may lack sufficient knowledge or patience *Corresponding author: Minlie Huang.", "Original Question: When did [You Don't Know Jack] come out?", "Interpretation #1: When did the first video game called [You Don't Know Jack] come out?", "Evidence #1: You Don't Know Jack is a video game released in 1995, and the first release in ...", "Answer #1: 1995 Interpretation #2: When did the Facebook game [You Don't Know Jack] come out on Facebook?", "Evidence #2: In 2012, Jackbox Games developed and published a social version of the game on Facebook ...", "Answer #2: 2012 Interpretation #3: When did the film [You Don't Know Jack] come out?", "Evidence #3: You Don't Know Jack premiered April 24, 2010 on HBO.", "Answer #3: April 24, 2010 Table 1: An example of open-domain multi-answer questions.", "to frame very specific information-seeking questions, leading to open-ended and ambiguous questions with multiple valid answers.", "According to Min et al. (2020b), over 50% of a sampled set of Google search queries (Kwiatkowski et al., 2019) are ambiguous.", "Figure 1 shows an example with at least three interpretations.", "As can be seen from this example, the number of valid answers depends on both questions and relevant evidence, which challenges the ability of comprehensive exploitation of evidence from a large-scale corpus.", "Existing approaches mostly adopt the rerank-then-read framework.", "A retriever retrieves hundreds or thousands of relevant passages which are later reranked by a reranker; a generative reader then predicts all answers in sequence conditioned on top-ranking passages.", "With a fixed memory constraint 1 , there is a trade-off between the size of the reader and the number of passages the reader can process at a time.", "According to Min et al. 
"According to Min et al. (2021), provided that the reranker is capable of selecting a small set of highly-relevant passages with high coverage of diverse answers, adopting a larger reader can outperform a smaller reader using more passages.", "However, as shown by Section 4.4, this framework is faced with three problems: first, due to the small reading budget, the reranker has to balance relevance and diversity, which is non-trivial as it is unknown beforehand which answers should be distributed with more passages to convince the reader and which answers can be safely distributed with fewer to save the budget for the other answers; second, the reader has no access to more retrieved evidence that may be valuable but is filtered out by the reranker, while combining information from more passages was found to be beneficial to open-domain QA (Izacard and Grave, 2021b); third, as the reader predicts answers in sequence all at once, the reader learns pathological dependencies among answers, i.e., whether a valid answer will be predicted also depends on passages that cover some other valid answer(s), while ideally, prediction of a particular answer should depend on the soundness of the associated evidence itself.", "To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework.", "Specifically, we first use an answer recaller to predict possible answers from each retrieved passage individually; this can be done with high recall, even when using a weak model for the recaller, but at the cost of low precision due to insufficient evidence to support or refute a candidate.", "We then aggregate retrieved evidence relevant to each candidate, and verify each candidate with a large answer verifier.", "By separating the reasoning process of each answer, our framework avoids the problem of multiple answers sharing a limited reading budget, and makes better use of retrieved evidence while also leveraging strong large models under the same memory constraint.", "Our contributions are summarized as follows: We empirically analyze the problems faced by the rerank-then-read framework when dealing with open-domain multi-answer QA.", "To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework, which makes better use of retrieved evidence while also leveraging the power of large models under the same memory constraint.", "Open-domain QA requires question answering systems to answer factoid questions by searching for evidence from a large-scale corpus such as Wikipedia (Voorhees, 1999; Chen et al., 2017).", "The presence of many benchmarks has greatly promoted the development of this community, such as questions from real users like NQ (Kwiatkowski et al., 2019) and WEBQUESTIONS (Berant et al., 2013), and trivia questions like Quasar-T (Dhingra et al., 2017) and TriviaQA (Joshi et al., 2017).", "All these benchmarks either assume that each question has only one answer with several alternative surface forms, or only require a system to predict one valid answer.",
"A typical question answering system is a pipeline as follows: an efficient retriever retrieves relevant passages using sparse (Mao et al., 2021; Zhao et al., 2021) or dense (Karpukhin et al., 2020; Xiong et al., 2021; Izacard and Grave, 2021a; Khattab et al., 2021) representations; an optional passage reranker (Asadi and Lin, 2013; Nogueira and Cho, 2019; Nogueira et al., 2020) further narrows down the evidence; an extractive or generative reader (Izacard and Grave, 2021b; Cheng et al., 2021) predicts an answer conditioned on retrieved or top-ranking passages.", "Nearly all previous work focused on locating passages covering at least one answer, or tried to predict one answer precisely.", "However, both Kwiatkowski et al. (2019) and Min et al. (2020b) reported that there is genuine ambiguity in open-domain questions, resulting in multiple valid answers.", "To study the challenge of finding all valid answers for open-domain questions, Min et al. (2020b) proposed a new benchmark called AMBIGQA where questions are annotated with as many answers as possible.", "In this new task, the passage reranker becomes more vital in the rerank-then-read framework, particularly when only a few passages are allowed to feed a large reader due to memory constraints.", "This is because the reranker has to ensure that top-ranking passages are highly relevant and also cover diverse answers.", "Despite state-of-the-art performance on AMBIGQA (Min et al., 2021), according to our empirical analysis, applying the rerank-then-read framework to open-domain multi-answer QA faces the following problems: balancing relevance and diversity is non-trivial for the reranker due to the unknown effect on the performance of the subsequent reader; when using a large reader under a fixed memory constraint, the small reading budget prevents it from making use of more retrieved evidence that is valuable but filtered out; when using a generative reader to predict all answers in sequence based on all selected evidence, it learns pathological dependencies among answers.", "To address these issues, we propose to tackle this task with a recall-then-verify framework, which separates the reasoning process of each answer with a higher level of evidence usage while also leveraging large models under the same memory constraint.", "Some previous work argued that a reader can be confused by similar but spurious passages, resulting in wrong predictions.", "Therefore, they proposed answer rerankers (Wang et al., 2018a,b; Hu et al., 2019; Iyer et al., 2021) to rerank top predictions from readers.", "Our framework is related to answer reranking but with two main differences.", "First, a reader typically aggregates available evidence and already does a decent job of answer prediction even without answer reranking; an answer reranker is introduced to filter out hard false positive predictions from the reader.", "By contrast, our answer recaller aims at finding possible answers with high recall, most of which are invalid.", "Evidence focused on each answer is then aggregated and reasoned about by our answer verifier.", "It is also possible to introduce another model analogous to an answer reranker to filter out false positive predictions from our answer verifier.", "Second, answer reranking typically compares answer candidates to determine the most valid one, while our answer verifier selects multiple valid answers mainly based on the soundness of their respective evidence but without comparisons among answer candidates.", "Open-domain multi-answer QA can be formally defined as follows: given an open-ended question q, a question answering system is required to make use of evidence from a large-scale text corpus C and predict a set of valid answers {a_1, a_2, ..., a_n}.", "Questions and their corresponding answer sets are provided for training.",
"Evaluation: To evaluate passage retrieval and reranking, we adopt the metric MRECALL@k from (Min et al., 2021), which measures whether the top-k passages cover at least k distinct answers (or n answers if the total number of answers n is less than k).", "To evaluate question answering performance, we follow (Min et al., 2020b) to use the F1 score between gold answers and predicted ones.",
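
A minimal sketch of the MRECALL@k metric defined above; the covers helper is an assumed stand-in for checking whether a passage contains an answer (real implementations typically match normalized answer spans).

    # MRECALL@k: do the top-k passages cover at least min(k, n)
    # distinct gold answers? `covers` is a naive assumed helper.
    def covers(passage: str, answer: str) -> bool:
        return answer.lower() in passage.lower()

    def mrecall_at_k(ranked_passages, gold_answers, k):
        top_k = ranked_passages[:k]
        covered = {a for a in gold_answers
                   if any(covers(p, a) for p in top_k)}
        return len(covered) >= min(k, len(gold_answers))
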
"In this section, we will briefly introduce the representative and state-of-the-art rerank-then-read pipeline from (Min et al., 2021) for open-domain multi-answer questions, and provide an empirical analysis of this framework.", "Dense retrieval is widely adopted by open-domain question answering systems (Min et al., 2020a).", "A dense retriever measures the relevance of a passage to a question by computing the dot product of their semantic vectors, encoded by a passage encoder and a question encoder, respectively.", "Given a question, a set of the most relevant passages, denoted as B (|B| ≪ |C|), is retrieved for subsequent processing.", "To improve the quality of evidence, previous work (Nogueira et al., 2020; Gao et al., 2021) finds it effective to utilize a passage reranker, which is more expressive than a passage retriever, to rerank retrieved passages and select the k best ones to feed a reader for answer generation (k < |B|).", "With a fixed memory constraint, there is a trade-off between the number of selected passages and the size of the reader.", "As shown by (Min et al., 2021), with good reranking, using a larger reader is more beneficial.", "To balance relevance and diversity of evidence, Min et al. (2021) proposed a passage reranker called JPR for joint modeling of selected passages.", "Specifically, they utilized T5-base (Raffel et al., 2020) to encode retrieved passages following (Izacard and Grave, 2021b) and decode the indices of selected passages autoregressively using a tree-decoding algorithm.", "JPR is designed to seek passages that cover new answers, while also having the flexibility to select more passages covering the same answer, especially when there are fewer than k answers for the question.", "A reader takes as input the top-ranking passages, and predicts answers.", "Min et al. (2021) adopted a generative encoder-decoder reader initialized with T5-3b, and used the fusion-in-decoder method from (Izacard and Grave, 2021b), which efficiently aggregates evidence from multiple passages.", "Specifically, each passage is concatenated with the question and is encoded independently by the encoder; the decoder then attends to the representations of all passages and generates all answers in sequence, separated by a [SEP] token.", "To analyze the performance of the rerank-then-read framework for open-domain multi-answer questions, we built a system that resembles the state-of-the-art pipeline from (Min et al., 2021) but with two differences (code and models from (Min et al., 2021) were not publicly available in the period of this work).", "First, we used the retriever from (Izacard and Grave, 2021a).", "Second, instead of using JPR, we used an oracle passage reranker (OPR): a passage p is ranked higher than another passage p′ if and only if 1) p covers some answer while p′ covers none, or 2) both p and p′ cover or fail to cover some answer but p has a higher retrieval score.", "Following (Min et al., 2021), we retrieved |B| = 100 Wikipedia passages, k = 10 of which were selected by the reranker.", "Table 2 shows model performance on a representative multi-answer dataset called AMBIGQA (Min et al., 2020b).", "Compared with JPR, OPR is better in terms of reranking, with similar question answering results. (With the oracle knowledge of whether a passage contains a gold answer during reranking, OPR is probably still far from being a perfect reranker; notably, we are not striving for a better rerank-then-read pipeline for multi-answer questions, but use OPR as a representative case to analyze the problems a rerank-then-read pipeline may face.)", "Though 3,670 diverse gold answers are covered by OPR on the dev set, the reader predicts only 1,554 of them.", "Our empirical analysis and findings are detailed as follows.", "(1) To leverage a large reader under a fixed memory constraint, a reranker should select only a few highly-relevant passages to cover diverse answers, while balancing relevance and diversity is non-trivial.", "As shown by Figure 1a (bottom), the number of selected supporting passages of predicted gold answers has a widespread distribution. (We use 'supporting passages' of an answer to refer to passages that cover the answer.)", "There may be cases where redundant false positive evidence is selected and can be safely replaced with passages that cover other gold answers.", "However, it is non-trivial for the reranker to know beforehand whether a passage is redundant, and how many or which supporting passages of an answer are strong enough to convince the reader.", "(2) Multiple answers sharing a small reading budget prevents a reader from using more evidence that may be valuable but is filtered out by the reranker.", "Due to the shared reading budget, it is inevitable that some answers are distributed with fewer supporting passages.", "As shown by Figure 1a, a gold answer covered by OPR but missed by the reader generally has significantly fewer supporting passages fed to the reader (3.13 on average) than a predicted gold answer (5.08 on average), but not because of lacking available evidence.", "There is more evidence in retrieved passages for missed answers but filtered out by the reranker.", "As shown by Figure 1b, OPR has a much lower level of evidence usage for missed answers.",
"(3) As the reader predicts answers all at once conditioned on all selected passages, whether a valid answer will be predicted also pathologically depends on evidence of some other valid answer(s), which partly accounted for the large number of gold answers missed by the reader.", "For verification, we attacked OPR's reader on the dev set of AMBIGQA as follows: a question is a target if and only if 1) it has a gold answer covered by OPR but missed by the reader, and 2) it has a predicted gold answer whose supporting passages cover no other gold answer; a successful attack on a targeted question means that a missed answer is recovered after removing a subset of supporting passages of some predicted answer without removing any supporting passage of the other gold answers.", "(Removed passages were replaced with the same number of top-ranking passages that cover no gold answer, so that the number of passages fed to the reader remained unchanged.)", "There are 179 targeted questions; for 43.6% of them, we successfully recovered at least one missed gold answer.", "Figure 3 shows the success rate broken down by the number of answers covered by the reader's input, indicating that predictions tend to be brittle when the reader is fed with many diverse supporting passages.", "One possible explanation of the pathological dependencies is that the reader implicitly compares the validity of answer candidates and predicts the most likely ones.", "However, for 40.0% of successfully attacked questions, according to OPR, supporting passages of recovered missed answers are more relevant than the removed passages of predicted answers.", "Notably, Min et al. (2020b) also had a similar observation on another rerank-then-read pipeline, i.e., it is hard to argue that the predicted answers are more likely than the missed ones.", "To avoid the issues faced by the rerank-then-read framework, we propose a recall-then-verify framework, which separates the reasoning process of each answer so that answers (1) can be individually distributed with the maximum supporting passages allowed on the same hardware, and (2) are predicted mainly based on their own evidence.", "Figure 2 shows our framework.", "Specifically, we first guess possible answers based on retrieved passages using an answer recaller; an evidence aggregator then aggregates evidence for each answer candidate; and finally, an answer verifier verifies each candidate and outputs valid ones.", "Our answer recaller, based on T5, is trained to predict all gold answer(s) in sequence (separated by a [SEP] token) from each retrieved positive passage p ∈ B that covers some gold answer(s).", "We also train the recaller to predict the 'irrelevant' token given a negative passage so that the recaller can filter out negative candidates; the number of negatives per positive used for training is denoted as neg.", "The set of answer candidates recalled during inference is denoted as A = {a_1, a_2, ..., a_m}.", "Though a passage may not contain strong enough evidence to support an answer, by exploiting semantic clues in the question and the passage (e.g., the answer type), it is sufficient for even a weak model to achieve high recall.", "However, this comes at the cost of low precision, which necessitates answer verification based on more supporting passages.",
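
A minimal sketch of the recall stage just described: the recaller is run on each retrieved passage independently and the candidates are pooled. The recall_answers callable is an assumed stand-in for the T5-based recaller.

    # Pool candidate answers from per-passage recaller outputs;
    # `recall_answers` may return the special 'irrelevant' marker
    # for negative passages, which is filtered out here.
    def recall_candidates(question, passages, recall_answers):
        candidates = set()
        for p in passages:
            for answer in recall_answers(question, p):
                if answer != "irrelevant":
                    candidates.add(answer)
        return candidates  # high recall, low precision; verified next
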
verification.", "Our evidence aggregator resembles OPR: for a specific candidate a i , we encode the question-candidate pair with the retriever's question encoder; a passage p is ranked higher than another passage p (cid:48) if and only if 1) p covers a i while p (cid:48) does not 2) or both p and p (cid:48) cover or fail to cover a i but the semantic vector of p is closer to that of the question-candidate pair.", "We denote the topk relevant passages of a i as E i .", "Given a candidate a i and its evidence E i , our answer verifier, based on T5-3b, predicts whether a i is valid, using the fusion-in-decoder method from (Izacard and Grave, 2021b).", "Each passage from E i is concatenated with the question and the candidate, and is encoded independently; the decoder then attends to the representations of all passages and is trained to produce the tokens right or wrong depending on whether the encoded candidate is valid or not 6 .", "During inference, we compute the validity score of a candidate by taking the normalized probability assigned to the token right: P ( a i is valid ) = exp( logit ( right | q, a i , E i )) (cid:80) t { right , wrong } exp( logit ( t | q, a i , E i )) (1) Candidates with their validity scores higher than a threshold will be produced as final predictions.", "We conducted experiments on two multi-answer QA datasets, whose statistics are shown in Table", "3. 6 We have tried other verbalizers such as yes and no, but found no significant difference.", "WEBQSP (Yih et al., 2016) is a semantic parsing dataset for knowledge base question answering, where answers are a set of entities in Freebase.", "Following (Min et al., 2021), we repurposed this dataset for textual QA based on Wikipedia 7 .", "AMBIGQA (Min et al., 2020b) originates from NQ (Kwiatkowski et al., 2019), where questions are annotated with equally valid answers from Wikipedia.", "We compare our recall-then-verify system with two state-of-the-art rerank-then-read systems.", "REFUEL (Gao et al., 2021) selects 100 top-ranking passages from 1,000 retrieved passages, and predicts answers with a reader based on BART large (Lewis et al., 2020).", "It also has a round-trip prediction mechanism, i.e., to generate disambiguated questions based on predicted answers, which are re-fed to the reader to recall more answers.", "JPR (Min et al., 2021) is a passage reranker which jointly models selected passages.", "With improved reranking performance, Min et al. (2021) selected only 10 passages from 100 retrieved passages, and used a reader based on T5-3b which is much larger and more powerful than REFUEL 's reader, while requiring no more memory resources than REFUEL .", "Our retrieval corpus is the English Wikipedia from 12/20/2018.", "We finetuned the dense retriever from (Izacard and Grave, 2021a) on each multi-answer dataset.", "The answer recaller and the answer verifier were initialized with T5-3b; both were pre-trained on NQ and then finetuned on each multi-answer dataset.", "neg was 0.1 when finetuning the recaller.", "We retrieved 100 passages for a question, and verified each candidate with k =10 passages.", "The threshold for verification was tuned on the dev set based on the sum of F1 scores on all questions (F1 (all)) and questions with multiple answers (F1 (Multi)); the best on WEB QSP/A MBIGQA are 7 Our train/dev split on WEBQSP is different from Min et al. 
(2021)'s, as their split was not publicly available in the period of this work.)", "0.8/0.5, respectively.", "Experiments with different model choices for the recaller and different values of neg, k, and the threshold are shown in Section 6.5.", "Please refer to the Appendix for more implementation details.", "Memory Constraint: Min et al. (2021) considered fixed hardware and trained a reader with the maximum number of passages.", "We follow this memory constraint, under which a reader/verifier based on T5-3b can encode up to 10 passages, each of length no longer than 360 tokens, at a time.", "Due to candidate-aware evidence aggregation and a fixed, sufficient number of passages distributed to each candidate, our recall-then-verify framework can make use of most retrieved supporting passages (see our improvements over OPR in Figure 1b).", "With a higher level of evidence usage, our recall-then-verify system outperforms state-of-the-art rerank-then-read baselines on both multi-answer datasets, as shown by Table 4.", "Though focused on multi-answer questions, our framework is also applicable to the single-answer scenario and achieves state-of-the-art results on NQ.", "Please refer to the Appendix for more details.", "In this section, we present ablation studies on AMBIGQA.", "Please refer to the Appendix for results on WEBQSP, which lead to similar conclusions.", "Model Choices for the Answer Recaller As shown by Table 5, though T5-base is commonly recognized as a much weaker model than T5-3b, a recaller based on T5-base can achieve a high coverage of gold answers, leading to competitive end-to-end performance on the test set.", "Necessity of Verification To investigate whether the recaller has the potential to tackle multi-answer questions alone, we tuned the precision of the re- [Table 5: Performance of recallers on AMBIGQA, trained with different models and neg; columns report the recaller (T5 size, neg), the verification threshold, |A|, # Hit, Recall, Precision, and dev/test F1.]", "caller by varying neg.", "As shown in Table 5, with increased neg, the recaller learns to recall answers more precisely but still significantly underperforms the overall recall-then-verify system.", "It is likely that the recaller is trained on false positive passages, which may mislead the recaller into being over-conservative in filtering out hard negative passages.", "By contrast, using more evidence for verification is less likely to miss true positive evidence if there is any for a candidate, and is thus not prone to mislead the verifier.", "Reducing Answer Candidates Though only using our recaller for multi-answer QA falls short, the recaller can be trained to shrink the number of candidates so that the burden on the verifier can be reduced.", "As shown by Table 5, a small value of neg helps reduce answer candidates without significantly lowering recall.", "Effect of k Figure 4 shows the benefit of using more evidence for verification.", "As k increases from 1 to 10, there is a significant boost in F1 scores.", "Effect of the Threshold As shown by Figure 4a, the balance between recall and precision can be controlled by the verification threshold: a lower threshold leads to higher recall and
may benefit performance on questions with multiple answers.", "With k=10, our system outperforms the previous state-of-the-art system for a wide range of thresholds.", "As shown by Figure 4b, under the best setups (k=10, threshold=0.5), our system predicts 31.7% and 34.1% more gold answers than the system using OPR on all questions and on questions with multiple answers, respectively.", "Dependencies among Answers Despite being candidate-aware, the aggregated evidence E_i can also include supporting passages of some other gold answer(s).", "We therefore investigated how answer verification is affected by the evidence of the other gold answers.", "Specifically, we attacked the verifier as follows: a question-candidate pair is a target if and only if 1) the candidate a_i is a gold answer and 2) the aggregated evidence E_i includes at least one supporting passage of some other gold answer(s) that does not cover a_i; we removed an arbitrary subset of supporting passages of the other gold answer(s) at a time (Footnote 8) without removing any supporting passages of a_i, and recorded the worst changes of the predicted validity scores of a_i.", "As shown by Figure 5, the changes are small, indicating that missed gold candidates with low scores are not mainly suppressed by some other answer(s), and that predicted gold candidates with high scores are verified (Footnote 8: Removed passages were replaced with the same number of top-ranking passages that cover no gold answers.)", "mainly based on their associated evidence.", "Among 3,288 recalled gold answers on the dev set of AMBIGQA, the answer verifier misses 1,242 of them and outputs 1,323 wrong predictions.", "We manually analyzed 50 random samples, 25 of which are missed gold answers and 25 of which are wrong predictions.", "Table 6 reports our analysis.", "For 76% of missed gold answers, our evidence aggregator actually aggregates straightforward true positive evidence.", "Among these missed answers with straightforward evidence, 58% have validity scores higher than 0.2 but lower than the threshold of 0.5.", "We attacked the verifier on missed gold answers with validity scores below 0.2 as in Section 6.5.2, and found that the maximum change of predicted scores is small on average (+0.04), indicating that the low scores cannot be attributed to negative distraction by the other gold answer(s).", "We conjecture that, as it is difficult even for human annotators to find all valid answers to an open-domain question (Min et al., 2020b), the verifier was trained to refute false negative candidates, resulting in unexpectedly low scores on some straightforward valid answers.", "Notably, 80% of our wrong predictions turn out to be false negatives: 52% of wrong predictions are semantically equivalent to some annotated answer but are superficially different (Si et al., 2021); 28% of wrong predictions are unannotated false negatives.", "Therefore, it is likely that our system's performance is underestimated.", "In this section, we analyze the time complexity of our framework during inference, make comparisons with the state-of-the-art rerank-then-read framework JPR, and show how to reduce the computation cost of a recall-then-verify system.", "For convenience, we denote the encoder length and decoder length as L_p and L_a, respectively.", "Recaller vs.
Reranker The time complexity of answer recalling is $O(|B|(L_p^2 + L_a L_p + L_a^2))$, while that of passage reranking is $O(|B| L_p^2 + k|B| L_p + k^2)$.", "As encoding dominates the computation cost (its time complexity is $O(|B| L_p^2)$), given the same model size and |B|, the time complexities of answer recalling and passage reranking are at the same level.", "Verifier vs. Reader The time complexity of answer verification is $O(|A|(k L_p^2 + k L_p))$, while that of the reader is $O(k L_p^2 + L_a k L_p + L_a^2)$.", "As the reader decodes a sequence of length L_a in an autoregressive way, while the decoding length of the verifier is only 1, the ratio between the inference time of the verifier and that of the reader should be much less than |A|.", "Evidence Aggregator Evidence aggregation is significantly faster than answer recalling and verification, as the representations of Wikipedia passages are pre-computed.", "The time complexity is $O(|A|(L_p^2 + |B| \log k))$, where $L_p^2$ comes from encoding a question-candidate pair with the retriever's question encoder, and $|B| \log k$ comes from selecting the top-k relevant passages for a candidate.", "One can adjust the computation cost of a recall-then-verify system, depending on how much inference efficiency is valued over precision and recall, by (1) choosing a recaller model of suitable time complexity, (2) tuning neg to adjust the expected number of candidates |A| needed for verification, (3) or tuning the number of passages k used for verification.", "Table 7 shows the QA performance and inference efficiency of our systems with different configurations.", "Replacing T5-3b with T5-base for the recaller is significantly faster in answer recalling, but is much less precise and produces more answer candidates with the same neg, which increases the burden on the verifier and thus may fail to reduce the overall computation cost if neg is not raised.", "By also increasing neg and choosing a smaller k, as shown by the last row of Table 7, the overall time needed to answer a question on the dev set of AMBIGQA can be reduced to 1.88 sec on a single V100 GPU while also obtaining state-of-the-art F1 scores (50.7/38.2).", "By contrast, the rerank-then-read system from Min et al. (2021) using a T5-base JPR (k=10) and a T5-3b reader is estimated to take 1.51 sec per question (Footnote 10) with F1 scores of 48.5/37.6.", "In this paper, we empirically analyze the problems of the rerank-then-read framework for open-domain multi-answer questions, and propose the recall-then-verify framework, which separates the reasoning process of each answer so that 1) we can have a higher level of evidence usage, 2) and predicted answers are mainly based on associated evidence and are more robust to distraction by evidence of the other gold answer(s), 3) while also leveraging large models under the same memory constraint.", "On two multi-answer datasets, our framework significantly outperforms rerank-then-read baselines, setting new state-of-the-art records.", "This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).", "This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.", "(Footnote 10: The average inference time of JPR from Min et al.
(2021) is independent of its parameters given a fixed number of encoded tokens and a fixed decoder length, and can thus be estimated with a randomly initialized JPR.", "The average inference time of JPR's reader was estimated with OPR's reader.)", "We propose a recall-then-verify framework that will hopefully benefit information-seeking users with an enhanced ability to comprehensively exploit evidence from a large-scale corpus.", "As our predictions are verified with textual knowledge, our system itself would not raise significant new ethical concerns.", "All the datasets as well as the retrieval corpus in our experiments have been widely used for research purposes and, to our knowledge, do not have any attached privacy or ethical issues." ]
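The verification rule in the record above (Eq. 1) reduces to a two-way softmax over the logits of the tokens right and wrong, followed by a tuned threshold (0.8 on WEBQSP, 0.5 on AMBIGQA). A minimal Python sketch, assuming the verifier exposes those two logits per (question, candidate, evidence) triple; all function and variable names here are ours, not the paper's:

```python
import math

def validity_score(logit_right: float, logit_wrong: float) -> float:
    """Eq. (1): normalized probability assigned to the token 'right'."""
    m = max(logit_right, logit_wrong)      # shift logits for numerical stability
    zr = math.exp(logit_right - m)
    zw = math.exp(logit_wrong - m)
    return zr / (zr + zw)

def verify(candidates, logit_pairs, threshold=0.5):
    """Keep candidates whose validity score clears the tuned threshold."""
    return [a for a, (lr, lw) in zip(candidates, logit_pairs)
            if validity_score(lr, lw) > threshold]
```

Raising the threshold trades recall for precision, which is exactly the knob swept in the "Effect of the Threshold" analysis above.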
[ "abstain", "abstain", "other", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "objective", "objective", "result", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "abstain", "other", "objective", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We consider the problem of making efficient use of heterogeneous training data in neural machine translation (NMT).", "Specifically, given a training dataset with a sentence-level feature such as noise, we seek an optimal curriculum , or order for presenting examples to the system during training.", "Our curriculum framework allows examples to appear an arbitrary number of times, and thus generalizes data weighting, filtering, and fine-tuning schemes.", "Rather than relying on prior knowledge to design a curriculum, we use reinforcement learning to learn one automatically, jointly with the NMT system, in the course of a single training run.", "We show that this approach can beat uniform baselines on Paracrawl and WMT English-to-French datasets by +3.4 and +1.3 BLEU respectively.", "Additionally, we match the performance of strong filtering baselines and hand-designed, state-of-the-art curricula.", "Machine Translation training data is typically heterogeneous: it may vary in characteristics such as domain, translation quality, and degree of dif-ficulty.", "Many approaches have been proposed to cope with heterogeneity, such as filtering (Duh et al., 2013) or down-weighting (Wang et al., 2017) examples that are likely to be noisy or out of domain.", "A powerful technique is to control the curriculumthe order in which examples are presented to the systemas is done in fine-tuning (Freitag and Al-Onaizan, 2016), where training occurs first on general data, and then on more valuable in-domain data.", "Curriculum based approaches generalize data filtering and weighting 1 by allowing examples to be visited multiple times 1 Assuming integer weights.", "or not at all; and they additionally potentially enable steering the training trajectory toward a better global optimum than might be attainable with a static attribute-weighting scheme.", "Devising a good curriculum is a challenging task that is typically carried out manually using prior knowledge of the data and its attributes.", "Although powerful heuristics like fine-tuning are helpful, setting hyper-parameters to specify a curriculum is usually a matter of extensive trial and error.", "Automating this process with meta-learning is thus an attractive proposition.", "However, it comes with many potential pitfalls such as failing to match a human-designed curriculum, or significantly increasing training time.", "In this paper we present an initial study on meta-learning an NMT curriculum.", "Starting from scratch, we attempt to match the performance of a successful non-trivial reference curriculum proposed by Wang et al. (2018), in which training gradually focuses on increasingly cleaner data, as measured by an external scoring function.", "Inspired by Wu et al. 
(2018), we use a reinforcement-learning (RL) approach involving a learned agent whose task is to choose a corpus bin, representing a given noise level, at each NMT training step.", "A challenging aspect of this task is that choosing only the cleanest bin is sub-optimal; the reference curriculum uses all the data in the early stages of training, and only gradually anneals toward the cleanest.", "Furthermore, we impose the condition that the agent must learn its curriculum in the course of a single NMT training run.", "We demonstrate that our RL agent can learn a curriculum that works as well as the reference, obtaining a similar quality improvement over a random-curriculum baseline.", "Interestingly, it does so using a different strategy from the reference.", "This result opens the door to learning more sophisticated curricula that exploit multiple data at- [Figure 1: The agent's interface with the NMT system.]", "tributes and work with arbitrary corpora.", "Among the very extensive work on handling heterogeneous data in NMT, the closest to ours are techniques that re-weight (Chen et al., 2017) or re-order examples to deal with domain mismatch (van der Wees et al., 2017; Sajjad et al., 2017) or noise (Wang et al., 2018).", "The idea of a curriculum was popularized by Bengio et al. (2009), who viewed it as a way to improve convergence by presenting heuristically-identified easy examples first.", "Several recent papers (Kocmi and Bojar, 2017; Zhang et al., 2019; Platanios et al., 2019) explore similar ideas for NMT, and verify that this strategy can reduce training time and improve quality.", "Work on meta-learning a curriculum originated with Tsvetkov et al. (2016), who used Bayesian optimization to learn a linear model for ranking examples in a word-embedding task.", "This approach requires a large number of complete training runs, and is thus impractical for NMT.", "More recent work has explored bandit optimization for scheduling tasks in a multi-task problem (Graves et al., 2017), and reinforcement learning for selecting examples in a co-trained classifier (Wu et al., 2018).", "Finally, Liu et al. (2018) apply imitation learning to actively select monolingual training sentences for labeling in NMT, and show that the learned strategy can be transferred to a related language pair.", "The attribute we choose to learn a curriculum over is noise.", "To determine a per-sentence noise score, we use the contrastive data selection (CDS) method defined in Wang et al. (2018).", "Given the parameters θ_n of an NMT model trained on a noisy corpus, and parameters θ_c of the same model fine-tuned on a very small trusted corpus, the score contrasts the probability a sentence pair receives under θ_c with its probability under θ_n. [Figure 2: Linearly-decaying ε-greedy exploration.]", "Wang et al.
(2018) show that this correlates very well with human judgments of data quality.", "They use the CDS score in a heuristic, online schedule that slowly anneals from sampling mini-batches from all the training data to sampling only from the highest-scoring (cleanest) data.", "Our goal is to replace this heuristic curriculum with a learned one.", "Our agent uses deep Q-learning (DQN) (Mnih et al., 2015), which is a model-free reinforcement learning procedure.", "The agent receives an observation from the environment and conditions on it to produce an action which is executed upon the environment.", "It then receives a reward representing the goodness of the executed action.", "The agent chooses actions according to a state-action value (Q) function, and attempts to learn the Q-function so as to maximize expected total rewards.", "In our setup, the environment is the NMT system and its training data, as illustrated in Figure 1. We divide the training data into a small number of equal-sized bins according to CDS scores.", "At each step, the agent selects a bin (action) from which a mini-batch is sampled to train the NMT system.", "Our RL agent must balance exploration (choosing an action at random) versus exploitation (choosing the action which maximizes the Q-function).", "In our setup, this is done using a linearly-decaying ε-greedy exploration strategy (Figure 2).", "This strategy has three phases: (1) the warmup period, where we always explore; (2) the decay period, where the probability of exploration decreases and exploitation increases; (3) the floor, where we almost always exploit.", "Since we do not want to exploit an uninformed Q-function, the duration of exploration needs to be set carefully.", "In our experiments, we found that longer decays were useful, and the best performance was achieved when the decay was set to about 50% of the expected NMT training steps.", "The observation is meant to be a summary of the state of the environment.", "The NMT parameters are too numerous to use as a sensible observation at each time step.", "Inspired by Wu et al.
(2018), we propose an observation type which is a function of the NMT system's current performance at various levels of noise.", "We first create a prototype batch by sampling a fixed number of prototypical sentences from each bin of the training data.", "At each time step, the observation is the vector containing sentence-level log-likelihoods produced by the NMT system for this prototype batch.", "Since the observations are based on likelihood, a metric which decays aggressively at the beginning of NMT training, we use an NMT warmup period to exclude this period from RL training.", "Otherwise, the initial observations would be unlike any that occur later.", "Our objective is to find a curriculum which maximizes the likelihood of the NMT system on a development set.", "The RL reward that directly corresponds to this goal would be the highest likelihood value reached during an NMT training run.", "However, as we use only one NMT training run, having a single reward per run is infeasible.", "To provide a denser signal to the RL agent, we define the reward at a step to be the change in likelihood since the most recent previous step for which development-set likelihood is available.", "This has the desired property that the sum of per-step rewards maximized by the RL agent is equal to the NMT maximum-likelihood objective (on development data).", "We rely on the NMT warmup period described in the previous section to eliminate spuriously large rewards at the beginning of training.", "Our NMT model is similar to RNMT+ (Chen et al., 2018), but with only four layers in both encoder and decoder.", "Rewards (dev-set log-likelihood) are provided approximately every 10 training steps by an asynchronous process.", "We use the DQN agent implementation in Dopamine (Footnote 2), which includes an experience replay buffer to remove temporal correlations from the observations, among other DQN best practices.", "Due to the sparse and asynchronous nature of our rewards, we store (observation, action) transitions in a temporary buffer until a new reward arrives.", "At this point, transitions are moved from the temporary buffer to the DQN agent's replay buffer.", "The RL agent is trained after each NMT training step by sampling an RL mini-batch from the replay buffer.", "Our RL hyper-parameter settings are listed in the appendix.", "Following Wang et al. (2018), we use the Paracrawl and WMT English-French corpora for our experiments.", "These contain 290M and 36M training sentences, respectively.", "WMT is relatively clean, while a large majority of Paracrawl sentence pairs contain noise.", "We process both corpora with BPE, using a vocabulary size of 32k.", "Both corpora are split into 6 equal-sized bins according to their noise level, as provided by the CDS score.", "In both settings, the WMT newstest 2010-2011 corpus is used as trusted data for CDS scores, which are computed using the models and procedure described in Wang et al.
(2018).", "For the prototype batch used to generate observations, we extracted the 32 sentences whose CDS scores are closest to the mean in each bin, giving a total of 192 sentences.", "We use WMT 2012-2013 for development and WMT 2014 for test, and report tokenized, naturally-cased BLEU scores from the test checkpoint closest to the highest-BLEU dev checkpoint.", "To combat variance caused by sampling different batches per bin (which produces somewhat different results even when bins are visited in fixed or-der), all models were run twice with different random seeds, and the model with the best score on the dev set was chosen.", "Our results are presented in Table 1. Uniform baselines consist of:", "Uniform (6-bins) sample a bin uniformly at random, and then sample a mini-batch from that bin", "2 github.com/google/dopamine", "Surprisingly, 6-bins performs better than the standard NMT baseline.", "We hypothesize that this can be attributed to more homogeneous mini-batches.", "Filtered train only on the highest-quality data as determined by CDS scores: top 20% of the data for Paracrawl, top 33% for WMT.", "Fixed (cid:15) -schedule we use the (cid:15) -decay strategy of our best RL experiment, but always choose the cleanest bin when we exploit.", "Online the online schedule from Wang et al. (2018) adapted to the 6-bin setting.", "We verified experimentally that our performance matched the original schedule, which did not use hard binning.", "Learned curricula were trained over 2 bookend (worst and best) bins and all 6 bins.", "On the Paracrawl dataset, in the 2-bin setting, the learned curriculum beats all uniform baselines and almost matches the optimized filtering baseline.", "3 With 6-bins, it beats all uniform baselines by up to 2.5 BLEU and matches the hand-designed online baseline of Wang et al. 
(2018).", "On WMT, with 2 bins, the learned curriculum beats the 2-bin baseline, but not the uniform baseline over all data.", "3 The clean data available in the 2-bin setup is limited to the best bin (16%), while filtering uses slightly more data (20%).", "With 6 bins, the learned curriculum beats the uniform baseline by 1.5 BLEU, and matches the fil-tered baseline, which in this case outperforms the online curriculum by 0.6 BLEU.", "Our exploration strategy for Q-learning (see Figure", "2) forces the agent to visit all bins during initial training, and only gradually rely on its learned policy.", "This mimics the gradual annealing of the online curriculum, so one possibility is that the agent is simply choosing the cleanest bin whenever it can, and its good performance comes from the enforced period of exploration.", "However, the fact that the agent beats the fixed (cid:15) -schedule (see Table", "1) described above on both corpora makes this unlikely.", "Task-specific reward and observation engineering is critical when building an RL model.", "We performed ablation experiments to determine if the rewards and observations we have chosen contain information which aids us in the curriculum learning task.", "Table 2 shows the results of our experiments.", "The fixed reward experiments were conducted by replacing the default delta-perplexity based reward with a static reward which returns a reward of one when the cleanest bin was selected and zero otherwise.", "The fixed observation experiments used a static vector of zeroes as input at all time steps.", "Using fixed observations matches the performance of dynamic observations, from which we can draw two conclusions.", "First, the agent's good performance is due to associating higher rewards with better bins, but it learns to do so slowly (partly modulated by its (cid:15) -greedy schedule) so that it avoids the sub-optimal strategy of choosing only the best bin.", "Second, its ability to distinguish among bins is not impeded by the use of an observation vector that slowly evolves through time and never returns to previous states.", "Figure 3 shows a coarse visualization of the hand-optimized policy of Wang et al. 
(2018), adapted to our 6-bin scenario, compared to the Q-learning policy on the same scenario.", "The former, by design, telescopes towards the clean bins.", "Note that the latter policy is masked by the agent's exploration schedule, which slowly anneals toward nearly complete policy control, beginning at step 30,000.", "After this point, the learned policy takes over and continues to evolve.", "This learned policy has little in common with the hand-designed one.", "Instead of focusing on a mixture of the clean bins, it focuses on the cleanest bin and the second-to-noisiest.", "We hypothesize that returning to the noisy bin acts as a form of regularization, though this requires further study.", "We have presented a method to learn a curriculum for presenting training samples to an NMT system.", "Using reinforcement learning, our approach learns the curriculum jointly with the NMT system during the course of a single NMT training run.", "Empirical analysis on the Paracrawl and WMT English-French corpora shows that this approach beats the uniform sampling and filtering baselines.", "In addition, we were able to match a state-of-the-art hand-designed curriculum on Paracrawl and beat it on WMT.", "We see this as a first step toward enabling NMT systems to manage their own training data.", "In the future, we intend to improve our approach by eliminating the static exploration schedule and binning strategy, and to extend it to handle additional data attributes such as domain, style, and grammatical complexity.", "The authors would like to thank Wei Wang for his advice and help in replicating the CDS baselines." ]
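The linearly-decaying ε-greedy schedule in the record above (warmup, linear decay, floor) is simple to state in code. A sketch with illustrative step counts only; the record itself only says the decay is best set to roughly half of the expected NMT training steps, so the numeric defaults here are our assumptions:

```python
import random

def epsilon(step: int, warmup: int = 5_000,
            decay_end: int = 150_000, floor: float = 0.02) -> float:
    """Three phases: always explore, linear decay, near-pure exploitation."""
    if step < warmup:
        return 1.0
    if step >= decay_end:
        return floor
    frac = (step - warmup) / (decay_end - warmup)
    return 1.0 + frac * (floor - 1.0)

def choose_bin(step: int, q_values, n_bins: int = 6) -> int:
    """Pick the data bin for the next NMT mini-batch."""
    if random.random() < epsilon(step):
        return random.randrange(n_bins)                    # explore
    return max(range(n_bins), key=lambda b: q_values[b])   # exploit
```

A fixed ε-schedule baseline (as in the record's ablations) reuses epsilon() but replaces the argmax with a constant "cleanest bin" choice.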
[ "method", "method", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "objective", "other" ]
[ "Traditional generative dialogue models generate responses solely from input queries.", "Such information is insufficient for generating a specific response since a certain query could be answered in multiple ways.", "Recently, researchers have attempted to fill the information gap by exploiting information retrieval techniques.", "For a given query, similar dialogues are retrieved from the entire training data and considered as an additional knowledge source.", "While the use of retrieval may harvest extensive information, the generative models could be overwhelmed, leading to unsatisfactory performance.", "In this paper, we propose a new framework which exploits retrieval results via a skeleton-to-response paradigm.", "At first, a skeleton is extracted from the retrieved dialogues.", "Then, both the generated skeleton and the original query are used for response generation via a novel response generator.", "Experimental results show that our approach significantly improves the informativeness of the generated responses.", "This paper focuses on tackling the challenges to develop a chit-chat style dialogue system (also known as chatbot).", "Chit-chat style dialogue system aims at giving meaningful and coherent responses given a dialogue query in the open domain.", "Most modern chit-chat systems can be categorized into two categories, namely, information retrieval-based (IR) models and generative models.", "The IR-based models (Ji et al., 2014; Hu et al., 2014) directly copy an existing response from a training corpus when receiving a response request.", "Since the training corpus is usually collected from real-world conversations and possibly post-edited Work done while DC was interning at Tencent AI Lab.", "by a human, the retrieved responses are informative and grammatical.", "However, the performance of such systems drops when a given dialogue history is substantially different from those in the training corpus.", "The generative models (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016a), on the other hand, generate a new utterance from scratch.", "While those generative models have better generalization capacity in rare dialogue contexts, the generated responses tend to be universal and noninformative (e.g., I don't know, I think so etc.) (Li et al., 2016a).", "It is partly due to the diversity of possible responses to a single query (i.e., the one-to-many problem).", "The dialogue query alone cannot decide a meaningful and specific response.", "Thus a well-trained model tends to generate the most frequent (safe but boring) responses instead.", "To summarize, IR-based models may give informative but inappropriate responses while generative models often do the opposite.", "It is desirable to combine both merits.", "Song et al. (2016) used an extra encoder for the retrieved response.", "The resulted dense representation, together with the original query, is used to feed the decoder in a standard SEQ 2S EQ model (Bahdanau et al., 2014).", "Weston et al. (2018) used a single encoder that takes the concatenation of the original query and the retrieved as input.", "Wu et al. 
(2019) noted that the retrieved information should be used in awareness of the context difference, and further proposed to construct an edit vector by explicitly encoding the lexical differences between the input query and the retrieved query.", "However, in our preliminary experiments, we found that the IR-guided models are inclined to degenerate into a copy mechanism, in which the generative models simply repeat the retrieved response without necessary modifications.", "A sharp performance drop is caused when the retrieved response is irrelevant to the input query.", "A possible reason is that both useful and useless information is mixed in the dense vector space, which is uninterpretable and uncontrollable.", "To address the above issue, we propose a new framework, skeleton-to-response, for response generation.", "Our motivations are twofold: (1) the guidance from IR results should only specify a response aspect or pattern, but leave the query-specific details to be elaborated by the generative model itself; (2) the retrieval results typically contain excessive information, such as inappropriate words or entities.", "It is necessary to filter out irrelevant words and derive a useful skeleton before use.", "Our approach consists of two components: a skeleton generator and a response generator.", "The skeleton generator extracts a response skeleton by detecting and removing unwanted words in a retrieved response.", "The response generator is responsible for adding query-specific details to the generated skeleton for query-to-response generation.", "A dialogue example illustrating our idea is shown in Fig. 1.", "Due to the discrete choice of skeleton words, the gradient in the training process is no longer differentiable from the response to the skeleton generator.", "Two techniques are proposed to solve this issue.", "The first technique is to employ the policy gradient method to reward the output of the skeleton generator based on the feedback from a pre-trained critic.", "An alternative technique is to solve both the skeleton generation and the response generation in a multi-task learning fashion.", "Our contributions are summarized as follows: (1) we develop a novel framework to inject the power of IR results into generative response models by introducing the idea of skeleton generation; (2) our approach generates response skeletons by detecting and removing unnecessary words, which facilitates the generation of specific responses while not spoiling the generalization ability of the underlying generative models; (3) experimental results show that our approach significantly outperforms other compared methods, resulting in more informative and specific responses.", "In this work, we propose to construct a response skeleton based on the results of IR systems for guiding the response generation.", "The skeleton-to- [Figure 1 example. Query: My son loves Disneyland.]", "response paradigm helps reduce the search space of possible responses and provides useful elements missing in the given query.", "Our model consists of two components, namely, the skeleton generator and the response generator.", "These components are parameterized by the above two probabilistic models, denoted by θ_ske and θ_res, respectively.", "Fig.
2 depicts the overall architecture of our proposed framework.", "The skeleton generator transforms a retrieved response into a skeleton by explicitly removing inappropriate or useless information regarding the input query q.", "We consider this procedure as a series of word-level masking actions.", "Following Wu et al. (2019), we first construct an edit vector by comparing the difference between the original query q and the retrieved query q′.", "In (Wu et al., 2019), the edit vector is used to guide the response generation directly.", "In our model, the edit vector is used to estimate, for every word in a sentence, the probability of its being kept or masked.", "We define two word sets, namely insertion words I and deletion words D.", "The insertion words include words that are in the original query q but not in the retrieved query q′, while the deletion words do the opposite.", "The two bags of words highlight the changes in the dialogue context, corresponding to the changes in the response.", "The edit vector z is thus defined as the concatenation of the representations of the two bags of words.", "We use the weighted sum of the
, h | r (cid:48) | ) .", "The weight w 1 is calculated by: w 1 = exp( s w 1 ) (cid:80) w I exp( s w ) , s w 1 = v (cid:62) I tanh( WI [( w 1 ) h | r (cid:48) | ]) , (2) where v I and WI are learnable parameters.", "The weight w 2 is obtained in a similar way with an-other set of parameters v D and WD .", "After acquiring the edit vector, we transform the prototype response r (cid:48) to a skeleton t by the following equations: t = ( ( r (cid:48) 1 , h 1 , z ) , ( r (cid:48) 2 , h 2 , z ) , , ( r (cid:48)| r (cid:48) | , h | r (cid:48) | , z )) , ( r (cid:48) i , h i , z ) = (cid:40) < blank > if m i = 0 , r (cid:48) i else , (3) where m i is the indicator and equals 0 if r (cid:48) i is replaced with a placeholder < blank > and 1 otherwise.", "The probability of m i = 1 is computed by P ( m i = 1) = sigmoid ( W m [ h i z ] + b m ) .", "The response generator can be implemented using most existing IR-augmented models (Song et al., 2016; Weston et al., 2018; Pandey et al., 2018), just by replacing the retrieved response input with the corresponding skeleton.", "We discuss our choices below.", "Encoders Two separate bidirectional LSTM (biLSTM) networks are used to obtain the distributed representations of the query memories and the skeleton memories, respectively.", "For biLSTM, the concatenation of the forward and the backward hidden states at each token position is considered a memory slot, producing two memory pools: M q = { h 1 , h 2 , . . . , h | q | } for the input query, and M t = { h (cid:48) 1 , h (cid:48) 2 , . . . , h (cid:48)| t | } for the skeleton.", "1 Decoder During the generation process, our decoder reads information from both the query and the skeleton using attention mechanism (Bah-danau et al., 2014; Luong et al., 2015).", "To query the memory pools, the decoder uses the hidden state s t of itself as the searching key.", "The matching score function is implemented by bilinear functions: ( h k , s t ) = h kT W q s t ; ( h (cid:48) k , s t ) = h (cid:48) kT W t s t , (5) where W q and W t are trainable parameters.", "A query context vector c t is then computed as a weighted sum of all memory slots in M q , where the weight for a memory slot h k is exp( ( h k , s t )) / ( (cid:80) | q | i =1 exp( ( h i , s t ))) .", "A skeleton context vector c (cid:48) t is computed in a similar spirit by using ( h (cid:48) k , s t ) 's.", "The probability of generating the next word r t is then jointly determined by the decoder's state s t , the query context c t and the skeleton context c (cid:48) t .", "We first fuse the information of s t and c t by a linear transformation.", "For c (cid:48) t , a gating mechanism is additionally introduced to control the information flow from skeleton memories.", "Formally, the probability of the next token r t is estimated by y t followed by a softmax function over the vocabulary: y t = ( W c [ s t c t ]) g t + c (cid:48) t (1 g t ) , (6) where g t = f g ( s t , c t , c (cid:48) t ) is implemented by a single layer neural network with sigmoid output layer.", "Given that our skeleton generator performs nondifferentiable hard masking, the overall model cannot be trained end-to-end using the standard maximum likelihood estimate (MLE).", "A possible solution that circumvents this problem is to treat the skeleton generation and the response generation as two parallel tasks and solve them jointly 1 Note the skeleton memory pool M t could contain multiple response skeletons, further discussed in the experiment section.", "in a multi-task learning fashion.", 
"An alternative is to bridge the skeleton generator and the final response output using reinforcement learning (RL) methods, which can exclusively inform the skeleton generator with the ultimate goal.", "The latter option is referred as cascaded integration while the former is called joint integration .", "Recall that we have formulated the skeleton generation as a series of binary classifications.", "Nevertheless, most of the dialogue datasets are end-to-end query-response pairs without explicit skeletons.", "Hence, we propose to construct proxy skeleton s to facilitate the training.", "Definition 1 Proxy Skeleton: Given a training quadruplet ( q, q (cid:48) , r, r (cid:48) ) and a stop word list S , the proxy skeleton for r is generated by replacing some tokens in r (cid:48) with a placeholder < blank > .", "A token r (cid:48) i is kept if and only if it meets the following conditions", "1. r (cid:48) i / S 2. r (cid:48) i is a part of the longest common subsequence (LCS) (Wagner and Fischer, 1974) of r and r (cid:48) .", "The proxy skeletons are used in different man-ners according to the integration method, which we will introduce below.", "To avoid breaking the differentiable computation, we connect the skeleton generator and the response generator via a shared network architecture rather than by passing the discrete skeletons.", "Concretely, the last hidden states in our skeleton generator (i.e, the hidden states that are utilized to make the masking decisions) are used as the skeleton memories in response generation.", "The training objective is the sum of the proxy skeleton labels likelihood L ( ske ) and the response likelihood L ( res ) : L ( res ske ) = L ( res ) + L ( ske ) , (7) where is a harmonic weight, and it is set as 1 .", "Policy gradient methods (Williams, 1992) can be applied to optimize the full model while keeping it running as cascaded process.", "We regard the skeleton generator as the first RL agent, and the response generator as the second one.", "The final output generated by the pipeline process and the intermediate skeleton are denoted by r and t respectively.", "Given the original query q and the generated response r , a reward R ( q, r ) for generating r is calculated.", "All network parameters are then optimized to maximize the expected reward by the policy gradient.", "The reward function R should convey both the naturalness of the generated response and its relevance to the given query q .", "A pre-trained critic is utilized to make the judgment.", "Inspired by comparative adversarial learning in (Li et al., 2018), we design the critic as a classifier that receives four inputs every time: the query q , a human-written response r , a machine-generated response r and a random response r (yet written by human).", "The critic is trained to pick the human-written response r among others correctly.", "Formally, the following objective is maximized: log D ( r | q, r, r, r ) = log exp( h r TMD h q ) (cid:80) x { r,r,r } exp( h x TMD h q ) , (8) where h x is a vector representation of x , produced by a bidirectional LSTM (the last hidden state), and MD is a trainable matrix.", "2 4 Related Work Multi-source Dialogue Generation Chit-chat style dialogue system dates back to ELIZA (Weizenbaum, 1966).", "Early work uses handcrafted rules, while modern systems usually use data-driven approaches, e.g., information retrieval techniques.", "Recently, end-to-end neural approaches (Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016a; Sordoni et al., 2015) have 
attracted increasing interest.", "For those generative models, a notorious problem is the safe response problem: the generated responses are dull and generic, which may be attributed to the lack of sufficient input information.", "The query alone cannot specify an informative response.", "To mitigate the issue, many research efforts have been devoted to introducing other information sources, such as unsupervised latent variables (Serban et al., 2017; Zhao et al., 2018; Cao and Clark, 2017; Shen et al., 2017), discourse-level variations (Zhao et al., 2017), topic information (Xing et al., 2017), speaker personality (Li et al., 2016b) and knowl- (Footnote 2: Note the classifier could be fine-tuned with the training of our generators, which falls into the adversarial learning setting (Goodfellow et al., 2014).)", "edge bases (Ghazvininejad et al., 2018; Zhou et al., 2018).", "Our work follows a similar motivation and uses the output of IR systems as an additional knowledge source.", "Combination of IR and Generative Models To combine IR and generative models, early work (Qiu et al., 2017) tried to re-rank the outputs from both models.", "However, the performance of such models is limited by the capacity of the individual methods.", "Most related to our work, Song et al. (2016), Weston et al. (2018) and Wu et al. (2019) encoded the retrieved result into a distributed representation and used it as an additional conditioning signal along with the standard query representation.", "While the former two only used the target side of the retrieved pairs, the latter took advantage of both sides.", "In a closed-domain conversation setting, Pandey et al. (2018) further proposed to weight different training instances by context similarity.", "Our model differs from them in that we take an extra intermediate step of skeleton generation to filter the retrieved information before use, which proves effective in avoiding erroneous copying in our experiments.", "Multi-step Language Generation Our work is also inspired by the recent success of decomposing an end-to-end language generation task into several sequential sub-tasks.", "For document summarization, Chen and Bansal (2018) first select salient sentences and then rewrite them in parallel.", "For sentiment-to-sentiment translation, Xu et al.
(2018) first use a neutralization module to remove emotional words and then add sentiment to the neutralized content.", "Not only does their decomposition improve the overall performance, but it also makes the whole generation process more interpretable.", "Our skeleton-to-response framework also sheds some light on the use of retrieval memories.", "We use the preprocessed data in (Wu et al., 2019) as our test bed.", "The total dataset consists of about 20 million single-turn query-response pairs collected from Douban Group (Footnote 3: https://www.douban.com/group).", "Since similar contexts may correspond to totally different responses, the training quadruples (q, r, q′, r′) for IR-augmented models are constructed based on response similarity.", "All responses are indexed by Lucene (Footnote 4).", "For each (q, r) pair, the top 30 similar responses with their corresponding contexts are retrieved: $\{(q'_i, r'_i)\}_{i=1}^{30}$.", "However, only those satisfying", "$0.3 \leq \operatorname{Jaccard}(r, r'_i) \leq 0.7$", "are leveraged for training, where Jaccard measures the Jaccard distance.", "The reason for the data filter is that nearly identical responses drive the model to do simple copying, while distantly different responses make the model ignore the retrieval input.", "About 42 million quadruples are obtained afterward.", "For computational efficiency, we randomly sample 5 million quadruples as training data for all experiments.", "The test set consists of 1,000 randomly selected queries that are not in our training data (Footnote 5).", "For a fair comparison, when training a generative model without the help of IR, the quadruples are split into pairs.", "We implement the skeleton generator based on a bidirectional recurrent neural network with 500 LSTM units.", "We concatenate the hidden states from both directions.", "The word embedding size is set to 300.", "For the response generator, the encoder for queries, the encoder for skeletons and the decoder are three two-layer recurrent neural networks with 500 LSTM units, where both encoders are bidirectional.", "We use dropout (Srivastava et al., 2014) to alleviate overfitting.", "The dropout rate is set to 0.3 across different layers.", "The same architecture for the encoders and the decoder is shared across the following baseline models, where applicable.", "Seq2Seq the standard attention-based RNN encoder-decoder model (Bahdanau et al., 2014).", "MMI Seq2Seq with the Maximum Mutual Information (MMI) objective in decoding (Li et al., 2016a).", "In practice, an inverse (response-to-query) Seq2Seq model is used to rerank the N-best hypotheses from the standard Seq2Seq model (N equals 100 in our experiments).", "EditVec the model proposed by Wu et al.
(2019), where the edit vector z is used directly at each decoding step by concatenating it to the word embeddings.", "IR the Lucene system is also used as a benchmark (Footnote 6).", "IR+rerank rerank the results of IR by MMI.", "Besides, we use JNT to denote our model with joint integration, and CAS for our model with cascaded integration.", "To validate the usefulness of the proposed skeletons,", "we design a response generator that takes an intact retrieved response as its skeleton input (i.e., completely skipping the skeleton generation step), denoted by SKP (Footnote 7).", "Evaluation Metrics Our method is designed to improve the informativeness of the generative model and alleviate the inappropriateness problem of the retrieval model.", "To measure the performance effectively, we use (Footnote 6: Note IR selects response candidates from the entire data collection, not restricted to the filtered one.)", "(Footnote 7: There are some other IR-augmented models using standard Seq2Seq models as SKP.", "Weston et al. (2018) used a rule to select either the generated response or the retrieved response as output, while we would like to focus on improving the quality of generated responses.", "Pandey et al. (2018) concentrated on closed-domain conversations; their hierarchical encoder is not suitable for our open-domain setting.", "We thus omit the empirical comparison with them.)", "human evaluation along with two automatic evaluation metrics.", "Human evaluation We asked three experienced annotators to score the group of responses (the best output of each model) for 300 test queries.", "The responses are rated on a five-point scale.", "A response should be scored 1 if it can hardly be considered a valid response, 3 if it is a valid but not informative response, and 5 if it is an informative response, which can deepen the discussion of the current topic or lead to a new topic.", "2 and 4 are for decision dilemmas.", "dist-1 & dist-2 These are defined as the number of unique uni-grams (dist-1) or bi-grams (dist-2) divided by the total number of tokens, measuring the diversity of the generated responses (Li et al., 2016a).", "Note the two metrics do not necessarily reflect the response quality, as the target queries are not taken into consideration.", "The results are depicted in Table 1.", "Overall, both of our models surpass all other methods, and our cascaded model (CAS) gives the best performance according to human evaluation.", "The contrast with the SKP model illustrates that the use of skeletons brings a significant performance gain.", "According to the dist-1&2 metrics, the generative models achieve significantly better diversity through the use of retrieval results.", "The retrieval method yields the highest diversity, which is consistent with our intuition that retrieved responses typically contain a large amount of information, though they are not necessarily appropriate.", "The MMI model also gives strong diversity, yet we find that it tends to merely repeat the words in queries.", "After removing the words in queries, the dist-2 of MMI and CAS becomes 0.710 and 0.751, respectively.", "This indicates our models are better at generating new words.", "To further reveal the source of the performance gain, we study the relation between response quality and query similarity (measured by the Jaccard similarity between the input query and the retrieved query).", "Our best model (CAS) is compared with the strong IR system (IR-rerank) and the previous state-of-the-art (EditVec) in Fig. 3.", "
The CAS model significantly boosts the performance when query similarity is relatively low, which indicates that introducing skeletons can alleviate erroneous copying and preserve the strong generalization ability of the underlying generative model.", "Generated Skeletons Although generating skeletons is not our primary goal, it is interesting to assess the skeleton generation.", "The word-level precision (P), recall (R), F1 score (F1) and accuracy (Acc.) of the well-trained skeleton generators are reported in Table 2, taking the proxy skeletons as gold references.", "Table 3 shows some skeleton-to-response examples of the CAS model and a case study among different models.", "In the leftmost example in Table 3, MMI and EditVec simply repeat the query, while the retrieved response is only weakly related to the query.", "Our CAS model extracts a useful word 'boy' from the retrieved response and generates a more interesting response.", "In the middle example, the MMI response makes less sense, and some private information is included in the retrieved response.", "Our CAS model removes the private information without loss of informativeness, while the outputs of the other models are less informative.", "The rightmost case shows that our response generator is able to recover from possible mistakes made by the skeleton generator.", "Retrieved Response vs. Generated Response To measure the extent to which the generative models are copying the retrieval, we compute the edit distances between generated responses and retrieved (Footnote 8: We merge the ranges [0.6, 0.8] and [0.8, 1.0] due to the sparsity of highly similar pairs.)", "responses.", "As shown in Fig. 4, in the comparison between SKP and the other models, the use of skeletons makes the generated response deviate more from its prototype response.", "Ideally, when the retrieved context is very similar to the input query, the changes between the generated response and the prototype response should be minor.", "Conversely, the changes should be drastic.", "Fig. 4 also shows that our models can learn this intuition.", "Single vs. Multiple Retrieval Pair(s) For a given query q, the retrieval pair set $R_q$ could contain multiple query-response pairs.", "We investigate two ways of using it under the CAS setting.", "Single For each query-response pair $(q'_i, r'_i) \in R_q$, a response $r_i$ is generated solely based on q and $(q'_i, r'_i)$.", "The resulting responses are re-ranked by generation probability.", "Multiple The whole retrieval set $R_q$ is used in a single run.", "Multiple skeletons are generated and concatenated in the response generation stage.", "The results are shown in Table 4.", "
"We attribute the failure of Multiple to the huge variety of the retrieved responses.", "The response generator receives many heterogeneous skeletons, yet it has no idea which one to use.", "How to effectively use multiple retrieval pairs for generating one single response remains an open question, and we leave it for future work.", "In this paper, we proposed a new methodology to enhance generative models with information retrieval technologies for dialogue response generation.", "Given a dialogue context, our methods generate a skeleton based on historical responses that respond to a similar context.", "The skeleton serves as an additional knowledge source that helps specify the response direction and complement the response content.", "Experiments on real-world data validated the effectiveness of our method for producing more informative and appropriate responses.", "We thank the anonymous reviewers for their helpful comments.", "The work described in this paper is partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "method", "abstain", "result", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "other" ]
[ "We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.", "Specifically, we collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.", "We use this approach to facilitate debugging models on downstream applications.", "Results confirm that the performance of all tested models is affected but the degree of impact varies.", "To interpret model behaviors, we further design a linguistic acceptability task to reveal their abilities in identifying ungrammatical sentences and the position of errors.", "We find that fixed contextual encoders with a simple classifier trained on the prediction of sentence correctness are able to locate error positions.", "We also design a cloze test for BERT and discover that BERT captures the interaction between errors and specific tokens in context.", "Our results shed light on understanding the robustness and behaviors of language encoders against grammatical errors.", "Pre-trained language encoders have achieved great success in facilitating various downstream natural language processing (NLP) tasks (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019b).", "However, they usually assume training and test corpora are clean and it is unclear how the models behave when confronted with noisy input.", "Grammatical error is an important type of noise since it naturally and frequently occurs in natural language, especially in spoken and written materials from non-native speakers.", "Dealing with such a noise reflects model robustness in representing language and grammatical knowledge.", "It would also have a positive social impact if language encoders can model texts from non-native speakers appropriately.", "Recent work on evaluating model's behaviors against grammatical errors employs various methods, including (1) manually constructing minimal edited pairs on specific linguistic phenomena (Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019a,b); (2) labeling or creating acceptability judgment resources (Linzen et al., 2016; Warstadt and Bowman, 2019; Warstadt et al., 2019a); and (3) simulating noises for a specific NLP task such as neural machine translation (Lui et al., 2018; Anastasopoulos, 2019), sentiment classification (Baldwin et al., 2017).", "These studies either focus on specific phenomena and mainly conduct experiments on designated corpora or rely heavily on human annotations and expert knowledge in linguistics.", "In contrast, our work automatically simulates natural occurring data and various types of grammatical errors and systematically analyzes how these noises affect downstream applications.", "This holds more practical significance to understand the robustness of several language encoders against grammatical errors.", "Specifically, we first propose an effective approach to simulating diverse grammatical errors, which applies black-box adversarial attack algorithms based on real errors observed on NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), a grammatical error correction benchmark.", "This approach transforms clean corpora into corrupted ones and facilitates debugging language encoders on downstream tasks.", "We demonstrate its flexibility by evaluating models on four language understanding tasks and a sequence tagging task.", "We next quantify model's capacities of identifying grammatical errors by probing individual layers of 
"We construct separate datasets for eight error types.", "Then, we freeze the encoder layers and add a simple classifier on top of each layer to predict the correctness of input texts and locate error positions.", "This probing task assumes that if a simple classifier behaves well on a designated type of error, then the encoder layer is likely to contain knowledge of that error (Conneau et al., 2017; Adi et al., 2017).", "Finally, we investigate how models capture the interaction between grammatical errors and contexts.", "We use BERT as an example and design an unsupervised cloze test to evaluate its intrinsic functionality as a masked language model (MLM).", "Our contributions are summarized as follows:", "1. We propose a novel approach to simulating various grammatical errors.", "The proposed method is flexible and can be used to verify the robustness of language encoders against grammatical errors.", "2. We conduct a systematic analysis of the robustness of language encoders and enhance previous work by studying the performance of models on downstream tasks with various grammatical error types.", "3. We demonstrate: (1) the robustness of existing language encoders against grammatical errors varies; (2) the contextual layers of language encoders acquire stronger abilities in identifying and locating grammatical errors than token embedding layers; and (3) BERT captures the interaction between errors and specific tokens in context, in particular the neighboring tokens of errors.", "Probing Pre-trained Language Encoders The recent success of pre-trained language encoders across a diverse set of downstream tasks has stimulated significant interest in understanding their advantages.", "A portion of past work on analyzing pre-trained encoders is mainly based on clean data.", "As mentioned in Tenney et al. (2019a), these studies can be roughly divided into two categories: (1) designing controlled tasks to probe whether a specific linguistic phenomenon is captured by models (Conneau et al., 2018; Peters et al., 2019; Tenney et al., 2019b; Liu et al., 2019a; Kim et al., 2019), or (2) decomposing the model structure and exploring what linguistic property is encoded (Tenney et al., 2019a; Jawahar et al., 2019; Clark et al., 2019).", "However, these studies do not analyze how grammatical errors affect model behaviors.", "Our work is related to studies on analyzing models with manually created noise.", "For example, Linzen et al. (2016) evaluate whether LSTMs capture the hierarchical structure of language by using verbal inflection to violate subject-verb agreement.", "Marvin and Linzen (2018) present a new dataset consisting of minimal edited pairs with opposite linguistic acceptability covering three specific linguistic phenomena and use it to evaluate the syntactic ability of RNNs.", "Goldberg (2019) adapts this method to evaluate BERT.", "Warstadt et al. (2019a) further compare five analysis methods under a single phenomenon.",
"Despite the diversity in methodology, these studies share common limitations.", "First, they each target only a single phenomenon or specific aspects of linguistic knowledge; second, their experiments are mainly based on constructed datasets instead of real-world downstream applications.", "In contrast, we propose a method that covers a broader range of grammatical errors, and we evaluate on downstream tasks.", "A concurrent work (Warstadt et al., 2019b) facilitates diagnosing language models by creating linguistic minimal-pair datasets for 67 isolated grammatical paradigms in English using linguist-crafted templates.", "In contrast, we do not rely heavily on an artificial vocabulary and templates.", "Synthesized Errors To evaluate and promote the robustness of neural models against noise, some studies manually create new datasets with specific linguistic phenomena (Linzen et al., 2016; Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019a).", "Others have introduced various methods to generate synthetic errors on clean downstream datasets, in particular machine translation corpora.", "Belinkov and Bisk (2018) and Anastasopoulos (2019) demonstrate that synthetic grammatical errors induced by character manipulation and word substitution can degrade the performance of NMT systems.", "Baldwin et al. (2017) augment original sentiment classification datasets with syntactically (reordering) and semantically (word substitution) noisy sentences and achieve higher performance.", "Our method is partly inspired by Lui et al. (2018), who synthesize semi-natural ungrammatical sentences by maintaining confusion matrices for five simple error types.", "Another line of studies uses black-box adversarial attack methods to create adversarial examples for debugging NLP models (Ribeiro et al., 2018; Jin et al., 2019; Alzantot et al., 2018; Burstein et al., 2019).", "These methods create a more challenging scenario for target models compared to the above data generation procedures.", "Our proposed simulation benefits from both adversarial attack algorithms and semi-natural grammatical errors.", "We first explain how we simulate ungrammatical scenarios.", "Then, we describe the target models and the evaluation design.", "Most downstream datasets contain only clean and grammatical sentences.", "Although recent language encoders achieve promising performance, it is unclear whether they perform equally well on text data with grammatical errors.", "Therefore, we synthesize grammatical errors on clean corpora to test the robustness of language encoders.", "We use a controllable rule-based method to collect and mimic errors observed on NUCLE following previous work (Lui et al., 2018; Sperber et al., 2017) and apply two ways to introduce errors to clean corpora: (1) we sample errors based on the frequency distribution of NUCLE and introduce them at plausible positions; (2) inspired by the literature on adversarial attacks (Ribeiro et al., 2018; Jin et al., 2019; Alzantot et al., 2018), we apply search algorithms to introduce grammatical errors that cause the largest performance drop on a given downstream task.", "Mimic Error Distribution on NUCLE We first describe how to extract the error distribution on NUCLE (Dahlmeier et al., 2013).", "NUCLE is constructed from naturally occurring data (student essays at NUS) annotated with error tags.", "Each ungrammatical sentence is paired with its correction, which differs only in local edits.", "The two sentences make up a minimal edited pair.",
"An example is as follows:", "1. Will the child blame the parents after he growing up?", "2. Will the child blame the parents after he grows up? ✓", "The NUCLE corpus contains around 59,800 sentences with an average length of 20.38 tokens.", "About 6% of the tokens in each sentence contain grammatical errors.", "There are 27 error tags, including Prep (indicating preposition errors), ArtOrDet (indicating article or determiner errors), Vform (indicating incorrect verb form), and so forth.", "We consider eight frequently occurring, token-level error types in NUCLE, as shown in Table 1.", "These error types perturb a sentence in terms of syntax (SVA, Worder), semantics (Nn, Wchoice, Trans), or both (ArtOrDet, Prep, Vform), and thus cover a wide range of noise in natural language.", "Then, we construct a confusion set for each error type based on the observations on NUCLE.", "Each member of a confusion set is a token.", "We assign a weight $w_{ij}$ between tokens $t_i$ and $t_j$ in the same set to indicate the probability that $t_i$ will be replaced by $t_j$.", "In particular, for ArtOrDet, Prep, and Trans, the confusion set consists of a set of tokens that frequently occur as errors or corrections on NUCLE.", "For each token $t_i$ in the set, we compute $w_{ij}$ based on how many times $t_i$ is replaced by $t_j$ in minimal edited pairs on NUCLE.", "Notice that we add a special token to represent deletion and insertion.", "For Nn, when we find a noun, we add it and its singular (SG) or plural (PL) counterpart to the set.", "For SVA, when we find a verb in the present tense, we add it and its third-person-singular (3SG) or non-third-person (not 3SG) counterpart to the set.", "For Worder, we exchange the position of an adverb with its neighboring adjective, participle or modal.", "For Vform, we use NLTK (Bird and Loper, 2004) to extract the present, past, progressive, and perfect tenses of a verb and add them to the set.", "For Wchoice, we select ten synonyms of a target word from WordNet.", "The substitution weights are set to be uniform for both Vform and Wchoice.", "Grammatical Error Introduction We introduce errors in two ways.", "The first is called probabilistic transformation.",
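One plausible way to encode a confusion set with substitution weights $w_{ij}$ is a nested dictionary. The sketch below is illustrative only: the weights are made up (in the paper they are estimated from counts over NUCLE minimal edited pairs), and the empty string stands in for the special deletion/insertion token.

```python
import random

# Toy confusion set for ArtOrDet; "" is the special empty token
# (deleting the article, or inserting one where none was present).
# W[t_i][t_j] approximates the probability that t_i is replaced by t_j.
W = {
    "a":   {"an": 0.2, "the": 0.5, "": 0.3},
    "an":  {"a": 0.3, "the": 0.4, "": 0.3},
    "the": {"a": 0.35, "an": 0.05, "": 0.6},
    "":    {"a": 0.4, "an": 0.1, "the": 0.5},
}

def sample_substitution(token: str) -> str:
    """Sample an error token t_j for t_i according to the weights w_ij."""
    candidates = W[token]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print(sample_substitution("the"))  # e.g. "a", "an", or "" (deletion)
```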
"Similar to Lui et al. (2018), we first obtain the parse tree of the target sentence using the Berkeley syntactic parser (Petrov et al., 2006).", "Then, we sample an error type from the error type distribution estimated from NUCLE and randomly choose a position to which this type of error can apply according to the parse tree.", "Finally, we sample an error token based on the weights from the confusion set of the sampled error type and introduce the error token at the selected position.", "However, probabilistic transformation only represents the average case.", "To debug and analyze the robustness of language encoders, we consider another, more challenging setting, worst-case transformation, where we leverage search algorithms from the black-box adversarial attack literature to determine error positions.", "Table 1: The target error types and the corresponding confusion sets. ArtOrDet (article/determiner errors): {a, an, the, ∅}; Prep (preposition errors): {on, in, at, from, for, under, over, with, into, during, until, against, among, throughout, to, by, about, like, before, across, behind, but, out, up, after, since, down, off, of, ∅}; Trans (link word/phrase errors): {and, but, so, however, as, that, thus, also, because, therefore, if, although, which, where, moreover, besides, of, ∅}; Nn (noun number errors): {SG, PL}; SVA (subject-verb agreement errors): {3SG, not 3SG}; Vform (verb form errors): {present, past, progressive, perfect}; Wchoice (word choice errors): {ten synonyms from WordNet Synsets}; Worder (word position errors): {adverb with adjective, participle, or modal}.", "More concretely, we obtain an operation set for each token in a sentence by considering all possible substitutions based on all confusion sets.", "Note that some confusion sets are not applicable; for example, the confusion set of Nn does not apply to a verb.", "Each operation in the operation set either replaces the target token or changes its position.", "Then, we apply a search algorithm to select operations from these operation sets that change the prediction of the tested model and apply them to generate error sentences.", "Three search algorithms are considered: greedy search, beam search, and the genetic algorithm.", "The greedy search attack is a two-step procedure.", "First, we evaluate the importance of the tokens in a sentence.", "The importance of a token is represented by the decrease in the likelihood of the model prediction when the token is deleted.", "The larger the decrease is, the more important the token is.", "After comparing all tokens, we obtain a list of tokens sorted in descending order of importance.", "Then, we walk through the list.", "For each token in the list, we try out all operations from the operation set associated with that token and then apply the operation that degrades the likelihood of the model prediction the most.", "We keep repeating step two until the prediction changes or a budget (e.g., the number of operations per sentence) is reached.", "Beam search is similar to greedy search.", "The only difference is that when we walk through the sorted list of tokens, we maintain a beam with fixed size k that contains the top k operation streams with the highest global degradation.", "The genetic algorithm is a population-based iterative method for finding more suitable examples.", "We start by randomly selecting operations to build a generation and then use a combination of crossover and mutation to find better candidates.", "We refer the readers to Alzantot et al. (2018) for details of the genetic algorithm in adversarial attack.",
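A compact sketch of the greedy search attack described above may help fix the procedure. Here `model_likelihood` (likelihood of the gold prediction) and `operations_for` (the per-token operation set, where each operation maps a token list to a perturbed token list) are hypothetical stand-ins, and the stop-on-prediction-flip condition is simplified to an edit budget.

```python
def greedy_attack(tokens, gold_label, model_likelihood, operations_for, budget=3):
    """Greedy search attack: rank tokens by importance (likelihood drop when
    deleted), then greedily apply the most damaging operation per token."""
    base = model_likelihood(tokens, gold_label)

    # Step 1: importance = likelihood drop when the token is deleted.
    importance = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        importance.append((base - model_likelihood(reduced, gold_label), i))
    order = [i for _, i in sorted(importance, reverse=True)]

    # Step 2: walk through tokens in descending importance and apply the
    # operation (substitution/reordering from the confusion sets) that
    # degrades the likelihood of the model prediction the most.
    edits = 0
    for i in order:
        if edits >= budget:
            break
        candidates = [(model_likelihood(op(tokens), gold_label), op)
                      for op in operations_for(tokens, i)]
        if not candidates:
            continue
        score, best = min(candidates, key=lambda c: c[0])
        if score < model_likelihood(tokens, gold_label):
            tokens = best(tokens)
            edits += 1
    return tokens
```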
"Comprehensive descriptions of all methods are found in Appendix C.", "3.2 Target Models We evaluate the following three pre-trained language encoders.", "Detailed descriptions of the models and training settings are in Appendix B.", "ELMo (Peters et al., 2018) is a three-layer LSTM-based model pre-trained on the bidirectional language modeling task on the 1B Word Benchmark (Chelba et al., 2014).", "We fix ELMo as a contextual embedding and add two layers of BiLSTM with an attention mechanism on top of it.", "BERT (Devlin et al., 2019) is a transformer-based (Vaswani et al., 2017) model pre-trained on the masked language modeling and next sentence prediction tasks.", "It uses 16GB of English text and adapts to downstream tasks by fine-tuning.", "We use BERT-base-cased for Named Entity Recognition (NER) and BERT-base-uncased for the other tasks, and perform task-specific fine-tuning.", "RoBERTa (Liu et al., 2019b) is a robustly pre-trained BERT model that uses larger pre-training data (160GB in total), longer pre-training time, a dynamic masking strategy, and other optimized pre-training methods.", "We use RoBERTa-base and perform task-specific fine-tuning.", "We design the following three evaluation methods to systematically analyze how language encoders are affected by grammatical errors in the input.", "Simulate Errors on Downstream Tasks Using the simulation methods discussed in Section 3.1, we are able to perform evaluation on existing benchmark corpora.", "In our experiments, we consider the target models independently.", "The whole procedure is as follows: given a dataset, the target model is first trained (fine-tuned) and evaluated on the clean training and development sets.", "Then, we discard the wrongly predicted examples from the development set and apply the simulation methods to perturb each remaining example.", "We compute the attack success rate (attacked examples / all examples) as an indicator of model robustness against grammatical errors.", "The smaller the rate is, the more robust a model is.", "Linguistic Acceptability Probing We design a linguistic acceptability probing task to evaluate each individual type of error.", "We consider two aspects: (1) whether the model can tell if a sentence is grammatically correct (i.e., a binary classification task); (2) whether the model can locate error positions at the token level.", "We fix the target model and train a self-attention classifier to perform both probing tasks.", "Cloze Test for BERT We design an unsupervised cloze test to evaluate the masked language model component of BERT based on minimal edited pairs.", "For each minimal pair that differs only in one token, we quantify how the probability of predicting a single masked token in the rest of the sentence is affected by this grammatical error.", "This method analyzes how the error token affects the clean context, which is complementary to Goldberg (2019), who focuses on the SVA error and discusses how clean contexts influence the prediction of the masked error token.", "In this section, we simulate grammatical errors and analyze the performance drops on downstream tasks.", "Datasets We use four language understanding datasets: MRPC (Dolan and Brockett, 2005), MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and SST-2 (Socher et al., 2013) from GLUE (Wang et al., 2019a), and a sequence tagging benchmark: CoNLL-2003 for NER.", "Detailed descriptions of these corpora are in Appendix A.",
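The evaluation protocol described above (attack only the correctly predicted development examples and report the fraction that flip) amounts to a short loop. The sketch below uses assumed `model` and `attack` interfaces rather than a real API.

```python
def attack_success_rate(model, attack, dev_set):
    """Attack only examples the model already predicts correctly, and report
    attacked examples / all (correctly predicted) examples."""
    correct = [(x, y) for x, y in dev_set if model.predict(x) == y]
    attacked = 0
    for x, y in correct:
        x_adv = attack(x, y, model)      # e.g. the greedy attack sketched above
        if model.predict(x_adv) != y:    # prediction flipped by the perturbation
            attacked += 1
    return attacked / len(correct)       # lower = more robust
```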
"We do not use the other datasets from GLUE since they are either small in size or only contain short sentences.", "Attack Settings For all tasks, we limit the maximum percentage of allowed modifications in a sentence to 15% of the tokens, which is a reasonable rate according to the statistics estimated from the real data.", "As shown in Table 3, the worst-case transformation only modifies around 9% of tokens overall under such a limitation.", "For MNLI and QNLI, we only modify the second sentence, i.e., the hypothesis and the answer, respectively.", "For MRPC, we only modify the first sentence.", "We do not apply the genetic algorithm to MNLI and QNLI due to the relatively large number of examples in their development sets, which would require an extremely long time to attack.", "For NER, we keep the named entities and only modify the remaining tokens.", "Results and Discussion Table 2 presents the test performance of the four target models on the standard development set of each task.", "Table 3 summarizes the attack success rates on the language understanding tasks, the decreases in F1 score on NER, and the mean percentage of modified tokens (numbers in brackets).", "All numbers are reported as percentages.", "As shown in Table 3, with the probabilistic transformation, the attack success rates fall between 2% (RoBERTa, QNLI) and 10% (ELMo, MRPC).", "With the worst-case transformation, we obtain the highest attack success rate of 81.1% (ELMo, genetic algorithm, MRPC) and an average attack success rate across all tasks of 29% by perturbing only around 9% of tokens.", "This result confirms that all models are influenced by ungrammatical inputs.", "The NER task is in general harder to influence with grammatical errors.", "In terms of the probabilistic transformation, the drop in F1 score ranges from 2% to 4%.", "For the worst-case transformation, the highest drop for NER is 18.33% (ELMo, beam search).", "Considering the different target models, we observe that the impact of grammatical errors varies among models.", "Specifically, RoBERTa exhibits strong robustness against the impact of grammatical errors, with consistently lower attack success rates (20.28% on average) and F1 score decreases (17.50% on average) across all tasks, especially on MRPC and MNLI.", "On the other hand, BERT, ELMo, and InferSent experience average attack success rates of 26.03%, 33.06%, and 36.07%, respectively, on the NLU tasks.", "Given the differences in pre-training strategies, we speculate that pre-training with more data might benefit model robustness against noisy data.", "This speculation is consistent with Warstadt et al. (2019b), where the authors also give a lightweight demonstration on LSTM and Transformer-XL (Dai et al., 2019) with varying training data.", "We leave a further exploration of this speculation and a detailed analysis of model architecture to future work.", "Note that in this experimental setting, for each model, we follow the literature and compute the attack success rate only on the instances where the model makes correct predictions.", "Therefore, the attack success rates across different models are not directly comparable.", "To compare the robustness of different encoders, we further examine the attack success rates on the common part of the development set on which all the models make correct predictions.", "We find that the overall trend is similar to that in Table 3.",
"For example, the greedy attack success rates of RoBERTa, BERT, and ELMo are 14.4%, 22.1%, and 46.8% on MRPC, and 28.2%, 30.0%, and 33.9% on SST-2, respectively.", "To better understand the effect of grammatical errors, we also analyze (1) which error type harms the performance the most, and (2) how different error rates affect the performance.", "For the first question, we represent the harm of an error type by the total number of times it is chosen in successful greedy attack examples.", "We conduct experiments to analyze BERT and RoBERTa on the development sets of MRPC, MNLI-m, and SST-2, as shown in Table 4.", "Figure 1: Attack success rate as the number of modified tokens in a sentence increases.", "Among all error types, Wchoice is the most harmful while Worder is the least.", "SVA ranks as the second most harmful type.", "Notice that though Nn changes a token in a similar way to SVA (both add or drop -s or -es in most cases), they have different influences on the model.", "As for errors related to function words, Prep plays a more important role in general, but ArtOrDet harms MNLI more.", "For the second question, we increase the allowed modifications of the greedy attack from 15% to 45% of the tokens in a sentence, resulting in an actual percentage of modified tokens under 20%.", "We evaluate all models on the development set of MNLI-m.", "Results are shown in Fig. 1.", "We find that all attack success rates grow almost linearly as we increase the modifications.", "ELMo and BERT perform almost the same, while InferSent grows faster at the beginning and RoBERTa grows more slowly towards the end.", "The average attack success rate reaches 70% when the error rate is around 20%.", "Our goal in this section is to assess the ability of the pre-trained encoders to identify grammatical errors.", "We use a binary linguistic acceptability task to test the model's ability to judge the grammatical correctness of a sentence.", "We further study whether the model can precisely locate error positions, which reflects its token-level ability.", "Data We construct separate datasets for each specific type of grammatical error.", "For each dataset, we extract 10,000 sentences whose lengths fall within 10 to 60 tokens from the 1B Word Benchmark (Chelba et al., 2014).", "Then, we introduce the target error type to half of these sentences using the probabilistic transformation and keep the error rate over each dataset around 3% (resulting in one or two errors in each sentence).", "Figure 2: Probing four layers of BERT on four error types. The left side shows the accuracy of the binary linguistic acceptability task. The right side shows the accuracy of locating error positions. Each row represents a specific layer, and each column represents a type of error: ArtOrDet, Nn, SVA, Worder from left to right. Full results are given in Appendix D.",
"Sentences are split into training (80%), development (10%) and test (10%) sets.", "Models We study the individual layers of ELMo (2 layers), BERT-base-uncased (12 layers) and RoBERTa-base (12 layers).", "In particular, we fix each layer and attach a trainable self-attention layer on top of it to obtain a sentence representation.", "The sentence representation is fed into a linear classifier to output the probability of whether the sentence is linguistically acceptable.", "See details about the self-attention layer and the linear classifier in Appendix B.3.", "We next extract the top two positions with the heaviest weights from the trained self-attention layer.", "If the positions with the error token are included, we consider the errors to be correctly located by the model at the token level.", "This indicates whether the contextual encoders provide enough information for the classifier to identify error locations.", "For comparison, we also evaluate the input embedding layer (non-contextualized, layer 0) of each model as a baseline.", "We compute accuracy for both the sentence-level and token-level evaluations.", "Results and Discussion We visualize the results of four layers of BERT on four error types, ArtOrDet, Nn, SVA, and Worder, in Fig. 2.", "Complete results for all layers and other error types are in Appendix D.", "We find that the mean sentence-level accuracies of the best contextual layers of BERT, ELMo, and RoBERTa across error types are 87.8%, 84.3%, and 90.4%, respectively, while the input embedding layers achieve 64.7%, 65.8%, and 66.0%.", "At the token level, despite being trained only on the prediction of whether a sentence is acceptable, the mean accuracies of the classifiers upon the best layers of BERT, ELMo, and RoBERTa are 79.3%, 63.3%, and 80.3%, compared to 48.6%, 18.7%, and 53.4% for the input embedding layers.", "These two facts indicate that the pre-trained encoder layers possess stronger grammatical error detecting and locating abilities than the input embedding layers.", "We also observe patterns related to specific models.", "Specifically, the middle layers (layers 7-9) of BERT are better at identifying errors than lower or higher layers, as shown in Fig. 2.", "But higher layers of BERT locate errors related to long-range dependencies and verbs, such as SVA and Vform, more accurately.", "To further investigate BERT's knowledge of error locations, we conduct the same token-level evaluation on the 144 attention heads in BERT.", "Results for Prep and SVA are visualized in Fig. 3.",
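The probing setup — a frozen encoder layer, a trainable self-attention pooling layer, and a linear classifier whose attention weights double as error locators — might look as follows in PyTorch. This is a minimal sketch under assumed dimensions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AcceptabilityProbe(nn.Module):
    """Probe on top of one frozen encoder layer: trainable self-attention
    pooling plus a linear classifier (acceptable vs. unacceptable)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)       # attention score per token
        self.classifier = nn.Linear(hidden_size, 2)   # sentence-level decision

    def forward(self, layer_states):                  # (batch, seq_len, hidden)
        attn = torch.softmax(self.scorer(layer_states).squeeze(-1), dim=-1)
        pooled = torch.einsum("bs,bsh->bh", attn, layer_states)
        return self.classifier(pooled), attn

# Token-level evaluation: the error counts as "located" if its position is
# among the two heaviest attention weights.
probe = AcceptabilityProbe(hidden_size=768)
states = torch.randn(1, 12, 768)          # dummy frozen-layer output
logits, attn = probe(states)
top2 = attn.topk(2, dim=-1).indices       # candidate error positions
```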
"We find that even in a completely unsupervised manner, some attention heads achieve 50%-60% accuracy in locating errors.", "Consistent with the self-attention layers, attention heads from the middle layers perform the best.", "See Appendix F for all error types.", "Due to space limits, we present the results of RoBERTa and ELMo in Appendix D and summarize the observations in the following.", "RoBERTa exhibits a better ability to detect and locate errors in lower layers compared to BERT and achieves its best performance in the top layers (layers 10-11).", "It is also good at capturing verb and dependency errors.", "On the other hand, the first layer of ELMo consistently gives the highest sentence-level classification accuracy.", "But its best-performing layer for locating errors depends on the error type and varies between the first and the second layer.", "In particular, the second layer of ELMo exhibits a strong ability to locate Nn and outperforms BERT in accuracy.", "This is surprising given the fact that Nn is not obvious from the character embeddings of layer 0 of ELMo.", "Figure 4: Probing BERT as an MLM.", "We further notice that for all models, Worder is the hardest type to detect at the sentence level, and ArtOrDet and Worder are the hardest types to locate at the token level.", "We hypothesize that this is related to the locality of these errors, which induces a weak signal for models to identify them.", "Appendix E demonstrates some examples of the token-level evaluation of BERT.", "We aim to reveal the interaction between grammatical errors and their nearby tokens by studying the masked language model (MLM) component of BERT.", "We investigate BERT as it is a typical transformer-based encoder.", "Our analysis can be extended to other models.", "Experimental Settings We conduct experiments on minimal edited pairs from NUCLE.", "We extract pairs with the error tags ArtOrDet, Prep, Vt, Vform, SVA, Nn, Wchoice, and Trans, and keep those that have only one token changed.", "This gives us eight collections of minimal edited pairs with sizes of 586, 1525, 1817, 943, 2513, 1359, 3340, and 452, respectively.", "Given a minimal edited pair, we consider the tokens within six tokens of the error token.", "We replace the same token in the grammatical and ungrammatical sentence with [MASK] one at a time and use BERT as an MLM to predict its likelihood.", "Then we compute the likelihood drop in the ungrammatical sentence and obtain the average drop over all minimal edited pairs.", "Results are shown in Fig. 4.",
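The cloze test just described can be reproduced with an off-the-shelf masked language model. The sketch below uses Hugging Face's `transformers`, assumes the masked word maps to a single WordPiece, and is a simplification of the procedure rather than the authors' code.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def token_likelihood(tokens, position):
    """Mask tokens[position] and return BERT's probability of the original
    token at that position."""
    masked = list(tokens)
    original = masked[position]
    masked[position] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_idx = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx]
    probs = torch.softmax(logits, dim=-1)
    return probs[0, tokenizer.convert_tokens_to_ids(original)].item()

# Likelihood drop for one clean context position in a minimal edited pair:
good = "the cat sits on the mat".split()
bad  = "the cat sit on the mat".split()   # SVA error at position 2
drop = token_likelihood(good, 1) - token_likelihood(bad, 1)  # mask "cat"
```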
"In general, we find that the decrease in likelihood at specific positions is greater than at others in the presence of errors.", "Given the fact that certain dependencies between tokens, such as subject-verb and determiner-noun dependencies, are accurately modeled by BERT, as demonstrated in prior work (Jawahar et al., 2019), we suspect that the presence of an error token will mostly affect its neighboring tokens (both syntactic and physical neighbors).", "This is consistent with our observation in Fig. 4 that in the case of SVA, where a subject is mostly the preceding token of a verb (although agreement attractors can exist between subject and verb), the preceding tokens of error positions get the largest likelihood decreases overall.", "In the case of ArtOrDet, where an article or a determiner can be both an indicator and a dependent of the subsequent noun, predicting the next tokens of error positions becomes much harder.", "We provide two running examples with ArtOrDet in Table 5 to further illustrate this point.", "Finally, we explore a data augmentation method based on the proposed grammatical error simulations.", "We apply the greedy search algorithm to introduce grammatical errors to the training examples of a target task and retrain the model on the combination of the original examples and the generated examples.", "We take the MRPC (Dolan and Brockett, 2005) dataset as an example to demonstrate the results.", "Figure 5: Results of the data augmentation defense.", "We augment the training set of MRPC with different proportions of adversarial examples, fine-tune BERT on the augmented training set, and then evaluate on both the original development set and the corrupted development set.", "Results are shown in Figure 5.", "We find that by adding a small number of adversarial examples, the accuracy recovers from 46% to 82%.", "As the proportion of augmented adversarial examples increases, the accuracy continues to increase on the corrupted set, with negligible changes to the original validation accuracy.", "This also demonstrates that our simulated examples are potentially helpful for reducing the effect of grammatical errors.", "In this paper, we conducted a thorough study to evaluate the robustness of language encoders against grammatical errors.", "We proposed a novel method for simulating grammatical errors to facilitate our evaluations.", "We studied three pre-trained language encoders, ELMo, BERT, and RoBERTa, and concentrated on three aspects of their abilities against grammatical errors: performance on downstream tasks when confronted with noisy texts, ability to identify errors, and capturing of the interaction between tokens in the presence of errors.", "This study sheds light on understanding the behaviors of language encoders against grammatical errors and encourages future work to enhance the robustness of these models.", "We would like to thank the anonymous reviewers for their feedback.", "This work is supported by NSF Grant #IIS-1927554." ]
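The data augmentation defense described above reduces to attacking a fraction of the training set and retraining on the combined data. In the sketch below, `attack` and `train_fn` are hypothetical stand-ins for the greedy attack and the fine-tuning routine.

```python
import random

def augment_and_retrain(train_set, attack, train_fn, proportion=0.5):
    """Attack a fraction of the training examples (labels unchanged) and
    retrain on the union of original and adversarial examples."""
    subset = random.sample(train_set, int(proportion * len(train_set)))
    adversarial = [(attack(x, y), y) for x, y in subset]
    return train_fn(train_set + adversarial)

# Hypothetical usage:
# model = augment_and_retrain(mrpc_train, greedy_attack_fn, finetune_bert, 0.5)
```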
[ "method", "method", "method", "abstain", "result", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "objective", "method", "objective", "objective", "objective", "method", "objective", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "other", "other", "abstain", "other", "other", "objective", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "other", "other" ]
[ "Preregistration refers to the practice of specifying what you are going to do, and what you expect to find in your study, before carrying out the study.", "This practice is increasingly common in medicine and psychology, but is rarely discussed in NLP.", "This paper discusses preregistration in more detail, explores how NLP researchers could preregister their work, and presents several preregistration questions for different kinds of studies.", "Finally, we argue in favour of registered reports , which could provide firmer grounds for slow science in NLP research.", "The goal of this paper is to elicit a discussion in the NLP community, which we hope to synthesise into a general NLP preregistration form in future research.", "Scientific results are only as reliable as the methods that we use to obtain those results.", "Recent years have seen growing concerns about the reproducibility of scientific research, leading some to speak of a reproducibility crisis' (see Fidler and Wilcox 2018 for an overview of the debate).", "Although the main focus of the debate has been on psychology (e.g. through Open Science Collaboration 2015) and medicine (Macleod et al., 2014), there are worries about the reproducibility of Natural Language Processing (NLP) research as well (Fokkens et al., 2013; Cohen et al., 2018; Moore and Rayson, 2018; Branco et al., 2020).", "The reproducibility debate has led to Munaf et", "al.'s (2017) Manifesto for reproducible science , where the authors discuss the different threats to reproducible science, and different ways to address these threats.", "We will first highlight some of their proposals, and discuss their adoption rate in NLP.", "Our main observation is that preregistration is rarely used.", "We believe this is an undesirable situation, and devote the rest of this paper to argue for preregistration of NLP research.", "Munaf et al. recommend more methodological training , so that e.g. statistical methods are applied correctly.", "al.'s final recommendation, preregistration , means that authors should specify what they are going to do, and what they expect to find, before carrying out their studies (Nosek et al., 1 A more radical proposal would be to always host methodology-focused tutorials, and to invite researchers to teach specific modules, similar to keynote talks.", "In NLP, we see different researchers picking up the gauntlet to teach others about statistics (Dror et al., 2018, 2020), achieving language-independence (Bender, 2011), or best practices in human evaluation (van der Lee et al., 2019, 2021).", "Moreover, every *ACL conference offers tutorials on a wide range of different top-ics.", "While efforts to improve methodology could be more systematic (e.g. by actively encouraging methodology tutorials, and working towards community standards), 1 the infrastructure is in place.", "Munaf et al. also recommend to diversify peer review .", "Instead of only having journals, that are responsible for both the evaluation and dissemination of research, we can now also solicit peer feedback after publishing our work on a platform like ArXiv or OpenReview.", "The NLP community is clearly ahead of the curve in terms of the adoption of preprints, and actively discussing ways to improve peer review (ACL Reviewing Committee 2020a,b; Rogers and Augenstein 2020).", "To improve the quality of the reviews themselves, ACL2020 featured a tutorial on peer reviewing (Cohen et al., 2020).", "Another advice from Munaf et al. 
"The NLP community is rapidly adopting such guidelines, in the form of Dodge et al.'s (2019) reproducibility checklist that authors for EMNLP2020 need to fill in.", "Beyond reproducibility, we are also seeing more and more researchers adopting Data statements (Bender and Friedman, 2018), Model cards (Mitchell et al., 2019), and Datasheets (Gebru et al., 2018) for ethical reasons.", "The goal of preregistration is to ensure that all hypotheses and research methods are made explicit before researchers are confronted with the data.", "Otherwise, researchers end up in a garden of forking paths, where all research decisions are made implicitly, based on common sense and the available data (Gelman and Loken, 2013).", "This negatively impacts the reliability and generalisability of any study.", "In other words: preregistration allows us to distinguish between exploratory and confirmatory research.", "Exploratory research does not require preregistration, because the goal is to get a sense of what is possible.", "Any pattern you come across during exploratory research allows you to draw up hypotheses.", "For a subsequent confirmatory study, you could (and should) preregister to test those hypotheses.", "By explicitly marking (parts of) your study as exploratory or confirmatory, it is easier to understand the status of your results.", "Compared to the work on reporting quality, there has been little talk of preregistration in the NLP literature; the terms 'preregister' or 'preregistration' are hardly used in the ACL Anthology.", "Looking for these terms, we found four papers that mention preregistration: Cao et al. (2018) and van der Lee et al. (2019) mention it, and van Miltenburg et al. (2018) and Futrell and Levy (2019) share their own preregistration.", "For this reason, we will focus on preregistration and its application in NLP research.", "The next sections discuss how preregistration works (2), propose preregistration questions for NLP research (3), and discuss the idea of 'registered reports' as an alternative (4).", "Before you begin, you enter the hypotheses, design, and analysis plan of your study on a website like the Open Science Framework, AsPredicted, or ResearchBox.", "These sites provide a time stamp: evidence that you indeed made all the relevant decisions before carrying out the study.", "During your study, you follow the preregistered plans as closely as possible.", "In an ideal world, there would be an exact match between your plans and the actual study you carried out.", "But there are usually unforeseen circumstances that force you to change your study.", "This is fine, as long as the changes are clearly specified (including the reasons for those changes) in your final report (Nosek et al., 2018).", "A typical preregistration form.", "Table 1 shows questions from the preregistration form from AsPredicted.", "This form is geared towards hypothesis-driven, experimental research where human participants are assigned to different experimental conditions.", "Simmons et al. (2017) note that answers should state exactly how the study will be executed, but also that the form should be short and easy to read.",
"Data collection, hypothesis, dependent variable.", "The form first asks whether data collection has been carried out yet (ideally the answer should be no, but see Appendix A.1), and then asks researchers to make their main hypothesis explicit so that it cannot be changed after the fact.", "See https://osf.io/zab38/wiki/home/ for an overview of different forms.", "Following the hypothesis, researchers should describe their key dependent variables (i.e. the main outcome variables) and how they will be measured.", "This includes cutoff points that will be used to discretise continuous variables (e.g. to divide participants into different groups).", "Conditions, analyses, outliers and exclusions.", "Next, the form asks about the design of the study, the analyses, and the process of determining outliers (and whether those should be excluded).", "The answer needs to be detailed enough so that other researchers are able to reproduce the study.", "Sample size and other.", "The form then asks how much data will be collected, so as to prevent optional stopping (where researchers keep collecting data until the results are in line with their preferred hypothesis).", "Finally, the form allows researchers to specify other aspects of the study they would like to preregister, such as secondary analyses, variables collected for exploratory purposes, [or] unusual analyses.", "Qualitative research.", "Preregistration is not only suitable for quantitative research; Haven and Grootel (2019) present a proposal to preregister qualitative studies as well.", "Their suggestions are also presented in Table 1. The authors argue that, although qualitative research differs in its goals from quantitative research (developing theories rather than testing them), it is still valuable to make your assumptions and research plans explicit before carrying out your planned study.", "Because qualitative research is more flexible than quantitative research, Haven and Grootel view qualitative preregistrations as living documents, continuously updated to track the research progress.", "This stimulates conscientiousness, and avoids sloppy research.", "Public preregistrations also allow for immediate feedback.", "To determine what a preregistration for NLP research should look like, we need to consider the different kinds of research contributions in NLP.", "For this, we use the paper types proposed for COLING 2018 (see https://coling2018.org/paper-types/).", "These are: computationally-aided linguistic analysis; NLP engineering experiment paper; and reproduction, resource, position, or survey paper.", "Of these, position papers are less suitable for preregistration, since they are more opinion/experience-driven, and the process of writing them cannot be formalised.", "We treat the others below.", "Analysis, experiments, and reproduction papers typically have one or more hypotheses, even though these may not always be marked as such.", "Taking the best papers from COLING 2018 as an example, Ruppenhofer et al. (2018, analysis) test assumptions from the linguistics literature about affixoids, Thompson and Mimno (2018, experiment) test which subsampling methods improve the output generated by topic models, and Lan and Xu (2018, reproduction) test whether the reported performance for different neural network models generalises to other tasks.", "This means we can ask many of the same questions for these studies as for experimental research.", "Table 2 provides a rough overview of important questions to ask before carrying out your research.", "If your study contains an error analysis, then you could ask the more qualitatively oriented questions in Table 3. These acknowledge that you always enter error analysis with some expectation (i.e. researcher bias) of what kinds of mistakes systems are likely to make, and where those mistakes may be found.",
"The questions also stimulate researchers to go beyond the practice of providing some 'lemons' alongside cherry-picked examples showing good performance.", "The main benefit of asking these questions beforehand is that they force researchers to carefully consider their methodology, and they make researchers' expectations explicit.", "This also helps to identify unexpected findings, or changes that were made to the research design during the study.", "Resource papers are on the qualitative side of the spectrum, and as such the questions from Haven and Grootel (2019), presented at the bottom of Table 1, are generally appropriate for these kinds of papers as well.", "Particularly 1) the original purpose for collecting the data, 2) sampling decisions (what documents to include), and 3) annotation (what framework/perspective to use) are important.", "Because the former typically influences the latter two, it is useful to document how the goal of the study influenced decisions regarding sampling and annotation, in case the study at some point pivots towards another goal.", "Survey papers should follow the PRISMA guidelines for structured reviews (Moher et al., 2009; Liberati et al., 2009).", "According to these guidelines, researchers should state exactly where they searched for existing literature, what search terms they used, and what criteria they used to select relevant papers.", "This increases reproducibility, allows readers to find any gaps in the survey, and avoids a biased presentation of the literature (i.e. only citing researchers you know, or work that fits your preferred narrative).", "A recent NLP example of a structured review is provided by Reiter (2018).", "Registered reports [split] conventional peer review in half (Chambers, 2019).", "First, authors submit a well-motivated research plan for review, before carrying out the study (similar to a preregistration).", "This plan may go back and forth between the authors and the reviewers, but once the plan is accepted, the authors receive the guarantee that, if they carry out the study according to plan, their work will be published.", "As with preregistration, deviations from the original plan are allowed, but these should be identified in the final report.", "The main advantage of registered reports is that they provide a means to avoid publication bias.", "Because studies are not judged on the basis of their results, negative results are as likely to be published as positive ones.", "As long as the study is deemed valuable a priori, it should get published.", "An additional benefit of registered reports is that reviews may actually correct flaws in the research design, meaning that we reduce the chance of running an expensive study all for nothing.", "In the case of NLP research, this may save a lot of energy (cf. Strubell et al. 2019).",
"We are not aware of any NLP journals that offer registered reports, but we strongly encourage the NLP community to take steps in this direction.", "[f]or most of our own research projects this strategy hardly seems possible: in our many applied research projects, we have learned so much by looking at the data. Our most important hypotheses could never have been formulated ahead of time.", "This certainly rings true for NLP as well.", "However, we should be careful about conclusions that are drawn on the basis of pre-existing data.", "Gelman and Loken (2013) note that in such cases, if it is feasible to collect more data, it is good to follow up positive results with a preregistered replication to confirm your initial findings.", "One way to do this is to collect and evaluate your model on a new test set (cf. Recht et al. 2019).", "This tells us to what extent trained models generalise to unseen data.", "Another idea could be to preregister the human evaluation (or error analysis) of the model output.", "We believe that preregistration, and especially registered reports, could ease the pressure to publish as soon as possible.", "If your analysis plan is accepted for publication, you can take as long as you want to actually carry out the study, without having to worry about being scooped.", "This provides new opportunities for slow science in NLP (also see Min-Yen Kan's keynote at COLING 2018).", "Below we address some common questions about preregistration.", "We thank our anonymous reviewers for raising some of these questions.", "Is preregistration more work?", "In our experience, preregistration adds little overhead to a research project.", "Especially if a project requires approval by an Institutional Review Board (IRB), you need to write a description along similar lines anyway.", "For projects not requiring IRB approval, it is good practice to provide a model card (Mitchell et al., 2019), data sheet (Gebru et al., 2018) or data statement (Bender and Friedman, 2018) with your model or resource.", "Given the ethical aspects of NLP research, it is advisable to consider all dimensions of your study before you carry it out.", "Moreover, preregistration is a good way to start writing the paper before carrying out the research, a practice advocated by Eisner (2010) to maximise the impact of your work.", "Finally, it may be more work to prepare a registered report, but this comes with the benefit of having a pre-approved methodology.", "Once the project is completed, reviewers will not reject your paper based on methodological choices.", "Should I worry about being scooped?", "There is no need to worry.", "We already discussed registered reports, where research proposals are provisionally accepted before data collection starts.", "Otherwise, this worry has been addressed through the existence of both public and private preregistrations.", "A researcher can choose to keep a preregistration private until the research is completed.", "They can make their preregistration public whenever they like, for example to invite feedback from the community.", "In addition, preregistrations are also time-stamped, and you can use these time stamps during the review phase to show that you had these ideas before similar work was published.", "What about citing preregistrations?", "In some regards, the discussion about preregistrations is similar to the discussion about preprints (i.e. papers on ArXiv), and thus similar questions arise.",
papers on ArXiv), thus similar questions arise.", "Both preregistrations and published studies are being cited.", "For example, medical journals like BMC Public Health also publish study protocols (similar to preregistra-tions), without any results, that are also cited by others (e.g. work using a similar protocol).", "What should we do with concurrent work?", "It may of course happen that multiple researchers have similar ideas around the same time.", "We believe that it is still valuable to publish multiple independent studies with similar results.", "Even if they don't provide any new insights (which is rare), they do provide evidence towards the robustness of the findings.", "Where and how those findings should be published is a separate discussion.", "9 How should we teach preregistration?", "Preregistration is already being incorporated into Psychology courses (see, for example, Blincoe and Buchert 2020).", "It is relatively straightforward to implement as part of student research proposals during applied courses in NLP: specify what you plan to do 8 The public/private distinction has been implemented by both the Open Science Foundation and AsPredicted.org.", "The Open Science Foundation allows for a 4-year embargo, during which the preregistration is kept private.", "Aspredicted allows for preregistrations to be private indefinitely.", "9 However, if there is value in publishing the first' paper, there is probably also value in publishing the second' one.", "The same holds for the question of whether both studies should be cited; good scholarship considers all the available evidence.", "exactly, and what you expect to find.", "It is often useful for students to have an explicit format to think through their research plans, to make sure that they make sense.", "Although preregistration is offered as a solution to improve our work, it does not solve all of our problems.", "Van 't Veer and Giner-Sorolla (2016) mention three limitations: 1. Flexibility.", "It may be difficult or infeasible for authors to foresee all possible outcomes, and as such there may be gaps in the preregistration, which still allow for flexibil-ity in the analysis.", "2. Fraud.", "There is no way to prevent fraudulent researchers from, e.g., creating multiple preregistrations, or falsely preregistering' studies that were already run.", "At some point we just have to trust each other to do the right thing, but increased transparency does make it harder to commit fraud.", "3. Applicability.", "Preregistration may not be possible for all kinds of studies.", "As discussed above, it has mainly been developed for quantitative studies (particularly experiments), and there are proposals for the preregistration of qualitative research (Haven and Grootel, 2019), although we have yet to see whether this idea will catch on.", "Finally, Szollosi et al. 
(2020) argue that, although preregistration might offer greater transparency, it does not by itself improve scientific reasoning and theory development.", "Since large parts of NLP are pre-theoretical (we have observed effects but do not have any theoretical explanations for why these effects occur), one might reasonably argue that we should focus on theory development first, before we can carry out any meaningful experiments.", "We have discussed how preregistration could benefit NLP research, and how different kinds of contributions could be preregistered.", "We have also proposed an initial list of questions to ask before carrying out NLP research (and see Appendix A for example preregistration forms).", "With this paper, we hope to encourage other NLP researchers to consider preregistering their work, so that they will no longer get lost in the garden of forking paths.", "Still, there is no silver bullet to cure sloppy science.", "Although preregistration is certainly helpful, it does not guarantee high-quality research, and we do need to stay critical about preregistered studies, and the way they are carried out.", "Thanks to the anonymous reviewers for their constructive feedback, and to all the #NLProc Twitter people for discussion." ]
[ "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "other" ]
[ "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object) .", "Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple.", "This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps.", "These feature maps are reconstructed into corresponding capsules which are then routed to another capsule to produce a continuous vector.", "The length of this vector is used to measure the plausibility score of the triple.", "Our proposed CapsE obtains better performance than previous state-of-the-art embedding models for knowledge graph completion on two benchmark datasets WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17.", "Knowledge graphs (KGs) containing relationship triples (subject, relation, object) , denoted as (s, r, o) , are the useful resources for many NLP and especially information retrieval applications such as semantic search and question answering (Wang et al., 2017).", "However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples (West et al., 2014).", "Therefore, much research efforts have focused on the knowledge graph completion task which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not (Bordes et al., 2011, 2013; Socher et al., 2013).", "To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject /head entity and object /tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by Nickel et al. (2016a) and Nguyen (2017).", "These embedding models score triples (s, r, o) , such that valid triples have higher plausibility scores than invalid ones (Bordes et al., 2011, 2013; Socher et al., 2013).", "For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom) .", "Triple modeling is applied not only to the KG completion, but also for other tasks which can be formulated as a triple-based prediction problem.", "An example is in search personalization, one would aim to tailor search results to each spe-cific user based on the user's personal interests and preferences (Teevan et al., 2005, 2009; Bennett et al., 2012; Harvey et al., 2013; Vu et al., 2015, 2017).", "Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE (Bordes et al., 2013), as proposed by Vu et al. (2017).", "Previous studies have shown the effectiveness of modeling triple for either KG completion or search personalization.", "However, there has been no single study investigating the performance on both tasks.", "Conventional embedding models, such as TransE (Bordes et al., 2013), DISTMULT (Yang et al., 2015) and ComplEx (Trouillon et al., 2016), use addition, subtraction or simple multiplication operators, thus only capture the linear relationships between entities.", "Recent research has raised interest in applying deep neural networks to triple-based prediction problems.", "For example, Nguyen et al. 
(2018) proposed ConvKBa convolutional neural network (CNN)-based model for KG completion and achieved state-of-the-art results.", "Most of KG embedding models are constructed to modeling entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities.", "To the best of our knowledge, however, none of the existing models has a deep architecture for modeling the entries in a triple at the same dimension.", "Sabour et al. (2017) introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons ) to capture entities in images and then uses a routing process to specify connections from capsules in a layer to those in the next layer.", "Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint invariant knowledge that automatically generalizes to novel viewpoints.", "Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized.", "Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although it is not straightforward to be visually examined.", "To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization.", "Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings.", "In our CapsE, v s , v r and v o are unique k -dimensional embeddings of s , r and o , respectively.", "The embedding triple [ v s , v r , v o ] of (s, r, o) is fed to the convolution layer where multiple filters of the same 1 3 shape are repeatedly operated over every row of the matrix to produce k -dimensional feature maps.", "Entries at the same dimension from all feature maps are then encapsulated into a capsule.", "Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension .", "These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple.", "Finally, this score is used to predict whether the triple (s, r, o) is valid or not.", "We propose an embedding model CapsE using the capsule network (Sabour et al., 2017) for modeling relationship triples.", "To our best of knowledge, our work is the first consideration of exploring the capsule network to knowledge graph completion and search personalization.", "(Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015).", "CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.", "We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems.", "We adapt our model to search personalization and evaluate on SEARCH17 (Vu et al., 2017) a dataset of the web search query logs.", "Experimental results show that our CapsE achieves the new state-of-the-art results with significant improvements over strong baselines.", "Let G be a collection of valid factual triples in the form of (subject, relation, object) denoted as (s, r, o) .", "Embedding models aim to define a score function giving a score for each triple, such that valid triples receive higher scores than invalid triples.", "We 
denote v_s, v_r and v_o as the k-dimensional embeddings of s, r and o, respectively.", "In our proposed CapsE, we follow Nguyen et al. (2018) to view each embedding triple [v_s, v_r, v_o] as a matrix $A = [v_s, v_r, v_o] \in \mathbb{R}^{k \times 3}$, and denote $A_{i,:} \in \mathbb{R}^{1 \times 3}$ as the i-th row of A.", "We use a filter $\omega \in \mathbb{R}^{1 \times 3}$ in the convolution layer.", "This filter is repeatedly operated over every row of A to generate a feature map $q = [q_1, q_2, ..., q_k] \in \mathbb{R}^{k}$, in which $q_i = g(\omega \cdot A_{i,:} + b)$, where $\cdot$ denotes a dot product, $b \in \mathbb{R}$ is a bias term, and g is a non-linear activation function such as ReLU.", "Our model uses multiple filters in $\mathbb{R}^{1 \times 3}$ to generate feature maps.", "We denote $\Omega$ as the set of filters and $N = |\Omega|$ as the number of filters; thus we have N k-dimensional feature maps, each of which can capture one single characteristic among entries at the same dimension.", "We build our CapsE with two single capsule layers for a simplified architecture.", "In the first layer, we construct k capsules, wherein entries at the same dimension from all feature maps are encapsulated into a corresponding capsule.", "Therefore, each capsule can capture many characteristics among the entries at the corresponding dimension in the embedding triple.", "These characteristics are generalized into one capsule in the second layer, which produces a vector output whose length is used as the score for the triple.", "Each capsule i in the first layer gives a vector output $u_i \in \mathbb{R}^{N \times 1}$.", "Vector outputs $u_i$ are multiplied by weight matrices $W_i \in \mathbb{R}^{d \times N}$ to produce vectors $\hat{u}_i \in \mathbb{R}^{d \times 1}$, which are summed to produce a vector input $s \in \mathbb{R}^{d \times 1}$ to the capsule in the second layer.", "The capsule then performs the non-linear squashing function to produce a vector output $e \in \mathbb{R}^{d \times 1}$: $e = \mathrm{squash}(s)$, $s = \sum_i c_i \hat{u}_i$, $\hat{u}_i = W_i u_i$, where $\mathrm{squash}(s) = \frac{\|s\|^2}{1 + \|s\|^2} \frac{s}{\|s\|}$, and the $c_i$ are coupling coefficients determined by the routing process presented in Algorithm 1.", "Algorithm 1 (the routing process, extended from Sabour et al. (2017)): for all capsules i in the first layer, set $b_i \leftarrow 0$; then for iteration = 1, 2, ..., m: $c \leftarrow \mathrm{softmax}(b)$; $s \leftarrow \sum_i c_i \hat{u}_i$; $e = \mathrm{squash}(s)$; and for all capsules i in the first layer, $b_i \leftarrow b_i + \hat{u}_i \cdot e$.", "Because there is one capsule in the second layer, we make only one change to the routing process proposed by Sabour et al. (2017): we apply the softmax in the direction from all capsules in the previous layer to each capsule in the next layer.", "(In the original routing process proposed by Sabour et al. (2017), the softmax is applied in the other direction, from each capsule in the previous layer to all capsules in the next layer.)", "We illustrate our proposed model in Figure 1, where the embedding size is k = 4, the number of filters is N = 5, the number of neurons within the capsules in the first layer is equal to N, and the number of neurons within the capsule in the second layer is d = 2.", "The length of the vector output e is used as the score for the input triple; a schematic implementation of this scoring pipeline is sketched in the code after this record.", "Formally, the score function is $f(s, r, o) = \|\mathrm{capsnet}(g([v_s, v_r, v_o] \ast \Omega))\|$, where the set of filters $\Omega$ is shared parameters in the convolution layer; $\ast$ denotes a convolution operator; and capsnet denotes a capsule network operator.", "We use the Adam optimizer (Kingma and Ba, 2014) to train CapsE by minimizing the loss function (Trouillon et al., 2016; Nguyen et al., 2018) $\mathcal{L} = \sum_{(s,r,o) \in G \cup G'} \log\left(1 + \exp\left(-t_{(s,r,o)} \cdot f(s, r, o)\right)\right)$, in which $t_{(s,r,o)} = 1$ for $(s, r, o) \in G$ and $t_{(s,r,o)} = -1$ for $(s, r, o) \in G'$; here G and G' are collections of valid and invalid triples, respectively.", "In the knowledge graph completion task (Bordes et al., 2013), the goal is to predict a missing entity given a relation and another entity, i.e., inferring a head entity s given (r, o) or inferring a tail entity o given (s, r).", "The results are calculated based on ranking the scores produced by the score function f on test triples.", "Datasets: We use two recent benchmark datasets WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015).", "These two datasets are created to avoid reversible relation problems, so the prediction task becomes more realistic and hence more challenging (Toutanova and Chen, 2015).", "Table 1 presents the statistics of WN18RR and FB15k-237.", "Evaluation protocol: Following Bordes et al.
(2013), for each valid test triple (s, r, o), we replace either s or o by each of all other entities to create a set of corrupted triples.", "We use the 'Filtered' setting protocol (Bordes et al., 2013), i.e., not taking into account any corrupted triples that appear in the KG.", "We rank the valid test triple and the corrupted triples in descending order of their scores.", "We employ the evaluation metrics mean rank (MR), mean reciprocal rank (MRR) and Hits@10 (i.e., the proportion of valid test triples ranked in the top-10 predictions); a short code sketch of this filtered ranking evaluation is given after this record.", "Lower MR and higher MRR or Hits@10 indicate better performance.", "Final scores on the test set are reported for the model obtaining the highest Hits@10 on the validation set.", "Training protocol: We use the common Bernoulli strategy (Wang et al., 2014; Lin et al., 2015b) when sampling invalid triples.", "For WN18RR, Pinter and Eisenstein (2018) found strong evidence supporting the necessity of a WordNet-related semantic setup, in which they averaged pre-trained word embeddings for word surface forms within WordNet to create synset embeddings, and then used these synset embeddings to initialize entity embeddings for training their TransE association model.", "We follow this evidence in using the pre-trained 100-dimensional GloVe word embeddings (Pennington et al., 2014) to train a TransE model on WN18RR.", "(Pinter and Eisenstein (2018) considered WN18RR and evaluated their M3GM model only for 7 relations, as they employed the inverse rule model (Dettmers et al., 2018) for the 4 remaining symmetric relations.", "For a fair comparison to other models, we use the M3GM implementation released by Pinter and Eisenstein (2018) to re-train and re-evaluate the M3GM model for all 11 relations.", "We thank Pinter and Eisenstein (2018) for their assistance running their code.)", "We employ the TransE and ConvKB implementations provided by Nguyen et al. (2016b) and Nguyen et al. (2018).", "For ConvKB, we use a new training process of up to 100 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters, with the Adam initial learning rate in {1e-5, 5e-5, 1e-4} and the number of filters N in {50, 100, 200, 400}.", "We obtain the highest Hits@10 scores on the validation set when using N = 400 and the initial learning rate 5e-5 on WN18RR; and N = 100 and the initial learning rate 1e-5 on FB15k-237.", "As in ConvKB, we use the same pre-trained entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our CapsE for both WN18RR and FB15k-237 (k = 100).", "We set the batch size to 128, the number of neurons within the capsule in the second capsule layer to 10 (d = 10), and select the number of iterations m in the routing algorithm from {1, 3, 5, 7}.", "We run CapsE up to 50 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters.", "The highest Hits@10 scores for our CapsE on the validation set are obtained when using m = 1, N = 400 and the initial learning rate 1e-5 on WN18RR; and m = 1, N = 50 and the initial learning rate 1e-4 on FB15k-237.", "Table 2 compares the experimental results of our CapsE with previous state-of-the-art published results, using the same evaluation protocol.", "Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except for Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237, where our CapsE gains a significant improvement of 0.523 - 0.418 = 0.105 in MRR (about a 25.1% relative improvement) and a 59.3% - 53.2% = 6.1% absolute improvement in Hits@10.", "Table 2 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237.", "Following Bordes et al.
(2013), for each relation r in FB15k-237, we calculate the averaged number s of head entities per tail entity and the averaged number o of tail entities per head entity.", "If s < 1.5 and o < 1.5, r is categorized one-to-one (1-1).", "If s < 1.5 and o ≥ 1.5, r is categorized one-to-many (1-M).", "If s ≥ 1.5 and o < 1.5, r is categorized many-to-one (M-1).", "Table 2: Experimental results on the WN18RR and FB15k-237 test sets (per dataset: MR | MRR | H@10; ⋆ marks scores reported with the released implementations): DISTMULT (Yang et al., 2015): 5110 | 0.425 | 49.1 and 254 | 0.241 | 41.9; ComplEx (Trouillon et al., 2016): 5261 | 0.444 | 50.7 and 339 | 0.247 | 42.8; ConvE (Dettmers et al., 2018): 4187 | 0.433 | 51.5 and 244 | 0.325 | 50.1; KBGAN (Cai and Wang, 2018): n/a | 0.213 | 48.1 and n/a | 0.278 | 45.8; M3GM (Pinter and Eisenstein, 2018): 1864 | 0.311 | 53.3 and n/a | n/a | n/a; TransE (Bordes et al., 2013): 743⋆ | 0.245⋆ | 56.0⋆ and 347 | 0.294 | 46.5; ConvKB (Nguyen et al., 2018): 763⋆ | 0.253⋆ | 56.7⋆ and 254⋆ | 0.418⋆ | 53.2⋆; Our CapsE: 719 | 0.415 | 56.0 and 303 | 0.523 | 59.3.", "If s ≥ 1.5 and o ≥ 1.5, r is categorized many-to-many (M-M).", "As a result, 17, 26, 81 and 113 relations are labelled 1-1, 1-M, M-1 and M-M, respectively (a small code sketch of this categorization is given after this record).", "And 0.9%, 6.3%, 20.5% and 72.3% of the test triples in FB15k-237 contain 1-1, 1-M, M-1 and M-M relations, respectively.", "Figure 2 shows the Hits@10 and MRR results for predicting head and tail entities w.r.t. each relation category on FB15k-237.", "CapsE works better than ConvKB in predicting entities on the 'M' side of triples (e.g., predicting head entities in M-1 and M-M, and predicting tail entities in 1-M and M-M), while ConvKB performs better than CapsE in predicting entities on the '1' side of triples (i.e., predicting head entities in 1-1 and 1-M, and predicting tail entities in 1-1 and M-1).", "Figure 3 shows the Hits@10 and MRR scores w.r.t. each relation on WN18RR.", "The relations also see, similar to, verb group and derivationally related form are symmetric relations which can be considered M-M relations.", "Our CapsE also performs better than ConvKB on these 4 M-M relations.", "Thus, the results shown in Figures 2 and 3 are consistent.", "Table 3: Hits@10 on the WN18RR validation set with N = 50 and the initial learning rate 1e-5, w.r.t. the number of iterations m in the routing algorithm (rows) and the number of training epochs 10, 20, 30, 40, 50 (columns): m = 1: 48.37, 52.60, 53.14, 53.33, 53.21; m = 3: 47.78, 52.34, 52.93, 52.99, 52.86; m = 5: 47.03, 52.25, 45.80, 45.99, 45.76; m = 7: 40.46, 45.36, 45.79, 45.85, 45.93.", "These results also imply that our CapsE would be a potential candidate for applications which contain many M-M relations, such as search personalization.", "We see that the length and orientation of each capsule in the first layer can also help to model the important entries in the corresponding dimension, so CapsE can work well on the 'M' side of triples, where entities often appear less frequently than those on the '1' side.", "Additionally, existing models such as DISTMULT, ComplEx and ConvE can perform well for entities with high frequency, but may not for rare entities with low frequency.", "These are the reasons why our CapsE can be considered the best model on FB15k-237 and outperforms most existing models on WN18RR.", "Effects of routing iterations: We study how the number of routing iterations affects performance.", "Table 3 shows the Hits@10 scores on the WN18RR validation set w.r.t. each number of routing iterations and each number of epochs, with the number of filters N = 50 and the Adam initial learning rate 1e-5.", "We see that the best performance for each setup over each 10 epochs is obtained by setting the number m of routing iterations to 1.", "This indicates the opposite behavior for knowledge graphs compared to images.", "In the image classification task, setting the number m of iterations in the routing process higher than 1 helps to capture the relative positions of entities in an image (e.g., eyes, nose and mouth) properly.", "In contrast, this property may hold only for the 1-1 relations, but not for the 1-M, M-1 and M-M relations in KGs, because of the high variance of each relation type (e.g., symmetric relations) among different entities.", "Given a user, a submitted query and the documents returned by a search system for that query, our approach is to re-rank the returned documents so that the more relevant documents are ranked higher.", "Following Vu et al. (2017), we represent the relationship between the submitted query, the user and the returned document as an (s, r, o)-like triple (query, user, document).", "The triple captures how much interest a user puts on a document given a query.", "Thus, we can evaluate the effectiveness of our CapsE for the search personalization task.", "Dataset: We use the SEARCH17 dataset (Vu et al., 2017) of query logs of 106 users collected by a large-scale web search engine.", "A log entry consists of a user identifier, a query, the top-10 ranked documents returned by the search engine, and the clicked documents along with the user's dwell time.", "Vu et al. (2017) constructed short-term (session-based) user profiles and used the profiles to personalize the returned results.", "They then employed the SAT criteria (Fox et al., 2005) to identify from the query logs whether a returned document is relevant, as either a clicked document with a dwell time of at least 30 seconds or the last clicked document in a search session (i.e., a SAT click).", "After that, they assigned a relevant label to a returned document if it is a SAT click and assigned irrelevant labels to the remaining top-10 documents.", "The rank positions of the relevant labeled documents are used as the ground truth to evaluate the search performance before and after re-ranking.", "The dataset was uniformly split into training, validation and test sets.", "This split is for the purpose of using historical data in the training set to predict new data in the test set (Vu et al., 2017).", "The training, validation and test sets consist of 5,658, 1,184 and 1,210 relevant (i.e., valid) triples, and 40,239, 7,882 and 8,540 irrelevant (i.e., invalid) triples, respectively.", "Evaluation protocol: Our CapsE is used to re-rank the original list of documents returned by a search engine as follows:", "(i) We train our model and employ the trained model to calculate the score for each (s, r, o) triple.", "(ii) We then sort the scores in descending order to obtain a new ranked list.", "To evaluate the performance of our proposed model, we use two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1.", "For each metric, a higher value indicates better ranking performance.", "(We re-rank the list of top-10 documents returned by the search engine, so Hits@10 scores are the same for all models.)", "We compare CapsE with the following baselines using the same experimental setup: (1) SE: the original rank returned by the search engine.", "(2) CI (Teevan et al., 2011): this baseline uses a personalized navigation method based on previously clicked returned documents.", "(3) SP (Bennett et al., 2012; Vu et al., 2015): a search personalization method making
use of session-based user profiles.", "(4) Following Vu et al. (2017), we use TransE as a strong baseline model for the search personalization task.", "Previous work shows that the well-known embedding model TransE, despite its simplicity, obtains very competitive results for knowledge graph completion (Lin et al., 2015a; Nickel et al., 2016b; Trouillon et al., 2016; Nguyen et al., 2016a, 2018).", "(5) The CNN-based model ConvKB, which is the most closely related model to our CapsE.", "Embedding initialization: We follow Vu et al. (2017) to initialize user profile, query and document embeddings for the baselines TransE and ConvKB, and our CapsE.", "We train an LDA topic model (Blei et al., 2003) with 200 topics only on the relevant documents (i.e., SAT clicks) extracted from the query logs.", "We then use the trained LDA model to infer the probability distribution over topics for every returned document.", "We use the topic proportion vector of each document as its document embedding (i.e., k = 200).", "In particular, the z-th element (z = 1, 2, ..., k) of the vector embedding for document d is $v_{d,z} = P(z \mid d)$, where $P(z \mid d)$ is the probability of topic z given document d.", "We also represent each query by a probability distribution vector over topics.", "Let $D_q = \{d_1, d_2, ..., d_n\}$ be the set of top-n ranked documents returned for a query q (here, n = 10).", "The z-th element of the vector embedding for query q is defined as in Vu et al. (2017): $v_{q,z} = \sum_{i=1}^{n} \lambda_i P(z \mid d_i)$, where $\lambda_i = \delta^{i-1} / \sum_{j=1}^{n} \delta^{j-1}$ is an exponential decay function of i, the rank of $d_i$ in $D_q$ (a short code sketch of this decay-weighted embedding is given after this record).", "Here $\delta$ is the decay hyper-parameter ($0 < \delta < 1$).", "Following Vu et al. (2017), we use $\delta = 0.8$.", "Note that if we learned query and document embeddings during training, the models would over-fit to the data and would not work for new queries and documents.", "Thus, after the initialization process, we fix (i.e., do not update) query and document embeddings during training for TransE, ConvKB and our CapsE.", "In addition, as mentioned by Bennett et al. (2012), the more recently clicked documents express more about the user's current search interest.", "Hence, we make use of the user's clicked documents in the training set with the temporal weighting scheme proposed by Vu et al. (2015) to initialize user profile embeddings for the three embedding models.", "Hyper-parameter tuning: For our CapsE model, we set the batch size to 128, and the number of neurons within the capsule in the second capsule layer to 10 (d = 10).", "The number of iterations in the routing algorithm is set to 1 (m = 1).", "For training, we use the Adam optimizer with the initial learning rate in {5e-6, 1e-5, 5e-5, 1e-4, 5e-4}.", "We also use ReLU as the activation function g.", "We select the number of filters N from {50, 100, 200, 400, 500}.", "We run the model up to 200 epochs and perform a grid search to choose optimal hyper-parameters on the validation set.", "We monitor the MRR score after each training epoch and obtain the highest MRR score on the validation set when using N = 400 and the initial learning rate 5e-5.", "We employ the TransE and ConvKB implementations provided by Nguyen et al. (2016b) and Nguyen et al. (2018) and then follow their training protocols to tune hyper-parameters for TransE and ConvKB, respectively.", "We also monitor the MRR score after each training epoch and attain the highest MRR score on the validation set when using margin = 5, the l1-norm, and an SGD learning rate of 5e-3 for TransE; and N = 500 and the Adam initial learning rate 5e-4 for ConvKB.", "Table 4 presents the experimental results of the baselines and our model.", "The embedding models TransE, ConvKB and CapsE produce better ranking performance than the traditional learning-to-rank search personalization models CI and SP.", "This indicates a promising strategy of extending triple embedding models to improve the ranking quality of search personalization systems.", "In particular, our MRR and Hits@1 scores are higher than those of TransE (with relative improvements of 14.5% and 22% over TransE, respectively).", "Specifically, our CapsE achieves the highest performance in both MRR and Hits@1; our improvements over all five baselines are statistically significant with p < 0.05 using the paired t-test.", "Table 4: Experimental results on SEARCH17 (MRR | H@1): SE [⋆]: 0.559 | 38.5; CI [⋆]: 0.597 | 41.6; SP [⋆]: 0.631 | 45.2; TransE [⋆]: 0.645 | 48.1; TransE (ours): 0.669 | 50.9; ConvKB: 0.750 | ...", "To illustrate our training progress, we plot the performance of CapsE on the validation set over epochs in Figure 4.", "We observe that the performance improves as the number of filters increases, since capsules can encode more useful properties with a larger embedding size.", "Other transition-based models extend TransE to additionally use projection vectors or matrices to translate the embeddings of s and o into the vector space of r, such as TransH (Wang et al., 2014), TransR (Lin et al., 2015b), TransD (Ji et al., 2015) and STransE (Nguyen et al., 2016b).", "Furthermore, DISTMULT (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) use a tri-linear dot product to compute the score for each triple.", "Moreover, ConvKB (Nguyen et al., 2018) applies a convolutional neural network, in which feature maps are concatenated into a single feature vector that is then combined with a weight vector via a dot product to produce the score for the input triple.", "ConvKB is the most closely related model to our CapsE.", "See an overview of embedding models for KG completion in Nguyen (2017).", "For search tasks, unlike classical methods, personalized search systems utilize the historical interactions between the user and the search system, such as submitted queries and clicked documents, to tailor returned results to the needs of that user (Teevan et al., 2005, 2009).", "That historical information can be used to build the user profile, which is crucial to an effective search personalization system.", "Widely used approaches consist of two separate steps: (1) building the user profile from the interactions between the user and the search system; and then (2) learning a ranking function to re-rank the search results using the user profile (Bennett et al., 2012; White et al., 2013; Harvey et al., 2013; Vu et al., 2015).", "The general goal is to re-rank the documents returned by the search system in such a way that the more relevant documents are ranked higher.", "In this case, apart from the user profile, dozens of other features have been proposed as the input of a learning-to-rank algorithm (Bennett et al., 2012; White et al., 2013).", "Alternatively, Vu et al.
(2017) modeled the potential user-oriented relationship between the submitted query and the returned document by applying TransE to reward higher scores for more relevant documents (e.g., clicked documents).", "They achieved better performance than the standard ranker as well as competitive search personalization baselines (Teevan et al., 2011; Bennett et al., 2012; Vu et al., 2015).", "We propose CapsE, a novel embedding model using the capsule network to model relationship triples for knowledge graph completion and search personalization.", "Experimental results show that our CapsE outperforms other state-of-the-art models on the two benchmark datasets WN18RR and FB15k-237 for knowledge graph completion.", "We then show the effectiveness of our CapsE for search personalization, where CapsE outperforms the competitive baselines on SEARCH17, a dataset of web search query logs.", "In addition, our CapsE is capable of effectively modeling many-to-many relationships.", "Our code is available at: https://github.com/daiquocnguyen/CapsE.", "This research was partially supported by the ARC Discovery Projects DP150100031 and DP160103934.", "The authors thank Yuval Pinter for assisting us in running his code." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "result", "result", "abstain", "other", "other", "other" ]
[ "Tweets are short messages that often include specialized language such as hashtags and emojis.", "In this paper, we present a simple strategy to process emojis: replace them with their natural language description and use pretrained word embeddings as normally done with standard words.", "We show that this strategy is more effective than using pretrained emoji embeddings for tweet classification.", "Specifically, we obtain new state-of-the-art results in irony detection and sentiment analysis despite our neural network is simpler than previous proposals.", "Tweets are short messages shared on Twitter, one of the most popular social networking services with 326 million monthly active users word wide (Twitter, 2018).", "Tweets often use specialized language such as abbreviations (e.g., TBH: To be honest ), hashtags (e.g., #NBAFinals ), emoticons and emojis.", "The Oxford Dictionary defines an emoticon as a facial expression such as a smile or frown, formed by various combinations of keyboard characters (e.g., :), :-(), and an emoji as a small digital image or icon used to express an idea or emotion (e.g., , , ).", "While the number of emoticons is relatively small, the Uni-code Standard includes over 2,800 emojis.", "Emojis are interesting because they succinctly encode meaning that otherwise would require more than one word to convey (e.g., grinning face , clapping hands and face with medical mask for the emojis above).", "Additionally, emojis have become popular in social media.", "5 billion emojis are sent daily on Facebook (Burge, 2018).", "While only 6% of the top-100 Facebook headlines used emojis in 2015, 52% did so in 2017 (Boland, 2017).", "Over Irony?", "14% of tweets and 50% of Instagram posts contain at least one emoji (Cruse, 2015; Moon, 2015).", "Irony detection and sentiment analysis in tweets are two popular tasks.", "Sentiment analysis has received substantially more attention than irony detection.", "Irony, however, is a major error source in sentiment analysis (0.71 F1 overall but 0.29 F1 with ironic tweets (Hee et al., 2018)), and natural language understanding in general does not generalize well with ironic texts (Liu et al., 2012; Maynard and Greenwood, 2014).", "In this paper, we tackle both irony and sentiment analysis in tweetstwo classification tasks.", "In particular, we focus on modeling emojis.", "Consider the examples in Table 1.", "Understanding the emojis is critical to making irony and sentiment judgements.", "In the first example, the contrast between the emojis helps determining that irony is present (the hashtag #not also helps).", "In the second tweet, the OK hand sign and face blowing a kiss emojis help reinforcing that the author is praising somebody and not being ironic.", "Similarly, the smiling and sad emojis in the last two examples are a clear sign of the author's sentiment towards the movie Ted 2 and the incompatibility issue.", "The main contributions of this paper are twofold.", "First, we present a simple strategy to model emojis: replace them with their textual description.", "Second, we show that this strategy outperforms previous methods and yields a new state-of-the-art in two tweet classification tasks: irony detection and sentiment analysis.", "Irony is closely related to sarcasm.", "The Oxford Dictionary defines irony as The expression of one's meaning by using language that normally signifies the opposite, typically for humorous or emphatic effect, and sarcasm as The use of irony to mock or convey contempt.", "Given these 
defi-nitions, it is not surprising that many researchers do not distinguish between them (Maynard and Greenwood, 2014).", "The top-3 systems to detect irony are built with neural networks and pretrained word embeddings.", "Baziotis et al. (2018) build an ensemble of two stacks of BiLSTMs (word and character level) with attention.", "Wu et al. (2018) propose a BiLSTM and a multitask learning framework (hashtag, irony presence and irony type prediction), and complement the input text with sentiment features extracted from lexicons.", "Vu et al. (2018) propose a multilayer perceptron taking as input an embedding for the input text (average of word embeddings) as well as manually crafted lexical, syntactic, semantic and polarity features.", "Our strategy to incorporate emojis outperforms all of them (Table 3).", "Sentiment analysis in tweets has been studied for years (Nakov et al., 2013).", "At its core, it is the task of classifying a tweet into expressing positive, neutral or negative sentiment (Rosenthal et al., 2017).", "Initial systems were primarily based on sentiment lexicons and manually extracted features, but the state of the art uses neural networks and word embeddings.", "Baziotis et al. (2017) propose a stack of two BiLSTMs at the word level and do not use any lexicons.", "Cliche (2017) presents a CNN and BiLSTM ensemble and experiment with three pretrained embeddings.", "Rouvier (2017) also presents a CNN and BiLSTM ensemble but incorporates manually defined features (e.g., word presence in emotion lexicons, all-caps).", "The strategy presented here to incorporate emojis outperforms all these systems (Table 4).", "Within natural language processing and social media, emojis have received considerable attention.", "Barbieri et al. (2016) train emoji embeddings with word2vec and discover that the closest words are sound (e.g., : coffee, roasters, caffeine, latte).", "Eisner et al. (2016) propose a complementary approach to train emoji embeddings (Sec-tion 3).", "Emojis have also been used as labels for distant supervision to improve tweet classification (Felbo et al., 2017).", "The strategy presented here to incorporate emojis is simpler and more effective than previous ones, does not require additional pretraining or domain specific corpora, and can be used with any neural architecture that takes text as input without any modifications.", "Simply put, we replace emojis with their textual descriptions and leverage existing pretrained word embeddings.", "Neural networks that take as input text usually transform the input tokens into pretrained embeddings.", "When the input text are tweets, it is common to use embeddings pretrained with large collections of tweets as opposed to general purpose text (Li et al., 2017; Pennington et al., 2014).", "Emojis as Regular Tokens.", "The simplest option to incorporate emojis into a neural network is to consider them as any other token in the input text (Barbieri et al., 2016).", "This strategy relies on having seen enough instances of each emoji in the texts with which embeddings were pretrained otherwise the embeddings will not capture the semantics of emoji tokens properly.", "Emoji Embeddings.", "Another strategy is to use separate embeddings for emojis.", "Eisner et al. 
(2016) pretrain emoji embeddings using positive and negative (randomly sampled) emoji descriptions.", "Descriptions are transformed into a vector by adding the corresponding word2vec embeddings (Mikolov et al., 2013).", "Emoji embeddings are tuned quickly because only a positive and a negative description per emoji are considered.", "Our Strategy: Emoji Descriptions.", "Our strategy is simple: replace emojis with their textual descriptions.", "Effectively, this eliminates all emojis in the input and incorporates a rather detailed descriptionseveral tokensof the emojis (see examples in Table 2).", "Our rationale is as follows.", "First, lists of emojis and their textual descriptions Emoji Description Face with tears of joy Face blowing a kiss Grinning face with smiling eyes Relieved face Squinting face with tongue Sad but relieved face Angry face Loudly crying face Downcast face with sweat Anxious face with sweat Table 2: Emojis and their textual description.", "This material is based upon work supported by the National Science Foundation under Grants Nos. 1734730, 1832267 and 1845757.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.", "The Titan Xp used for this research was donated by the NVIDIA Corporation." ]
[ "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "other" ]
[ "In recent years online shopping has gained momentum and became an important venue for customers wishing to save time and simplify their shopping process.", "A key advantage of shopping online is the ability to read what other customers are saying about products of interest.", "In this work, we aim to maintain this advantage in situations where extreme brevity is needed, for example, when shopping by voice.", "We suggest a novel task of extracting a single representative helpful sentence from a set of reviews for a given product.", "The selected sentence should meet two conditions: first, it should be helpful for a purchase decision and second, the opinion it expresses should be supported by multiple reviewers.", "This task is closely related to the task of Multi Document Summarization in the product reviews domain but differs in its objective and its level of conciseness.", "We collect a dataset in English of sentence helpfulness scores via crowd-sourcing and demonstrate its reliability despite the inherent subjectivity involved.", "Next, we describe a complete model that extracts representative helpful sentences with positive and negative sentiment towards the product and demonstrate that it outperforms several baselines.", "Customer reviews are known to be a valuable source of information for potential buyers.", "This is evident from the high engagement of customers with reviews, for example by up-voting a review for its helpfulness.", "1 As online shopping platforms attract more traffic it is becoming increasingly dif-ficult to consume the wealth of information customers share.", "For this reason, helpful reviews (de-fined as such by costumers) are made more visible than those that are less helpful.", "The topic of review helpfulness has attracted a lot of academic interest in which reviews were always considered as a whole (see Diaz and Ng (2018) for a survey).", "However, in some scenarios, such as the limited real-estate in mobile screens, or in voice interactions with a virtual assistant, presenting a full review is impractical and the need to automatically extract helpful excerpts arises.", "While in the mobile scenario, a persistent customer may still be able to read the entire review, the voice scenario is inherently more challenging as it demands patience and focus from the customer, while the assistant reads the text out loud.", "As a result, the need for extreme brevity and the ability to understand what matters most to customers becomes crucial.", "In addition to brevity and helpfulness, another desirable property from the extracted content is being faithful to the reviews as a whole.", "Indeed, a costumer looking for relevant and helpful reviews, often interacts with more than one review before making their decision, trying to pinpoint those helpful bits of information that are shared by multiple reviewers.", "This process is tedious because of the sheer amount of reviews and biased because of the order they appear in.", "A system that aims to replace this process while maintaining trust in the content it provides should be able to extract concise helpful texts that repeat across multiple reviews, indicating that they are faithful to the reviews' content (from here onward we shall refer to such texts as faithful).", "Our goal is to extract such sentences, i.e., sentences that are both helpful for a purchase decision and faithful .", "To this end, we first define two new notions: A Helpful Sentence is a sentence which is considered helpful by the average customer in their 
purchase decision process.", "A Representative Helpful Sentence (RHS) is a helpful sentence that is also highly supported, that is, the ideas it expresses appear in multiple reviews for the given product (not necessarily in the exact same wording).", "It is traditionally assumed that judging the importance of a text excerpt requires reading the entire text.", "We challenge this assumption, at least in the domain of product reviews, and collect a dataset of single review sentences with their helpfulness scores by averaging the scores assigned to them by multiple crowd workers.", "We show that despite the highly subjective nature of this task, and despite the fact that workers are exposed to sentences without their surrounding context, the resulting scores are reliable.", "Using the data we collected, from 6 different categories, ranging from Electronics to Books, we train and evaluate several supervised algorithms to predict helpfulness score, which achieve promising results.", "Finally, we present an initial implementation of a model that given a set of product reviews, extracts a single positive RHS (supports the purchase) and a single negative RHS (opposes the purchase).", "In summary, the main contributions of this work are: (1) We propose a novel task that given a set of reviews for a product, outputs a single sentence that is both helpful for a purchase decision and supported by multiple reviewers; (2) We show that the helpfulness of a sentence can be reliably rated based solely on the sentence, allowing for an ef-ficient dataset creation.", "These helpfulness scores can be leveraged for other tasks such as highlighting important parts of a review; (3) We publish a novel dataset of sentences taken from customer reviews along with their helpfulness score; 2 (4) We develop an end-to-end model for our task that shows promising results and outperforms several baselines.", "Review Helpfulness Modeling and Prediction Customer reviews are a valuable source of information for customers researching a product before making a purchase (Zhu and Zhang, 2010).", "Diaz and Ng (2018) survey recent work on the tasks of modeling and predicting review helpfulness.", "While some researchers treat helpfulness votes as ground-truth, others have argued that these votes are not good indicators for actual review helpfulness (Liu et al., 2007; Tsur and Rappoport, 2009; Yang et al., 2015).", "been shown to be strongly correlated to helpfulness (Kim et al., 2006; Liu et al., 2007; Otterbacher, 2009; Mudambi and Schuff, 2010; Pan and Zhang, 2011; Yang et al., 2015).", "Another widely-agreed indication for review helpfulness is the review star rating (Kim et al., 2006; Mudambi and Schuff, 2010; Pan and Zhang, 2011).", "A related dataset was presented in Almagrabi et al. (2018).", "The main advantages of the dataset we create over this previously suggested one are: (1) Binary vs. 
continuous scores We use continuous scores rather than binary scores.", "Our aim is to surface the most helpful sentences, which is not possible if many of the sentences are annotated as equally helpful; (2) Range of products/domains The previous dataset includes only 5 products, all from the Electronics domain.", "Our dataset is significantly more diverse, providing annotations for 123 products from 6 different domains, allowing to evaluate a model's ability to generalize across domains.", "Product Review Summarization The most common approach for product review summarization, which centers the summary around a set of extracted aspects and their respective sentiment, is termed aspect based summarization .", "One of the early abstractive works, by Hu and Liu (2004), was designed to output lists of aspects and sentiments.", "Other works target a traditional summarization output and at times somewhat simplify the task by assuming aspects or seed words are provided as input (Gerani et al., 2014; Angelidis and Lapata, 2018; Yu et al., 2016).", "Recently advances were made on unsupervised abstractive reviews summarization, by leveraging neural networks (Chu and Liu, 2019; Brainskas et al., 2020b) followed by a few shot variant (Brainskas et al., 2020a).", "Extractive summarization include earlier works such as Carenini et al. (2006); Lerman et al. (2009) and Xiong and Litman (2014) who suggested to use review helpfulness votes as means to improve the content extraction process.", "More recently, Tan et al. (2017) suggested a novel generative topic aspect sentiment model.", "Task Definition In this work, we focus on summarization of reviews in the setting of shopping over voice with the help of a virtual assistant.", "Our goal is to provide users with content that is both helpful and faithful in this challenging setting where the information the user can absorb is extremely limited.", "First, we aim to maximize the informativeness, while maintaining brevity.", "To this end, we introduce a new notion of helpful sentences sentences which the average costumer will consider as helpful for making a purchase decision.", "Next, to ensure faithfulness, we introduce the notion of support for a given sentence the number of review sentences with a highly similar content.", "We seek to automatically identify a helpful sentence with a wide support, which we term representative helpful sentence (RHS) .", "Note that Representative Helpful Sentences, being supported by many similar sentences, are by construction faithful to the review pool from which they are extracted.", "We restrict ourselves to single sentences that are extracted as-is from product reviews, as this serves as another mechanism to ensure faithfulness.", "We do not restrict the number of reviews in the input.", "Table 1 presents a few helpful sentences for example, as extracted by our model (see Section 5).", "Our task resembles the well known (extractive) customer review summarization task (Hu and Liu, 2004) but differs in several important aspects.", "First, its output is very concise due to the extreme space constraint, resembling the extreme summarization task (Narayan et al., 2018), which however, deals with news articles and outputs an abstractive summary.", "In our application there is low tolerance for factually incorrect summaries, so we choose extraction over abstraction.", "Second, we do not restrict the system's output to aspect based opinions, as we find that sometimes factual content may also be quite helpful.", "Third, while 
", "Third, while traditional summarization systems favor information that appears frequently in the source documents, we target information that is both frequent and helpful.", "Subjectivity: As mentioned above, review helpfulness scores are derived from votes of actual customers.", "Deciding whether or not to up-vote a review is a subjective decision, as different customers may value different product qualities.", "However, the underlying assumption of the voting mechanism is that reviews with many up-votes are indeed helpful for the average customer.", "Restricting the user to a single sentence makes matters even more challenging, as it cannot possibly discuss all the product's merits and shortcomings.", "To emphasize the subjectivity involved in assigning a helpfulness score to a standalone sentence, consider the examples in Table 2.", "The first example may be helpful for parents looking to buy a book for their children but entirely unhelpful for adults who wish to purchase the book for themselves.", "Similarly, the second one is more helpful to readers of extreme height (short or tall) than to those of medium height.", "Despite the evident subjectivity, we assume that there exists an average helpfulness score for every sentence, which can be estimated by averaging the ratings of multiple crowd workers.", "In the following section we substantiate this assumption by compiling a new dataset of sentences along with their helpfulness scores, and showing quantitatively that the annotations in our dataset are consistent and reliable.", "Our main contribution in this work lies in the notion of helpful sentences and the ability to identify such sentences without observing entire reviews.", "In what follows, we describe the process of compiling a dataset of sentences along with their helpfulness scores using crowdsourcing.", "Note that this dataset is intended solely for scoring the helpfulness of sentences.", "Faithfulness is ensured by other means which are not reflected in the dataset, i.e., by requiring an RHS to have wide support of similar sentences, as discussed in Section 3 and implemented in our model, as described in Section 5.
", "4.1 Annotation Task: We consider a subset of 123 products arbitrarily selected from the Amazon.com website, such that each has at least 100 customer reviews and they (approximately) equally represent 6 different categories (Toys, Books, Movies, Music, Camera and Electronics).", "We started with 45,091 reviews, split them into 210,121 sentences, and randomly selected a train set of 20,000 sentences and a test set of 2,000 sentences.", "We asked annotators to rate each sentence according to how helpful it is for reaching a purchase decision, using the Appen platform.", "Ratings were provided on a 3-level scale of Not Helpful (0), Somewhat Helpful (1), or Very Helpful (2).", "The final helpfulness score of a given sentence was set to the average rating.", "See Section A in the Appendix for more details on the annotation task guidelines.", "Each example was rated by 10 different annotators in the training set and 30 different annotators in the test set.", "Initial experiments revealed that 10 annotations per sentence, while noisy, are still sufficient to train a model.", "We observed that the number of annotators used to rate the test set affects the evaluation.", "This is due to the subjective nature of the task: the observed helpfulness score approaches its true value as the number of votes collected for each sentence increases.", "Table 3 demonstrates the effect that the number of annotators used to rate each example in the test set has on the final evaluation.", "It shows that, with the model and its predictions fixed, the evaluation score (Pearson correlation in this case) increases as we average more votes.", "From our experience, there is no gain beyond 30 votes per sentence for this particular task.", "We observe a skewed helpfulness distribution with a fairly high mode of 1.3, which shows that the raters did not provide random answers.", "Furthermore, under the assumption that most review authors aim for their reviews to be helpful, we should expect a distribution that is skewed towards higher scores.", "See Section A in the Appendix for a depiction of the helpfulness distribution within the train set.", "Table 4 presents the most helpful sentence, a sentence that is somewhat helpful (with a median score), and the least helpful sentence from the test set for particular headphones, as perceived by the annotators.", "As mentioned earlier, rating sentence helpfulness is a highly subjective task, and some disagreement is expected.", "Nevertheless, we argue that the data we collected is reliable and demonstrate it through the three following experiments.", "Inter-annotator Agreement: We compute agreement in the spirit of the analysis performed in Snow et al. (2008).", "For each annotator, we restrict the data to the set of rows that they completed and compute the Pearson correlation between their answers and the average of all other annotators.", "Finally, we take the average across all annotators after removing the worst 10% of annotators according to the method of Dawid and Skene (1979).", "We get an average Pearson correlation of 0.44 ± 0.01 on the train set (10 annotators per row) and 0.57 ± 0.02 on the test set (30 annotators per row), which demonstrates good agreement given the subjective nature of this task (these scores are comparable, for example, with the scores reported in Snow et al. (2008) for the highly subjective Affective Text Analysis task).", "We also randomly split the annotators into two disjoint sets and calculated the correlation between the corresponding scores: there was a correlation of 0.49 for the train set and 0.81 for the test set.
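For concreteness, a minimal sketch of the leave-one-out agreement computation described above. This is not code from the paper: the ratings layout is an assumption, and the worst-annotator filter here simply drops the bottom 10% by correlation rather than applying the full Dawid-Skene procedure.

```python
import numpy as np
from scipy.stats import pearsonr

def annotator_agreement(ratings):
    """Leave-one-out agreement: correlate each annotator's ratings with the
    mean rating of all other annotators on the sentences they both rated.
    `ratings` maps annotator id -> {sentence_id: rating in {0, 1, 2}}."""
    correlations = {}
    for annotator, own in ratings.items():
        xs, ys = [], []
        for sent_id, rating in own.items():
            others = [r[sent_id] for a, r in ratings.items()
                      if a != annotator and sent_id in r]
            if others:  # skip sentences rated by this annotator alone
                xs.append(rating)
                ys.append(np.mean(others))
        if len(xs) >= 2:
            correlations[annotator] = pearsonr(xs, ys)[0]
    return correlations

def average_agreement(correlations, drop_fraction=0.10):
    """Average after dropping the worst `drop_fraction` of annotators
    (a simplification of the Dawid-Skene-based filtering in the paper)."""
    vals = sorted(correlations.values())
    keep_from = int(len(vals) * drop_fraction)
    return float(np.mean(vals[keep_from:]))
```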
", "Internal Consistency: A necessary condition for ensuring reliability is that similar sentences get similar helpfulness scores.", "We verify that our crowd-sourced test data meets this requirement by measuring the standard deviation of the helpfulness scores within groups of similar sentences (a code sketch of this check follows this passage).", "We use the sentence-transformers embeddings of Reimers and Gurevych (2019), which were optimized for computing semantic similarity.", "For each sentence in the test set, we construct its semantic neighborhood by grouping together all sentences with high similarity.", "For each non-singleton group, we measure the standard deviation of the helpfulness scores and compare it with the standard deviation of a similarly sized group of random sentences from the test set.", "We expect to get a tighter distribution of helpfulness scores within the similarity groups (compared to the random groups) if the data is internally consistent.", "Indeed, we found 217 groups with an average standard deviation of 0.16, while the average standard deviation of the corresponding random groups was 0.29 (the differences were statistically significant, with a p-value of 7e-20 using a paired two-tailed t-test).", "Sentence Helpfulness vs. Review Helpfulness: As the third and final reliability analysis, we compare the crowd helpfulness scores with review helpfulness votes taken from the Amazon.com website.", "We consider reviews for the 123 products selected earlier and extract two subsets.", "The first (the helpful set) is the set of all reviews with at least 50 helpful votes.", "The second (the unhelpful set) is the set of all reviews with no helpful votes.", "See Section B in the Appendix for statistics on the two subsets.", "We randomly select 500 sentences from each set and collect crowd helpfulness ratings.", "For each set we calculate the mean helpfulness score and the ratio of sentences with a helpfulness score greater than 1 and 1.5, respectively.", "Table 5 shows the results, which demonstrate a higher mean helpfulness score in the helpful set (the difference is statistically significant, with a p-value of approximately 0.0079 using a t-test with an equal-variance assumption as well as a t-test with a different-variance assumption, a.k.a. Welch's t-test).", "These results indicate that helpful reviews tend to include more helpful sentences on average.", "However, as can be expected, the differences are not dramatic.", "Looking at the average length of reviews sheds some more light on the differences: a helpful review is almost 10 times longer than a non-helpful review on average.", "This means that in order for a review to be helpful it must provide details, a requirement that a single sentence simply cannot meet.", "Therefore, we conjecture that a helpful sentence captures the most essential statements made in the review, while a helpful review is one that includes details and justifies its rating.", "A brief examination of the crowd-sourced data reveals two sentence characteristics that contribute to the helpfulness of a sentence: the length of the sentence and the sentiment, of which the latter is more strongly correlated with helpfulness.", "Length: The Pearson correlation between the length (in characters) and the helpfulness score on the test set is 0.37.
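A minimal sketch of the internal-consistency check referenced above, under stated assumptions: the embedding model name and the similarity cutoff are placeholders (the paper does not name its exact threshold here), and `scores` is a NumPy array aligned with `sentences`.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def consistency_check(sentences, scores, sim_threshold=0.8, seed=0):
    """Compare the helpfulness-score spread inside semantic neighborhoods
    with the spread inside random groups of the same size."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
    emb = model.encode(sentences, normalize_embeddings=True)
    sims = emb @ emb.T  # cosine similarity (embeddings are unit-normalized)
    rng = np.random.default_rng(seed)
    group_stds, random_stds = [], []
    for i in range(len(sentences)):
        group = np.where(sims[i] > sim_threshold)[0]
        if len(group) > 1:  # non-singleton neighborhoods only
            group_stds.append(np.std(scores[group]))
            rand_idx = rng.choice(len(sentences), size=len(group), replace=False)
            random_stds.append(np.std(scores[rand_idx]))
    return float(np.mean(group_stds)), float(np.mean(random_stds))
```

If the data is consistent, the first returned value (within-neighborhood spread) should be clearly below the second, as with the paper's 0.16 vs. 0.29.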
", "This correlation is expected, since longer sentences can potentially convey more information and thus tend to be more helpful.", "Sentiment: We use the Amazon AWS Comprehend sentiment analysis tool to classify each sentence into one of four sentiment classes: positive, negative, neutral, and mixed.", "We got a Pearson correlation of -0.53 between the helpfulness scores of the sentences and the scores assigned to the neutral class.", "To better understand this relationship, we define a helpful sentence as one with a score greater than or equal to 1.5 and a sentence with sentiment as one that is not in the neutral class, and estimate two conditional probabilities: P(Helpful | Sentiment) = 0.15 and P(Sentiment | Helpful) = 0.68.", "This shows that having sentiment is an important condition for a sentence to be helpful, but it is not a sufficient condition.", "We indeed observed that sentences with sentiment that do not provide additional reasoning or details do not get high helpfulness scores.", "Some related examples from reviews can be found in Section C in the Appendix.", "We now turn to creating an end-to-end model for surfacing representative helpful sentences (RHS): given a set of reviews for a certain product, we aim to output a single RHS with positive sentiment and a single RHS with negative sentiment.", "Figure 1 depicts the different sub-components of our model.", "Given a set of reviews, we preprocess the input and predict helpfulness scores for each of the sentences.", "Next, we analyze the sentiment of each sentence and separate the sentences into positive and negative sets.", "Following that, the support of each sentence is determined, and finally we select the RHS based on its helpfulness score and its support.", "In what follows, we describe each of the components in detail.", "Preprocessing: We remove HTML tags and split the cleaned reviews into sentences.", "The sentences are then filtered by removing sentences of extreme length (both short and long).", "See Section D in the Appendix for additional details.", "Helpfulness Estimation: This component assigns a helpfulness score to each sentence and removes all sentences with a score below 1.", "This filtering serves two purposes: first, it ensures that we do not output any sentence in case there is no helpful sentence in the product reviews.", "Second, it reduces the runtime of the downstream Similarity and Support component, which is quadratic in the number of sentences.", "We experiment with three helpfulness models and find that a pre-trained BERT (Devlin et al., 2018) fine-tuned on our training data performs best.", "The two other models we compare are: (1) TF-IDF: a model that treats each sentence as a bag-of-words; we use TfidfVectorizer from the sklearn package to convert each sentence into a vector and then fit a Ridge regression model on top of it; (2) ST-RIDGE: a model that fits a Ridge regression on top of the Sentence-Transformers embeddings (Reimers and Gurevych, 2019).", "We use 3 measures for evaluation: Mean Squared Error (MSE), the traditional measure for regression; Pearson correlation between the predicted score and the ground-truth score; and finally a ranking measure that evaluates the quality of the top-ranked sentence (NDCG@1).", "The results are depicted in Table 6.
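A minimal sketch of the TF-IDF baseline just described, using the sklearn components the paper names. The Ridge regularization strength and the vectorizer settings are assumptions; the paper does not report them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from scipy.stats import pearsonr

def train_tfidf_ridge(train_sents, train_scores, test_sents, test_scores):
    """TF-IDF bag-of-words features with a Ridge regressor, evaluated with
    two of the paper's three measures (MSE and Pearson correlation)."""
    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_sents)
    X_test = vectorizer.transform(test_sents)
    model = Ridge(alpha=1.0).fit(X_train, train_scores)  # alpha is an assumption
    pred = model.predict(X_test)
    return {
        "mse": mean_squared_error(test_scores, pred),
        "pearson": pearsonr(test_scores, pred)[0],
    }
```

The ST-RIDGE variant would be identical except that sentence-transformers embeddings replace the TF-IDF vectors as input features.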
", "The TF-IDF model has acceptable performance, but it suffers from the out-of-vocabulary problem and ignores the sentence as a whole; for example, the model predicts a higher score than that of the annotators for the sentence 'fantastic brilliant amazing superb good'.", "In order to gain some understanding of what constitutes a helpful sentence, we checked the top positive and negative features of this model (top-10 positive features: great, sound, quality, good, excellent, price, easy, lens, recommend, perfect; top-10 negative features: bought, review, know, don, got, amazon, gift, reviews, christmas, order).", "We observed that the top positive words include sentiment words and product aspects.", "The results, however, indicate that these features are not sufficient to evaluate helpfulness in a more fine-grained manner.", "The ST-RIDGE model significantly outperforms the TF-IDF model in all metrics.", "Finally, the BERT model is significantly better than the ST-RIDGE model in terms of MSE and Pearson correlation.", "Sentiment Analysis: In this step, we employ the Amazon AWS Comprehend sentiment analysis tool to assign each sentence a sentiment class and a score for each of the four classes: positive, negative, neutral, and mixed.", "Sentences with a neutral or mixed class are removed, and all the rest are divided into a positive set and a negative set.", "The purpose of this step is twofold: first, the separation allows us to output a final sentence for both positive and negative sentiments.", "Second, we gain more confidence that semantically similar sentences (as measured in the downstream Similarity and Support component) indeed have the same meaning (and not the exact opposite).", "Similarity and Support: At this stage we aim to compute the support of each sentence, which we define as the size of the set of highly similar sentences.", "Formally, for a given sentence s_i, its support is |{ s_j : j ≠ i, sim(s_i, s_j) > τ }|, where τ is a predefined threshold.", "To compute the similarity, we convert each sentence pair to the corresponding representations and compute the cosine similarity.", "In order to get the most accurate results, we compare several sentence representations on the semantic similarity task: Sentence-Transformers (Reimers and Gurevych, 2019), the Universal Sentence Encoder (USE) (Cer et al., 2018), FastText (Mikolov et al., 2018), and a bag-of-words representation weighted by inverse document frequency.", "We find that the Sentence-Transformers embeddings perform best.", "To compare the methods, we sample 300,000 sentence pairs from the reviews of our 123 products, compute the similarity scores on this sample, and select the top 500 pairs using each of the methods.", "We next consider the union of the above pairs to form a dataset of 2,035 pairs.", "We ask human annotators to determine whether the sentences of each pair have a roughly similar meaning or not.", "We then calculate the precision at K (for K between 1 and 2,035) for each of the methods.", "As can be seen from Figure 2, Sentence-Transformers is superior to the other methods.
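A minimal sketch of the support computation defined above. It assumes the sentence embeddings have already been computed (e.g., with the sentence-transformers model selected in the comparison), and it materializes the full similarity matrix, which makes the quadratic cost mentioned earlier explicit.

```python
import numpy as np

def compute_support(embeddings, tau=0.876):
    """Support of sentence i = |{ s_j : j != i, sim(s_i, s_j) > tau }|,
    with cosine similarity; tau defaults to the paper's derived threshold."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -1.0)  # exclude self-similarity (j != i)
    return (sims > tau).sum(axis=1)
```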
", "Finally, we derived a precision-oriented similarity score threshold (τ = 0.876) for Sentence-Transformers that achieves a precision of 0.9 ± 0.286 and a recall of 0.46 ± 0.022, where the recall is estimated based on the set of 2,035 pairs.", "Sentence Selection: The Sentence Selection component is in charge of selecting a single sentence that is both helpful and well supported.", "We enforce a minimum support of 5, as we observed that such a limit increases the overall quality of the sentence and avoids surfacing esoteric opinions.", "After applying this threshold, we rank the remaining sentences according to the formula support · helpfulness^β, where β is a boosting parameter (a sketch of this selection step follows this passage).", "To derive an appropriate value for β we conducted another annotation task and obtained a value of β = 38.8, which gives a lot of emphasis to the helpfulness score.", "We describe this in detail in Section D in the Appendix.", "The evaluation of our end-to-end model is challenging and does not have a natural scheme.", "Recall that we do not restrict our input to small random samples of the review set, as is commonly done in review summarization and was shown to produce biased results (Shapira and Levy, 2020).", "Instead, we allow for dozens or hundreds of reviews per product.", "Thus, we cannot expect annotators to carefully read the full input before choosing an RHS.", "Nonetheless, we show that our notion of helpfulness is indeed useful for surfacing important review content by comparing our models to previous summarization works in two different settings.", "Single Review Summarization: In this evaluation we only consider the helpfulness component, as a means to create an extractive summary comprised of a single sentence.", "Abstractive single-review summarizers (Ma et al., 2018; Isonuma et al., 2019; Wang and Ren, 2018) are not suitable for comparison, as these works are trained on header-like summaries of 4.36 words on average, much shorter than our extractive, one-sentence output.", "Instead, we consider the unsupervised single-document summarization algorithm TextRank (Mihalcea and Tarau, 2004); we used the implementation from https://pypi.org/project/sumy/.", "TextRank, which is extractive and can output any number of sentences, is a viable candidate for comparison, as our goal is not to achieve SOTA results on this task but rather to demonstrate that the helpfulness model can produce good extractive summaries without being trained on reference summaries.", "We selected a sample of 300 reviews in which the predictions of the two algorithms differed (the output was exactly the same on 28% of the reviews) and asked crowd workers to rate each of the selected sentences on a 5-level scale according to how helpful the selected sentence was for a purchase decision (our objective) and according to how well the selected sentence summarized the review (the traditional objective).", "Each sentence was annotated by 5 workers, where the sentences of the two algorithms appeared next to each other but in random order.", "Table 7 summarizes the results, showing that our method is superior in both aspects (the results are statistically significant using a 1-tailed paired t-test, with a p-value of 1.05e-06 for helpfulness and 0.005 for summarization).", "[Table 7 reports the mean and standard deviation of the helpfulness and summarization ratings for each method.]", "End-to-End Evaluation: Our complete model resembles the task of Multi-Document Summarization (MDS), which ideally consumes the entire set of reviews related to a specific product and outputs a single summary, or a single sentence in our case.", "In practice, MDS is applied to document sets of relatively small sizes, which significantly reduces the potential impact of our Similarity and Support sub-component.
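The sketch forward-referenced above: the ranking formula support · helpfulness^β is reconstructed from the garbled source (the multiplication sign and the exponent symbol were lost), so the exact functional form is an assumption, though β = 38.8 "emphasizing helpfulness" is consistent with an exponent.

```python
def select_rhs(sentences, helpfulness, support, beta=38.8, min_support=5):
    """Pick the single RHS: filter by minimum support, then rank the
    remaining candidates by support * helpfulness**beta (reconstructed)."""
    candidates = [
        (sup * (h ** beta), sent)
        for sent, h, sup in zip(sentences, helpfulness, support)
        if sup >= min_support
    ]
    return max(candidates)[1] if candidates else None  # None: no helpful RHS
```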
", "In order to put our evaluation in the context of prior work, we evaluate our model with two minor modifications tailored for small review sets: we relax the similarity threshold to 0.75 and remove the minimal-support constraint.", "We only consider the positive sentences in this evaluation, as the majority of the reviews are positive.", "We use the dataset published in Bražinskas et al. (2020b), which covers 60 products from 4 different categories (Clothing, Electronics, Health & Personal Care, and Home & Kitchen), of which only one category is included in our own data (4 of the products in this dataset are no longer available on amazon.com, and we omitted them from the evaluation).", "Each product has 8 reviews and 3 reference summaries written by humans.", "We evaluate our model in a straightforward manner by comparing the sentences selected by our model to sentence rankings provided by humans.", "We ask expert annotators (one annotator per example) to read the reviews and rate each sentence from the reviews on a scale of 1 to 5.", "A score of 1 means that the sentence does not help to make a purchase decision or does not reflect the overall theme of the reviews, whereas a score of 5 means that the sentence is both helpful and aligns well with the common opinions expressed in the reviews.", "The mean score of the top sentence for each product is 4.31, which means that even for products with only 8 reviews it is common to find a sentence that is both helpful and supported by the reviews.", "We evaluate our model by averaging NDCG@K over all products for K ∈ {1, 10} (a sketch of this metric follows this passage).", "We compare the performance of our model with two baselines: ranking the sentences in random order, and ranking them from longest to shortest.", "Our method outperforms the baselines by a large margin; see Table 8.", "For the sake of completeness we also report the common MDS evaluation metric, ROUGE (Lin, 2004), which does not fully suit our setting, as it is based on n-gram comparisons between the output and gold summaries written by humans, which are typically much longer than a single sentence.", "In Table 9 we compare the ROUGE scores of 3 sentence-selection variants: our model, a random sentence, and an Oracle, i.e., the sentence that maximizes the ROUGE-L score.", "We also report the results of Copycat (Bražinskas et al., 2020b), a state-of-the-art review MDS model (its ROUGE results are based on our own computation using https://pypi.org/project/py-rouge/).", "We note that Copycat is not truly comparable to our model due to the significantly different summary length requirement (in this dataset an average sentence contains 74 characters while an average reference summary contains 293 characters).", "Note, however, that in terms of precision, which is what we aim for with such an extreme summary, the RHS is almost as good as the Oracle and much better than Copycat.
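The NDCG@K sketch forward-referenced above. The paper does not spell out its gain function, so the standard linear-gain, log2-discount variant used here is an assumption.

```python
import numpy as np

def ndcg_at_k(relevance_in_predicted_order, k):
    """NDCG@K for one product: the input holds the human 1-5 ratings of the
    sentences, sorted by the model's ranking (best-ranked first)."""
    rel = np.asarray(relevance_in_predicted_order, dtype=float)
    gains = rel[:k]
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float((gains * discounts).sum())
    ideal = np.sort(rel)[::-1][:k]
    idcg = float((ideal * discounts[: len(ideal)]).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Averaging over all products, for K in {1, 10}:
# score_k = np.mean([ndcg_at_k(r, k) for r in per_product_rankings])
```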
", "Examples of RHS: We pick two examples from Bražinskas et al. (2020a), depicted in Table 10, and use our model to extract a single sentence for each.", "[Table 10 caption fragment: Reviews from Yelp.]", "Each of the examples consists of 8 reviews and a reference summary written by a human (we only show the summaries; the complete set of reviews is available in Bražinskas et al. (2020a)).", "While our extracted sentence is less elaborate compared to the human and abstractive summaries, it gives enough information to make a decision.", "Note also that the abstractive summary does not refer to the high pricing.", "As for the second example, while not covering all aspects of the product, the helpful sentence is faithful to the reviews and aligns with the overall sentiment.", "The summarizer, on the other hand, contradicts the reviews regarding the sandals' size.", "Recall that these examples are constructed from 8 reviews only, while our model benefits considerably from a large number of reviews, which is often the case for popular products.", "This is due to the greater sentence variety it can choose from and the fact that the support becomes more meaningful as more reviews are available.", "See Section E in the Appendix for additional examples and some statistics of our model's outputs.", "In this paper we address the challenge of summarizing product reviews with limited space, as when using a virtual assistant.", "We define a new notion that fits the needs of this setting, a representative helpful sentence, and propose a new task accordingly: given a set of product reviews, extract a sentence that is both helpful for a purchase decision and well supported by the opinions expressed in the reviews.", "As a first step, we collect and annotate a new dataset of review sentences with their helpfulness scores, and make this dataset available to facilitate further research.", "Next, we develop an end-to-end model for surfacing representative helpful sentences.", "Our model combines several necessary components which are optimized for our goal.", "In order to get a feeling for the performance of our model, we compare our results to summarization tasks that are similar in nature, and show that our model performs better in the aspects we target.", "In this work, we make use of customer reviews published on Amazon.com.", "The reviews must comply with Amazon's Community Guidelines, which prohibit offensive, infringing, or illegal content.", "Amazon encourages anyone who suspects that content manipulation is taking place or that its Guidelines are being violated to notify Amazon.", "Amazon investigates concerns thoroughly and takes appropriate actions, including removal of reviews that violate these Guidelines, such as reviews that contain hatred or intolerance for people on the basis of race, ethnicity, nationality, gender or gender identity, religion, sexual orientation, age, or disability.", "Among other things, Amazon has a broad license to use, reproduce, publish, and create derivative works from the customer reviews on Amazon.com.", "The authors of this paper are employees of Amazon and are authorized to use customer reviews in this work.", "A small sample of annotated review sentences is released for research purposes according to the provided license.", "Annotations were conducted by a service provider pursuant to a Service Agreement with Amazon.", "Under that Service Agreement, the service provider represents and warrants that it complies with all applicable laws, regulations, and
ordinances when performing those services." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method", "result", "result", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "method", "objective", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Although many end-to-end context-aware neural machine translation models have been proposed to incorporate inter-sentential contexts in translation, these models can be trained only in domains where parallel documents with sentential alignments exist.", "We therefore present a simple method to perform context-aware decoding with any pre-trained sentence-level translation model by using a document-level language model.", "Our context-aware decoder is built upon sentence-level parallel data and target-side document-level monolingual data.", "From a theoretical viewpoint, our core contribution is the novel representation of contextual information using point-wise mutual information between context and the current sentence.", "We demonstrate the effectiveness of our method on English to Russian translation, by evaluating with BLEU and contrastive tests for context-aware translation.", "Neural machine translation ( NMT ) has typically been explored in sentence-level translation settings.", "Such sentence-level NMT models inevitably suffer from ambiguities when a source sentence has multiple plausible interpretations.", "Examples of such ambiguities include anaphora, ellipsis, and lexical coherence (Voita et al., 2019b); although resolving these ambiguities has only a minor im-pact on the translation performance measured by BLEU scores (Papineni et al., 2002), they are vital in smoothly reading the translated documents.", "To address this issue, context-aware NMT models which incorporate document-level information in translation have recently been explored (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Voita et al., 2018; Bawden et al., 2018; Miculicich et al., 2018; Maruf et al., 2019; Voita et al., 2019b; Yu et al., 2020; Currently at Mitsubishi UFJ Morgan Stanley Securities Kang et al., 2020; Zhang et al., 2020).", "Most of these models are end-to-end models that require document-level parallel data with sentential alignments for training.", "However, this data is available in only a few domains (Sugiyama and Yoshinaga, 2019).", "Researchers have therefore started to utilize target-side monolingual data to construct auxiliary models which help a sentence-level NMT model perform context-aware translation (Voita et al., 2019a; Stahlberg et al., 2019; Yu et al., 2020).", "In this study, we propose a simple yet effective approach to context-aware NMT using two primitive components, a sentence-level NMT model and a document-level language model ( LM ).", "We can independently train the two components on common sentence-level parallel data and document-level monolingual data, respectively, without using document-level parallel data.", "Our approach thereby makes it possible to perform context-aware translation with any pre-trained sentence-level NMT model, using a pre-trained document-level LM .", "To give a probabilistic foundation to this combination of two independent models, we exploit the probabilistic nature of NMT decoding.", "When generating a sequence, a left-to-right decoder outputs a categorical probability distribution over the vocabulary at every time step.", "The decoder assigns higher probabilities to the tokens that would be more suitable at that step.", "Therefore, when multiple valid translations are possible for the source sentence, the decoder just gives a higher probability to the translation that is plausible without considering contexts.", "We thus adjust the probability distributions in a context-aware manner using a 
", "We thus adjust the probability distributions in a context-aware manner using a target-side document-level LM, which models inter-sentential dependencies in the target-side document.", "We evaluate our methods on English-to-Russian translation with the OpenSubtitles2018 corpus (Lison et al., 2018) in terms of BLEU scores and the contrastive discourse test sets of Voita et al. (2019b).", "Experimental results confirm that our method achieves performance comparable to existing context-aware NMT models that require either document-level parallel data (Zhang et al., 2018; Sugiyama and Yoshinaga, 2019) or more than one additional model (Voita et al., 2019a; Yu et al., 2020) to capture context in translation.", "The contributions of this paper are as follows: We theoretically derive C-SCORE, a score that qualifies context-aware translation without the need for document-level parallel data.", "Two formulations based on the C-SCORE turn any pre-trained sentence-level NMT model into a context-aware model, provided it generates n-best outputs or performs left-to-right decoding.", "A comparison between our approach and shallow fusion (Gulcehre et al., 2015) reveals that our approach reformulates shallow fusion while adding a probabilistic foundation.", "2 Context-aware Decoding using a Document-level Language Model: In this section, assuming a sentence-level encoder-decoder model (Bahdanau et al., 2015; Vaswani et al., 2017), we first derive the context-aware score (C-SCORE for short), a context-aware objective function over outputs to be maximized in decoding.", "We then describe how to compute the C-SCORE using the decoder together with a document-level language model (D-LM) (Section 2.1).", "We finally detail how to perform context-aware decoding based on the C-SCORE (Section 2.2).", "Let us consider the problem of finding a translation y of a source sentence x in a document.", "The target-side context sentence(s) preceding y, c(y), are given by the past translations.", "We formulate context-aware translation conditioned on c(y) as the maximization of the conditional probability p(y | x, c(y)): y* = argmax_y log p(y | x, c(y)) = argmax_y log [ p(c(y) | x, y) p(y | x) / p(c(y) | x) ] = argmax_y log p(c(y) | x, y) p(y | x). (1)", "Assuming that x and y are semantically similar, we make the following approximation: p(c(y) | y, x) ≈ p(c(y) | y). (2)", "From Eq. 1 and Eq. 2, we obtain y* ≈ argmax_y log p(c(y) | y) p(y | x) = argmax_y log [ p(c(y), y) / (p(c(y)) p(y)) ] p(y | x) = argmax_y C-SCORE(y; x, c(y)), where C-SCORE(y; x, c(y)) = log p(y | x) + PMI(c(y), y) (3) and PMI(c(y), y) = log [ p(c(y), y) / (p(c(y)) p(y)) ] = log [ p(y | c(y)) / p(y) ]. (4)", "PMI(c(y), y) is the point-wise mutual information of c(y) and y, which represents the degree of co-occurrence of y and c(y).", "Given x, y, and c(y), we can evaluate the C-SCORE by computing the two terms in Eq. 3 using a sentence-level NMT model (S-NMT) and a document-level LM (D-LM), respectively.", "Notations: We first introduce some notation to explain the computation of Eq. 3 and Eq. 4 using (auto-regressive) neural sequence generation models in NMT and LM.", "For a sequence s (|s| ≥ 0) and a token w, a neural sequence generation model parameterized by θ can compute the log probability that w follows s, which we denote by log p_θ(w | s): log p_θ(w follows s) = log [ p_θ(s ∘ w) / p_θ(s) ] = log p_θ(w | s), where ∘ denotes sequence concatenation.
", "Applying this auto-regressively, for any sequences s^(1) (|s^(1)| ≥ 0) and s^(2) (|s^(2)| ≥ 1), the probability that s^(2) follows s^(1) is computed as: log p_θ(s^(2) follows s^(1)) = log p_θ(s^(2) | s^(1)) = Σ_{t=1}^{|s^(2)|} log p_θ(s^(2)_t | s^(1) ∘ s^(2)_{<t}), where s^(2)_{<t} = [s^(2)_1, ..., s^(2)_{t-1}]. (5)", "p(y | x) computed by the sentence-level NMT: Computing log p(y | x) using an S-NMT is straightforward.", "Suppose y is a sequence of raw tokens, y = [y_1, ..., y_T].", "Then log p(y | x) is computed by log p(y | x) = log p_S-NMT(ȳ; x), (6) where ȳ = [y_1, ..., y_T, </s>] and </s> is a special token indicating the end of a sentence.", "PMI computed by the document-level LM: To compute the components of PMI(c(y), y), namely p(y) and p(y | c(y)), we use a document-level language model (D-LM) which can handle long text spans containing multiple sentences.", "We generate training examples for the D-LM from a document as follows.", "We assume the D-LM explicitly models sentence boundaries.", "We first insert the special token </s> at every sentence boundary, including the start and end of the document.", "With this preprocessing, all sentences start immediately after an </s> token and end immediately before an </s> token.", "We then sample text spans from the document using a sliding window, where the start and end of a span do not have to match sentence boundaries.", "The sliding window's size is larger than the stride size, so adjacent spans may overlap.", "The resulting sequences are fed to the D-LM for training.", "Note that </s> for the D-LM indicates sentence boundaries, in other words, both the start and end of the sequence.", "Using the D-LM, p(y) is computed by p(y) = p_D-LM(ȳ | </s>), (7) where ȳ = [y_1, ..., y_T, </s>].", "To compute p(y | c(y)), we first obtain the context sequence c̄(y) by concatenating all the sentences in c(y) with </s>.", "We then compute the conditional probability p(y | c(y)) by p(y | c(y)) = p_D-LM(ȳ | c̄(y)), (8) where ȳ = [y_1, ..., y_T, </s>].", "A boundary-agnostic LM, i.e., one trained without explicit sentence-boundary tokens, would not suffice here for two reasons.", "Firstly, boundary-agnostic LMs cannot compute the probability that a sentence is closed at a certain length; namely, Eq. 7 cannot be computed.", "Secondly, they also cannot compute p(y | c(y)) correctly.", "For example, suppose the context c(y) is 'he's my friend' (with the punctuation omitted; we cannot rely on punctuation to identify sentence boundaries, since it can be omitted in some domains) and the current target sentence y is 'he's nice'.", "In this case, Eq. 8 is computed by p(y | c(y)) = p_D-LM([he, 's, nice] | [he, 's, my, friend]).", "However, this estimation of p(y | c(y)) can underestimate the actual p(y | c(y)), because Eq. 8 inevitably gives significant probability to other continuations y such as ''s father' as well, since 'He's my friend's father' is also a plausible sequence.", "Searching for the optimal output y that maximizes the C-SCORE is not trivial, since there are O(V^T) candidate sequences, where V is the vocabulary size and T is the maximum length of sequences to be searched.", "We investigate two approaches to obtain approximate solutions: reranking (Section 2.2.1) and context-aware beam search (Section 2.2.2).
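A minimal sketch of how the C-SCORE of Eq. 3 could be computed for reranking, using an off-the-shelf causal LM as a stand-in. Assumptions: "gpt2" is only a placeholder for the paper's boundary-aware D-LM (the explicit </s> handling of Eqs. 7-8 is glossed over here), and log p(y | x) is taken from the sentence-level NMT beam search rather than computed in this sketch.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder for the D-LM
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def lm_logprob(text, context=""):
    """Sum of token log probabilities of `text`, optionally given `context`.
    (Without a context, the first token of `text` goes unscored here.)"""
    txt_ids = tokenizer(text, return_tensors="pt").input_ids
    if context:
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, txt_ids], dim=1)
    else:
        ctx_ids, input_ids = None, txt_ids
    labels = input_ids.clone()
    if ctx_ids is not None:
        labels[:, : ctx_ids.size(1)] = -100  # score only the continuation
    out = lm(input_ids, labels=labels)
    n_scored = (labels[:, 1:] != -100).sum().item()  # loss is a mean over these
    return -out.loss.item() * n_scored

def pmi(context, y):
    """PMI(c(y), y) = log p(y | c(y)) - log p(y)  (Eq. 4)."""
    return lm_logprob(y, context=context) - lm_logprob(y)

def c_score(y, logp_y_given_x, context):
    """C-SCORE(y; x, c(y)) = log p(y | x) + PMI(c(y), y)  (Eq. 3)."""
    return logp_y_given_x + pmi(context, y)

def rerank(hypotheses, context):
    """hypotheses: list of (translation, nmt_logprob) pairs, the B-best list
    produced by the sentence-level NMT model's beam search."""
    return max(hypotheses, key=lambda h: c_score(h[0], h[1], context))[0]
```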
", "We first generate B hypotheses of the translation, H_B = {y_1, ..., y_B}, with beam search of beam size B using the sentence-level NMT model.", "We then choose the one that maximizes the C-SCORE.", "An issue with reranking is that we need to set B to a large value when the diversity of the model's outputs is limited (Yu et al., 2020), which increases the cost of decoding.", "We therefore attempt to integrate the C-SCORE into decoding with beam search.", "Context-aware beam search (C-AWARE beam) is beam search extended to work with the C-SCORE.", "The C-SCORE (Eq. 3) can be decomposed into token-wise C-SCOREs (Eq. 5 through Eq. 8): C-SCORE(y; x, c(y)) = log p(y | x) + PMI(c(y), y) = Σ_{t=1}^{T+1} C-SCORE_w(y_t | y_{<t}), (10) where C-SCORE_w(y_t | y_{<t}) = log p_S-NMT(y_t | y_{<t}; x) + log [ p_D-LM(y_t | c̄(y) ∘ y_{<t}) / p_D-LM(y_t | </s> ∘ y_{<t}) ]. (11)", "By this decomposition, C-SCORE_w is conditioned only on the partial sequence generated by time step t.", "We can therefore apply beam search to generate sequences in an auto-regressive manner.", "The first term of Eq. 11 represents the translation probability of the t-th token.", "The second term can be interpreted as the PMI between the t-th token and the context, that is, how consistent the t-th token is with the context (strictly speaking, we assume y to be a realization of a random variable Y, a sentence sampled from the space of an infinitely large document).", "Compared to the reranking approach, C-AWARE beam can be considered to maximize the C-SCORE more directly, in the sense that disambiguation and token selection based on the context are performed at every step of beam search.", "Thus C-AWARE beam considers diverse hypotheses more space-efficiently than C-AWARE rerank for the same beam size B.", "In our preliminary experiments, we observed that the original C-AWARE beam significantly improves the contrastive tests but deteriorates BLEU at the same time.", "By analyzing the contextual PMI correlation between source and target texts, we find that the PMI term in the C-SCORE sometimes takes an excessively large value relative to the translation probability term, which corrupts the C-SCORE.", "This is understood intuitively by the fact that the calculation of PMI includes the subtraction of a log probability, and a log probability may take a very large negative value to represent a probability close to zero.", "To alleviate this problem, we adopt a smoothing method for the probabilities.", "For simplicity, in this paper we only present temperature scaling (T-scaling, for short) (Guo et al., 2017).", "T-scaling replaces p_{y=w} by p̃_{y=w} = p_{y=w}^{1/T} / Σ_{w'} p_{y=w'}^{1/T}, (12) where T is a hyper-parameter.", "T = 1 is equivalent to no smoothing.", "We choose T from [1, ∞) to flatten the probability distribution.", "T-scaling is applied to both the numerator and the denominator of Eq. 11 using the same T (a sketch combining Eq. 11 and Eq. 12 follows this passage).", "Shallow fusion (Gulcehre et al., 2015) is a method that integrates the probability distributions output by an NMT model and an LM at the sentence level to form a new translation objective that is expected to promote the fluency of translations.", "The original shallow fusion score is computed using a sentence-level NMT (S-NMT) and a sentence-level language model (S-LM); the token-wise formula of the computation is log p(y_t) = log p_S-NMT(y_t; x) + λ log p_S-LM(y_t), (13) where λ is a hyper-parameter.", "Replacing the S-LM with a D-LM yields log p(y_t) = log p_S-NMT(y_t; x) + λ log p_D-LM(y_t | c̄(y) ∘ y_{<t}), where the context c(y) is integrated into the condition.", "We call this conditional (document-level) shallow fusion.", "Obviously, this is what we obtain from Eq. 11 by ignoring the discount of the unconditional LM probability p_D-LM(y_t | </s> ∘ y_{<t}).
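The token-wise sketch forward-referenced above, combining Eq. 11 with the T-scaling of Eq. 12. It assumes the S-NMT and D-LM share the target subword vocabulary so their distributions can be combined elementwise; T = 4 follows the setting reported later in the paper.

```python
import torch.nn.functional as F

def t_scale_logprobs(logits, T):
    """Temperature scaling (Eq. 12): softmax(logits / T) equals the
    renormalized p^(1/T), so this returns the T-scaled log distribution."""
    return F.log_softmax(logits / T, dim=-1)

def c_score_step(nmt_logits, dlm_ctx_logits, dlm_noctx_logits, T=4.0):
    """Token-wise C-SCORE (Eq. 11) over the target vocabulary:
    log p_S-NMT(. | y_<t; x) plus the token-level PMI term, with T-scaling
    applied to both D-LM distributions (numerator and denominator alike)."""
    return (F.log_softmax(nmt_logits, dim=-1)
            + t_scale_logprobs(dlm_ctx_logits, T)
            - t_scale_logprobs(dlm_noctx_logits, T))
```

At each beam-search step, `c_score_step` replaces the plain NMT log probabilities when scoring candidate expansions; with T → ∞ the PMI term vanishes and the search falls back to sentence-level decoding.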
", "Due to the absence of discounting with the unconditional LM, conditional shallow fusion prefers tokens which frequently occur regardless of the context.", "It is also worth noting that, when the context is empty, conditional shallow fusion falls back to the original shallow fusion, whereas our C-SCORE falls back to sentence-level NMT.", "Therefore, we view the C-SCORE as a reformulation of shallow fusion for context-aware translation.", "We evaluate our methods on English-to-Russian translation in terms of BLEU scores (Papineni et al., 2002) and contrastive tests (Voita et al., 2019b).", "We use the OpenSubtitles2018 corpus (Lison et al., 2018) for parallel and monolingual data.", "Following the criteria for document segmentation and filtering of sentence pairs presented by Voita et al. (2019b), we build the monolingual and parallel data as follows.", "To build the monolingual data, we add document boundary information so that each document consists of contiguous subtitle sentences from the same movie in which the timestamp difference between any two adjacent sentences is no more than seven seconds (a sketch of this segmentation follows this passage).", "To build the parallel data, we pick subtitle pairs where the time overlap between the source- and target-language subtitles is at least 0.9 (to reduce alignment errors).", "For the training of multi-encoder NMT models, document boundary information is added to the parallel data based on the source-side timestamps, as with the monolingual data.", "Prior to building the Russian data, we remove the movies from which the contrastive test sets (Section 3.4) were made.", "We perform punctuation normalization, tokenization, and truecasing on the source and target texts using the Moses toolkit v4.0 (http://www.statmt.org/moses/).", "We then encode the texts into subwords using SentencePiece v0.1.81 (https://github.com/google/sentencepiece) with the unigram LM.", "The subword vocabularies contain 16,000 tokens and are trained for each language.", "The statistics of the datasets are listed in Table 1.", "We compare our methods to one sentence-level translation model (SentTransformer) (Vaswani et al., 2017) and three context-aware translation models: Document Transformer (Zhang et al., 2018), DocRepair (Voita et al., 2019a), and Bayes Document Reranker (Yu et al., 2020).", "All the context-aware models use the previous three sentences as context.", "Document Transformer (DocTransformer, for short) is a multi-encoder document-level NMT model which takes source-side context as an auxiliary input and can thus be trained from document-level parallel data.", "We follow Zhang et al. (2018)'s configuration for DocTransformer.", "DocRepair is a sequence-to-sequence post-editing model.", "It repairs document-level inconsistencies in a text, each sentence of which has been translated separately by a sentence-level NMT model.", "DocRepair is trained on pseudo-parallel data made by pairing a monolingual corpus with its round-trip translations, obtained using a back-translation model and a forward-translation model.
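The segmentation sketch forward-referenced above. The subtitle record layout is an assumption; only the seven-second gap criterion comes from the text.

```python
def segment_documents(subtitles, max_gap=7.0):
    """Split one movie's subtitles into documents: a new document starts
    whenever two adjacent lines are more than `max_gap` seconds apart.
    `subtitles` is a time-ordered list of (start_time_seconds, text) pairs."""
    documents, current = [], []
    prev_time = None
    for start, text in subtitles:
        if prev_time is not None and start - prev_time > max_gap:
            documents.append(current)
            current = []
        current.append(text)
        prev_time = start
    if current:
        documents.append(current)
    return documents
```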
", "Bayes Document Reranker (hereafter, Bayes DocReranker) performs document-level translation of a document containing D sentences in the following steps.", "First, it produces B-best translations for each sentence in the document, yielding a lattice of width B and depth D, where each node corresponds to a candidate sentence.", "It then performs document-level beam search of beam size B' on the lattice using the following score: Score(y_i; y_{<i}, x_i) = log p_D-LM(y_i | y_{<i}) + Score(y_{i-1}; y_{<i-1}, x_{i-1}) + λ_1 log p_NMT(y_i | x_i) + λ_2 log p_BACK-NMT(x_i | y_i) + λ_3 |y_i|. (16)", "Note that this document-level beam search is equivalent to the reranking procedure (Section 2.2.1) when B' = 1.", "Therefore, the essential difference between Bayes DocReranker and our C-SCORE reranking is the score function.", "SentTransformer, the post-editing model of DocRepair, and the back-translation models are based on the same configuration of Transformer base (see Vaswani et al. (2017) for hyperparameter settings).", "The SentTransformer is trained on the 5.8M sentence pairs and is also used as the sentence-level NMT model in DocRepair, Bayes DocReranker, and our methods.", "For the training of DocTransformer, we use the 5.8M sentence pairs with document-level source context, which share the target-side sentences with the training data of SentTransformer.", "Consequently, scores obtained from the model are for reference (although we can train DocTransformer only on pseudo document-level parallel data generated by back-translation, we confirmed in preliminary experiments that the resulting model exhibited poor performance).", "We also evaluate DocTransformer and SentTransformer using back-translation (BT) (Sennrich et al., 2016) with the same monolingual data as the other models.", "We use no pre-existing document-level parallel data to train the neural networks of DocRepair, Bayes DocReranker, or our methods, although we use a small amount of document-level parallel data as the development set to tune hyper-parameters in the methods that combine multiple models.", "Instead, document-level information is fed to the models via the round-trip augmented data (DocRepair) or the language models (Bayes DocReranker and our methods).", "Hyper-parameters: We tune the models' hyper-parameters based on the BLEU score on the development set in the evaluation with BLEU, while in the evaluation with contrastive tests we tune them by maximizing the coefficient of the D-LM under the constraint that it does not deteriorate BLEU compared to the SentTransformer.", "For the beam search producing B-best outputs in Bayes DocReranker and our C-AWARE Rerank, we use a beam size of B = 20.", "For the document-level beam search of Bayes DocReranker, we use a beam size of B' = 5.", "For beam search in SentTransformer, DocTransformer, C-AWARE beam, and shallow fusion, we use a beam size of B = 4.", "The architecture of the document-level LM is the decoder part of a Transformer.", "The number of decoder blocks is 12.", "The model size is 768 with 12 attention heads, and the inner layer of the feed-forward networks has 3072 units.", "We use position embeddings to represent position information.", "As described in Section 2.1, when training the language models, the special control symbol </s> is inserted at every sentence boundary.", "Each training mini-batch contains text spans, each of which is a randomly sampled fragment of a document with a maximum span length of W = 384 (a sketch of this span sampling follows this passage).", "Text spans are batched such that about 32,000 tokens are in a training batch.", "The existing automatic metrics are not adequate to evaluate gains from additional context (Bawden et al., 2018; Läubli et al., 2018; Müller et al., 2018; Voita et al., 2019b; Sugiyama and Yoshinaga, 2019).", "We thus adopt the contrastive test set of Voita et al. (2019b) to evaluate the model's ability to capture contextual information in translation, in addition to the evaluation by BLEU scores (Papineni et al., 2002), to confirm that the methods do not sacrifice general translation performance.
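The span-sampling sketch forward-referenced above. Assumptions: the stride value, since the text only says the stride is smaller than the window so adjacent spans overlap, and the deterministic sliding start positions, since the text describes the fragments as randomly sampled.

```python
def dlm_training_spans(doc_sentences, tokenize, window=384, stride=128):
    """Insert </s> at every sentence boundary (including document start and
    end), then cut overlapping token spans of at most `window` tokens whose
    edges need not align with sentence boundaries."""
    tokens = ["</s>"]
    for sentence in doc_sentences:
        tokens.extend(tokenize(sentence))
        tokens.append("</s>")
    return [tokens[i : i + window]
            for i in range(0, max(1, len(tokens) - window + 1), stride)]
```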
", "BLEU is computed using multi-bleu.perl from the Moses toolkit after decoding the subword representation of the models' outputs into words using SentencePiece.", "The contrastive test set consists of contrastive questions for context-aware NMT models to answer.", "Each question has a source sentence x, a source context c(x), a target context c(y), and translation candidates Y = {y_1, ..., y_M}.", "Models must answer with the candidate y ∈ Y which would be the most appropriate translation of x, i.e., y* = argmax_{y ∈ Y} p(y | x, c(x), c(y)).", "The test sets consist of 6000 examples in total.", "Table 2 lists the performance of the models in terms of BLEU scores.", "Bayes DocReranker and our C-AWARE Rerank consistently outperformed the baseline SentTransformer, even when it used data augmentation by back-translation, while the other methods are just comparable to the baseline.", "Although Bayes DocReranker performed the best among all the models, the comparison to Bayes DocReranker without context information (using p_S-LM(y_i) instead of p_D-LM(y_i | y_{<i})) reveals that most of the improvement is not obtained by the use of contexts.", "Back-translation did not contribute to BLEU, possibly because the original parallel data is already large and there was little room for improvement with additional pseudo data.", "Table 3 lists the evaluation results (accuracy) of the contrastive tests with models using 30M monolingual data.", "[Table 3 (accuracy on deixis / lex.c / ell.infl / ell.vp), baselines: SentTransformer 50.0 / 45.9 / 53.2 / 27.0; w/ BT 50.0 / 45.9 / 51.6 / 26.8; DocTransformer 50.0 / 45.9 / 56.0 / 57.2; w/ BT 50.0 / 45.9 / 64.4 / 68.2; DocRepair 89.1 / 75.8 / 82.2 / 67.2; Bayes DocReranker 65.2 / 72.2 / 59.6 / 44.6; proposed: C-SCORE 86.9 / 94.9 / 78.2 / 77.0.]", "The highest scores in each column are in bold, and additionally, the higher one of the two D-LM-based scores is shown in bold.", "The contrastive test includes four test sets: deixis is for person deixis, lex.c for lexical cohesion, ell.infl for the inflection of Russian nouns caused by ellipsis in the source sentence, and ell.vp for verb ellipsis in English text, which is not allowed in Russian.", "Although the contrastive test is targeted at context-aware NMT models, it is possible to answer the contrastive questions by argmax_y PMI(c(y), y) or argmax_y p(y | c(y)).", "Scores obtained by these two objectives are also reported in the table, in addition to the scores obtained by SentTransformer.", "Our C-SCORE outperforms all the context-aware models other than DocRepair.", "The performance of C-SCORE is slightly worse than DocRepair for deixis (2.2 points) and ell.infl (4.0 points), while achieving large improvements for lex.c (19.1 points) and ell.vp (9.8 points) over DocRepair.", "D-LM-only objectives achieve higher scores than C-SCORE, except for ell.infl.", "This is not surprising, because the choices in the tests are guaranteed to be valid translations of the source sentences given some appropriate context, so the questions can be solved without translation.", "This result still indicates that the D-LM scores give good hints for tackling contextual ambiguities.", "The advantage of C-SCORE over the SentTransformer is explained by the excellent performance of the D-LM in capturing contexts in translation.", "The inference speed depends mainly on the model size and the beam size.
", "In our experiments on a single TITAN Xp GPU, SentTransformer decoded fastest at 66 sents/sec, followed by DocTransformer, which ran at 40 sents/sec.", "DocRepair ran at about 28 sents/sec, slightly slower because it decodes in two passes.", "C-AWARE Rerank and Bayes DocReranker ran at about 4.3 sents/sec and 7.7 sents/sec, respectively.", "We expect that these models could be accelerated by using a language model with a better cache mechanism (e.g., Transformer-XL (Dai et al., 2019)).", "C-AWARE Beam ran at about 13 sents/sec (note that the running time of NMT decoding also depends on the degree of parallelism, and for C-AWARE Beam, decoding multiple sentences in parallel is less trivial, since it demands that all previous sentences in the document be translated by the time it starts to translate the current one; in our experiments, assuming a practical scenario where a large number of users input their documents for translation, we translate multiple documents in parallel so that multiple sentences from different documents can be translated in parallel).", "We leave a thorough analysis of speed/performance trade-offs to future work.", "In Section 4.2 we confirmed the effectiveness of PMI as a measure of a valid translation given context, using the contrastive tests.", "To gain deeper insight into how well PMI conveys the semantic connection between the current sentence and its context, we analyze the correlation of PMI between source and target sentences.", "The main result we show in this section is that the PMI values of the source and target correlate well.", "This is important because it supports the idea that PMI is a language-independent measure of the connection between the current sentence and its context.", "Although we have discussed only the target-side PMI(c(y), y) defined by Eq. 4, we can compute the source-side PMI(c(x), x) in the same way.", "Given a document-level parallel corpus, we measure the correlation between PMI(c(x), x) and PMI(c(y), y) for each sentence pair (x, y) in the corpus (a sketch of this analysis follows this passage).", "Figure 1a shows the PMI correlation for about 4000 sentence pairs taken from the dev data.", "The pairs of PMI values are computed using English and Russian language models trained on the training data.", "We observe a clear correlation between source and target, which agrees with the intuition that if the target sentence matches its context well, so does the source sentence.", "What is also obvious in Figure 1a is that most of the points lie in the first quadrant, where both the source and target contextual PMI are greater than 0, which is explained by the simple intuition that most sentences should have a positive co-occurrence relation with their contexts.", "This behavior is lost when computing the contextual PMI using an incorrect context randomly chosen from the dataset, as shown in Figure 1b.", "The effectiveness of PMI as a measure of the valid translation of the current sentence given context is further emphasized when compared to the conditional probability p(y | c(y)), which could be an alternative measure of how suitable y is in the context, as described in Section 2.2.4.", "Figures 1c and 1d are the conditional-probability versions of Figures 1a and 1b: (p(x | c(x)), p(y | c(y))) for each sentence pair (x, y) in the same dataset is plotted in Figure 1c, and the same tuples but with random contexts are plotted in Figure 1d.", "Unlike the contextual PMI correlation, the conditional probability correlation remains high even when we give wrong contexts.
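The analysis sketch forward-referenced above, reusing a sentence-level `pmi` function like the one sketched earlier (one per language model). The tuple layout of `pairs` is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr

def pmi_correlation(pairs, pmi_src, pmi_tgt, shuffle=False, seed=0):
    """Correlate source-side and target-side contextual PMI over a
    document-level parallel corpus. `pairs` is a list of tuples
    (src_context, src_sentence, tgt_context, tgt_sentence); with
    shuffle=True, contexts are deliberately mismatched to reproduce
    the random-context control (as in Figure 1b)."""
    src_ctxs = [p[0] for p in pairs]
    tgt_ctxs = [p[2] for p in pairs]
    if shuffle:
        rng = np.random.default_rng(seed)
        src_ctxs = list(rng.permutation(src_ctxs))
        tgt_ctxs = list(rng.permutation(tgt_ctxs))
    xs = [pmi_src(c, p[1]) for c, p in zip(src_ctxs, pairs)]
    ys = [pmi_tgt(c, p[3]) for c, p in zip(tgt_ctxs, pairs)]
    return pearsonr(xs, ys)[0]
```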
", "This is because the conditional probability of a sentence is highly affected by how frequently the sentence is observed regardless of context: if the source sentence is written with common expressions, then so is the target sentence, and both are likely to be observed regardless of the context.", "The PMI correlation gives us a good explanation of how C-AWARE beam without T-scaling fails.", "We plot the PMI correlation between the source sentences and their translations obtained with the NMT models (Figure 2).", "We find some outliers in the bottom-right area of the plot for C-AWARE beam without T-scaling, which is the cause of the low correlation coefficient R = 0.610 < R_src-ref = 0.695.", "This result suggests that C-AWARE beam without T-scaling chooses some tokens based on excessively high token-wise PMI, which breaks some translations, resulting in the low BLEU.", "Translations of the SentTransformer show a higher correlation with the source texts than the reference translations (Figure 1a).", "One possible explanation for this is alignment errors in the corpus: although worse than the reference translations in quality, outputs of SentTransformer are perfectly aligned to the source sentences.", "C-AWARE beam with T-scaling (T = 4) seems to solve this issue and achieves the highest PMI correlation, R = 0.740.", "The effectiveness of incorporating context into translation was shown in earlier literature on document-level NMT (Tiedemann and Scherrer, 2017; Bawden et al., 2018) using the single-encoder architecture.", "Multi-encoder architectures were explored to better capture contextual information (Wang et al., 2017; Tu et al., 2018; Jean et al., 2017; Miculicich et al., 2018; Voita et al., 2018; Bawden et al., 2018; Maruf and Haffari, 2018; Maruf et al., 2019; Kang et al., 2020; Zhang et al., 2020).", "However, since parallel data is often constructed by picking reliable sentential alignments from comparable documents, document-level sentence-aligned parallel data for training these document-level NMT models is expensive to obtain and available in only a few domains and language pairs (Sugiyama and Yoshinaga, 2019).", "Recent studies have therefore started to focus on modeling contexts using document-level monolingual data.", "The current approaches are grouped into three categories: data augmentation via back-translation (Sugiyama and Yoshinaga, 2019), a post-editing model (Voita et al., 2019a), and modeling document-level fluency via document-level LMs (Stahlberg et al., 2019; Yu et al., 2020; Jean and Cho, 2020).", "In what follows, we review these approaches in detail.", "Sugiyama and Yoshinaga (2019) reported that data augmentation by back-translation (Sennrich et al., 2016) enhances a document-level NMT model with a single-encoder architecture in low-resource settings.", "However, we obtained limited improvements in our settings (Table 2 and Table 3).", "Moreover, this approach is expensive, since it learns a document-level NMT model from a massive amount of pseudo-parallel data.
"Because DocRepair ignores the confidence of the first-stage sentence-level translation and possible alternative translations, it can miscorrect outputs of the sentence-level NMT model when they are irregular but correct.", "Moreover, when we change the target sentence-level NMT model, the accompanying post-editing model must be retrained on its outputs.", "Our approaches, on the other hand, attempt a softer revision, taking into account the output probabilities, i.e., the confidence of the sentence-level NMT, and can perform context-aware decoding with any sentence-level NMT model, reusing a pre-trained document-level LM.", "Stahlberg et al. (2019) and Yu et al. (2020) utilize a document-level LM to model the document-level fluency of outputs; these approaches are similar to shallow fusion (Gulcehre et al., 2015; see Footnote 7) with a document-level LM (Section 2.2.4), although they perform a document-level reranking of translation hypotheses generated for individual source sentences by a sentence-level NMT.", "Footnote 7: Our work is also related to shallow fusion (Gulcehre et al., 2015), in which token-wise probabilities output by an NMT model and a sentence-level LM are combined to be used as translation scores in decoding; the theoretical backgrounds of shallow fusion and our C-SCORE are different, in that the LM in shallow fusion is intended to promote the fluency of translations, whereas our C-SCORE uses the ratio of two LM probabilities, which captures only the contextual difference, leaving fluency to the translation model.", "In particular, Yu's formulation has a probabilistic foundation like our approaches, and additionally utilizes a backward translation model.", "Although their formulation brings a significant improvement in BLEU (Table 2), the score is not obtained by better document-level translation; the comparable BLEU score of the no-context version of the method (Table 2) and the results of the contrastive tests (Table 3) reveal that the improvement is mostly due to the context-agnostic language model prior and the backward translation model.", "As we discussed in Section 2.2.4, document-level LM scores prefer tokens that appear frequently regardless of context, and are unlikely to lead to better document-level translation.", "Moreover, their method requires training a back-translation model corresponding to the target sentence-level NMT model.", "Finally, we noticed that Jean and Cho (2020), which appeared after the preprint version of this paper (Sugiyama and Yoshinaga, 2020; see Footnote 8) had been submitted, have reached a formulation that is very similar to the one presented in this paper by reformulating the noisy-channel model of Bayes DocReranker (Yu et al., 2020).", "Concrete differences between our work and theirs include the fact that we conducted a thorough analysis of the performance of different decoding strategies (not only beam search but also reranking).", "We also interpreted the subtraction of LM scores as pointwise mutual information and analyzed it by observing the correlation between source and target PMI to deepen the understanding of the formulation.", "We present an approach to context-aware NMT based on the PMI between the context and the current sentence.", "We first provide the formulation of the objective, C-SCORE, and the computation process of the C-SCORE using a sentence-level translation model and a document-level language model.",
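Since C-SCORE is only summarized in this section, the following minimal sketch shows one way a PMI-based context-aware score could combine a sentence-level NMT log-probability with a document-level LM, in line with the statement above that the LM-probability ratio supplies only the contextual difference while fluency is left to the translation model. The interfaces and the interpolation weight are assumptions, not the paper's exact formulation.

```python
def c_score(nmt_logprob, lm_logprob, src, hyp, context, weight=1.0):
    """Context-aware score: sentence-level NMT confidence plus a PMI term.

    The PMI term log p(hyp | context) - log p(hyp) rewards hypotheses that
    fit their document context; nmt_logprob and lm_logprob are hypothetical
    scorer interfaces.
    """
    pmi = lm_logprob(hyp, context) - lm_logprob(hyp, None)
    return nmt_logprob(hyp, src) + weight * pmi

def rerank(hypotheses, nmt_logprob, lm_logprob, src, context):
    """C-AWARE Rerank style: pick the hypothesis maximizing the contextual score."""
    return max(hypotheses,
               key=lambda h: c_score(nmt_logprob, lm_logprob, src, h, context))
```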
"We investigate two search methods, reranking and beam search, and evaluate them on English-Russian translation.", "We also provide some analysis and visualization to better understand the nature of the PMI between the context and the current sentence.", "We plan to design a context-aware BLEU based on PMI for evaluating context-aware NMT models.", "We will also evaluate our method on non-autoregressive NMT (Gu et al., 2017).", "We will release all code and data to promote the reproducibility of results (Footnote 9: http://www.tkl.iis.u-tokyo.ac.jp/~sugi/NAACL2021/).", "Footnote 8: This preprint was submitted to and rejected from EMNLP 2020; the interested reader may refer to it for experiments on other language pairs such as English-to-French and English-to-Japanese translation.", "Acknowledgements: We thank the anonymous reviewers for their valuable comments.", "We also thank Joshua Tanner for proofreading this paper.", "We also thank Masato Neishi for technical advice on implementations of neural machine translation.", "The research was supported by the NII CRIS collaborative research program operated by NII CRIS and LINE Corporation." ]
[ "abstain", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "objective", "objective", "result", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "method", "method", "other", "method", "abstain", "method", "other", "abstain", "other", "method", "method", "abstain", "method", "objective", "objective", "method", "method", "method", "result", "method", "other", "abstain", "abstain", "abstain" ]
[ "Partha Talukdar", "Indian Institute of Science, Bangalore", "Abstract", "The recent growth in the popularity and success of deep learning models on NLP classification tasks has accompanied the need for generating some form of natural language explanation of the predicted labels.", "Such generated natural language (NL) explanations are expected to be faithful , i.e., they should correlate well with the model's internal decision making.", "In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems which produce labels with high accuracy, while also generating faithful explanations of its decisions?", "We propose Natural-language Inference over Label-specific Explanations (NILE), a novel NLI method which utilizes auto-generated label-specific NL explanations to produce labels along with its faithful explanation.", "We demonstrate NILE's effectiveness over previously reported methods through automated and human evaluation of the produced labels and explanations.", "Our evaluation of NILE also supports the claim that accurate systems capable of providing testable explanations of their decisions can be designed.", "We discuss the faithfulness of NILE's explanations in terms of sensitivity of the decisions to the corresponding explanations.", "We argue that explicit evaluation of faithfulness, in addition to label and explanation accuracy, is an important step in evaluating model's explanations.", "Further, we demonstrate that task-specific probes are necessary to establish such sensitivity.", "Deep learning methods have been employed to improve performance on several benchmark classification tasks in NLP (Wang et al., 2018, 2019).", "Typically, these models aim at improving label accuracy, while it is often desirable to also produce explanations for these decisions (Lipton, 2016; Chakraborty et al., 2017).", "In this work, we focus on producing natural language explanations for Natural Language Inference (NLI), without sacrificing much on label accuracy.", "There has been growing interest in producing natural language explanations for deep learning systems (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017), including NLI (Camburu et al., 2018).", "In general, the explanations from these methods can typically be categorized as post-hoc explanations (Lipton, 2016).", "Camburu et al. 
(2018) propose an NLI system which first produces an explanation and then processes the explanation to produce the final label.", "We argue that these explanations also resemble post-hoc explanations (Section 4.2).", "Further, existing methods do not provide a natural way to test the faithfulness of the generated explanations, i.e., how well the provided explanations correlate with the model's decision making.", "We therefore propose Natural-language Inference over Label-specific Explanations (NILE) (Footnote 1: NILE source code is available at https://github.com/SawanKumar28/nile), which we train and evaluate on English-language examples.", "Through NILE, we aim to answer the following question: can we build NLI systems which produce faithful natural language explanations of predicted labels, while maintaining high accuracy?", "Briefly, in NILE, we first generate natural language explanations for each possible decision, and subsequently process these explanations to produce the final decision.", "We argue that such a system provides a natural way of explaining its decisions.", "The key advantage is the testability of these explanations, in themselves as well as in terms of the sensitivity of the system's predictions.", "Figure 1: Overview of NILE (Step I: generate label-specific candidate explanations; Step II: process explanations to infer the task label). A premise and hypothesis pair is input to label-specific candidate explanation generators G_entail, G_contradict, and G_neutral, which generate natural language explanations supporting the corresponding label.", "The generated explanations are then fed to the Explanation Processor S, which generates label scores using the evidence present in these explanations (see Figure 3 for the architectures used in this work).", "In addition to the explanations, NILE also utilizes the premise and hypothesis pair (see Section 4.4.2 for a discussion of the challenges in building such a system).", "Please see Section 4 for details.", "We choose NLI due to its importance as an NLP task, and due to the availability of e-SNLI, a large dataset annotated both with entailment relation labels and with natural language human explanations of those labels (Camburu et al., 2018; Bowman et al., 2015).", "In summary, we make the following contributions in this work.", "1. We propose NILE, an NLI system which generates and processes label-specific explanations to infer the task label, naturally providing explanations for its decisions.", "2. We demonstrate the effectiveness of NILE compared to existing systems, in terms of label and explanation accuracy.", "3. Through NILE, we provide a framework for generating falsifiable explanations.", "We propose ways to evaluate and improve the faithfulness of the system's predictions to the generated explanations.", "We claim that task-specific probes of sensitivity are crucial for such evaluation.", "Explainability of a model's predictions has been studied from different perspectives, including feature-importance-based explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017; Chen et al., 2018) and post-hoc natural language explanations (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017).", "Hendricks et al. (2018) produce counterfactual natural language explanations for image classification given an image and a counter-class label.", "Camburu et al.
(2018) propose a model for NLI that first generates a free-form natural language explanation and then infers the label from the explanation.", "However, as noted by Oana-Maria et al. (2019a), the system tends to generate inconsistent explanations.", "We reason that requiring a model to generate an explanation of the correct output requires it to first infer the output, and the system thus resembles post-hoc explanation generation methods.", "Given the diversity of desiderata and techniques for interpretability, the need to understand interpretation methods and evaluate them has grown.", "Difficulty in building interpretation models and their lack of robustness are some of the major issues in existing deep neural network systems (Feng et al., 2018; Ghorbani et al., 2019; Oana-Maria et al., 2019b).", "Given these observations, measuring faithfulness, i.e., how well the provided explanations correlate with the model's decision making, is crucial.", "DeYoung et al. (2019) propose metrics to evaluate such faithfulness of rationales (supporting evidence) for NLP tasks.", "Through NILE, we propose a framework for generating faithful natural language explanations by requiring the model to condition on generated natural language explanations.", "The idea of using natural language strings as a latent space has been explored to capture compositional task structure (Andreas et al., 2018).", "Wu et al. (2019) explore improving visual question answering by learning to generate question-relevant captions.", "Rajani et al. (2019) aim to improve commonsense question answering by first generating commonsense explanations for multiple-choice questions, where the question and the choices are provided as the prompt.", "Similar to Camburu et al. (2018), they learn by trying to generate human-provided explanations and subsequently conditioning on the generated explanation.", "In NILE, we instead aim to produce an explanation for each possible label and subsequently condition on the generated label-specific explanations to produce the final decision.", "In this section, we discuss the datasets (Section 3.1) and pre-trained models (Section 3.2) used to build NILE.", "SNLI: The Stanford NLI dataset (Bowman et al., 2015) contains samples of premise and hypothesis pairs with human annotations, collected using Amazon Mechanical Turk.", "The premises were obtained from a pre-existing crowdsourced corpus of image captions.", "The hypotheses were obtained by presenting workers with a premise and asking for a hypothesis for each label (entailment, neutral, and contradiction), resulting in a balanced set of 570K pairs.", "e-SNLI: Camburu et al. (2018) extend the SNLI dataset with natural language explanations of the ground-truth labels.", "The explanations were crowdsourced using Amazon Mechanical Turk.", "Annotators were first asked to highlight words in the premise and hypothesis pairs which could explain the labels.", "Next, they were asked to write a natural language explanation using the highlighted words.", "Similar to Camburu et al.
(2018), for all our experiments we filter out non-informative examples where the explanations contain the entire text of the premise or hypothesis.", "In particular, we drop any training example where the uncased premise or hypothesis text appears entirely in the uncased explanation.", "This leads to a training set of 532K examples.", "Transformer architectures (Vaswani et al., 2017) pre-trained on large corpora with self-supervision have shown significant improvements on various NLP benchmarks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019).", "Improvements have been demonstrated for text classification as well as text generation tasks (Lewis et al., 2019; Raffel et al., 2019).", "In this work, we leverage the implementation of transformer architectures and pre-trained models provided by Wolf et al. (2019).", "GPT-2: We use the GPT-2 architecture (Radford et al., 2019), which is trained using a causal language modeling (CLM) loss and includes a left-to-right decoder suitable for text generation.", "In particular, we use the gpt2-medium model.", "This model has 24 layers, 16 attention heads, and a hidden size of 1024 (345M parameters).", "For text generation, the model can be fine-tuned using CLM on the desired text sequences.", "RoBERTa: For classification modules, we leverage RoBERTa (Liu et al., 2019), which is trained using a masked language modeling (MLM) loss.", "In particular, we use the roberta-base model.", "This model has 12 layers, 12 attention heads, and a hidden size of 768 (125M parameters).", "For downstream classification tasks, a classification layer is added over the hidden state of the first token in the last layer.",
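A minimal sketch of instantiating the two pre-trained models named above with the library of Wolf et al. (2019) follows; the special-token handling mirrors the [EXP] and [EOS] tokens described in Section 4.3, and the three-way classification head is an assumption about the setup rather than code from the paper.

```python
from transformers import (GPT2LMHeadModel, GPT2Tokenizer,
                          RobertaForSequenceClassification, RobertaTokenizer)

# Generator: gpt2-medium (24 layers, 16 heads, hidden size 1024), fine-tuned with a CLM loss.
gen_tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
gen_tokenizer.add_special_tokens({"additional_special_tokens": ["[EXP]", "[EOS]"]})
generator = GPT2LMHeadModel.from_pretrained("gpt2-medium")
generator.resize_token_embeddings(len(gen_tokenizer))  # account for the added tokens

# Classifier: roberta-base (12 layers, 12 heads, hidden size 768); the classification
# layer acts on the first token's last-layer hidden state.
clf_tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
classifier = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)
```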
"The overall architecture employed in NILE is shown in Figure 1.", "We introduce the notation used in this paper in Section 4.1.", "We then discuss the motivation for the major design choices in Section 4.2.", "NILE performs the following steps to produce labels and explanations:", "1. Candidate Explanation Generators: label-specific candidate explanation generators first generate explanations supporting the respective labels (Section 4.3).", "2. Explanation Processor: the Explanation Processor takes the explanations, and also the premise and hypothesis pair, as input to produce the task label (Section 4.4).", "We also build NILE-PH, where the Explanation Processor has access only to the generated explanations (Section 4.4.1).", "We note that NILE-PH more naturally fits the desiderata described in Section 1, while we design and evaluate NILE for the more general case where the Explanation Processor also accesses the premise and hypothesis pair.", "We denote each data point by (p, h), where p is the premise and h the hypothesis sentence.", "G denotes a model trained to generate natural language explanations.", "Specifically, G_x denotes a model which generates natural language explanations t_x of type x, where x ∈ {entail, contradict, neutral}.", "We denote the human-provided gold explanation for the correct prediction by t_g.", "S denotes a module which predicts label scores.", "The true label for an example is denoted by y, a model prediction by y′, and label scores by l_x.", "Figure 2: Existing alternative architectures. A. Post-hoc generation: given an input instance, first the label is predicted, and then an explanation is generated conditioned on the label and the input text. B. Explain-Then-Predict (Camburu et al., 2018): given the input instance, first the desired explanation is generated, and then the label is predicted using only the generated explanation.", "We argue that neither architecture provides a natural way to test the sensitivity of the model's predictions to the generated explanation.", "Please see Section 4.2 for details.", "Label-specific explanations: consider the two alternative existing architectures in Figure 2.", "In Figure 2A, a model S_pre is trained directly on the example sentences (p and h) to produce a label (y′), which, together with the example sentences, is used to produce an explanation t′_g using G_post.", "It can be argued that while the target explanations may regularize the system, there is no reason for t′_g to be aligned with the reason why the model chose a particular label.", "Figure 2B corresponds to a model which has also been trained on e-SNLI (Camburu et al., 2018).", "G_pre is first trained to produce natural language explanations t′_g, using human-provided explanations (t_g) as targets and only the example sentences as inputs.", "A model S_post then chooses the label corresponding to the generated explanation t′_g.", "While at first it appears that this system may provide faithful explanations of its decisions, i.e., that the generated explanations are the reason for the label predictions, we argue that this may not be so.", "In Figure 2B, G_pre is required to generate the explanation of the correct label for an example.", "It must first infer that label and then produce the corresponding explanation.", "Further analysis of the free-form human-provided explanations has revealed clear differences in the form of explanations, through alignment to label-specific templates (Camburu et al., 2018; Oana-Maria et al., 2019a).", "The Explanation Processor S_post then only needs to infer the form of t′_g.", "G_pre thus resembles post-hoc generation methods, with the label (as the form of t′_g) and the explanation t′_g being produced jointly.", "The claim is supported by inconsistencies found in the generated explanations (Oana-Maria et al., 2019a).", "Neither architecture allows a natural way to test the sensitivity of the model's predictions to its explanations.", "In NILE, we first allow explanations for each label, and then require the Explanation Processor to select the correct explanation.", "This allows us to naturally test whether the model's predictions are indeed due to the selected explanation.", "This can be done, for example, by perturbing the input to the Explanation Processor.", "A pipelined approach: we use a pipelined approach in NILE (Figure 1).", "The candidate explanation generators are first trained using human-provided explanations.", "The Explanation Processor takes as input the generated label-specific explanations.", "This prevents the system from producing degenerate explanations to aid task performance.", "It also allows perturbing the generated explanations to probe the system in a more natural way than through an unintelligible intermediate state of a learnt model.", "We believe that systems can be designed to work in this setting without compromising task performance.",
"We train label-specific explanation generators G_x, x ∈ {entail, contradict, neutral}, using human-provided explanations of examples with the corresponding label.", "For example, to train G_entail, we collect all triplets (p, h, t_g) annotated as entailment.", "Figure 3: Explanation Processor architectures: A. Independent, B. Aggregate, and C. Append.", "We create text sequences of the form Premise: p Hypothesis: h [EXP] t_g [EOS] to fine-tune a pre-trained language model, where [EXP] and [EOS] are special tokens added to the vocabulary.", "During fine-tuning, the language modeling loss is applied only over the explanation tokens.", "Next, we create prompts of the form Premise: p Hypothesis: h [EXP] and require each trained language model to independently complete the sequence.", "In this way we obtain label-specific explanations t_x = G_x(p, h), for x ∈ {entail, contradict, neutral}.",
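A minimal sketch of the sequence construction just described follows; string formatting details beyond the stated Premise/Hypothesis/[EXP]/[EOS] template are assumptions.

```python
def build_training_sequence(premise, hypothesis, gold_explanation):
    """Fine-tuning sequence for a label-specific generator (Section 4.3).

    The CLM loss would be applied only over the tokens after [EXP].
    """
    return (f"Premise: {premise} Hypothesis: {hypothesis} "
            f"[EXP] {gold_explanation} [EOS]")

def build_prompt(premise, hypothesis):
    """Prompt completed independently by each trained G_x at inference time."""
    return f"Premise: {premise} Hypothesis: {hypothesis} [EXP]"
```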
"The Explanation Processor in NILE takes as input the generated label-specific explanations, as well as the premise and hypothesis pair, to generate label scores l_x, x ∈ {entail, contradict, neutral}.", "During training, these scores are passed through a softmax layer and a cross-entropy loss is used to generate the training signal.", "During testing, the label with the maximum score is selected.", "We leverage a pre-trained roberta-base model for all our experiments and fine-tune it as specified in the following subsections.", "In each case, any intermediate scores are generated through transformations of the first-token ([CLS]) embedding from the last layer.", "We define F_model(inp) = tanh(W · CLSembed(inp)), where inp is a pair of sequences in NILE and a single sequence in NILE-PH, and W are the learnable parameters of the model.", "For simplicity, and to elucidate the desired behavior, we first describe how explanations are processed in NILE-PH (Section 4.4.1).", "We then discuss the construction of NILE, a potential issue, and a fix for the same (Section 4.4.2).", "In this section, we describe how explanations are processed in NILE-PH, which is generalized in NILE (Section 4.4.2).", "We experiment with three architectures, described below (also see Figure 3).", "A. Independent: each explanation is scored on its own, l_x = W_Ind F_Ind(t_x), (1) where x ∈ {entail, contradict, neutral}.", "We expect this score to represent the truthfulness of the input explanation.", "B. Aggregate: the Independent model would need all three explanations to be available to reliably produce label scores.", "We believe a system should be able to handle one or more missing or ambiguous explanations.", "For example, the entailment explanation t_entail: A dog is a cat would provide evidence for contradiction.", "To capture this notion, we require the Explanation Processor to produce two intermediate scores, V_1 and V_2, where we expect V_1 to collect evidence supporting an input claim and V_2 to collect evidence against an input claim: V_i(x) = W_Agg,i F_Agg(t_x), where i ∈ {1, 2}. (2)", "The intermediate scores are then aggregated into the final label scores: l_entail = Cmb(V_1(t_entail), V_2(t_contradict)), l_contradict = Cmb(V_1(t_contradict), V_2(t_entail)), l_neutral = V_1(t_neutral), (3) where Cmb is the LogSumExp function.", "The reason for this choice of aggregation is that while evidence against entailment might point to contradiction and vice versa, evidence against neutral doesn't necessarily provide any information about the entailment or contradiction relations.", "C. Append: finally, to allow the model to reason arbitrarily among the three generated explanations, we create a single sequence, concat_ecn: entailment: t_entail contradiction: t_contradict neutral: t_neutral, and generate the scores as l_x = W_Apn,x F_Apn(concat_ecn), (4) where x ∈ {entail, contradict, neutral}.", "In NILE, to process premise p and hypothesis h, we first concatenate p and h into concat_ph: Premise: p Hypothesis: h.", "The label scores are then obtained as in Section 4.4.1, by modifying Equations 1, 2, and 4 as follows: replace F_z(x) by F_z(concat_ph, x), where z ∈ {Ind, Agg, Apn}.", "We note that appending the example sentences to the generated explanations (as in Append) would leave no control over whether the explanations are used for the final prediction.", "The case for Independent and Aggregate is not immediately clear.", "We now discuss a potential issue with these architectures when processing premise and hypothesis text, and suggest a fix for the same.", "The issue: we expect NILE to answer the question: is (concat_ph, t_x), where x ∈ {entail, contradict, neutral}, a valid instance-explanation pair?", "The Independent and Aggregate architectures for NILE have been designed such that the model can't ignore the label-specific explanations.", "For example, the Independent model will produce identical scores for each output label if it chooses to completely ignore the input explanations.", "However, the model is still free to learn a different kind of bias, which is an outcome of the fact that natural language explanations convey ideas through both content and form.", "If the form of explanations for different labels is discriminative, an unconstrained learning algorithm could learn to first infer the type of explanation and then use it to infer the task.", "For example, given the input (concat_ph, t_x), where x ∈ {entail, contradict, neutral}, if a model could learn whether t_x is an entailment explanation, it then only has to output whether concat_ph corresponds to an entailment relation.", "Essentially, high label accuracy can be achieved by first inferring what task to do using only the form of t_x.", "The fix: to prevent NILE from exploiting the form of an explanation as described above, we create additional training examples, where we require NILE to score valid instance-explanation pairs higher.", "In particular, we sample negative explanations for an instance, of the same form as the correct label.", "For example, an instance labeled as entailment would have an additional training signal: score (concat_ph, t_entail) higher than (concat_ph, t′_entail) and (concat_ph, t′′_entail), where t′_entail and t′′_entail are randomly sampled entailment-form explanations.", "We note that the fix leaves room for other kinds of biases to be learnt.", "However, a key advantage of NILE is that it is easy to design probes to test for such biases and subsequently fix them (see Section 5.3).",
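The negative-sampling fix lends itself to a short sketch; the data structures below (instance and pool dictionaries) are hypothetical simplifications of the training pipeline, not the paper's code.

```python
import random

def add_negative_samples(instance, explanation_pool, num_negatives=2):
    """Augment one training instance with same-form negative explanations.

    instance: dict with "concat_ph", "label", and "explanations" (one per label);
    explanation_pool: dict mapping each label to explanations of that form
    collected from other instances. The correct pair should be scored higher
    than each (concat_ph, negative) pair during training.
    """
    label = instance["label"]  # e.g., "entail"
    positives = [(instance["concat_ph"], instance["explanations"][label], 1)]
    negatives = [(instance["concat_ph"], neg, 0)
                 for neg in random.sample(explanation_pool[label], num_negatives)]
    return positives + negatives
```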
"We now describe baselines which use the same underlying blocks as NILE for generating explanations and classification.", "NILE:post-hoc: to understand the drop in performance that could be associated with constraining models as we have done, we train a model with full access to the input examples (see Figure 2A), producing label scores l_x, where x ∈ {entail, contradict, neutral}.", "Further, we provide a strong baseline for post-hoc generators using this model: using the model's predictions, we simply pick the corresponding label-specific generated explanation.", "We note that the model's predictions have no sensitivity to the generated explanations in NILE:post-hoc.", "ExplainThenPredictAttention (ETPA): following Camburu et al. (2018) (see Figure 2B), we train a pipelined system where we first learn to generate the gold explanation, t′_g = G_pre(concat_ph), followed by a classification of t′_g to predict the label, l_x = W_x F_post(t′_g), where x ∈ {entail, contradict, neutral}.", "In this section, we aim to answer the following questions:", "Q1: How does NILE compare with the baselines and other existing approaches in terms of final task performance and explanation accuracy on in-domain evaluation sets (train and test on SNLI)? (Section 5.1)", "Q2: How well does NILE transfer to out-of-domain examples (train on SNLI, test on MNLI)? (Section 5.2)", "Q3: How faithful are the model's predictions to the generated explanations? (Section 5.3)", "We provide training details in Appendix A, and examples of generated label-specific explanations in Appendix B.", "5.1 In-domain Results: We report the label accuracies of the baselines and proposed architectures on the SNLI Dev and Test sets in Table 1.", "We also report explanation accuracies, obtained through human evaluation of the generated explanations on the first 100 test examples.", "Binary correctness judgments were sought from five annotators (non-experts in NLP) for the generated explanations.", "For both label and explanation accuracies, we report using a model selected by SNLI Dev set label accuracy across 5 runs with 5 different seeds of random initialization.", "Please see the Appendix for more details on the 5 runs.", "First, through NILE:post-hoc, we provide a strong baseline for obtaining high label and explanation accuracy.", "Table 2: Label accuracy on the MNLI Dev and Dev-mm sets, and explanation evaluation on the first 100 MNLI Dev samples (A: correct labels; B: correct explanations, averaged over annotators; C: correct explanations where annotators are in agreement).", "Our aim in this work is to learn explanations that serve as the reason for the model's predictions.", "Nevertheless, we are able to match or outperform this baseline in terms of explanation accuracy, while incurring only a small drop in label accuracy.", "All variants of NILE, including NILE-PH and NILE-NS (which is not trained using negative samples of explanations as described in Section 4.4.2), produce more correct explanations than the ETPA baseline.", "NILE-PH:Append, NILE, and NILE-NS provide gains in label accuracy over the ETPA baseline.", "Additionally, NILE and its variants provide natural ways to probe the sensitivity of the system's predictions to the explanations, as demonstrated in the subsequent sections.", "Finally, the explanations generated by all NILE variants generalize significantly better on out-of-distribution examples when compared to the ETPA baseline (see Section 5.2).", "To test the generalization capability of NILE, we perform training and model selection on the SNLI dataset (Section 5.1), and evaluate on the out-of-domain MNLI (Williams et al., 2018) development sets.", "Transfer without fine-tuning to out-of-domain NLI has been a challenging task, with transfer learning for generating explanations in MNLI being particularly challenging (Camburu et al., 2018).", "We report label accuracies on the Dev (matched) and Dev-mm (mismatched) sets, and explanation evaluation on the first 100 Dev samples, in Table 2.",
"Explanation evaluation was done by three annotators (who also annotated the SNLI explanations).", "While the label accuracies follow a similar pattern as on the in-domain SNLI Test set, all variants of NILE provide gains in the quality of the generated explanations.", "All variants of NILE produce more correct explanations (B, C) as well as a higher percentage of correct generated explanations among correct predictions (B/A, C/A).", "This demonstrates that NILE, through intermediate label-specific natural language explanations, provides a more general way to build systems which can produce natural language explanations for their decisions.", "NILE and its variants allow a natural way to probe the sensitivity of their predictions to the generated explanations, namely by perturbing the explanations themselves.", "Table 3: Estimating the sensitivity of the system's predictions to input explanations through erasure (label accuracy with instance and explanations / instance only / explanations only). NILE-NS: Independent 91.6 / 33.8 / 69.4, Aggregate 91.6 / 33.8 / 74.5, Append 91.7 / 91.2 / 72.9. NILE: Independent 91.3 / 33.8 / 46.1, Aggregate 91.2 / 33.8 / 40.7.", "In this way, NILE resembles explanation systems which provide input text fragments as reasons for their decisions.", "DeYoung et al. (2019) propose metrics to evaluate the faithfulness of such explanations.", "Following their work, we first attempt to measure the explanations generated by the methods proposed in this paper for comprehensiveness (what happens when we remove the explanation from the input) and sufficiency (what happens if we keep only the explanations).", "In Table 3, we show these measures for NILE and NILE-NS.", "The results seem to indicate that the explanations of both NILE and NILE-NS are comprehensive, with higher sufficiency in the case of NILE-NS.", "We first note that the comprehensiveness of these systems is ensured by design, as the input is indistinguishable without an explanation.", "Second, we argue that sufficiency may indicate correlations which don't necessarily exist in the system otherwise.", "We study the sensitivity of the explanations through a probe motivated by an understanding of the task and the training examples (see Section 4.4.2).", "We perturb the instance-explanation inputs such that, for each test instance, the explanation is replaced by a randomly selected explanation of the same label.", "The results (Table 4) indicate that NILE-NS is more robust to random perturbations of input explanations, and presumably uses the form of the explanation to infer the task (see Section 4.4.2 for a discussion).", "It is true that NILE behaves as expected because we specifically designed NILE to prevent the associated bias, and that it could potentially learn other such biases.", "However, a key advantage of the proposed architecture is the ability to identify and fix such biases.", "We leave finding and fixing more such biases as interesting and challenging future work.", "In this paper we propose NILE, a system for natural language inference (NLI) capable of generating labels along with natural language explanations for the predicted labels.", "Through extensive experiments, we demonstrate the effectiveness of this approach in terms of both label and explanation accuracy.", "NILE supports the hypothesis that accurate systems can produce testable natural language explanations of their decisions.",
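The same-label explanation-swap probe described above can be sketched as follows, with a hypothetical `model_predict` interface standing in for the trained Explanation Processor.

```python
import random

def label_flip_rate(model_predict, test_instances, explanations_by_label):
    """Fraction of predictions that change when each instance's explanation is
    swapped for a random explanation of the same label (the Section 5.3 probe).

    model_predict(concat_ph, explanation) -> predicted label (hypothetical API).
    A low flip rate suggests the model relies on the form, not the content,
    of the explanation.
    """
    flips = 0
    for inst in test_instances:
        original = model_predict(inst["concat_ph"], inst["explanation"])
        swapped_expl = random.choice(explanations_by_label[inst["label"]])
        perturbed = model_predict(inst["concat_ph"], swapped_expl)
        flips += (original != perturbed)
    return flips / len(test_instances)
```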
"In the paper, we also argue for the importance of explicit evaluation of the faithfulness of the generated explanations, i.e., how correlated the explanations are with the model's decision making.", "We evaluate the faithfulness of NILE's explanations using sensitivity analysis.", "Finally, we demonstrate that task-specific probes are necessary to measure such sensitivity.", "We thank the anonymous reviewers for their constructive comments.", "This work is supported by the Ministry of Human Resource Development (Government of India).", "We would also like to thank HuggingFace for providing a state-of-the-art Transformers library for natural language understanding.", "Finally, we want to thank the annotators who annotated the generated explanations for correctness." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "other", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "method", "method", "method", "other", "other", "method", "method", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "method", "method", "objective", "other", "other", "other", "other" ]
[ "We reframe suicide risk assessment from social media as a ranking problem whose goal is maximizing detection of severely at-risk individuals given the time available.", "Building on measures developed for resource-bounded document retrieval, we introduce a well founded evaluation paradigm, and demonstrate using an expert-annotated test collection that meaningful improvements over plausible cascade model baselines can be achieved using an approach that jointly ranks individuals and their social media posts.", "Mental illness is one of the most significant problems in healthcare: in economic terms alone, by 2030 mental illness worldwide is projected to cost more than cardiovascular disease, and more than cancer, chronic respiratory diseases, and diabetes combined (Bloom et al., 2012).", "Suicide takes a terrible toll: in 2016 it became the second leading cause of death in the U.S. among those aged 10-34, fourth among those aged 35-54 (Hedegaard et al., 2018).", "Prevalence statistics suggest that roughly 141 of the 3,283 people who attended ACL 2019 have since had serious thoughts of suicide, 42 have made a plan, and 19 have actually made attempts.", "1 The good news is that NLP and machine learning are showing strong promise for impact in mental health, just as they are having large impacts everywhere else.", "Traditional methods for predicting suicidal thoughts and behaviors have failed to make progress for fifty years (Franklin et al., 2017), but with the advent of machine learning approaches (Linthicum et al., 2019), including text analysis methods for psychology (Chung and Pennebaker, 2007) and the rise of research on mental 1 Approximately: ACL is international, but these figures use prevalence statistics for U.S. adults (SAMHSA, 2019).", "health using social media (Choudhury, 2013), algorithmic classification has reached the point where it can now dramatically outstrip performance of prior, more traditional prediction methods (Linthicum et al., 2019; Coppersmith et al., 2018).", "Further progress is on the way as the community shows increasing awareness and enthusiasm in this problem space (e.g., Milne et al., 2016; Losada et al., 2020; Zirikly et al., 2019).", "The bad news is that moving these methods from the lab into practice will create a major new challenge: identifying larger numbers of people who may require clinical assessment and intervention will increase stress on a severely resource-limited mental health ecosystem that cannot easily scale up.", "2 This motivates a reformulation of the technological problem from classification to prioritization of individuals who might be at risk, for clinicians or other suitably trained staff as downstream users.", "Perhaps the most basic way to do prioritization is with a single priority queue that the user scans from top to bottom.", "This ranked retrieval paradigm is common for Information Retrieval (IR) tasks such as document retrieval.", "The same approach has been applied to ranking people based on their expertise (Balog et al., 2012), or more generally to ranking entities based on their characteristics (Balog, 2018).", "Rather than evaluating categorical accuracy, ranked retrieval systems are typically evaluated by some measure of search quality that rewards placing desired items closer to the top (Voorhees, 2001).", "Most such measures use only item position, but we find it important to also model the time it takes to recognize desired items, since in our setting the time of qualified users is the most limited resource.", "In 
this paper, we do so by building on Time-Biased Gain (TBG; Smucker and Clarke, 2012), an IR evaluation measure that models the expected number of relevant items a user can find in a ranked list given a time budget.", "Footnote 2: 120M Americans live in areas with mental healthcare provider shortages (Bureau of Health Workforce, 2020); that number reflects an increase of about 7 million people between September 30, 2019 and March 31, 2020.", "Figure 1: Illustration of an assessment framework in which individuals are ranked by predicted suicide risk based on social media posts, posts are ranked by expected usefulness for downstream review by a clinician, and word-attention highlighting helps foreground important information for risk assessment; real Reddit posts, obfuscated and altered for privacy.", "We observe that in many risk assessment settings (e.g., Yates et al., 2017; Coppersmith et al., 2018; Zirikly et al., 2019), the available information comprises a (possibly large and/or longitudinal) set of documents, e.g., social media posts, associated with each individual, of which possibly only a small number contain a relevant signal.", "This gives rise to a formulation of our scenario as a nested, or hierarchical, ranking problem, in which individuals are ordered by priority, but each individual's documents must also be ranked (Figure 1).", "Accordingly, we introduce hierarchical Time-Biased Gain (hTBG), a variant of TBG in which individuals are the top-level ranked items, and expected reading time is modeled for the ranked list of documents that provides evidence for each individual's assessment.", "In addition, we introduce a prioritization model that uses a three-level hierarchical attention network to jointly optimize the nested ranking task; this model also addresses the fact that in our scenario, as in many other healthcare-related scenarios, relevance obtains at the level of individuals rather than individual documents (cf. Shing et al., 2019).", "Using a test collection of Reddit-posting individuals who have been assessed for suicide risk by clinicians based on their posts (Shing et al., 2018), we use hTBG to model prioritization of individuals and demonstrate that our joint model substantially outperforms cascade-model baselines in which the nested rankings are produced independently.", "NLP for Risk Assessment.", "Calvo et al. (2017) survey NLP for mental health applications using non-clinical texts such as social media.", "Several recent studies and shared tasks focus on risk assessment of individuals in social media using a multi-level scale (Milne et al., 2016; Yates et al., 2017; Losada et al., 2020).", "Shing et al. (2018) introduce the dataset we use, and Zirikly et al.
(2019) describe a shared task in which 11 teams tackled the individual-level classification that feeds into our prioritization model (their Task B).", "Our work contributes by modeling the downstream users' prioritization task, taking a key step closer to the real-world problem.", "Hierarchical Attention.", "Attention, especially in the context of NLP, has two main advantages: it allows the network to attend to likely-relevant parts of the input (either words or sentences), often leading to improved performance, and it provides insight into which parts of the input are being used to make the prediction.", "These characteristics have made attention mechanisms a popular choice for deep learning that requires human investigation, such as automatic clinical coding (Baumel et al., 2018; Mullenbach et al., 2018; Shing et al., 2019).", "Although concerns exist about using attention for interpretation (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Wallace, 2019), Shing et al. (2019) show that hierarchical document attention can align well with human-provided ground truth.", "Our prediction model, 3HAN, is a variant of Hierarchical Attention Networks (HAN; Yang et al., 2016).", "Yang et al. use a two-level attention mechanism that learns to pay attention to specific words in a sentence to form a sentence representation and, at the next higher level, to weight specific sentences in a document in forming a document representation.", "Adapting this approach to suicide assessment of at-risk individuals, our model moves a level up the representational hierarchy, learning also to weight documents to form representations of individuals.", "This allows us to jointly model ranking individuals and ranking their documents as potentially relevant evidence, without document-level annotations.", "Evaluating rankings.", "There is an extensive IR literature on quality measures for ranked lists (Järvelin and Kekäläinen, 2002; Chapelle et al., 2009; Smucker and Clarke, 2012; Sakai, 2019), which generally reward placing highly relevant items near the top of the list and are often relatively insensitive to mistakes made near the bottom.", "In the setting of suicidality risk assessment, we care about how much gain (the number of at-risk individuals found) can be achieved for a given time budget.", "Time-Biased Gain (TBG; Smucker and Clarke, 2012) measures this by assuming a determined user working down a ranked list, with the discount being a function of the time it takes to reach each position.", "However, neither TBG nor other ranking measures, to the best of our knowledge, can measure the hierarchical ranking found in the scenario that motivates our work: ranking items (i.e., individuals) when each item itself contains a ranked list of potential evidence (their posts).",
"In this paper, we design a new metric, hierarchical Time-Biased Gain (hTBG), to measure such hierarchical rankings by incorporating the cascading user model of Expected Reciprocal Rank (ERR; Chapelle et al., 2009) into TBG.", "Section 1 argued for formulating risk assessment as a prioritization process in which the assessor has a limited time budget.", "This leads to four desired properties in an evaluation measure (Footnote 4: throughout, assessor or user signify a clinician or other human assessor, and individual is someone being assessed):", "Risk-based: individuals with high risk should be ranked above others.", "Head-weighted: ranking quality near the top of the list, where assessors are more likely to assess, should matter more than near the bottom.", "Speed-biased: for equally at-risk individuals, the measure should reward ranking the one who can be assessed more quickly closer to the top, so that more people at risk can be identified within a given time budget.", "Interpretable: the evaluation score assigned to a system should be meaningful to assessors.", "Figure 2: User model for Time-Biased Gain (TBG).", "Among the many rank-based measures that satisfy the risk-based and head-weighted criteria, TBG directly accounts for assessment time in a way that also satisfies the speed-biased criterion (see Theorem 3.1).", "Furthermore, the numeric value of TBG is a lower bound on the expected number of relevant items (in our case, high-risk individuals) found in a given time budget (Smucker and Clarke, 2012), making it interpretable.", "After introducing TBG, in Section 3.2 we develop hierarchical Time-Biased Gain (hTBG), an extension of TBG, to account for specific properties of risk assessment using social media posts (Footnote 5: TBG and hTBG code: https://github.com/sidenver/hTBG).", "3.1 Time-Biased Gain.", "TBG was originally developed in IR for the case of a user seeking to find a relevant document, but here we frame it in the context of risk assessment (Figure 2).", "TBG assumes a determined user (say, a clinician) examining a ranked list of individuals in the order presented by the system.", "For each individual, the clinician first examines a summary and then decides whether to check relevance via more detailed examination, or to move on.", "Checking requires more time, to make an assessment of whether the individual is indeed at risk.", "TBG is a weighted sum of gain g_k and a discount D(·) that is a function of time: TBG = Σ_{k=1} g_k · D(T(k)), (1)", "Table 1: Parameters used for TBG and hierarchical TBG.", "Here T(k) is the expected amount of time it takes a user to reach position k: T(k) = Σ_{i=1}^{k−1} t(i), (2) with t(i) = T_s + P_check(rel_i) · E_i, (3) where t(i) is the expected time spent at position i.", "Breaking down t(i): T_s is the time it takes to read a summary and decide whether to check the individual; if yes (with probability P_check(rel_i)), E_i is the expected time for detailed assessment, calculated as a function of the individual's total word count W_i: E_i = T_α · W_i + T_β, (4) where the constants T_α and T_β scale words to time.", "The discount function D(t) decays exponentially with half-life h: D(t) = 2^(−t/h), (5) where h is the time at which half of the clinicians will stop, on average.", "The expected stop time (or mean life) is h / ln(2).", "Finally, the gain g_k is g_k = P_check(rel_k) · P_flag(rel_k) · 1[rel_k = 1], (6) where P_check(rel_k) is the probability of checking the individual after reading the summary at position k, and P_flag(rel_k) is the probability of then flagging that individual as high risk.",
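To make Equations 1-6 concrete, here is a minimal sketch of computing TBG over a ranked list. T_α and T_β are written as `t_alpha` and `t_beta`, and the default parameter values are the Smucker and Clarke (2012) user-study estimates, included only for illustration.

```python
def tbg(ranked, p_check, p_flag, t_s=4.4, t_alpha=0.018, t_beta=7.8, h=224.0):
    """Time-Biased Gain (Eqs. 1-6) for a ranked list of individuals.

    ranked: list of (rel, word_count) with rel in {0, 1};
    p_check, p_flag: probabilities indexed by rel (Table 1-style parameters);
    t_s: summary reading time; t_alpha, t_beta: word-to-time scaling (Eq. 4);
    h: half-life of the exponential discount (Eq. 5).
    """
    total, elapsed = 0.0, 0.0          # elapsed accumulates T(k) via Eqs. 2-3
    for rel, words in ranked:
        gain = p_check[rel] * p_flag[rel] * (rel == 1)   # Eq. 6
        total += gain * 2.0 ** (-elapsed / h)            # Eqs. 1 and 5
        expected_assess = t_alpha * words + t_beta       # Eq. 4
        elapsed += t_s + p_check[rel] * expected_assess  # Eq. 3
    return total
```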
"Gain thus accrues only if a clinician actually finds a high-risk individual.", "The decay function in Equation 5 decreases monotonically with increasing time (and thus rank), so TBG satisfies the head-weighted criterion.", "Table 1 shows the parameters used in Smucker and Clarke (2012), which were estimated from user studies using data from the TREC 2005 Robust track.", "Of particular interest for time-limited assessment, we can prove that TBG is speed-biased.", "Theorem 3.1 (TBG satisfies the speed-biased criterion): swapping an at-risk individual with a longer assessment time ranked at k with an equally at-risk individual with a shorter assessment time ranked at k + r, where r > 0, always increases TBG.", "Figure 3: hTBG's model for calculating the expected assessment time for an individual, replacing the shaded box in Figure 2.", "TBG assumes that detailed assessment involves looking at all available evidence (Equation 4).", "However, in our setting, an individual may have a large or even overwhelming number of social media posts.", "One severe-risk individual in the SuicideWatch dataset, for example, has 1,326 posts on Reddit, the vast majority of which would provide the assessor with no useful information.", "Therefore we need to prioritize the documents to be read, and a way of estimating when the user will have read enough to make a decision.", "In general, clinicians engage in a sensemaking process as they examine evidence, and modeling the full complexity of that process would be difficult.", "We therefore make two simplifying assumptions: (1) that there is a high-signal document that suffices, once read, to support a positive relevance judgment, and (2) that the clinician will not read more than some maximum number of documents.", "These assumptions align well with those of Expected Reciprocal Rank (ERR), whose cascading user model assumes that as the user works down a ranked list (in our case, the ranked documents posted by a single individual), they are more likely to stop after viewing a highly relevant document than after viewing an irrelevant one, as their information need is more likely to have been satisfied (Chapelle et al., 2009).", "This results in a cascade model of user behavior: ERR = Σ_{k=1} (1/k) · P(stop at k), in which P(stop at k) = R_k · Π_{i=1}^{k−1} (1 − R_i), where R_k = f(rel_k) is the probability of stopping at position k as a function of relevance.", "This suggests replacing Equation 4 with the following expected-time estimate for the detailed assessment of an individual: E_i = T_α · Σ_{l=1}^{L} [ W_{i,l} · Π_{m=1}^{l−1} (1 − R_{i,m}) ] + T_β, (7) where R_{i,l} is the probability of stopping at the l-th document for individual i, and W_{i,l} > 0 is the cost (in our case, word count) of reading the l-th document for individual i.", "Note that in the special case where R_{i,l} = 0 for all i and l, hTBG reduces to TBG.", "See Figure 3 for an illustration of E_i in hTBG.", "For the derivation of Equation 7 from ERR's cascading user model, see Appendix B.3.", "Calculation of the optimal value of a measure is often important for normalization, though not always easy; in some cases it can be NP-hard (Agrawal et al., 2009, ERR-IA).", "Another popular approach is to normalize by calculating the metric on an ideal collection.",
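Equation 7 similarly admits a short sketch of the expected assessment time under the cascade user model, reusing the illustrative word-to-time constants from the TBG sketch above; the 50-document cap reflects the maximum-documents assumption used later in Section 5.

```python
def expected_assessment_time(stop_probs, word_counts,
                             t_alpha=0.018, t_beta=7.8, max_docs=50):
    """E_i from Eq. 7: expected reading cost under ERR's cascade user model.

    stop_probs[l] is R_{i,l}, the probability the assessor stops after
    document l; word_counts[l] is W_{i,l}, in ranked order. p_continue is
    the probability of still reading when document l is reached.
    """
    expected_words, p_continue = 0.0, 1.0
    for r, w in list(zip(stop_probs, word_counts))[:max_docs]:
        expected_words += w * p_continue   # W_{i,l} * prod_{m<l} (1 - R_{i,m})
        p_continue *= (1.0 - r)
    return t_alpha * expected_words + t_beta
```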
"For example, Smucker and Clarke (2012) calculate the normalization factor of TBG by assuming a collection with an infinite number of relevant documents, each of which lacks any content.", "In our case, however, we are actually interested in the optimal value achievable for a given test collection: the optimal values of TBG and hTBG are properties of the bottleneck that occurs due to the user's limited time budget.", "We find that: Theorem 3.2 (Optimal TBG): the optimal value of TBG under binary relevance is obtained if and only if (1) all at-risk individuals are ranked above not-at-risk individuals, and (2) within the at-risk individuals, they are sorted by time spent, in ascending order.", "Proof: see Appendix B.1.", "Theorem 3.2 makes sense, as any time spent assessing a not-at-risk individual is time not spent assessing other potentially at-risk individuals.", "Preferring individuals with shorter assessment times also increases the number of individuals who can be assessed within the given time budget.", "Minimum Individual Assessment Time.", "To calculate optimal hTBG, we need to minimize individual assessment time.", "A natural question to ask, then, is whether a result similar to Theorem 3.2 holds for the individual assessment time of hTBG in Equation 7.", "By swapping paired documents, we can use proof by contradiction to show that: Theorem 3.3: minimum individual assessment time is obtained if the documents are sorted in descending order by R_{i,l} / W_{i,l}.", "Proof: see Appendix B.2.", "Theorem 3.3 shows a surprisingly intuitive trade-off between how relevant a document might be and how much time (proportional to word count) the expert needs to read it: highly relevant documents with short reading times are preferred.", "Observe that Theorem 3.1 (the speed-biased criterion) and Theorem 3.2 both apply to hTBG, as the two theorems concern only the ranking of individuals, not documents, and hTBG extends TBG to also measure the document ranking.", "Using Theorems 3.3 and 3.2, calculating the optimal TBG and hTBG values is simply a matter of sorting.", "For TBG, the time complexity is O(n log(n)), where n ≤ K is the number of at-risk individuals in the test collection.", "For hTBG, the worst-case time complexity is O(n log(n) + n m log(m)), where m ≤ L is the maximum number of relevant documents per individual.",
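Theorems 3.2 and 3.3 reduce the optimal scores to sorting; a minimal sketch under the same assumptions as the previous snippets follows (it reuses the `expected_assessment_time` function defined above, and the dictionary layout is hypothetical).

```python
def optimal_ordering(individuals):
    """Upper-bound ranking per Theorems 3.2 and 3.3.

    individuals: list of dicts with "rel" (1 if at-risk) and "docs"
    (list of (stop_prob, word_count) pairs). Documents are sorted by
    R/W descending (Theorem 3.3); at-risk individuals come first,
    fastest-to-assess first (Theorem 3.2).
    """
    for ind in individuals:
        ind["docs"].sort(key=lambda d: d[0] / d[1], reverse=True)  # R_{i,l} / W_{i,l}
        stop_probs, words = zip(*ind["docs"]) if ind["docs"] else ((), ())
        ind["time"] = expected_assessment_time(list(stop_probs), list(words))
    return sorted(individuals, key=lambda ind: (-ind["rel"], ind["time"]))
```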
, "We began by motivating risk assessment via social media as a person-centered, time-limited prioritization problem, in which the technological goal is to support downstream clinicians or other assessors in identifying as many people at risk as possible.", "This led to the conclusion that systems should not only rank individuals but, for each individual, rank their posts, and we introduced an evaluation framework that involves an abstraction of the user's process of identifying people at risk given a nested ranking.", "Next, we need a system that can produce such nested rankings of individuals and their posts.", "Ideally such a system should be able to train on only individual-level, not document-level, labels, since suicide risk is a property of individuals, not documents, and document labels are more difficult to obtain.", "In addition, such a system should ideally produce additional information to help the downstream user: if not a justification of its output, then at least a highlighting of potentially useful information.", "To address this need, we introduce 3HAN, a hierarchical attention network (Yang et al., 2016) that extends up to the level of individuals, who are represented as sequences of documents.", "This architecture is similar to the network we proposed in Shing et al. (2019) for coding clinical encounters; it obtained good predictive performance, and we also showed that, despite concerns about the interpretation of network attention (Jain and Wallace, 2019), hierarchical document-level attention succeeded in identifying documents containing relevant evidence.", "The architecture here differs in that it builds representations hierarchically from the word level, as opposed to pre-extracted conceptual features, and takes document ordering into account using a bi-directional GRU (Bahdanau et al., 2015).", "Specifically, our model has five layers (Figure 4).", "The first is a word-embedding layer that turns a one-hot word vector into a dense vector.", "The second to fourth layers are three Seq2Vec layers with attention that learn to aggregate, respectively, a sequence of word vectors into a sentence vector, a sequence of sentence vectors into a document vector, and a sequence of document vectors into an individual vector (hence 3HAN).", "The final layer is a fully connected layer followed by a softmax.", "We detail our Seq2Vec layer in the context of aggregating a sequence of document vectors into an individual's vector, though the three Seq2Vec layers are the same.", "See Figure 4b for an illustration.", "Document vectors $\{d_{i,j}\}_{j=1}^{m}$ are first passed through a bi-directional GRU layer.", "The outputs, after passing through a fully-connected layer and a non-linear layer, are then compared to a learnable attention vector, $v_{attention}$.", "Specifically, $g_{i,j} = \text{Bi-GRU}(d_{i,j})$ (8), $r_{i,j} = \tanh(W g_{i,j} + b)$ (9), $a_{i,j} = \frac{\exp(r_{i,j}^{\top} v_{attention})}{\sum_{j'=1}^{m} \exp(r_{i,j'}^{\top} v_{attention})}$ (10), and $u_i = \sum_{j=1}^{m} a_{i,j} g_{i,j}$ (11), where $a_{i,j}$ is the normalized document attention score for the $j$-th vector, and $u_i$ is the final aggregated individual vector.", "As shown in Equation 10, the transformed vector $r_{i,j}$ is compared with the learnable attention vector $v_{attention}$ using a dot product, and further normalized for the weighted averaging step in Equation 11.", "Once we have the individual vector $u_i$, we can predict the risk label of the individual by passing it through a fully-connected layer and a softmax.", "Finally, we compare with the ground-truth label $y_i$ of individual $i$ using a negative log-likelihood loss."
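A minimal PyTorch sketch of the Seq2Vec attention layer in Equations 8 through 11; the layer sizes and the random example input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Vec(nn.Module):
    """Aggregates a sequence of vectors into one vector (Equations 8-11)."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, bidirectional=True,
                          batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.v_attention = nn.Parameter(torch.randn(2 * hidden_dim))

    def forward(self, x):                      # x: (batch, seq_len, input_dim)
        g, _ = self.gru(x)                     # Eq. 8: (batch, seq_len, 2*hidden)
        r = torch.tanh(self.proj(g))           # Eq. 9
        scores = r @ self.v_attention          # dot product with attention vector
        a = torch.softmax(scores, dim=1)       # Eq. 10: normalized weights
        u = (a.unsqueeze(-1) * g).sum(dim=1)   # Eq. 11: weighted average
        return u, a                            # aggregated vector and weights

# e.g., aggregating 7 document vectors of size 200 into one individual vector
doc_vectors = torch.randn(1, 7, 200)
u, attn = Seq2Vec(200, 100)(doc_vectors)
```

Returning the attention weights alongside the aggregated vector is what later allows the same layer to double as a document ranker.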
, "We first introduce the test collection and then show how we can evaluate 3HAN and the cascade model baselines on the test collection using hTBG.", "To demonstrate the effectiveness of the 3HAN model, which jointly learns to rank individuals and, within each individual, their posts as evidence, we compare it with different combinations of individual-level rankers and document-level rankers.", "Training details for all the models can be found in Appendix C.", "5.1 Test Collection In our experiments, we use the University of Maryland Reddit Suicidality Dataset, v.2 (Shing et al., 2018; Zirikly et al., 2019).", "This English-language dataset, derived from the 2015 Full Reddit Submission Corpus (2006-2015), includes 11,129 potentially at-risk individuals who posted on r/SuicideWatch (a subreddit dense in self-reports about suicidality, henceforth SW), as well as 11,129 control individuals who never posted on any mental health-related subreddit.", "Entire posting histories (not just from SW, but from all Reddit forums) were collected.", "An individual's number of posts can range from 10 to 1,326.", "See Table 2 for a detailed breakdown of the number of posts per individual across datasets and risk categories.", "The full dataset has three subsets with disjoint individuals.", "The first, which we term the WEAKSUPERVISION dataset, includes 10,263 individuals who posted in SW and 10,263 control individuals who did not; they are respectively considered to be indirectly positively and negatively labeled, very noisily, since posting on SW does not necessarily imply suicidal ideation.", "The second set is the CROWDSOURCE dataset, including 621 individuals annotated by crowdsource workers with four risk levels: No Risk, Low Risk, Moderate Risk, and Severe Risk.", "[Figure 4: An illustration of the three-level Hierarchical Attention Network (3HAN) model.]", "[Table 2: Number of individuals and the number (range) of posts, by dataset and risk category.]", "The last is the EXPERT dataset, including 242 individuals with the same four-level annotation, by four suicide risk assessment experts; Shing et al. (2018) report reliable expert annotation, Krippendorff's $\alpha = .81$.", "The original EXPERT dataset had 245 individuals; we exclude three owing to errors in processing.", "Along with the level of risk for each individual, the expert annotators also designated the single post that most strongly supported each of their low, moderate, or severe risk labels.", "As TBG and hTBG are measures designed for binary relevance judgments, we map the Severe Risk category to at-risk, and everything else to not-at-risk.", "Since the label definitions distinguish severe from moderate by focusing on the risk of an attempt in the near future, this binary distinction is aligned with recent work in suicidology that focuses specifically on characterizing the acute mental state that is associated with near-term suicidal behavior (Schuck et al., 2019).", "For word counts, we directly use the token counts in documents.", "We use the parameters that Smucker and Clarke (2012) estimated for TBG in user studies (Table 1).", "As discussed in Section 3.2, we assume there exists a maximum number of documents the clinician can read for each individual.", "We set that number to 50 for the calculation of hTBG; if no relevant document exists in the top 50 documents, we consider that individual a miss and set the gain to zero.", "All parameters were frozen prior to testing; we plan to estimate hyperparameters in our own user studies in the future.", "To rank individuals using our classification models, we use a standard conversion method to convert a four-class probability into a single score: $\sum_{rel_i \in R} P(y_i = rel_i) \cdot score_{rel_i}$ (14), where $R$ is $\{No, Low, Moderate, Severe\}$ and $score_{rel_i}$ is the real number that the risk level of individual $i$ maps to.", "We use $\{No = 0, Low = 1, Moderate = 2, Severe = 4\}$ as our mapping: No Risk can plausibly be treated the same as a post with no annotation (e.g., a control individual), and exponential scaling also seems plausible, although it is just one of many possibilities, which we leave for future work."
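A small sketch of the score conversion in Equation 14, using the mapping just described; the dictionary-based interface is an illustrative assumption.

```python
RISK_SCORE = {"No": 0.0, "Low": 1.0, "Moderate": 2.0, "Severe": 4.0}

def individual_score(probs):
    """Collapse a four-class risk distribution into one ranking score (Eq. 14).

    probs: dict mapping each risk level to the predicted probability P(y_i = rel_i).
    """
    return sum(p * RISK_SCORE[level] for level, p in probs.items())

# 0.1*0 + 0.2*1 + 0.3*2 + 0.4*4 = 2.4
print(individual_score({"No": 0.1, "Low": 0.2, "Moderate": 0.3, "Severe": 0.4}))
```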
, "The hTBG metric also requires a stopping probability for each document, $R_{i,l}$.", "Assuming that the more severe the risk associated with a document is, the more likely the assessor is to stop and flag the individual, we can estimate the expected stopping probability on the EXPERT dataset, where we have document-level annotations, as: $R_{i,l} = 1 - \prod_{c=1}^{C} \left( 1 - \frac{score_{rel_{i,l,c}}}{score_{max}} \right)$ (15), where $C$ annotators annotated the post as most strongly supporting their judgment.", "$score_{rel_{i,l,c}}$ is a mapping from the document-level risk assigned by annotator $c$ to a real number, with the same mapping used in Equation 14.", "$score_{max} = 4$ is the maximum in that mapping.", "To reflect different time budgets, we report results with the half-life parameter ranging from 1 to 6 hours, which corresponds to expected reading time budgets from 1.4 to 8.7 hours."
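Equation 15 can be read as one minus the probability that the assessor reads past the document despite every annotator's flag. A sketch, with the function name as an illustrative assumption:

```python
def stopping_probability(annotator_scores, score_max=4.0):
    """Expected stopping probability R_{i,l} for one document (Equation 15).

    annotator_scores: document-level risk scores score_{rel_{i,l,c}}, one per
    annotator c who marked this post as most strongly supporting their judgment.
    """
    p_continue = 1.0
    for s in annotator_scores:
        p_continue *= 1.0 - s / score_max
    return 1.0 - p_continue

# Two annotators marked the post: one Severe (4), one Moderate (2).
print(stopping_probability([4.0, 2.0]))  # 1 - (1 - 1)(1 - 0.5) = 1.0
```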
, "3HAN. 3HAN is first pretrained on the binary WEAKSUPERVISION dataset.", "The model is then further tuned on the four-class CROWDSOURCE dataset by transferring the weights (except the last fully-connected prediction layer) over.", "We initialized and fixed the word embedding using the 200-dimensional Glove embedding trained on Twitter (Pennington et al., 2014); we experimented with a trainable Glove embedding as well as BERT, but saw little to no improvement in performance using cross-validation, and we plan to explore fine-tuning BERT on Reddit in future work.", "3HAN Av. 3HAN Average is trained the same way as 3HAN, except that the last Seq2Vec layer (the layer that aggregates a sequence of document vectors into an individual vector) averages instead of using attention, which can be achieved by fixing $a_{i,j} = \frac{1}{m}$ in Equation 10.", "This is similar to the HN-AVE baseline in Yang et al. (2016).", "Note that 3HAN AV cannot rank documents, as it lacks document attention.", "LR. A logistic regression model is trained on the CROWDSOURCE dataset.", "The feature vector for an individual is computed by converting documents into document-level feature vectors, and then averaging them to obtain an individual-level feature vector.", "For each document, we concatenate four feature sets: (1) bag-of-words for vocabulary with count larger than three, (2) Glove embeddings summed over words, (3) 194 features representing emotional topics from Empath (Fast et al., 2016), and (4) seven scores measuring document readability (Flesch-Kincaid Grade Level, Flesch Reading Ease, Dale-Chall Readability, Automated Readability Index (ARI), Coleman-Liau Index, Gunning Fog Index, and Linsear Write).", "This model is included as a conventional baseline in suicide risk assessment, similar to the baseline found in Shing et al. (2018).", "3HAN Att. Document attention learned jointly with 3HAN.", "As a side effect of training our 3HAN model, we learn document attention scores (see Equation 10).", "This score can then be used to rank documents in terms of their relevance to the judgment.", "This availability of document ranking, despite a lack of document annotations, is a significant advantage of hierarchical attention networks, since fine-grained document annotations are difficult to obtain at a large scale.", "Sentence- and word-level attention are a further advantage, in terms of potentially facilitating user review (see Figure 1), although exploring that awaits future work.", "Forward and Backward. Ranking an individual's documents in either chronological order or reverse chronological order is an obvious default in the absence of a trained model for document ranking; these are important baselines for testing whether a document ranking model actually adds value.", "Our model, 3HAN+3HAN ATT, the only joint model, achieves the best performance on hTBG compared to all other combinations of individual rankers and document rankers across three different time budgets (Table 3).", "The result is significant except when compared to 3HAN AV+3HAN ATT.", "However, using 3HAN ATT to rank documents implies that 3HAN has already been trained.", "Therefore, a more reasonable combination to compare with is 3HAN AV+BACKWARD, which we outperform by a significant margin.", "Overall, the effect of document ranking is larger than the effect of individual ranking.", "Notably, the FORWARD document ranker always yields the worst performance.", "BACKWARD, on the other hand, is surprisingly competitive.", "We hypothesize that this may be an indication that suicidal ideation worsens over time, or perhaps of the unfortunate event of suicide attempts following the posting of a Severe Risk document.", "[Table 3: hTBG scores with three different time budgets, for all combinations of individual and document rankers.]", "This motivates the importance of prioritizing the reading order of documents: being able to find evidence early in suicide assessment leaves more time for other individuals, and reduces the probability of misses.", "Document ranking alone does not decide everything, as 3HAN+BACKWARD outperforms LR+3HAN ATT.", "It is the combination of 3HAN and its document attentions that produces our best model.", "This makes sense, as 3HAN, while learning to predict the level of risk, also learns which documents are important for making the prediction.", "Figure 1 shows the top 3 documents in a summary-style view for each of the 3 highest-ranked individuals, with word-level attention shown using shading.", "Words without attention are obfuscated; others are altered to preserve privacy."
, "Previously Existing Measures. For previously existing measures, e.g., TBG and NDCG@20, document ranking has no effect, and thus these are not suitable measures in our scenario.", "However, we include results here for reference (Table 4).", "Since 3HAN AV and LR cannot rank documents, it is impossible to calculate hTBG for them alone, so we report results using the chronologically backward ranking strategy.", "NDCG@20 is the NDCG score cut off at rank 20, chosen based on the optimal hTBG value.", "We introduced hTBG, a new evaluation measure, as a step toward moving beyond risk classification to a paradigm in which prioritization is the focus, and where time matters.", "[Table 4: TBG and NDCG@20 listed to compare with hTBG, per ranker combination. Both hTBG's and TBG's half-lives are set at 3 hours, and the maximum document cutoff is set at 50.]", "Like TBG, the hTBG score is interpretable as a lower bound on the expected number of relevant items found in a ranking, given a time budget.", "In our experiment, a relevant item is a person classified by experts as being at risk of attempting suicide in the near future.", "Measured at an expected reading time budget of about half a day (4 hours 20 minutes, half-life 3 hours), our joint ranking approach achieved an hTBG of 12.49, compared with 11.70 for a plausible baseline from prior art: using logistic regression to rank individuals, and then looking at an individual's posts in backward chronological order.", "That increase is just a bit short of identifying one more person in need of immediate help in the experiment's population of 242 individuals.", "There are certainly limitations in our study and miles to go before validating our approach in the real world, but our framework should make it easy to integrate and explore other individual rankers, document rankers, and explanation mechanisms, and to actually build user interfaces like the schematic in Figure 1.", "This work has been supported in part by a University of Maryland Strategic Partnership (MPower) seed grant, an AWS Machine Learning Research Award, and an AI + Medicine for High Impact (AIM-HI) Challenge Award.", "We are immensely grateful to Glen Coppersmith, Michelle Colder Carras, April Foreman, Michelle Kuchuk, Beau Pinkham, Rebecca Resnik, Katherine Musacchio Schafer, Jonathan Singer, Raymond Tucker, Tony Wood, Ayah Zirikly, members of the UMIACS CLIP lab, and participants at the Workshops on Computational Linguistics and Clinical Psychology for valuable discussions related to this work." ]
[ "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "other", "other", "other", "method", "objective", "other", "other", "other", "abstain", "other", "method", "abstain", "other", "other", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "other", "other" ]
[ "Can attentionor gradient-based visualization techniques be used to infer token-level labels for binary sequence tagging problems, using networks trained only on sentence-level labels?", "We construct a neural network architecture based on soft attention, train it as a binary sentence classifier and evaluate against token-level annotation on four different datasets.", "Inferring token labels from a network provides a method for quantitatively evaluating what the model is learning, along with generating useful feedback in assistance systems.", "Our results indicate that attention-based methods are able to predict token-level labels more accurately, compared to gradient-based methods, sometimes even rivaling the supervised oracle network.", "Sequence labeling is a structured prediction task where systems need to assign the correct label to every token in the input sequence.", "Many NLP tasks, including part-of-speech tagging, named entity recognition, chunking, and error detection, are often formulated as variations of sequence labeling.", "Recent state-of-the-art models make use of bidirectional LSTM architectures (Ir-soy and Cardie, 2014), character-based representations (Lample et al., 2016), and additional external features (Peters et al., 2017).", "Optimization of these models requires appropriate training data where individual tokens are manually labeled, which can be time-consuming and expensive to obtain for each different task, domain and target language.", "In this paper, we investigate the task of performing sequence labeling without having access to any training data with token-level annotation.", "Instead of training the model directly to predict the label for each token, the model is optimized using a sentence-level objective and a modified version of the attention mechanism is then used to infer labels for individual words.", "While this approach is not expected to outperform a fully supervised sequence labeling method, it opens possibilities for making use of text classification datasets where collecting token-level annotation is not possible or cost-effective.", "Inferring token-level labels from a text classification network also provides a method for analyzing and interpreting the model.", "Previous work has used attention weights to visualize the focus of neural models in the input data.", "However, these analyses have largely been qualitative examinations, looking at only a few examples from the datasets.", "By formulating the task as a zero-shot labeling problem, we can provide quantitative evaluations of what the model is learning and where it is focusing.", "This will allow us to measure whether the features that the model is learning actually match our intuition, provide informative feedback to end-users, and guide our development of future model architectures.", "The main system takes as input a sentence, separated into tokens, and outputs a binary prediction as the label of the sentence.", "We use a bidirectional LSTM (Hochreiter and Schmidhu-ber, 1997) architecture for sentence classification, with dynamic attention over words for constructing the sentence representations.", "Related architectures have been successful for machine translation (Bahdanau et al., 2015), sentence summarization (Rush and Weston, 2015), entailment detection (Rocktaschel et al., 2016), and error correction (Ji et al., 2017).", "In this work, we modify the attention mechanism and training objective in order to make the resulting network suitable for 293 also inferring binary token 
, "Figure 1 contains a diagram of the network architecture.", "The tokens are first mapped to a sequence of word representations $[w_1, w_2, w_3, ..., w_N]$, which are constructed as a combination of regular word embeddings and character-based representations, following Lample et al. (2016).", "These word representations are given as input to a bidirectional LSTM which iteratively passes through the sentence in both directions.", "Hidden representations from each direction are concatenated at every token position, resulting in vectors $h_i$ that are focused on a specific word but take into account the context on both sides of that word.", "We also include a transformation with a tanh activation, which helps map the information from both directions into a joint feature space: $\overrightarrow{h}_i = \mathrm{LSTM}(w_i, \overrightarrow{h}_{i-1})$ (1), $\overleftarrow{h}_i = \mathrm{LSTM}(w_i, \overleftarrow{h}_{i+1})$ (2), $\widetilde{h}_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, $h_i = \tanh(W_h \widetilde{h}_i + b_h)$ (3), where $W_h$ is a parameter matrix and $b_h$ is a parameter vector, optimized during training.", "Next, we include an attention mechanism that allows the network to dynamically control how much each word position contributes to the combined representation.", "In most attention-based systems, the attention amount is calculated in reference to some external information.", "For example, in machine translation the attention values are found based on a representation of the output that has already been generated (Bahdanau et al., 2015); in question answering, the attention weights are calculated in reference to the input question (Hermann et al., 2015).", "In our task there is no external information to be used; therefore, we predict the attention values directly based on $h_i$, by passing it through a separate feedforward layer: $e_i = \tanh(W_e h_i + b_e)$ (4), $\widetilde{e}_i = W_{\widetilde{e}} e_i + b_{\widetilde{e}}$ (5), where $W_{\widetilde{e}}$, $b_{\widetilde{e}}$, $W_e$ and $b_e$ are trainable parameters and $\widetilde{e}_i$ results in a single scalar value.", "This method is equivalent to calculating the attention weights in reference to a fixed weight vector, which is optimized during training.", "Shen and Lee (2016) proposed an architecture for dialogue act detection where the attention values are found based on a separate set of word embeddings.", "We found that the method described above was consistently equivalent or better in development experiments, while requiring a smaller number of parameters."
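A minimal PyTorch sketch of Equations 1 through 5, producing one unnormalized attention score per token; the layer sizes and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionScorer(nn.Module):
    """Per-token attention scores from a BiLSTM (Equations 1-5)."""

    def __init__(self, emb_dim, lstm_dim, hidden_dim, attn_dim):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, lstm_dim, bidirectional=True,
                              batch_first=True)
        self.hidden = nn.Linear(2 * lstm_dim, hidden_dim)  # Eq. 3
        self.e = nn.Linear(hidden_dim, attn_dim)           # Eq. 4
        self.e_tilde = nn.Linear(attn_dim, 1)              # Eq. 5

    def forward(self, w):                  # w: (batch, seq_len, emb_dim)
        h_both, _ = self.bilstm(w)         # Eqs. 1-2, directions concatenated
        h = torch.tanh(self.hidden(h_both))
        e = torch.tanh(self.e(h))
        e_tilde = self.e_tilde(e).squeeze(-1)  # one scalar score per token
        return h, e_tilde
```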
, "The values of $\widetilde{e}_i$ are unrestricted and should be normalized before using them for attention, to avoid sentences of different lengths having representations of different magnitudes.", "The common approach is to use an exponential function to transform the value, and then normalize by the sum of all values in the sentence: $a_i = \frac{\exp(\widetilde{e}_i)}{\sum_{k=1}^{N} \exp(\widetilde{e}_k)}$ (6).", "The value $a_i$ is now in the range $0 \leq a_i \leq 1$, and higher values indicate that the word at position $i$ is more important for predicting the sentence class.", "The network learns to predict informative values for $a_i$ based only on the sentence objective, without receiving token-level supervision.", "Therefore, we can use these attention values at each token in order to infer an unsupervised sequence labeling output.", "The method in Equation 6 is well-suited for applications such as machine translation: the exponential function encourages the attention to prioritize only one word in the sentence, resulting in a word-word alignment.", "However, the same function is less suitable for our task of unsupervised sequence labeling, as there is no reason to assume that exactly one word has a positive label.", "An input sentence can contain more than one tagged token, or it can contain no tokens of interest, and this should be reflected in the predictions.", "Instead of the exponential function, we make use of the logistic function for calculating soft attention weights: $\widetilde{a}_i = \sigma(\widetilde{e}_i)$, $a_i = \frac{\widetilde{a}_i}{\sum_{k=1}^{N} \widetilde{a}_k}$ (7), where each $\widetilde{a}_i$ has an individual value in the range $0 \leq \widetilde{a}_i \leq 1$ and $a_i$ is normalized to sum up to 1 over all values in the sentence.", "[Figure 1: The neural network architecture for zero-shot sequence labeling.]", "The normalized weights $a_i$ are used for combining the context-conditioned hidden representations from Equation 3 into a single sentence representation: $c = \sum_{i=1}^{N} a_i h_i$ (8).", "In addition, we can use the pre-normalization value $\widetilde{a}_i$ as a score for sequence labeling, with a natural decision boundary of 0.5: higher values indicate that the token at position $i$ is important and should be labeled positive, whereas lower values suggest the token is largely ignored for sentence classification and can receive a negative label.", "Attention weights with sigmoid activation have been shown to also improve performance on classification tasks (Shen and Lee, 2016), which indicates that this architecture has the benefit of being both accurate and interpretable on the token level.", "Finally, we pass the sentence representation $c$ through a feedforward layer and predict a binary label for the overall sentence: $d = \tanh(W_d c + b_d)$ (9), $y = \sigma(W_y d + b_y)$ (10), where $d$ is a sentence vector and $y$ is a single value in the range $0 \leq y \leq 1$, with values higher than 0.5 indicating a positive class and lower values indicating a negative prediction."
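Putting Equations 7 through 10 together, here is a sketch of the full zero-shot tagger, reusing the hypothetical AttentionScorer from the previous snippet; the 0.5 decision boundary on the pre-normalization weights comes directly from the text above, while the layer sizes remain illustrative.

```python
class ZeroShotTagger(nn.Module):
    """Sentence classifier whose attention doubles as a token labeler."""

    def __init__(self, emb_dim, lstm_dim, hidden_dim, attn_dim, sent_dim):
        super().__init__()
        self.scorer = AttentionScorer(emb_dim, lstm_dim, hidden_dim, attn_dim)
        self.W_d = nn.Linear(hidden_dim, sent_dim)
        self.W_y = nn.Linear(sent_dim, 1)

    def forward(self, w):                               # w: (batch, seq, emb_dim)
        h, e_tilde = self.scorer(w)
        a_tilde = torch.sigmoid(e_tilde)                # Eq. 7, pre-normalization
        a = a_tilde / a_tilde.sum(dim=1, keepdim=True)  # normalized attention
        c = (a.unsqueeze(-1) * h).sum(dim=1)            # Eq. 8: sentence vector
        d = torch.tanh(self.W_d(c))                     # Eq. 9
        y = torch.sigmoid(self.W_y(d)).squeeze(-1)      # Eq. 10: sentence score
        token_labels = a_tilde > 0.5                    # natural 0.5 boundary
        return y, a_tilde, token_labels
```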
, "In order to optimize the model, we use several different loss functions.", "The first is the squared loss, which optimizes the sentence-level score prediction to match the gold label in the annotation: $L_1 = \sum_j (y^{(j)} - \widetilde{y}^{(j)})^2$ (11), where $y^{(j)}$ is the predicted score for the $j$-th sentence, and $\widetilde{y}^{(j)}$ is the true binary label $(0, 1)$ for the $j$-th sentence.", "In addition, we want to encourage the model to learn high-quality token-level labels as part of the attention weights.", "While the model does not have access to token-level annotation during training, there are two constraints that we can take advantage of:", "1. Only some, but not all, tokens in the sentence can have a positive label.", "2. There are positive tokens in a sentence only if the overall sentence is positive.", "We can then construct loss functions that encourage the model to optimize for these constraints: $L_2 = \sum_j (\min_i(\widetilde{a}_i) - 0)^2$ (12), $L_3 = \sum_j (\max_i(\widetilde{a}_i) - \widetilde{y}^{(j)})^2$ (13), where $\min_i(\widetilde{a}_i)$ is the minimum value of all the attention weights in the sentence and $\max_i(\widetilde{a}_i)$ is the corresponding maximum value.", "Equation 12 optimizes the minimum unnormalized attention weight in a sentence to be 0, satisfying the constraint that not all tokens in a sentence should have a positive token-level label.", "Equation 13 then optimizes the maximum unnormalized attention weight in a sentence to be equal to the gold label for that sentence, which is either 0 or 1, incentivizing the network to only assign large attention weights to tokens in positive sentences.", "These objectives do not provide the model with additional information, but serve to push the attention scores into a range that is suitable for binary classification.", "We combine all of these loss objectives together for the main optimization function: $L = L_1 + \gamma (L_2 + L_3)$ (14), where $\gamma$ is used to control the importance of the auxiliary objectives."
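A sketch of the combined objective in Equations 11 through 14, operating on torch tensors; the default gamma of 0.01 mirrors the development-set value reported later in the paper, and the function name is an assumption.

```python
def zero_shot_loss(y_pred, y_gold, a_tilde, gamma=0.01):
    """Combined objective of Equations 11-14.

    y_pred:  (batch,) predicted sentence scores.
    y_gold:  (batch,) gold binary sentence labels.
    a_tilde: (batch, seq_len) pre-normalization attention weights.
    gamma:   weight of the auxiliary objectives.
    """
    l1 = ((y_pred - y_gold) ** 2).sum()                     # Eq. 11
    l2 = (a_tilde.min(dim=1).values ** 2).sum()             # Eq. 12
    l3 = ((a_tilde.max(dim=1).values - y_gold) ** 2).sum()  # Eq. 13
    return l1 + gamma * (l2 + l3)                           # Eq. 14
```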
, "We experiment with an alternative method for inducing token-level labels, based on visualization methods using gradient analysis.", "Research in computer vision has shown that interpretable visualizations of convolutional networks can be obtained by analyzing the gradient after a single backpropagation pass through the network (Zeiler and Fergus, 2014).", "Denil et al. (2014) extended this approach to natural language processing, in order to find and visualize the most important sentences in a text.", "Recent work has also used the gradient-based approach for visualizing the decisions of text classification models on the token level (Li et al., 2016; Alikaniotis et al., 2016).", "In this section we propose an adaptation that can be used for sequence labeling tasks.", "We first perform a forward pass through the network and calculate the predicted sentence-level score $y$.", "Next, we define a pseudo-label $y^* = 0$, regardless of the true label of the sentence.", "We then calculate the gradient of the word representation $w_i$ with respect to the loss function using this pseudo-label: $g_i = \frac{\partial L_1}{\partial w_i} \big|_{(y, y^*)}$ (15), where $L_1$ is the squared loss function from Equation 11.", "The magnitude of $g_i$, $|g_i|$, can now be used as an indicator of how important that word is for the positive class.", "The intuition behind this approach is that the magnitude of the gradient indicates which individual words need to be changed the most in order to make the overall label of the sentence negative.", "These are the words that are contributing most towards the positive class and should be labeled as such individually.", "An obstacle in using this score for sequence labeling comes from the fact that there is no natural decision boundary between the two classes.", "The magnitude of the gradient is not constrained to a specific range and can vary quite a bit depending on the sentence length and the predicted sentence-level score.", "In order to map this magnitude to a decision, we analyze the distribution of magnitudes in a sentence.", "Intuitively, we want to detect outliers: scores that are larger than expected.", "Therefore, we map all the magnitudes in a sentence to a Gaussian distribution and set the decision boundary at 1.5 standard deviations.", "Any word that has a gradient magnitude higher than that will be tagged with a positive class for sequence labeling.", "If all the magnitudes in a sentence are very similar, none of them will cross this threshold and therefore all words will be labeled as negative.", "We calculate the gradient magnitude using the same network architecture as described in Section 2, at the word representation $w_i$ after the character-based features have been included.", "The attention-based architecture is not necessary for this method, therefore we also report results using a more traditional bidirectional LSTM, concatenating the last hidden states from both directions and using the result as a sentence representation for the main objective."
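A sketch of the gradient-based labeling procedure follows. Reading the Gaussian mapping as a mean-plus-1.5-standard-deviations threshold is our interpretation of the text, and the model interface is an illustrative assumption.

```python
import torch

def gradient_token_labels(model, w, n_std=1.5):
    """Gradient-based token labeling (Equation 15 plus outlier thresholding).

    model: a sentence classifier mapping word representations to a score y;
    w:     (1, seq_len, emb_dim) word representations for one sentence.
    """
    w = w.clone().requires_grad_(True)
    y = model(w)                          # forward pass, sentence-level score
    pseudo_label = torch.zeros_like(y)    # y* = 0, regardless of the true label
    loss = ((y - pseudo_label) ** 2).sum()
    loss.backward()                       # Eq. 15: gradient of L1 w.r.t. each w_i
    magnitudes = w.grad.norm(dim=-1).squeeze(0)   # |g_i| per token
    threshold = magnitudes.mean() + n_std * magnitudes.std()
    return magnitudes > threshold         # positive label only for outliers
```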
, "The system for producing token-level predictions based on sentence-level training data does not necessarily need to be a neural network.", "As an initial experiment, we trained a Naive Bayes classifier with n-gram features on the annotated sentences and then used it to predict a label based only on a window around the target word.", "However, this did not produce reliable results: since the classifier is trained on full sentences, the distribution of features is very different and does not apply to a window of only a few words.", "Instead, we calculate the relative frequency of a feature occurring in a positive sentence, normalized by the overall frequency of the feature, and calculate the geometric average over all features that contain a specific word (see the sketch below): $r_k = \frac{c(X_k = 1, Y = 1)}{\sum_{z \in \{0,1\}} c(X_k = 1, Y = z)}$ (16), $score_i = \sqrt[|F_i|]{\prod_{k \in F_i} r_k}$ (17), where $c(X_k = 1, Y = 1)$ is the number of times feature $k$ is present in a sentence with a positive label, $F_i$ is the set of n-gram features present in the sentence that involve the $i$-th word, and $score_i$ is the token-level score for the $i$-th token in the sentence.", "We used unigram, bigram and trigram features, with extra special tokens to mark the beginning and end of a sentence.", "This method will assign a high score to tokens or token sequences that appear more often in sentences which receive a positive label.", "While it is not able to capture long-distance context, it can memorize important keywords from the training data, such as modal verbs for uncertainty detection or common spelling errors for grammatical error detection."
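The sketch referenced above for the relative-frequency baseline of Equations 16 and 17; padding and indexing details are illustrative assumptions.

```python
from collections import defaultdict
from math import prod

def train_relative_freq(sentences, labels, n_max=3):
    """Relative frequency r_k of each n-gram in positive sentences (Eq. 16)."""
    pos, total = defaultdict(int), defaultdict(int)
    for tokens, y in zip(sentences, labels):
        padded = ["<s>"] + tokens + ["</s>"]
        for n in range(1, n_max + 1):
            for i in range(len(padded) - n + 1):
                k = tuple(padded[i:i + n])
                total[k] += 1
                pos[k] += y
    return {k: pos[k] / total[k] for k in total}

def token_scores(tokens, r, n_max=3):
    """Geometric average of r_k over the n-grams touching each token (Eq. 17)."""
    padded = ["<s>"] + tokens + ["</s>"]
    scores = []
    for i in range(1, len(padded) - 1):        # positions of real tokens
        feats = [tuple(padded[a:a + n])
                 for n in range(1, n_max + 1)
                 for a in range(max(0, i - n + 1), i + 1)
                 if a + n <= len(padded)]
        vals = [r.get(f, 0.0) for f in feats]
        scores.append(prod(vals) ** (1 / len(vals)))
    return scores
```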
, "Finally, we also report the performance of a supervised sequence labeling model on the same tasks.", "This serves as an indicator of an upper bound for a given dataset: how well the system is able to detect relevant tokens when directly optimized for sequence labeling and provided with token-level annotation.", "We construct a bidirectional LSTM tagger, following the architectures from Irsoy and Cardie (2014), Lample et al. (2016) and Rei (2017).", "Character-based representations are concatenated with word embeddings, passed through a bidirectional LSTM, and the hidden states from both directions are concatenated.", "Based on this, a probability distribution over the possible labels is predicted and the most probable label is chosen for each word.", "While Lample et al. (2016) used a CRF on top of the network, we exclude it here, as the token-level scores coming from that network do not necessarily reflect the individual labels, since the best label sequence is chosen globally based on the combined sentence-level score.", "The supervised model is optimized by minimizing cross-entropy, training directly on the token-level annotation.", "The CoNLL 2010 shared task (Farkas et al., 2010) investigated the detection of uncertainty in natural language texts.", "The use of uncertain language (also known as hedging) is a common tool in scientific writing, allowing scientists to guide research beyond the evidence without overstating what follows from their work.", "Vincze et al. (2008) showed that 19.44% of sentences in the biomedical papers of the BioScope corpus contain hedge cues.", "Automatic detection of these cues is important for downstream tasks such as information extraction and literature curation, as typically only definite information should be extracted and curated.", "The dataset is annotated for both hedge cues (keywords indicating uncertainty) and scopes (the area of the sentence where the uncertainty applies).", "The cues are not limited to single tokens, and can also consist of several disjoint tokens (for example, either ... or ...).", "An example sentence from the dataset, with bold font indicating the hedge cue and curly brackets marking the scope of uncertainty: Although IL-1 has been reported to contribute to Th17 differentiation in mouse and man, it remains to be determined { whether therapeutic targeting of IL-1 will substantially affect IL-17 in RA }.", "The first subtask in CoNLL 2010 was to detect any uncertainty in a sentence by predicting a binary label.", "The second subtask required the detection of all the individual cue tokens and the resolution of their scope.", "In our experiments, we train the system to detect sentence-level uncertainty, use the architecture to infer the token-level labeling, and evaluate the latter on the task of detecting uncertainty cues.", "Since the cues are defined as keywords that indicate uncertainty, we would expect the network to detect and prioritize attention on these tokens.", "We use the train/test data from the second task, which contains the token-level annotation needed for evaluation, and randomly separate 10% of the training data for development.", "Error detection is the task of identifying tokens which need to be edited in order to produce a grammatically correct sentence.", "The task has numerous applications for writing improvement and assessment, and recent work has focused on error detection as a supervised sequence labeling task (Rei and Yannakoudakis, 2016; Kaneko et al., 2017; Rei, 2017).", "Error detection can also be performed on the sentence level, detecting whether the sentence needs to be edited or not.", "Andersen et al. (2013) described a practical tutoring system that provides sentence-level feedback to language learners.", "The 2016 shared task on Automated Evaluation of Scientific Writing (Daudaravicius et al., 2016) also required participants to return binary predictions on whether the input sentence needs to be corrected.", "We evaluate our system on the First Certificate in English (FCE; Yannakoudakis et al., 2011) dataset, containing error-annotated short essays written by language learners.", "While the original corpus is focused on aligned corrections, Rei and Yannakoudakis (2016) converted the dataset to a sequence labeling format, which we make use of here.", "An example from the dataset, with bold font indicating tokens that have been annotated as incorrect given the context: When the show started the person who was acting it was not Danny Brook and he seemed not to be an actor.", "We train the network as a sentence-level error detection system, returning a binary label and a confidence score, and also evaluate how accurately it is able to recover the locations of individual errors on the token level.", "SemEval has been running a series of popular shared tasks on sentiment analysis in text from social media (Nakov et al., 2013; Rosenthal et al., 2014, 2015).", "The competitions have included various subtasks, of which we are interested in two: Task A required the polarity detection of individual phrases in a tweet, and Task B required sentiment detection of the tweet as a whole.", "A single tweet could contain both positive and negative phrases, regardless of its overall polarity, and was therefore separately annotated on the tweet level.", "In the following example from the dataset, negative phrases are indicated with a bold font and positive phrases are marked with italics, whereas the overall sentiment of the tweet is annotated as negative: They may have a SuperBowl in Dallas, but Dallas ain't winning a SuperBowl. Not with that quarterback and owner. @S4NYC @RasmussenPoll", "Sentiment analysis is a three-way task, as the system needs to differentiate between positive, negative and neutral sentences.", "Our system relies on a binary signal, therefore we convert this dataset into two binary tasks: one aims to detect positive sentiment, the other focuses on negative sentiment.", "We train the system as a sentiment classifier, using the tweet-level annotation, and then evaluate the system on recovering the individual positive or negative tokens.", "We use the train/dev/test splits of the original SemEval 2013 Twitter dataset, which contains phrase-level sentiment annotation.", "During pre-processing, tokens are lowercased while the character-level component still retains access to the capitalization information.", "Word embeddings were set to size 300, pre-loaded from publicly available Glove embeddings (Pennington et al., 2014) and fine-tuned during training.", "[Table 2: Results for different system configurations on the SemEval Twitter sentiment dataset, separated into negative and positive sentiment detection; columns are Sent F1 / MAP / P / R / F1. SemEval Negative: Supervised -/67.70/31.79/44.66/37.02; Relative freq -/44.15/17.39/15.67/16.48; LSTM-LAST-BP 53.65/43.02/8.33/28.41/12.88; LSTM-ATTN-BP 55.83/50.96/11.55/31.54/16.90; LSTM-ATTN-SW 55.83/54.37/29.41/14.40/19.23. SemEval Positive: Supervised -/67.41/36.27/50.71/42.24; Relative freq -/47.64/13.39/54.69/21.51; LSTM-LAST-BP 70.83/49.06/17.66/35.06/23.48; LSTM-ATTN-BP 71.26/53.89/23.45/34.53/27.92; LSTM-ATTN-SW 71.26/56.45/37.19/25.96/30.45.]"
, "Character embeddings were set to size 100.", "The recurrent layers in the character-level component have hidden layers of size 100; the word-level hidden layers $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are of size 300.", "The hidden combined representation $h_i$ was set to size 200, and the attention weight layer $e_i$ was set to size 100.", "Parameter $\gamma$ was set to 0.01 based on development experiments.", "The model was implemented using Tensorflow (Abadi et al., 2016).", "The network weights were randomly initialized using the uniform Glorot initialization method (Glorot and Bengio, 2010), and optimization was performed using AdaDelta (Zeiler, 2012) with learning rate 1.0.", "Dropout (Srivastava et al., 2014) with probability 0.5 was applied to the word representations $w_i$ and the composed representations $h_i$ after the LSTMs.", "The training was performed in batches of 32 sentences.", "Sentence-level performance was observed on the development data, and training was stopped if performance did not improve for 7 epochs.", "The best overall model on the development set was then used to report performance on the test data, both for sentence classification and sequence labeling.", "In order to avoid random outliers, we performed each experiment with 5 random seeds and report here the averaged results.", "The code used for performing these experiments is made available online at http://www.marekrei.com/projects/mltagger.", "6 Evaluation Results for the experiments are presented in Tables 1 and 2.", "We first report the sentence-level F-measure in order to evaluate the performance on the general text classification objective.", "Next, we report the Mean Average Precision (MAP) at returning the active/positive tokens.", "This measure rewards systems that assign higher scores to positive tokens as opposed to negative ones, evaluating this as a ranking problem.", "It disregards a specific classification threshold and therefore provides a fairer evaluation of systems that could be improved simply by choosing a different decision boundary.", "Finally, we also report token-level precision, recall and F-measure for evaluating the accuracy of this model as a sequence labeler.", "The CoNLL 2010 shared task on uncertainty detection comes with an official scorer which requires additional steps and the detection of both cues and scopes, whereas the binary labels from the zero-shot systems are not directly applicable to this format.", "Similarly, error detection is commonly evaluated using $F_{0.5}$, which is motivated by end-user experience, but in this case we wish to specifically measure the tagging accuracy.", "Therefore we use the regular $F_1$ score as the main evaluation metric for both of these tasks.", "We report five different system configurations: Relative freq is the n-gram based approach described in Section 3.2.", "Supervised is the fully supervised sequence labeling system described in Section 3.3.", "LSTM-LAST-BP uses the last hidden states from the word-level LSTMs for constructing a sentence representation, and the backpropagation-based method from Section 3.1 for inducing token labels.", "LSTM-ATTN-BP uses the attention-based network architecture together with the backpropagation-based labeling method.", "LSTM-ATTN-SW is the method described in Section 2, using soft attention weights for sequence labeling and additional objectives for optimizing the network.", "The method using attention weights achieves the best performance on all datasets, compared to other methods not using token-level supervision.", "On the CoNLL 2010 uncertainty detection dataset the system reaches 73.26% F-score, which is 93% of the supervised upper bound.", "The alternative methods using backpropagation and relative frequency achieve high recall values, but comparatively lower precision.", "On the FCE dataset, the F-score is considerably lower, at 28.27%; this is due to the difficulty of the task, and the supervised system also achieves only 34.76%.", "The attention-based system outperforms the alternatives on both of the SemEval evaluations."
, "The task of detecting sentiment on the token level is quite difficult overall, as many annotations are context-specific and require prior knowledge.", "For example, in order to correctly label the phrase have a SuperBowl as positive, the system will need to understand that organizing the SuperBowl is a positive event for the city.", "Performance on the sentence-level classification task is similar for the different architectures on the CoNLL 2010 and FCE datasets, whereas the composition method based on attention obtains an advantage on the SemEval datasets.", "Since the latter architecture achieves competitive performance and also allows for attention-based token labeling, it appears to be the better choice.", "Analysis of the token-level MAP scores shows that the attention-based sequence labeling model achieves the best performance even when ignoring classification thresholds and evaluating the task through ranking.", "Figure 2 contains example outputs from the attention-based models, trained on each of the four datasets.", "In the first example, the uncertainty detector correctly picks up would appreciate if and possible, and the error detection model focuses most on the misspelling Definetely.", "Both the positive and negative sentiment models have assigned a high weight to the word disappointing, which is something we observed in other examples as well.", "The system will learn to focus on phrases that help it detect positive sentiment, but the presence of negative sentiment provides implicit evidence that the overall label is likely not positive.", "This is a by-product of the 3-way classification task, and future work could investigate methods for extending zero-shot classification to better match this requirement.", "In the second example, the system correctly labels the phrase what would be suitable? as uncertain, and part of the phrase I'm not really sure as negative.", "It also labels specifying as an error, possibly expecting a comma before it.", "In the third example, the error detection model labels Internet for the missing determiner, but also captures a more difficult error in depended, which is an incorrect form of the word given the context.", "We investigated the task of performing sequence labeling without having access to any training data with token-level annotation.", "The proposed model is optimized as a sentence classifier, and an attention mechanism is used both for composing the sentence representations and for inferring individual token labels.", "Several alternative models were compared on three tasks: uncertainty detection, error detection and sentiment detection.", "Experiments showed that the zero-shot labeling system based on attention weights achieved the best performance on all tasks.", "The model is able to automatically focus on the most salient areas of the sentence, and additional objective functions along with the soft attention mechanism encourage it to also perform well as a sequence labeler.", "The zero-shot labeling task can provide a quantitative evaluation of what the model is learning, along with offering a low-cost method for creating sequence labelers for new tasks, domains and languages.", "We would like to thank the NVIDIA Corporation for the donation of the Titan GPU that was used for this research.", "Anders Søgaard was partially funded by the ERC Starting Grant LOWLANDS No. 313695." ]
[ "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Recent works in dialogue state tracking (DST) focus on an open vocabulary-based setting to resolve scalability and generalization issues of the predefined ontology-based approaches.", "However, they are inefficient in that they predict the dialogue state at every turn from scratch.", "Here, we consider dialogue state as an explicit fixed-sized memory and propose a selectively overwriting mechanism for more efficient DST.", "This mechanism consists of two steps: (1) predicting state operation on each of the memory slots, and (2) overwriting the memory with new values, of which only a few are generated according to the predicted state operations.", "Our method decomposes DST into two sub-tasks and guides the decoder to focus only on one of the tasks, thus reducing the burden of the decoder.", "This enhances the effectiveness of training and DST performance.", "Our SOM-DST (Se-lectively Overwriting Memory for Dialogue State Tracking) model achieves state-of-the-art joint goal accuracy with 51.72% in MultiWOZ 2.0 and 53.01% in MultiWOZ 2.1 in an open vocabulary-based DST setting.", "In addition, we analyze the accuracy gaps between the current and the ground truth-given situations and suggest that it is a promising direction to improve state operation prediction to boost the DST performance.", "1 1 Introduction Building robust task-oriented dialogue systems has gained increasing popularity in both the research and industry communities (Chen et al., 2017).", "Dialogue state tracking (DST), one of the essential tasks in task-oriented dialogue systems (Zhong et al., 2018), is keeping track of user goals or intentions throughout a dialogue in the form of a set of slot-value pairs, i.e., dialogue state.", "Because the 1 The code is available at github.com/clovaai/som-dst.", "Traditional neural DST approaches assume that all candidate slot-value pairs are given in advance, i.e., they perform predefined ontology-based DST (Mrksic et al., 2017; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018; Lee et al., 2019).", "Most previous works that take this approach perform DST by scoring all possible slot-value pairs in the ontology and selecting the value with the highest score as the predicted value of a slot.", "Such an approach has been widely applied to datasets like DSTC2 and WOZ2.0, which have a relatively small ontology size.", "(Henderson et al., 2014; Wen et al., 2017)", "Although this approach simplifies the task, it has inherent limitations: (1) it is often difficult to obtain the ontology in advance, especially in a real scenario (Xu and Hu, 2018), (2) predefined ontology-based DST cannot handle previously unseen slot values, and (3) the approach does not scale large since it has to go over all slot-value candidates at every turn to predict the current dialogue state.", "Indeed, recent DST datasets often have a large size of ontology; e.g., the total number of slot-value candidates in MultiWOZ 2.1 is 4510, while the numbers are much smaller in DSTC2 and WOZ2.0 as 212 and 99, respectively (Budzianowski et al., 2018).", "To address these issues, recent methods employ an approach that either directly generates or extracts a value from the dialogue context for every slot, allowing open vocabulary-based DST (Lei et al., 2018; Gao et al., 2019; Wu et al., 2019; Ren et al., 2019).", "While this formulation is relatively more scalable and robust to handling unseen slot values, many of the previous works do not efficiently perform DST since they predict the dialogue state from scratch at every 
, "In this work, we focus on an open vocabulary-based setting and propose SOM-DST (Selectively Overwriting Memory for Dialogue State Tracking).", "Regarding dialogue state as a memory that can be selectively overwritten (Figure 1), SOM-DST decomposes DST into two sub-tasks: (1) state operation prediction, which decides the types of the operations to be performed on each of the memory slots, and (2) slot value generation, which generates the values to be newly written on a subset of the memory slots (Figure 2).", "This decomposition allows our model to efficiently generate the values of only a minimal subset of the slots, while many of the previous works generate or extract the values of all slots at every dialogue turn.", "Moreover, this decomposition reduces the difficulty of DST in an open vocabulary-based setting by clearly separating the roles of the encoder and the decoder.", "Our encoder, i.e., the state operation predictor, can focus on selecting the slots to pass to the decoder, so that the decoder, i.e., the slot value generator, can focus only on generating the values of those selected slots.", "To the best of our knowledge, our work is the first to propose such a selectively overwritable, memory-like perspective and a discrete two-step approach to DST.", "Our proposed SOM-DST achieves state-of-the-art joint goal accuracy in an open vocabulary-based DST setting on two of the most actively studied datasets: MultiWOZ 2.0 and MultiWOZ 2.1.", "Error analysis (Section 6.2) further reveals that improving state operation prediction can significantly boost the final DST accuracy.", "In summary, the contributions of our work, built on top of a perspective that considers dialogue state tracking as selectively overwriting memory, are as follows: (1) enabling efficient DST, generating the values of only a minimal subset of the slots by utilizing the previous dialogue state at each turn; (2) achieving state-of-the-art performance on MultiWOZ 2.0 and MultiWOZ 2.1 in an open vocabulary-based DST setting; and (3) highlighting the potential of improving the state operation prediction accuracy in our proposed framework.", "Many works on recent task-oriented dialogue datasets with a large-scale ontology, such as MultiWOZ 2.0 and MultiWOZ 2.1, solve DST in an open vocabulary-based setting (Gao et al., 2019; Wu et al., 2019; Ren et al., 2019; Le et al., 2020a,b).", "Wu et al. (2019) show the potential of applying the encoder-decoder framework (Cho et al., 2014a) to open vocabulary-based DST.", "However, their method is not computationally efficient, because it performs autoregressive generation of the values for all slots at every dialogue turn.", "Ren et al. (2019) tackle the drawback of the model of Wu et al. (2019), namely that it generates the values of all slots at every dialogue turn, by using a hierarchical decoder.", "In addition, they come up with a new notion dubbed Inference Time Complexity (ITC) to compare the efficiency of different DST models.", "ITC is calculated using the number of slots $J$ and the number of corresponding slot values $M$; the notations used in the work of Ren et al. (2019) are $n$ and $m$, respectively.", "Following their work, we also calculate ITC in Appendix B for comparison.", "Le et al. (2020b) introduce another work that tackles the efficiency issue.", "To maximize computational efficiency, they use a non-autoregressive decoder to generate the slot values of the current dialogue state at once.", "They encode the slot type information together with the dialogue context and the delexicalized dialogue context."
, "They do not use the previous turn dialogue state as the input.", "Le et al. (2020a) process the dialogue context at both the domain level and the slot level.", "They make the final representation to generate the values using a late fusion approach.", "They show that there is a performance gain when the model is jointly trained with response generation.", "However, they still generate the values of every slot at each turn, like Wu et al. (2019).", "Gao et al. (2019) formulate DST as a reading comprehension task and propose a model named DST Reader that extracts the values of the slots from the input.", "They introduce and show the importance of the concept of a slot carryover module, i.e., a component that makes a binary decision whether to carry the value of a slot from the previous turn dialogue state over to the current turn dialogue state.", "The definition and use of discrete operations in our work are inspired by their work.", "Zhang et al. (2019) target the issue of ill-formatted strings that generative models suffer from.", "In order to avoid this issue, they take a hybrid approach.", "For the slots they categorize as picklist-based slots, they use a predefined ontology-based approach as in the work of Lee et al. (2019); for the slots they categorize as span-based slots, they use a span extraction-based method like DST Reader (Gao et al., 2019).", "However, their hybrid model shows lower performance than when they use only the picklist-based approach.", "Although their solely picklist-based model achieves state-of-the-art joint accuracy on MultiWOZ 2.1, it does so in a predefined ontology-based setting, and thus cannot avoid the scalability and generalization issues of predefined ontology-based DST.", "Dialogue State We define the dialogue state at turn $t$, $\mathcal{B}_t = \{(S^j, V_t^j) \mid 1 \leq j \leq J\}$, as a fixed-sized memory whose keys are slots $S^j$ and whose values are the corresponding slot values $V_t^j$, where $J$ is the total number of such slots.", "Following the convention of MultiWOZ 2.0 and MultiWOZ 2.1, we use the term slot to refer to the concatenation of a domain name and a slot name.", "Special Value There are two special values, NULL and DONTCARE.", "NULL means that no information is given about the slot up to the turn.", "For instance, the dialogue state before the beginning of any dialogue, $\mathcal{B}_0$, has only NULL as the value of all slots.", "DONTCARE means that the slot neither needs to be tracked nor is considered important in the dialogue at that time.", "Such notions of a none value and a dontcare value appear in previous works as well (Wu et al., 2019; Gao et al., 2019; Le et al., 2020b; Zhang et al., 2019)."
NULL , DONTCARE } by slot value generator (Section 3.2).", "State operation predictor performs state operation prediction as a classification task, and slot value generator performs slot value generation to find out the values of the slots on which UPDATE should be performed.", "The two components of SOM-DST are jointly trained to predict the current turn dialogue state.", "Input Representation We denote the representation of the dialogue utterances at turn t as D t = A t ; U t [SEP] , where A t is the system response and U t is the user utterance.", "; is a special token used to mark the boundary between A t and U t , and [SEP] is a special token used to mark the end of a dialogue turn.", "We denote the representation of the dialogue state at turn t as B t = B 1 t . . . B Jt , where B jt = [SLOT] j S j V jt is the representation of the j -th slot-value pair.", "is a special token used to mark the boundary between a slot and a value.", "[SLOT] j is a special token used to aggregate the information of the j -th slot-value pair into a single vector, like the use case of [CLS] token in BERT (Devlin et al., 2019).", "In this work, we use the same special token [SLOT] for all [SLOT] j .", "Our state operation predictor employs a pretrained BERT encoder.", "The input tokens to the state operation predictor are the concatenation of the previous turn dialog utterances, the current turn dialog utterances, and the previous turn dialog state: 4 X t = [CLS] D t 1 D t B t 1 , where [CLS] is a special token added in front of every turn input.", "Using the previous dialogue state as the input serves as an explicit, compact, and informative representation of the dialogue history for the model.", "When the value of the j -th slot at time t 1 , i.e., V jt 1 , is NULL , we use a special token [NULL] as the input.", "When the value is DONTCARE , we use the string dont care to take advantage of the semantics of the phrase don't care that the pretrained BERT encoder would have already learned.", "The input to BERT is the sum of the embeddings of the input tokens X t , segment id embeddings, and position embeddings.", "For the segment id, we use 0 for the tokens that belong to D t 1 and 1 for the tokens that belong to D t or B t 1 .", "The position embeddings follow the standard choice of BERT.", "Encoder Output The output representation of the encoder is H t R | X t | d , and h [CLS] t , h [SLOT] j t R d are the outputs that correspond to [CLS] and [SLOT] j , respectively.", "h Xt , the aggregated sequence representation of the entire input X t , is obtained by a feed-forward layer with a learnable parameter W pool R d d as: h Xt = tanh ( W pool h [CLS] t ) .", "prediction is a four-way classification performed on top of the encoder output for each slot representation h [SLOT] j t :", "where W opr R |O| d is a learnable parameter and P jopr,t R |O| is the probability distribution over operations for the j -th slot at turn t .", "In our formulation, |O| = 4 , because O = { CARRYOVER , DELETE , DONTCARE , UPDATE } .", "Then, the operation is determined by r jt = argmax ( P jopr,t ) and the slot value generation is performed on only the slots whose operation is UPDATE .", "We define the set of the slot indices which require the value generation as U t = { j | r jt = UPDATE } , and its size as J (cid:48) t = | U t | .", "4 We use only the previous turn dialogue utterances D t 1 as the dialogue history, i.e., the size of the dialogue history is 1.", "This is because our model assumes Markov property in dialogues as 
a part of the input, the previous turn dialogue state B t 1 , can serve as a compact representation of the whole dialogue history.", "For each j -th slot such that j U t , the slot value generator generates a value.", "Our slot value generator differs from the generators of many of the previous works because it generates the values for only J (cid:48) t number of slots, not J .", "In most cases, J (cid:48) t (cid:28) J , so this setup enables an efficient computation where only a small number of slot values are newly generated.", "We use Gated Recurrent Unit (GRU) (Cho et al., 2014b) decoder like Wu et al. (2019).", "GRU is initialized with g j, 0 t = h X t and e j, 0 t = h [SLOT] j t , and recurrently updates the hidden state g j,kt R d by taking a word embedding e j,kt as the input until [EOS] token is generated: g j,kt = GRU ( g j,k 1 t , e j,kt ) .", "The decoder hidden state is transformed to the probability distribution over the vocabulary at the k -th decoding step, where E R d vcb d is the word embedding matrix shared across the encoder and the decoder, such that d vcb is the vocabulary size.", "As the work of Wu et al. (2019), we use the soft-gated copy mechanism (See et al., 2017) to get the final output distribution P j,kval,t over the candidate value tokens: P j,kctx,t = softmax ( H t g j,kt ) R | X t | , P j,kval,t = P j,kvcb,t + (1 ) P j,kctx,t , such that is a scalar value computed as: = sigmoid ( W 1 [ g j,kt ; e j,kt ; c j,kt ]) , where W 1 R 1 (3 d ) is a learnable parameter and c j,kt = P j,k ctx,t H t R d is a context vector.", "State Operation Predictor In addition to the state operation classification, we use domain classification as an auxiliary task to force the model to learn the correlation of slot operations and domain transitions in between dialogue turns.", "Domain classification is done with a softmax layer on top of h Xt : P dom,t = softmax ( W dom h Xt ) , where W dom R d dom d is a learnable parameter and P dom,t R d dom is the probability distribution over domains at turn t .", "d dom is the number of domains defined in the dataset.", "The loss for each of state operation classification and domain classification is the average of the negative log-likelihood, as follows: L opr,t = 1 JJ (cid:88) j =1 ( Y jopr,t ) (cid:124) log P jopr,t , L dom,t = ( Y dom,t ) (cid:124) log P dom,t , where Y dom,t R d dom is the one-hot vector for the ground truth domain and Y jopr,t R |O| is the one-hot vector for the ground truth operation for the j -th slot.", "Slot Value Generator The objective function to train slot value generator is also the average of the negative log-likelihood: L svg,t = 1 | U t | (cid:88) j U t 1 K jt K jt (cid:88) k =1 ( Y j,kval,t ) (cid:124) log P j,kval,t , where K jt is the number of tokens of the ground truth value that needs to be generated for the j -th slot.", "Y j,kval,t R d vcb is the one-hot vector for the ground truth token that needs to be generated for the j -th slot at the k -th decoding step.", "Therefore, the final joint loss L joint,t to be minimized at dialogue turn t is the sum of the losses mentioned above: L joint,t = L opr,t + L dom,t + L svg,t .", "We use MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) as the datasets in our experiments.", "These datasets are two of the largest publicly available multi-domain task-oriented dialogue datasets, including about 10,000 dialogues within seven domains.", "MultiWOZ 2.1 is a refined version of MultiWOZ 2.0 in which the annotation errors are corrected.", 
"5 Following Wu et al. (2019), we use only five domains ( restaurant , train , hotel , taxi , attraction ) 5 See Table 8 in Appendix A for more details of MultiWOZ 2.1.", "excluding hospital and police .", "6 Therefore, the number of domains d dom is 5 and the number of slots J is 30 in our experiments.", "We use the script provided by Wu et al. (2019) to preprocess the datasets.", "7 4.2 Training We employ the pretrained BERT-base-uncased model 8 for state operation predictor and one GRU (Cho et al., 2014b) for slot value generator.", "The hidden size of the decoder is the same as that of the encoder, d , which is 768.", "The token embedding matrix of slot value generator is shared with that of state operation predictor.", "We use BertAdam as our optimizer (Kingma and Ba, 2015).", "We use greedy decoding for slot value generator.", "The encoder of state operation predictor makes use of a pretrained model, whereas the decoder of slot value generator needs to be trained from scratch.", "Therefore, we use different learning rate schemes for the encoder and the decoder.", "We set the peak learning rate and warmup proportion to 4e-5 and 0.1 for the encoder and 1e-4 and 0.1 for the decoder, respectively.", "We use a batch size of 32 and set the dropout (Srivastava et al., 2014) rate to 0.1.", "We also utilize word dropout (Bowman et al., 2016) by randomly replacing the input tokens with the special [UNK] token with the probability of 0.1.", "The max sequence length for all inputs is fixed to 256.", "We train state operation predictor and slot value generator jointly for 30 epochs and choose the model that reports the best performance on the validation set.", "During training, we use the ground truth state operations and the ground truth previous turn dialogue state instead of the predicted ones.", "When the dialogue state is fed to the model, we randomly shuffle the slot order with a rate of 0.5.", "This is to make state operation predictor exploit the semantics of the slot names and not rely on the position of the slot tokens or a specific slot order.", "During inference or when the slot order is not shuffled, the slots are sorted alphabetically.", "We use teacher forcing 50% of the time to train the decoder.", "All experiments are performed on NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018).", "All the reported results of SOM-DST are averages over ten runs.", "We compare the performance of SOM-DST with both predefined ontology-based models and open vocabulary-based models.", "FJST uses a bidirectional LSTM to encode the dialogue history and uses a feed-forward network to predict the value of each slot (Eric et al., 2019).", "HJST is proposed together with FJST; it encodes the dialogue history using an LSTM like FJST but uses a hierarchical network (Eric et al., 2019).", "SUMBT exploits BERT-base as the encoder for the dialogue context and slot-value pairs.", "After encoding them, it scores every candidate slot-value pair in a non-parametric manner using a distance measure (Lee et al., 2019).", "HyST employs a hierarchical RNN encoder and takes a hybrid approach that incorporates both a predefined ontology-based setting and an open vocabulary-based setting (Goel et al., 2019).", "DST Reader formulates the problem of DST as an extractive QA task; it uses BERT-base to make the contextual word embeddings and extracts the value of the slots from the input as a span (Gao et al., 2019).", "TRADE encodes the whole dialogue context with a bidirectional GRU and 
decodes the value for every slot using a copy-augmented GRU decoder (Wu et al., 2019).", "COMER uses BERT-large as a feature extractor and a hierarchical LSTM decoder to generate the current turn dialogue state itself as the target sequence (Ren et al., 2019).", "ML-BST uses a Transformer-based architecture to encode the dialogue context with the domain and slot information and combines the outputs in a late fusion approach.", "Then, it generates the slot values and the system response jointly (Le et al., 2020a).", "DS-DST uses two BERT-base encoders and takes a hybrid approach of predefined ontology-based DST and open vocabulary-based DST.", "It defines picklist-based slots for classification similarly to SUMBT and span-based slots for span extraction like DST Reader (Zhang et al., 2019).", "DST-picklist is proposed together with DS-DST and uses a similar architecture, but it performs only predefined ontology-based DST considering all slots as picklist-based slots (Zhang et al., 2019).", "Table 1 shows the joint goal accuracy of SOM-DST and other models on the test set of MultiWOZ 2.0 and MultiWOZ 2.1.", "Joint goal accuracy checks whether all slot values predicted at a turn exactly match the ground truth values.", "As shown in the table, SOM-DST achieves state-of-the-art performance in an open vocabulary-based setting.", "Interestingly, contrary to previous works, our model achieves higher performance on MultiWOZ 2.1 than on MultiWOZ 2.0.", "This is presumably because our model, which explicitly uses the dialogue state labels as input, benefits more from the error correction on the state annotations done in MultiWOZ 2.1. [Footnote 9: Eric et al. (2019) report that the correction of the annotations done in MultiWOZ 2.1 changes about 32% of the state annotations of MultiWOZ 2.0, which indicates that MultiWOZ 2.0 contains many annotation errors.]", "Table 2 shows the domain-specific results of our model and the concurrent works which report such results (Le et al., 2020a,b).", "Domain-specific accuracy is the accuracy measured on a subset of the predicted dialogue state, where the subset consists of the slots specific to a domain.", "While the performance is similar to or a little lower than that of other models in the other domains, SOM-DST outperforms other models in the taxi and train domains.", "This implies that the state-of-the-art joint goal accuracy of our model on the test set comes mainly from these two domains.", "A characteristic of the data from these domains is that they consist of challenging conversations; the slots of these domains are filled with more diverse values than other domains [Footnote 10: The statistics of the slot value vocabulary size are shown in Table 9 in Appendix A.], and there is often more than one domain change, i.e., the user changes the conversation topic during a dialogue more than once.", "For a specific example, among the dialogues where the domain switches more than once, the number of conversations that end in the taxi domain is ten times larger than in other cases.", "More detailed statistics are given in Table 10 in Appendix A.", "Therefore, we assume our model performs relatively more robust DST in such challenging conversations.", "We conjecture that this strength is attributable to the effective utilization of the previous turn dialogue state in its explicit form, like using a memory;", "[Table 3: Joint goal accuracy on the MultiWOZ 2.1 test set when the four-way state operation prediction changes to two-way, three-way, or six-way.]", "the model can explicitly keep even the information mentioned near the beginning of the conversation and directly copy the values from this memory whenever necessary.", "Figure 1 shows an example of a complicated conversation in MultiWOZ 2.1, where our model accurately predicts the dialogue state.", "More sample outputs of SOM-DST are provided in Appendix C.", "Choice of State Operations Table 3 shows the joint goal accuracy when the four-way state operation prediction changes to two-way, three-way, or six-way.", "The joint goal accuracy drops when we use two-way state operation prediction, which is a binary classification of whether to (1) carry over the previous slot value to the current turn or (2) generate a new value, like Gao et al. (2019).", "We assume the reason is that it is better to separately model the operations DELETE, DONTCARE, and UPDATE that correspond to the latter class of the binary classification, since the values of DELETE and DONTCARE tend to appear implicitly while the values for UPDATE are often explicitly expressed in the dialogue.", "We also investigate the performance when only three operations are used or when two more state operations, YES and NO, are used.", "YES and NO represent the cases where yes or no should be filled as the slot value, respectively.", "The performance drops in all of the cases.", "Table 4 shows the joint goal accuracy of the combinations of the cases where the ground truth is used or not for each of the previous turn dialogue state, state operations at the current turn, and slot values for UPDATE at the current turn.", "From this result, we analyze which of the state operation predictor and the slot value generator is more responsible for the error in the joint goal prediction, under the cases where error propagation occurs or not.", "Among the absolute error of 46.99% made under the situation that error propagation occurs, i.e., the dialogue state predicted at the previous turn is fed to the model, it could be argued that 92.85% comes from the state operation predictor, 21.6% comes from the slot value generator, and 14.45% comes from both of the components.", "This indicates that at least 78.4% to 92.85% of the error comes from the state operation predictor, and at least 7.15% to 21.6% of the error comes from the slot value generator. [Footnote 11: The calculation of the numbers in the paragraph is done as follows. (The figures in the paragraph immediately below are calculated in the same way.)]", "Among the absolute error of 19% made under the error propagation-free situation, i.e., the ground truth previous turn dialogue state is fed to the model, it could be argued that 90.53% comes from the state operation predictor, 19.63% comes from the slot value generator, and 10.16% comes from both of the components.", "This indicates that at least 80.37% to 90.53% of the error comes from the state operation predictor, and at least 9.47% to 19.63% of the error comes from the slot value generator.", "Error propagation that comes from using the dialogue state predicted at the previous turn increases the error 2.47 times (= (100 - 53.01) / (100 - 81.00)).", "Both with and without error propagation, a relatively large amount of error comes from the state operation predictor, implying that substantial room for improvement currently exists in this component.", "Improving the state operation prediction accuracy, e.g., by tackling the class imbalance shown in Table 5, may have the potential to increase the overall DST performance by a large margin.", "In Table 6, we compare the number of slot values generated at a turn among various open vocabulary-based DST models that use an autoregressive decoder.", "The number of slots whose values are generated by our model at a turn, i.e., the number of slots on which UPDATE should be performed, is 9 at maximum and only 1.14 on average in the test set of MultiWOZ 2.1.", "On the other hand, TRADE and ML-BST generate the values of all the 30 slots at every turn of a dialogue.", "COMER generates only a subset of the slot values like our model, but it generates the values of all the slots that have a non-NULL value at a turn, which is 18 at maximum and 5.72 on average.", "Table 7 shows the latency of SOM-DST and several other models.", "We measure the inference time for a dialogue turn of MultiWOZ 2.1 on a Tesla V100 with a batch size of 1.", "The models used for comparison are those with official public implementations.", "It is notable that the inference of SOM-DST is about 12.5 times faster than that of TRADE, which consists of only two GRUs.", "Moreover, the latency of SOM-DST is comparable to that of NADST, which explicitly uses non-autoregressive decoding, while SOM-DST achieves much higher joint goal accuracy.", "This shows the efficiency of the proposed selectively overwriting mechanism of SOM-DST, which generates only the minimal slot values at a turn.", "In Appendix B, we also investigate the Inference Time Complexity (ITC) proposed in the work of Ren et al. (2019), which defines the efficiency of a DST model using $J$, the number of slots, and $M$, the number of values of a slot.", "We propose SOM-DST, an open vocabulary-based dialogue state tracker that regards dialogue state as an explicit memory that can be selectively overwritten.", "SOM-DST decomposes dialogue state tracking into state operation prediction and slot value generation.", "This setup makes the generation process efficient because the values of only a minimal subset of the slots are generated at each dialogue turn.", "SOM-DST achieves state-of-the-art joint goal accuracy on both the MultiWOZ 2.0 and MultiWOZ 2.1 datasets in an open vocabulary-based setting.", "SOM-DST effectively makes use of the explicit dialogue state and discrete operations to perform relatively robust DST even in complicated conversations.", "Further analysis shows that improving state operation prediction has the potential to increase the overall DST performance dramatically.", "From this result, we propose that tackling DST with our proposed problem definition is a promising future research direction.", "The authors would like to thank the members of Clova AI for proofreading this manuscript." ]
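The selective-overwrite state update reconstructed in the sentence list above (the four-operation case equation for $V_t^j$) amounts to a small piece of control flow. The following Python sketch illustrates it; this is a hedged illustration rather than the authors' released code, and the dictionary-based state, the operation strings, and the generate_value callback are hypothetical stand-ins for the model components.

    NULL, DONTCARE = "[NULL]", "dontcare"

    def apply_state_operations(prev_state, operations, generate_value):
        # prev_state: dict mapping slot -> previous value V_{t-1}^j
        # operations: dict mapping slot -> predicted operation r_t^j
        # generate_value: invoked only for UPDATE slots (the slot value generator)
        new_state = {}
        for slot, prev_value in prev_state.items():
            op = operations[slot]
            if op == "CARRYOVER":
                new_state[slot] = prev_value            # keep the previous value
            elif op == "DELETE":
                new_state[slot] = NULL                  # reset to the special NULL value
            elif op == "DONTCARE":
                new_state[slot] = DONTCARE              # stop tracking this slot
            else:  # "UPDATE"
                new_state[slot] = generate_value(slot)  # decode a new value v
        return new_state

Only the UPDATE branch triggers decoding, which is exactly why the number of values generated per turn is J'_t = |U_t| rather than J.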
[ "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "objective", "objective", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other" ]
[ "We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion.", "While contextual embeddings have been shown to yield richer representations of meaning compared to their static counterparts, aligning them poses a challenge due to their dynamic nature.", "To this end, we construct context-independent variants of the original monolingual spaces and utilize their mapping to derive an alignment for the context-dependent spaces.", "This mapping readily supports processing of a target language, improving transfer by context-aware embeddings.", "Our experimental results demonstrate the effectiveness of this approach for zero-shot and few-shot learning of dependency parsing.", "Specifically, our method consistently outperforms the previous state-of-the-art on 6 tested languages, yielding an improvement of 6.8 LAS points on average.", "1 1 Introduction Multilingual embedding spaces have been demonstrated to be a promising means for enabling crosslingual transfer in many natural language processing tasks (e.g. Ammar et al. (2016); Lample et al. (2018)).", "Similar to how universal part-of-speech tags enabled parsing transfer across languages (Petrov et al., 2012), multilingual word embeddings further improve transfer capacity by enriching models with lexical information.", "Since this lexical representation is learned in an unsupervised fashion and thus can leverage large amounts of raw data, it can capture a more nuanced representation of meaning than unlexical-ized transfer.", "Naturally, this enrichment is transEqual contribution 1 Code and models: https://github.com/ TalSchuster/CrossLingualELMo .", "in low-resource scenarios (Guo et al., 2015).", "In this paper, we are moving further along this line and exploring the use of contextual word embeddings for multilingual transfer.", "By dynamically linking words to their various contexts, these embeddings provide a richer semantic and syntactic representation than traditional context-independent word embeddings (Pe-ters et al., 2018).", "A straightforward way to utilize this richer representation is to directly apply existing transfer algorithms on the contextual embeddings instead of their static counterparts.", "In this case, however, each token pair is represented by many different vectors corresponding to its spe-cific context.", "Even when supervision is available in the form of a dictionary, it is still unclear how to utilize this information for multiple contextual embeddings that correspond to a word translation pair.", "In this paper, we propose a simple but effective mechanism for constructing a multilingual space of contextual embeddings.", "Instead of learning the alignment in the original, complex contextual space, we drive the mapping process using context-independent embedding anchors.", "We obtain these anchors by factorizing the contextual embedding space into context-independent and context-dependent parts.", "Operating at the anchor level not only compresses the space, but also enables us to utilize a word-level bilingual dictionary as a source of supervision, if available.", "Once the anchor-level alignment is learned, it can be readily applied to map the original spaces with contextual embeddings.", "Clearly, the value of word embeddings depends on their quality, which is determined by the amount of raw data available for their training (Jiang et al., 2018).", "We are interested in expanding the above approach to the truly low-resource scenario, where a 
language not only lacks annotations, but also has limited amounts of raw data.", "In this case, we can also rely on a data-rich language to stabilize monolingual embeddings of the resource-limited language.", "As above, context-independent anchors inform this process.", "Specifically, we introduce an alignment component to the loss function of the language model, pushing the anchors to be closer in the joint space.", "While this augmentation is performed on the static anchors, the benefit extends to the contextual embedding space in which we operate.", "We evaluate our aligned contextual embeddings on the task of zero-shot cross-lingual dependency parsing.", "Our model consistently outperforms previous transfer methods, yielding an absolute improvement of 6.8 LAS points over the prior state-of-the-art (Ammar et al., 2016).", "We also perform comprehensive studies of simplified variants of our model.", "Even without POS tag labeling or a dictionary, our model performs on par with context-independent models that do use such information.", "Our results also demonstrate the benefits of this approach for few-shot learning, i.e. processing languages with limited data.", "Specifically, on the Kazakh treebank from the recent CoNLL 2018 shared task with only 38 trees for training, the model yields a gain of 5 LAS points over the top result (Smith et al., 2018a).", "Multilingual Embeddings The topic of cross-lingual embedding alignment is an active area of research (Mikolov et al., 2013; Xing et al., 2015; Dinu and Baroni, 2014; Lazaridou et al., 2015; Zhang et al., 2017).", "Our work most closely relates to MUSE (Conneau et al., 2018a), which constructs a multilingual space by aligning monolingual embedding spaces.", "When a bilingual dictionary is provided, their approach is similar to those of (Smith et al., 2017; Artetxe et al., 2017).", "MUSE extends these methods to the unsupervised case by constructing a synthetic dictionary.", "The resulting alignment achieves strong performance in a range of NLP tasks, from sequence labeling (Lin et al., 2018) to natural language inference (Conneau et al., 2018b) and machine translation (Lample et al., 2018; Qi et al., 2018).", "Recent work further improves the performance in both the supervised (Joulin et al., 2018) and unsupervised (Grave et al., 2018b; Alvarez-Melis and Jaakkola, 2018; Hoshen and Wolf, 2018) settings for context-independent embeddings.", "While MUSE operates over token-based embeddings, we are interested in aligning contextual embeddings, which have shown their benefits in several monolingual applications (Peters et al., 2018; McCann et al., 2017; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018).", "However, this expansion introduces new challenges which we address in this paper.", "In a concurrent study, Aldarmaki and Diab (2019) introduced an alignment that is based only on word pairs in the same context, using parallel sentences.", "Our method achieves better word translations without relying on such supervision.", "Our work also relates to prior approaches that utilize bilingual dictionaries to improve embeddings that were trained on small datasets.", "For instance, Xiao and Guo (2014) represent word pairs as a mutual vector, while Adams et al.
(2017) jointly train cross-lingual word embeddings by replacing the predicted word with its translation.", "To utilize a dictionary in the contextualized case, we include a soft constraint that pushes those translations to be similar in their context-independent representation.", "A similar style of regularization was shown to be effective for cross-domain transfer of word embeddings (Yang et al., 2017).", "Multilingual Parsing In early work on multilingual parsing, transfer was commonly implemented using delexicalized representations such as part-of-speech tags (McDonald et al., 2011; Petrov et al., 2012; Naseem et al., 2012; Tiedemann, 2015).", "Another approach for cross-lingual parsing includes annotation projection and treebank translation (Xiao and Guo, 2015; Wang and Eisner, 2016; Tiedemann, 2017), which mostly require some source of supervision.", "Advancements in multilingual word representations opened up the possibility of lexicalized transfer.", "Some of these approaches start by aligning monolingual embedding spaces (Zhang and Barzilay, 2015; Guo et al., 2015, 2016; Ammar et al., 2016), and using the resulting word embeddings as word representations instead of universal tags.", "Other approaches learn customized multilingual syntactic embeddings bootstrapping from universal POS tags (Duong et al., 2015).", "While some models also learn a language embedding (Ammar et al., 2016; de Lhoneux et al., 2018), this is infeasible in a zero-shot scenario.", "In all of the above cases, token-level embeddings are used.", "Inspired by the strong results of using contextualized embeddings in monolingual parsing (Che et al., 2018; Wang et al., 2018; Clark et al., 2018), we aim to utilize them in the multilingual transfer case.", "Our results demonstrate that a richer representation of the lexical space does lead to significant performance gains.", "In this section we describe several approaches for aligning context-dependent embeddings from a source language $s$ to a target language $t$.", "We address multiple scenarios, where different amounts of supervision and data are present.", "Our approach is motivated by interesting properties of context-dependent embeddings, which we discuss later.", "We begin with some notations: Context Dependent Embeddings: Given a context $c$ and a token $i$, we denote the embedding of $i$ in the context $c$ by $e_{i,c}$.", "We use $e_{i,\cdot}$ to denote the point cloud of all contextual embeddings for token $i$.", "Embedding Anchor: Given a token $i$ we denote the anchor of its context-dependent embeddings by $\bar{e}_i$, where: $\bar{e}_i = \mathbb{E}_c[e_{i,c}]$ (1).", "In practice, we calculate the average over a subset of the available unlabeled data.", "Shift From Mean: For any embedding $e_{i,c}$ we can therefore define the shift $\hat{e}_{i,c}$ from the average via: $e_{i,c} = \bar{e}_i + \hat{e}_{i,c}$ (2).", "Formally, our alignment is always of the following form: $e_{i,c}^{s \to t} = W^{s \to t} e_{i,c}^s$ (3).", "A given token $i$ can generate multiple vectors $e_{i,c}$, each corresponding to a different context $c$.", "A key question is how the point cloud $e_{i,\cdot}$ is distributed.", "In what follows we explore this structure, and reach several conclusions that will motivate our alignment approach.", "The following experiments are performed on ELMo (Peters et al., 2018).", "Point Clouds are Well Separated A cloud $e_{i,\cdot}$ corresponds to occurrences of the word $i$ in different contexts.", "Intuitively, we would expect its points to be closer to each other than to points from $e_{j,\cdot}$ for a different word $j$.", "Indeed, when measuring similarity between points $e_{i,c}$ and their anchor $\bar{e}_i$, we find that these are much more similar than anchors of different words $\bar{e}_i$ and $\bar{e}_j$ (see Table 1).", "This observation supports our hypothesis that anchor-driven alignment can guide the construction of the alignment for the contextual space.", "A visualized example of the contextualized representations of four words is given in Figure 1, demonstrating the appropriateness of their anchors.", "Still, as previous studies have shown, and", "[Figure 2: Contextual embeddings for the English word bear and its two possible translations in Spanish, oso (animal) in blue and tener (to have) in red.]", "Homonym Point Clouds are Multi-Modal When a word $i$ has multiple distinct senses, we might expect the embeddings for $i$ to reflect this by separating into multiple distinct clouds, one for each meaning.", "Figure 2 demonstrates that this indeed happens for the English word bear.", "Furthermore, it can be seen that after alignment (Section 3.3) with Spanish, the distinct point clouds are aligned with their corresponding distinct words in Spanish.", "See App. D for another example.", "We examined the shift from mean for a list of 250 English homonyms from Wikipedia. [Footnote 2: https://en.wikipedia.org/wiki/List_of_true_homonyms]", "As Table 1 shows, the shift of these words is indeed slightly higher than it is for other words.", "However, they still remain relatively close to their per-token anchor.", "Therefore, these anchors can still serve as a good approximation for learning alignments.", "We begin by briefly reviewing previous approaches for aligning context-independent embeddings, as they are generalized in this work to the contextual case.", "We denote the embedding of a word $i$ by $e_i$.", "At first, assume we are given word pairs $\{(e_i^s, e_i^t)\}$ from a source language $s$ and a target language $t$, and we look for a mapping between those.", "Mikolov et al. (2013) proposed to learn a linear transformation whereby $e_i^t$ is approximated via $W e_i^s$, for a learned matrix $W$.", "We focus on methods that follow this linear alignment.", "The alignment matrix is found by solving: $W^{s \to t} = \mathrm{argmin}_{W \in O_d(\mathbb{R})} \sum_{i=1}^{n} \| W e_i^s - e_i^t \|^2$ (4), where $O_d(\mathbb{R})$ is the space of orthogonal matrices.", "This constraint was proposed by Xing et al. (2015) in order to preserve inter-lingual relations.", "Under this constraint, Eq. 4 is an instance of the orthogonal Procrustes problem, which has a closed-form solution $W^{s \to t} = UV^\top$.", "The columns of $U$ and $V$ are the left and right singular vectors of the multiplication of the source and (transposed) target embedding matrices.", "For the unsupervised case (i.e. when a dictionary is absent), Conneau et al.
(2018a) (MUSE) suggested learning the alignment via adversarial training, such that a discriminator is trained to distinguish between target and aligned source embeddings.", "Thereafter, a refinement procedure is applied iteratively as follows.", "First, a dictionary is built dynamically using the current alignment such that only words with high confidence are considered.", "Using the dictionary, the alignment matrix is re-calculated as in the supervised case.", "We next turn our attention to the main task of this paper, which is aligning context-dependent embeddings.", "We now describe our generalization of the methods described in Section 3.2 for this case.", "The first two methods are based only on anchors while the third one uses the contextual vectors themselves.", "Altogether, we suggest three alignment procedures, one aimed at the supervised and two at the unsupervised cases.", "Supervised Anchored Alignment As a first step, we assume access to a dictionary for the source and target domains.", "For each source word $i$ denote by $D(i)$ the corresponding word in the target language. [Footnote 3: In practice, we may have multiple target words for a single source word, and the extension is straightforward.]", "In the context-dependent case, Eq. 4 cannot be applied directly, since there is no longer a single vector per word but rather a cloud of context-dependent vectors for words.", "However, this challenge can be addressed by aligning the anchor vectors $\bar{e}_i$, for which we do have one per word.", "This is motivated by our observations in Section 3.1 that context-dependent embeddings are well clustered around their centers.", "Thus, we solve Eq. 4 with token anchors as inputs.", "We emphasize that by constraining $W^{s \to t}$ to be orthogonal, we also preserve the relations between $e_{i,c}$ and $e_{i,c'}$ that represent the contextual information.", "Unsupervised Anchored Alignment In this setting, no dictionary is present.", "As in the supervised case, we can naturally extend a context-independent alignment procedure to the contextual space by leveraging the anchor space $\bar{e}_i$.", "This can be done using the adversarial MUSE framework proposed by Conneau et al. (2018a) and described at the end of Section 3.2.", "Unsupervised Context-based Alignment Alternatively, the alignment could be learned directly on the contextual space.", "To this end, we again follow the adversarial algorithm of MUSE, but for each word we use multiple embeddings induced by different contexts, rather than the word anchor.", "This context-based alignment presents opportunities but also introduces certain challenges.", "On the one hand, it allows us to directly handle homonyms during the training process.", "However, empirically we found that training in this setting is less stable than with unsupervised anchored alignment.", "Refinement As a final step, for both of the unsupervised methods, we perform the refinement procedure that is incorporated in MUSE (end of Section 3.2).", "In order to synthesize a dictionary, we use distance in the anchor space.", "Thus far we assumed that embeddings for both source and target languages are pretrained separately.", "Afterwards, the source is mapped to the target in a second step via a learned mapping.", "However, this approach may not work well when raw data for the source language is scarce, resulting in deficient embeddings.", "In what follows, we show how to address this problem when a dictionary is available.", "We focus on embeddings that are learned using a language model objective, but this can be easily generalized to other objectives as well.", "Our key idea is to constrain the embeddings across languages such that word translations will be close to each other in the embedding space.", "This can serve as a regularizer for the resource-limited language model.", "In this case, the anchors are the model representations prior to its context-aware components (e.g., the inputs to ELMo's LSTM).", "Denote the anchor for word $i$ in language $s$ by $v_i^s$.", "Now, assume we have trained a model for the target language and similarly have embeddings $v_i^t$.", "We propose to train the source model with an added regularization term as follows: $\lambda_{anchor} \sum_i \| v_i^s - v_{D(i)}^t \|_2^2$ (5), where $\lambda_{anchor}$ is a hyperparameter.", "This regularization has two positive effects.", "First, it reduces overfitting by reducing the effective number of parameters the model fits (e.g., if the regularizer has a large coefficient, these parameters are essentially fixed).", "Second, it provides a certain level of alignment between the source and target language since they are encouraged to use similar anchors.", "Now that we have presented our method for aligning contextual embeddings, we turn to evaluate it on the task of cross-lingual dependency parsing.", "We first describe our baseline model, and then show how our alignment can easily be incorporated into this architecture to obtain a multilingual parser.", "Baseline Parser Most previous cross-lingual dependency parsing models used transition-based models (Ammar et al., 2016; Guo et al., 2016).", "We follow Che et al. (2018); Wang et al. (2018); Clark et al. (2018) and use a first-order graph-based model.", "Specifically, we adopt the neural edge-scoring architecture from Dozat and Manning (2017); Dozat et al.
(2017), which is based on Kiperwasser and Goldberg (2016).", "We now briefly review this architecture.", "Given a sentence $s$, let $e_i$ and $p_i$ be its word and POS-tag embeddings.", "These are concatenated and fed into a Bi-LSTM to produce token-level contextual representations $r_i$.", "Four Multi-Layer Perceptrons are applied on these vectors, resulting in new representations $h_i^{arc\text{-}dep}$, $h_i^{arc\text{-}head}$, $h_i^{rel\text{-}dep}$ and $h_i^{rel\text{-}head}$ for each word $i$.", "The score of an arc from head $i$ to dependent $j$ is then computed as $s_{ij}^{arc} = (h_i^{arc\text{-}head})^\top (U^{arc} h_j^{arc\text{-}dep} + b^{arc})$ (6).", "Additionally, the score for predicting the dependency label $r$ for an edge $(i,j)$ is defined as $s_{(i,j),r}^{rel} = (h_i^{rel\text{-}head})^\top U_r^{rel} h_j^{rel\text{-}dep} + (u_r^{rel\text{-}head})^\top h_i^{rel\text{-}head} + (u_r^{rel\text{-}dep})^\top h_j^{rel\text{-}dep} + b_r$ (7).", "At test time, the maximum spanning tree (MST) is calculated to ensure valid outputs.", "Multilingual Parsing with Alignment We now extend this model in order to effectively use it for transfer learning.", "First, we include contextualized word embeddings by replacing the static embeddings with a pre-trained ELMo (Peters et al., 2018) model (instead of $e_i$).", "Second, we share all model parameters across languages and use the contextual word embeddings after they are aligned to a joint space $J$.", "Formally, if $s$ is a sentence of language $\ell$, contextual word embeddings are obtained via: $e_{i,s}^{\ell \to J} = W^{\ell \to J} e_{i,s}$ (8), where $W^{\ell \to J}$ is the alignment matrix from language $\ell$ to the joint space. [Footnote 4: We use the space of the training language as our joint space and align the tested language to it. In the multi-source scenario, we align all embeddings to English.]", "This alignment is learned a priori and kept fixed during parser training.", "This setup is applicable for both single and multiple training languages.", "For the tested language, training data could be available, sometimes limited (few-shot), or absent (zero-shot).", "The alignment methods are described in detail in Section 3.", "In their paper, Peters et al. (2018) suggest outputting a linear combination over the representations of each layer of ELMo, learning these weights jointly with a downstream task.", "Our alignment is learned separately for each layer.", "Therefore, we keep the weights of the combination fixed during training to ensure that the parser's inputs are from the joint cross-lingual space.", "Alternatively, one can share the weights of the combination between the languages and learn them.", "Contextual Embeddings We use the ELMo model (Peters et al., 2018) with its default parameters to generate embeddings of dimension 1024 for all languages.", "For each language, the training data comprises Wikipedia dumps [Footnote 5: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1989] that were tokenized using UDPipe (Straka and Straková, 2017).", "We randomly shuffle the sentences and, following the setting of ELMo, use 95% of them for training and 5% for evaluation.", "Alignment We utilize the MUSE framework [Footnote 6: https://github.com/facebookresearch/MUSE/] (Conneau et al., 2018a) and the dictionary tables provided by them.", "The $\bar{e}_i$ (anchor) vectors for the alignment are generated by computing the average of representations on the evaluation set (except for the limited unlabeled data case).", "To evaluate our alignment, we use the anchors to produce word translations.", "For all experiments we use the 50k most common words in each language.", "Dependency Parsing We used the biaffine parser implemented in AllenNLP (Gardner et al., 2018), refactored to handle our modifications as described in Section 4. [Footnote 7: https://github.com/TalSchuster/allennlp-MultiLang]", "The parser is trained on trees from a single or multiple languages, as described in each setting (Section 6).", "For the multiple case, we randomly alternate between the available languages, i.e. at each iteration we randomly choose one language and sample a corresponding batch.", "Dropout (Srivastava et al., 2014) is applied on ELMo representations, Bi-LSTM representations and outputs of MLP layers.", "We also apply early stopping, where validation accuracy is measured as the average LAS score on the development set across all training languages.", "The parser hyperparameters are the same as Dozat et al. (2017) except we reduce the POS tag embedding size from 100 to 50 and increase the head/dependent MLP dimension from 400 to 500.", "All hyperparameter values used are listed in App. C.", "From experiments on the English treebank, we found that using the outputs of the first LSTM layer is as good as learning a combination. [Footnote 8: This was concurrently justified by Liu et al. (2019), showing that the first layer alone can perform better than a mixture.]", "Therefore, we fix the weights over the ELMo layers to [0, 1, 0], i.e. using only representations from the first LSTM layer.", "Evaluation Scenarios for Dependency Parsing For a fair comparison, we use the same setting as used by previous models for each scenario.", "Our main model (which we refer to as OURS) uses a SUPERVISED ANCHORED ALIGNMENT (Section 3.3) to align the multilingual pretrained ELMo embeddings, which are used by the parser.", "We compare against several variants of our model: ALIGNED FASTTEXT: instead of ELMo, we use FASTTEXT pretrained embeddings (Grave et al., 2018a), aligned to English using MUSE.", "ALIGNED $\bar{e}$: instead of contextualized embeddings, we use the anchors themselves as fixed embeddings, aligned to English.", "Alignment As mentioned above, we use outputs of the first LSTM layer of ELMo in our parsing experiments.", "Therefore, we present the alignment accuracy for those in Table 2, summarizing the precision@5 word translation from 6 languages to English.", "Results for the other layers are presented in App. A.", "As expected, supervised alignments outperform unsupervised ones by a large margin.", "Between the two unsupervised methods, the context-based alignment achieved significantly better results for Spanish and Portuguese but failed to converge for Swedish.", "In both cases, the value of anchors in the REFINE step is clear, substantially improving the precision for all languages.", "Zero-Shot Parsing, Multiple Source Languages Table 3 summarizes the results for our zero-shot, multi-source experiments on six languages from the Google universal dependency treebank version 2.0. [Footnote 9: https://github.com/ryanmcd/uni-dep-tb/]", "For each tested language, the parser was trained on all treebanks in the five other languages and English.", "We align each of the six languages to English.", "We compare our model to the performance of previous methods in the same setting (referred to as $L_t \cap L_s = \emptyset$ in Ammar et al.
(2016)).", "The results show that our multilingual parser outperforms all previous parsers with a large margin of 6.8 LAS points.", "Even with an unsupervised alignment, our model consistently improves over previous models.", "To make a fair comparison to previous models, we also use gold POS tags as inputs to our parser.", "However, for low-resource languages, we might not have access to such labels.", "Even without the use of POS tags at all, in five out of six languages the score is still higher than previous methods that do consider such annotations.", "An exception is the Portuguese language, where it leads to a drop of 8.8 LAS points.", "While in the single-language setting this good performance can be explained by the knowledge captured in the character-level, contextual embeddings (Smith et al., 2018b; Belinkov et al., 2017), the results suggest that this knowledge transfers across languages.", "In order to assess the value of contextual embeddings, we also evaluate our model using non-contextual embeddings produced by FASTTEXT (Bojanowski et al., 2017).", "While these improve over previous works, our context-aware model outperforms them for all six languages in UAS score and for 5 out of 6 languages in LAS score, obtaining an average score that is higher by 3 points.", "To further examine the impact of introducing context, we run our model with precomputed anchors ($\bar{e}$).", "Unlike FASTTEXT embeddings of size 300, these anchors share the same dimension with contextual embeddings but lack the contextual information.", "Indeed, the context-aware model is consistently better.", "Few-Shot Parsing, Small Treebanks In this scenario, we assume a very small treebank for the tested language and no POS tags.", "We use the Kazakh treebank from the CoNLL 2018 shared task (Zeman et al., 2018).", "The training set consists of only 38 trees and no development set is provided.", "Segmentation and tokenization are applied using UDPipe.", "Similar to Rosa and Mareček (2018); Smith et al. (2018a), we utilize the available training data in Turkish as it is a related language.", "To align contextual embeddings, we use a dictionary generated and provided by Rosa and Mareček (2018) and compute an alignment from Kazakh to Turkish.", "The dictionary was obtained using FastAlign (Dyer et al., 2013) on the OpenSubtitles2018 (Lison and Tiedemann, 2016) parallel sentences dataset from OPUS (Tiedemann, 2012). [Footnote 10: https://github.com/CoNLL-UD-2018/CUNI-x-ling]", "Table 5 summarizes the results, showing that our algorithm outperforms the best model from the shared task by 5.05 LAS points and improves by over 10 points over a FASTTEXT baseline.", "Zero-Shot Parsing, Limited Unlabeled Data To evaluate our anchored language model (Section 3.4), we simulate a low-resource scenario by extracting only 10k random sentences out of the Spanish unlabeled data.", "We also extract 50k sentences for LM evaluation but perform all computations, such as anchor extraction, on the 10k training data.", "For a dictionary, we used the 5k training table from Conneau et al. (2018a).", "Another table of size 1,500 was used to evaluate the alignment.", "In this scenario, we assume a single training language (English) and no use of POS tags nor any labeled data for the tested language.", "Table 4 shows the results.", "Reducing the amount of unlabeled data drastically decreases the precision by around 20 points.", "The regularization introduced in our anchored LM significantly improves the validation perplexity, leading to a gain of 7 UAS points and 9 LAS points.", "We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings of different languages, pretrained in an unsupervised fashion.", "At the core of our methods, we suggest using anchors for tokens, reducing this problem to context-independent alignment.", "Our methods are applicable both when a dictionary is present and when it is absent, as well as for low-resource languages.", "The acquired alignment can be used to improve cross-lingual transfer learning, gaining from the contextual nature of the embeddings.", "We show that these methods lead to good word translation results, and improve significantly upon state-of-the-art zero-shot and few-shot cross-lingual dependency parsing models.", "In addition, our analysis reveals interesting properties of the context-aware embeddings generated by the ELMo model.", "Those findings are another step towards understanding the nature of contextual word embeddings.", "We thank the MIT NLP group and the reviewers for their helpful discussion and comments.", "The first and third authors were supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "This work was also supported in part by the US-Israel Binational Science Foundation (BSF, Grant No. 2012330), and by the Yandex Initiative in Machine Learning." ]
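The anchor construction (Eq. 1) and the closed-form Procrustes solution of Eq. 4 used throughout the paper above are compact enough to sketch directly. The following is a minimal numpy illustration under assumed array layouts (dictionary-paired anchors stacked row-wise); it is not the released alignment code.

    import numpy as np

    def anchor(contextual_vectors):
        # contextual_vectors: (n_contexts, dim) embeddings of one token;
        # the anchor of Eq. 1 is simply their mean over contexts.
        return contextual_vectors.mean(axis=0)

    def procrustes(src_anchors, tgt_anchors):
        # src_anchors, tgt_anchors: (n, dim), row i holds a dictionary pair.
        # Orthogonal solution of Eq. 4: W = U V^T from the SVD of Y^T X,
        # so that W @ src_anchors[i] approximates tgt_anchors[i].
        u, _, vt = np.linalg.svd(tgt_anchors.T @ src_anchors)
        return u @ vt

Once W is computed, any source contextual embedding is mapped into the target space as W @ e, the form of Eq. 3; because W is orthogonal, the shifts from the anchors (the contextual information) are preserved.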
[ "objective", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "method", "result", "method", "method", "objective", "abstain", "other", "method", "other", "other", "other", "other", "abstain", "objective", "other", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "We introduce CARETS, a systematic test suite to measure consistency and robustness of modern VQA models through a series of six fine-grained capability tests.", "In contrast to existing VQA test sets, CARETS features balanced question generation to create pairs of instances to test models, with each pair focusing on a specific capability such as rephrasing, logical symmetry or image obfuscation.", "We evaluate six modern VQA systems on CARETS and identify several actionable weaknesses in model comprehension, especially with concepts such as negation, disjunction, or hypernym invariance.", "Interestingly, even the most sophisticated models are sensitive to aspects such as swapping the order of terms in a conjunction or changing the number of answer choices mentioned in the question.", "We release CARETS to be used as an extensible tool for evaluating multi-modal model robustness.", "1 1 Introduction The task of visual question answering integrates the domains of computer vision and NLP by probing models' understanding of images through natural language queries.", "After the introduction of the Visual Question Answering (VQA) benchmark (Antol et al., 2015), subsequent work identified the presence of several superficial correlations and other weaknesses latent in the VQA question gathering process (Goyal et al., 2017; Agrawal et al., 2018), which lead to potentially optimistic evaluations when considering accuracy alone.", "More recently developed benchmarks (Hudson and Manning, 2019; Goyal et al., 2017; Agrawal et al., 2018) explicitly avoid these weaknesses by introducing question, answer, and image balancing, or distributional shifts.", "While these 1 Source code, data, and additional resources may be found at https://github.com/princeton-nlp/CARETS efforts provide more difficult benchmarks, a thor-ough evaluation of model capabilities requires a deeper and more detailed approach.", "To this end, we introduce CARETS a Consistency And Robustness Evaluative Test Suite for visual question answering.", "Inspired by recent work in NLP that generates unit tests' for models (Ribeiro et al., 2020b), CARETS contains systematically generated VQA tests that evaluate six different capabilities that any VQA model should handle robustness to question rephrasings, ontological reasoning, symmetrical logic, visual perturbations, question negation, and attribute antonymy.", "Each test point in CARETS consists of a pair of instances which are small but strategic variations of each other either visually or in the question's text.", "This allows us to conduct fine-grained capability evaluations beyond just measuring high-level accuracy scores.", "Across tests, we generate over 190,000 instance pairs in total using a programmatic approach that fills in templates (from nearly 200 templates in total) using ground-truth scene graphs (Krishna et al., 2017) from the GQA (Hudson and Manning, 2019) validation split.", "We then evaluate six modern VQA models on each test using several metrics: overall accuracy, self-consistency and comprehensive accuracy .", "Self-consistency measures models' ability to maintain their answer across question variations, while comprehensive accuracy estimates their ability to answer all instance variants correctly.", "Our experiments reveal several interesting findings: (1) most modern VQA systems achieve only middling self-consistency ( 60-80%) which is further not always correlated with their accuracy, (2) all models struggle to comprehend the concept of negation (self-consistency 
"[Figure 1: Our consistency and robustness test suite (CARETS) consists of six tests, corresponding to six identified phenomena that VQA models should be robust to.]", "Moreover, even state-of-the-art models like LXMERT (Tan and Bansal, 2020) are highly sensitive to the type of questions (binary vs multi-choice) and the number of choices provided.", "These results reveal several shortcomings in modern VQA systems and hint at potential areas for future improvement.", "Going beyond our current discoveries, CARETS is an extensible framework that can easily accommodate new capability tests for fine-grained evaluation of future models.", "VQA evaluation.", "The textual, visual, and answer biases discovered in the VQA dataset (Antol et al., 2015) spurred on recent work seeking to improve model evaluation for the task by eliminating these biases (Goyal et al., 2017), introducing distributional shifts (Agrawal et al., 2018), requiring model explanations (Li et al., 2018), thoroughly analyzing biases in datasets and models (Manjunatha et al., 2019), or evaluating on different recognition subtasks (Kafle and Kanan, 2017).", "While debiased and challenging benchmarks are important, their focus on accuracy as the sole evaluation metric leaves much to be desired (Ribeiro et al., 2020a; Kervadec et al., 2020).", "In contrast, our testbed provides question or image pairs that compare models' predictions between questions, measuring their accuracy, consistency, and robustness to a variety of text and image perturbations.", "Synthetic Dataset Generation for VQA.", "One way in which we can generate diverse and balanced datasets is to generate them synthetically, as is done by (Johnson et al., 2015; Zhang et al., 2016; Hudson and Manning, 2019).", "Synthetically generating questions, images, or both allows fine control over the distribution of questions, answers, and images.", "Additionally, for our case, synthetic generation allows us to control not just the particular semantics of one question, but also how one question relates to another question in a precisely defined way (e.g. one question is a negation of another) while also remaining relevant and grounded in the image.",
"As both the CLEVR (Johnson et al., 2015) and GQA (Hudson and Manning, 2019) datasets use image scene graphs for question and label generation, they contain questions combining a variety of required capabilities, including compositionality.", "We feature real-world images with synthetically generated questions as well, but in contrast to GQA, our evaluation has instance pairs to systematically test a focused set of capabilities, showing that models may still struggle with simpler, non-compositional questions.", "Consistency as Model Comprehension.", "Some recent work has sought to evaluate models using consistency and other metrics (Hudson and Manning, 2019; Shah et al., 2019; Ribeiro et al., 2020a; Selvaraju et al., 2020; Bitton et al., 2021; Mouselinos et al., 2022).", "These evaluations often assess consistency through question entailment and implication, or simply through contrasting examples in the case of (Bitton et al., 2021).", "The concurrent work (Mouselinos et al., 2022) takes a unique approach, performing discrete visual perturbations while preserving the semantic integrity of a question; their work complements our own as it focuses exclusively on visual perturbations in a synthetic image setting.", "While we consider these previous methods important for evaluating model comprehension, they often combine question types and capabilities, changing the kind of expected answer, or evaluating consistency on a tree or set of entailed questions.", "Though ideally models would be consistent and robust for these more complex types of tests, our approach reveals that models can fail even on simple implications.", "Our goal is to provide a testbed for fine-grained evaluation of VQA models' capabilities.", "To do so, we generate multiple tests, each corresponding to a specific model capability.", "In contrast to standard VQA benchmarks (Antol et al., 2015; Goyal et al., 2017; Hudson and Manning, 2019), our test sets consist of pairs of original and perturbed instances $\langle (I_1, q_1, a_1), (I_2, q_2, a_2) \rangle$, each with an image, a question and an answer.", "The two instances within a pair differ from each other in a minimal yet carefully constructed way to hone in on a particular capability, similar to BLiMP (Warstadt et al., 2020).", "A model is then evaluated on its overall accuracy, self-consistency (ability to produce consistent, even if wrong, answers to both instances within a pair), and comprehensive accuracy (ability to answer consistently and correctly for an instance pair).", "Overall, we create six tests corresponding to key capabilities.", "We provide a high-level description of each test here and describe generation details in Section 4.", "First, we create four invariance tests (Footnote 2: Reusing terminology from Ribeiro et al. (2020b).) that use variations of the question phrasing and expect the model to produce the same answer to both questions within an instance pair: 1. Rephrasing invariance (REPHRASE-INV) measures the model's understanding of minor, meaning-preserving textual changes, e.g.: What color is the bottle on the shelf, white or blue? and Does the color of the bottle on the shelf seem more white or blue?", "2. Ontological invariance (ONTOLOGICAL-INV) measures understanding of ontology, e.g. changing a hyponym in: Do you see a green jacket? to a hypernym: Do you see any green clothes?",
"3. Order invariance (ORDER-INV) measures understanding of logically equivalent questions containing different argument orders, e.g.: Is the black vehicle a van or a truck? and Is the black vehicle a truck or a van?", "4. Visual obfuscation invariance (VISUAL-INV) measures the model's answering ability when parts of the image not directly relevant to the visual question are removed.", "Specifically, we explore blurring, masking and cropping techniques to modify the image.", "5. Attribute antonym directional expectation (ANTONYM-DIR) measures the model's understanding of antonyms, e.g., Do you think that the wood table is short? and Do you think that the wood table is long?", "6. Negation directional expectation (NEGATION-DIR) measures a model's grasp of negation, e.g., Are there any apples in this picture? and Are there no apples in this picture?", "Each of the six test datasets starts with the generation of 'original', unperturbed instances $(I_1, q_1, a_1)$ (Section 4.1).", "Then, for each such instance, we generate a variation $(I_2, q_2, a_2)$ by either perturbing the original question $q_1$ or image $I_1$ (Section 4.2).", "Further, each test set is composed of a diverse set of questions.", "These may broadly be grouped into verification (or binary) questions, with expected answers being either yes or no, and multi-choice questions, with expected answers derived from a list of objects or attributes provided in the question.", "Questions for each test are generated from question templates (examples for each are provided in Appendix A.1) grouped into the following types.", "Q1: Object verification (54 templates): e.g., Is", "Q2: Conjunctive verification (18 templates): e.g., Is there a <obj1> and a <obj2> in the image?", "Q3: Disjunctive verification (18 templates): e.g., Is there a <obj1> or a <obj2> in the image?", "Q4: Attribute verification (25 templates): e.g., Is the <obj> in the image <attr>?", "Q5: Object multi-choice (25 templates): e.g., Is the <obj-class>, <choices>?", "Q6: Attribute multi-choice (39 templates): e.g., What sort of <attr-class> is the <obj>, <choices>?", "Q7: Action multi-choice (28 templates): e.g., What is the <action-class> that the <obj> doing, <choices>?", "Words in typewriter font represent template arguments.", "Generally, each <obj> argument can be filled by a singular object (cup) or an attribute+object (red cup), while <attr> arguments are filled with singular attributes (shiny).", "For object verification (Q1), attribute verification (Q4), attribute multi-choice (Q6), and action multi-choice (Q7) questions, some templates let <obj> arguments be filled with an object related to another object (e.g. red cup on the table); this type is excluded from conjunctive verification (Q2) and disjunctive verification (Q3) questions to prevent the generation of verbose questions.", "For the multi-choice questions (Q5, Q6, Q7), <choices> are replaced with a list of 2 or 3 singular objects, attributes, or actions respectively (e.g. cow or horse, or wood, metal, or plastic).", "The <obj-class> argument is filled with a hypernym of all object choices and always appears with either an attribute or a related object (black animal, animal eating grass).", "The <attr-class> argument is filled with the attribute category of all attribute choices (e.g. material).", "Finally, the <action-class> argument is filled with the action category of all action choices (e.g. sport).",
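The template filling described above is straightforward to prototype. A minimal sketch with illustrative stand-in templates and argument pools, not the suite's actual ~200 templates:

```python
import random

# Toy stand-ins for the paper's template pools (illustrative only).
DISJUNCTIVE_TEMPLATES = [
    "Is there a {obj1} or a {obj2} in the image?",
    "Does the image contain a {obj1} or a {obj2}?",
]
MULTI_CHOICE_TEMPLATES = [
    "What sort of {attr_class} is the {obj}, {choices}?",
]

def render_choices(options):
    # "wood, metal, or plastic" for 3 options, "cow or horse" for 2.
    if len(options) == 2:
        return " or ".join(options)
    return ", ".join(options[:-1]) + ", or " + options[-1]

def fill_disjunctive(obj1, obj2, rng=random):
    # Fill a Q3-style template with two object arguments.
    return rng.choice(DISJUNCTIVE_TEMPLATES).format(obj1=obj1, obj2=obj2)

def fill_multi_choice(attr_class, obj, options, rng=random):
    # Fill a Q6-style template with an attribute class and choice list.
    return rng.choice(MULTI_CHOICE_TEMPLATES).format(
        attr_class=attr_class, obj=obj, choices=render_choices(options))

print(fill_disjunctive("black cat", "red cup"))
print(fill_multi_choice("material", "table", ["wood", "metal", "plastic"]))
```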
"Question argument generation.", "The question arguments are generated using images from the validation split of the GQA dataset (Hudson and Manning, 2019), which contains 10,696 images manually annotated with 1,536 different objects and 603 different object attributes.", "For binary question types this results in questions with solely affirmative answers.", "To produce an answer-balanced dataset, we run a second stage of question argument generation for binary questions to generate plausible negative questions with false objects or attributes.", "We sample false objects from a distribution conditioned on an image's objects, and optionally sample object attributes from a distribution conditioned on the chosen object.", "For <choices> arguments, false choices are again generated from a distribution conditioned on the object's hypernym for Q5 questions, the attribute category for Q6 questions, or the action category for Q7 questions.", "We additionally ensure that the generated choices are mutually exclusive (e.g. tan or beige would be an invalid generation).", "To get more diverse multi-choice questions, we first generate a large pool of question candidates, and then select only a small number of questions sampled from this pool with sample probabilities inversely proportional to the count of the questions' hypernym, attribute class, or action class, and the count of the generated answer.", "Question argument refinement.", "To improve the reliability of generated questions, we apply a variety of checks and constraints.", "For example, when sampling false objects from the conditional distribution, we filter out all objects (and their hypernyms (Footnote 3)) present in the scene graph in order to guarantee that the sampled object is truly not present.", "We also filter out question arguments that are not included in the image scene graph but are sub-parts of objects that are annotated (e.g., tire when a car is annotated).", "Finally, we enforce various logical constraints on question arguments to prevent trivial or malformed questions.", "For example, for conjunctive and disjunctive questions (Q2, Q3), we apply a hypernym exclusion constraint to prevent questions like Is there a black cat and a black animal in the image?", "We now describe our procedure for creating perturbed instances $(I_2, q_2, a_2)$ for the six tests.", "In all tests except visual obfuscation, the image remains unchanged, i.e. $I_2 = I_1$.", "(Footnote 3: We use an ontology with hypernym paths generated with WordNet (Miller, 1995). We manually review and revise the default synset annotations from Visual Genome (Krishna et al., 2017) for the entire object vocabulary, and compare to a sample of annotated images to ensure general validity.)", "(a) Rephrasing invariance.", "Since each original question $q_1$ was generated using a text template, we simply use a different template of the same type to generate a valid rephrasing $q_2$.", "The image and answer remain the same, i.e. $I_1 = I_2$, $a_1 = a_2$, and the model is expected to be invariant to this rephrasing.", "We apply this to Q1, Q2, Q3, Q5, Q6 and Q7.", "(b) Ontological invariance.", "Here, we use object verification questions (Q1) only and perform two types of transformations.", "For positive questions (i.e. $a_1$ = yes), we filter question arguments to only include objects that are valid hyponyms (using WordNet again) and use those to generate a perturbed question $q_2$ by changing the hyponym to a hypernym.", "For example, $q_1$ = Do you see a jogging woman? with $a_1$ = yes is paired with $q_2$ = Do you see a jogging person? containing a hypernym.",
"Similarly, for negative questions ($a_1$ = no), we filter question arguments to only include valid hypernyms and generate a $q_2$: thus, for example, $q_1$ = Do you see a jogging person? with $a_1$ = no is paired with $q_2$ = Do you see a jogging woman? containing a hyponym, with $a_2$ = no as well.", "(c) Order invariance.", "Order invariance tests apply to conjunctive verification, disjunctive verification, and all multi-choice question types; models are expected to be invariant to the logical order of arguments.", "We perturb conjunctive verification and disjunctive verification questions by swapping the questions' first and second arguments (<obj1>, <obj2>).", "For multi-choice question types, we perturb instances by generating the <choices> argument with different orders.", "The answer remains the same in both cases by construction.", "(d) Visual obfuscation invariance.", "For this test, we let $q_1 = q_2$ and $a_1 = a_2$ but generate a perturbed image $I_2$ by obscuring parts of $I_1$ that are irrelevant to the question at hand using bounding box annotations from Visual Genome (Krishna et al., 2017).", "For all true objects in a question, we consider the bounding boxes around these object(s) to be the foreground and all other pixels in the image to be the background.", "For negative verification questions asking about object(s) not present in the image, we select one (or two) random object bounding box(es) as the foreground and consider everything else to be the background, since focusing on any image region should not affect the model's answer. (Footnote 4)", "We then apply five types of perturbations to obscure the background: (i-iii) Gaussian blurring using the soft masking method of (Yang et al., 2021) with light ($\sigma$ = 3), medium ($\sigma$ = 6), or heavy ($\sigma$ = 9) blurring,", "(iv) Masking with the channel-wise averaged pixel value from the GQA (Hudson and Manning, 2019) training dataset, entirely obscuring the context, and", "(v) Cropping to the smallest rectangular region including the foreground.", "Example images are shown in Appendix A.2.", "(e) Negation directional expectation.", "For the negation directional test, we use object verification, conjunctive verification, and disjunctive verification questions.", "Each question $q_1$ is perturbed by substituting the original's text template with a paired negated text template to create $q_2$.", "Since each perturbed question represents the negation of the original, the expected answers satisfy $a_1 \neq a_2$.", "(f) Attribute antonym directional expectation.", "We perturb the generated attribute verification questions by changing the <attr> question argument to its antonym using WordNet.", "All attribute antonym relations are manually curated to remove unintuitive examples; questions with arguments without a valid antonym are discarded.", "The original and perturbed questions of a pair have opposite answers, $a_1 \neq a_2$.", "To assess the quality, difficulty and validity of the generated tests, we sample 100 question pairs (200 questions) from each question type for the 6 tests and procure 5 annotations per question from workers on Amazon Mechanical Turk.", "Workers are vetted for a 97% approval rating and a minimum of 500 completed HITs.", "Workers take 2 minutes per task on average and are compensated $0.50 per task, and thus $15 per hour.", "Each HIT includes 24 questions in total, including 4 verification questions (Footnote 5), and typically includes a variety of question types from each of our tests.",
"(Footnote 4: We choose 32×32 as a minimum bounding box size, shown to be reliably recognized by humans (Torralba et al., 2008).)", "(Footnote 5: Tasks are also interspersed with binary or multi-choice gold-standard questions with perfect annotator agreement from the VQA dataset (Antol et al., 2015), which are required to be answered correctly before a HIT can be submitted.)", "Workers are given the opportunity to correct answers before submitting if a gold-standard question has been answered incorrectly.", "Human agreement.", "In addition to yes and no for binary questions and the appropriate choices for multi-choice questions, all questions offer an ambiguous option.", "Human answers are the majority vote among the 5 workers; questions failing to reach a majority or with ambiguous as the majority are always counted against accuracy.", "This process is inspired by the human evaluations of implied question pairs in Ribeiro et al. (2020a).", "We report both human and model performance in Section 6.", "5.2 Evaluated models.", "We evaluate six recent models on our tests, and compare them to human accuracy.", "Models are trained on the GQA (Hudson and Manning, 2019) balanced training split (using hyperparameters suggested in the original papers).", "All models except LXMERT (Footnote 6) are trained and finetuned using the MMF (Singh et al., 2020) library and region-of-interest (RoI) features from Faster R-CNN (Ren et al., 2015) with a ResNeXt-152 (Xie et al., 2017) backbone pre-trained on the Visual Genome (Krishna et al., 2017) dataset for object-based models.", "More details are provided in Appendix A.3.", "Model initialization and pre-training.", "Of the six models evaluated, a defining characteristic of each model relates to its initialization, image-encoding choice, and the use of multi-modal pretraining.", "Our most basic model (CNN+LSTM) is randomly initialized and uses no pre-trained components; however, GloVe (Pennington et al., 2014) word embeddings are used for representing tokens.", "Another class of models uses pre-trained image encoders to extract object features from images.", "Of our models, BAN (Kim et al., 2018) is randomly initialized prior to training but ingests pre-trained Faster R-CNN features, which should provide the model with enhanced visual capabilities over the CNN+LSTM model.", "MultiModal BiTransformer (MMBT) (Kiela et al., 2019) uses similar pre-trained image features as BAN but is further initialized with pre-trained BERT (Devlin et al., 2019) weights prior to training on GQA.", "The last class of models are multi-modal pretrained models: those that use pre-trained image features and are pre-trained on multi-modal tasks, such as image-based masked language modeling.", "Models in this class include LXMERT (Tan and Bansal, 2020), ViLBERT (Lu et al., 2019), and VisualBERT (Li et al., 2019). (Footnote 6: For LXMERT we use the authors' open-source repository at https://github.com/airsplay/lxmert.)", "Similar to MMBT, ViLBERT and VisualBERT are also initialized with pre-trained weights from BERT.", "Accuracy (ACC).", "On our test datasets with K paired instances, we define accuracy as $\text{ACC} = \frac{1}{2K} \sum_{i=1}^{K} \left( \mathbb{1}[\hat{a}^i_1 = a^i_1] + \mathbb{1}[\hat{a}^i_2 = a^i_2] \right)$, where the model answers $\hat{a}^i_1, \hat{a}^i_2$ on the original and perturbed questions respectively are compared to the ground truth answers $a^i_1, a^i_2$.", "Self-consistency (CONS).", "We measure the self-consistency of the model predictions across the original and perturbed questions as $\text{CONS} = \frac{1}{K} \sum_{i=1}^{K} \begin{cases} \mathbb{1}[\hat{a}^i_1 = \hat{a}^i_2] & \text{on invariance tests} \\ \mathbb{1}[\hat{a}^i_1 \neq \hat{a}^i_2] & \text{on directional expectation tests} \end{cases}$", "Note that this metric only measures the internal consistency of the model and does not involve the ground truth answers $a^i_1, a^i_2$.", "Comprehensive accuracy (C-ACC).", "We define comprehensive accuracy as $\text{C-ACC} = \frac{1}{K} \sum_{i=1}^{K} \mathbb{1}[\hat{a}^i_1 = a^i_1 \wedge \hat{a}^i_2 = a^i_2]$, measuring whether model predictions are both accurate and self-consistent across perturbations.",
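The three metric definitions above translate directly into code. A small sketch, assuming predictions and gold answers are supplied as (prediction, gold) tuples for each instance pair:

```python
def carets_metrics(pairs, invariance=True):
    """Compute ACC, CONS and C-ACC over instance pairs.

    `pairs` is a list of ((pred1, gold1), (pred2, gold2)) tuples for
    the original and perturbed instances; `invariance` selects between
    invariance tests (answers should match) and directional
    expectation tests (answers should differ).
    """
    K = len(pairs)
    acc = sum((p1 == g1) + (p2 == g2)
              for (p1, g1), (p2, g2) in pairs) / (2 * K)
    if invariance:
        cons = sum(p1 == p2 for (p1, _), (p2, _) in pairs) / K
    else:
        cons = sum(p1 != p2 for (p1, _), (p2, _) in pairs) / K
    c_acc = sum(p1 == g1 and p2 == g2
                for (p1, g1), (p2, g2) in pairs) / K
    return acc, cons, c_acc

# Toy usage on a rephrasing-invariance test with two pairs.
pairs = [(("yes", "yes"), ("yes", "yes")), (("no", "yes"), ("yes", "yes"))]
print(carets_metrics(pairs, invariance=True))  # (0.75, 0.5, 0.5)
```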
"(R1) Models are not robust on the invariance and directionality tests.", "Figure 2 details the performance of various models under our suite of tests.", "[Figure 2: ACC and C-ACC across all six tests.]", "Each bar in the figure shows both ACC and C-ACC for the model, with the arrow representing the gap between the two.", "We first observe that all models achieve significantly lower performance (at least an 8% drop) compared to humans (grey).", "Even simple tests such as REPHRASE-INV (Figure 2(a)) prove to be quite challenging, with models managing < 68% ACC compared to humans' 86%.", "On tests like NEGATION-DIR (Figure 2(e)), models only get about 50% accuracy, substantially worse than human scores of 78%.", "Moreover, C-ACC is considerably lower than ACC across the board, with as much as a 35% gap on NEGATION-DIR and 14.5% on ANTONYM-DIR tests, even for a state-of-the-art model like LXMERT. (Footnote 7: Other modern systems like ViLBERT and VisualBERT ...)", "Even though this gap is smaller on other tests like REPHRASE-INV or VISUAL-INV, the performance drop is still at least 6-7% in most cases.", "This means that models are not invariant to textual rephrasings of questions and do not have a strong grasp of concepts like attributes and negation, despite negation of attributes appearing in the GQA training dataset.", "(R2) VQA systems are not self-consistent in their predictions.", "Table 1 shows the self-consistency scores for all models under our different tests.", "While humans achieve CONS > 88% in all the tests, VQA models are much worse: at least 6% lower CONS in all cases, with the best performing model (LXMERT) still 26% lower than human performance on average across tests.", "Scores are especially low on the directional tests (antonym and negation), which have even worse C-ACC.", "This means that models are confused in their decisions simply by the addition of negation words; this hints at issues of overfitting to spurious features without understanding the presence or absence of specific concepts, corroborating the findings of (Bitton et al., 2021).", "Interestingly, the best performing model (LXMERT) is not always the most consistent.", "Furthermore, there is no single model that is the most self-consistent, with LXMERT, ViLBERT and VisualBERT each returning the highest consistency scores on different tests.", "(R3) Models are more robust to hyponym than hypernym variations.", "Breaking out the results on the ontological invariance test (Figure 2(c)) in the last two columns of Table 2, we see that self-consistency is higher on the hyponym perturbations (on negative-answer questions) than on hypernym perturbations (positive questions); this effect is particularly noticeable for MMBT and ViLBERT, with a 19% and 15% difference, respectively.", "[Table 3: Conjunctive vs. disjunctive comprehensive accuracy on ORDER-INV, along with a breakdown of response rates for yes (Y), no (N) and other than yes/no (O); cells are ACC, Y, N, O. Conjunction: BAN 52, 53, 47, 0; CNN+LSTM 39, 53, 47, 0; LXMERT 78, 49, 51, 0; MMBT 56, 50, 50, 0; ViLBERT 58, 54, 46, 0; VisualBERT 59, 49, 51, 0. Disjunction: BAN 52, 71, 28, 0; CNN+LSTM 35, 65, 35, 0; LXMERT 56, 59, 32, 9; MMBT 55, 63, 34, 2; ViLBERT 56, 79, 21, 0; VisualBERT 57, 70, 30, 0.]",
"Thus, when an object is not detected in an image, its hyponym elicits a negative response as expected; however, when an object (like steak) is detected, the hypernym question (Is there any meat in the image?) may trip the model into generating a negative response.", "This points to the need for more structured, hierarchical grounding of concepts in these models.", "(R4) Models perform better on conjunctive than on disjunctive tests.", "From Table 3, we note that models generally have higher accuracy on conjunctive than on disjunctive tests, with the largest discrepancy for LXMERT at 81% accuracy on conjunctive tests vs only 62% on disjunctive.", "Many models seem to exhibit a strong positive bias for disjunctive questions, suggesting they may just be short-cutting to answering 'yes' for disjunctive questions.", "LXMERT also seems to frequently confuse disjunctive questions for an open-ended or multi-choice question.", "(R5) Multi-choice questions are harder than binary ones.", "Table 4 provides a breakdown of LXMERT's scores for binary and multi-choice questions.", "It is evident that multi-choice questions are harder for the model, with self-consistency dropping by 16% between binary and multi-choice questions, and C-ACC dropping by 33%.", "This is surprising since the multi-choice questions only include two or three choices and hence are quite similar to the binary (yes/no) questions.", "This may indicate a bias in the models towards binary questions with simple answers.", "Furthermore, Table 4 also shows that models consistently perform worse on 3-choice questions than 2-choice ones, with even the top-performing LXMERT having a 7% drop from 62% to 55%.", "This hints that there may be some effect of randomness in the way these models pick their answers.", "In contrast, and as expected, humans are robust to the number of choices.", "(R6) Visual perturbations are easier for models to deal with.", "From Figure 2 and Table 1, we notice that the models are slightly more robust to visual perturbations on average compared to the lexical ones.", "All models only have a drop of 4-8% from ACC to C-ACC, while the self-consistency of all models is also 78% or higher.", "Appendix A.2 provides a more detailed breakdown of all the different visual perturbation tests we performed.", "(R7) Direct data augmentation improves performance on CARETS.", "We show the feasibility of high performance on CARETS through data augmentation.", "We add 95,000 questions generated from CARETS question templates, using a similar distribution of question types, to the original training split of GQA and re-train the LXMERT model.", "Table 5 shows that this dramatically improves the model on all three metrics (ACC, self-consistency and C-ACC), with the LXMERT (Augmented) model achieving near-human performance on tests like ORDER-INV and ANTONYM-DIR.", "Since CARETS is designed to be an evaluation suite, these numbers show that CARETS questions should generally be within the capabilities of existing SOTA models, provided that they are able to generalize appropriately.", "In this work, we have developed CARETS, a new test suite for capability-focused robustness testing and comprehension evaluation of visual question answering (VQA) models.", "CARETS consists of six different tests that use instance pairs to evaluate models on their understanding of various linguistic and visual concepts.", "Using this test suite, we evaluated six modern VQA systems to reveal several inconsistencies in state-of-the-art models and provide a fine-grained view of their comprehension of different visuo-linguistic concepts.",
"Quite surprisingly, we find that even state-of-the-art models struggle with concepts like negation, disjunction, order invariance and multi-choice questions.", "CARETS can also support the addition of more tests in the future, and we view it as a platform for continually stress-testing and improving VQA models.", "CARETS emulates previous work in using text templates to generate questions and their textual perturbations (Hudson and Manning, 2019; Johnson et al., 2017; Ribeiro et al., 2020b).", "The use of templates to generate perturbations is motivated by the desire to maintain the grounded integrity of generations, ensuring that they remain relevant and that the generated label is true in the context of the subject image.", "While we have sought to generate a diverse language set by using a large number of templates (nearly 200 in total), there are some limitations to this approach.", "An improvement to our approach may be able to generate more sophisticated questions and perturbations through conditional text generation (Schick and Schütze, 2021; Madaan et al., 2021; Wu et al., 2021) while also preserving our other motivations, such as atomicity and grounded relevancy.", "This material is based upon work supported by the National Science Foundation under Grant No. 2107048.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", "We would also like to thank Tianyu Gao, Austin W. Hanjie and Alexander Wettig for their valuable feedback and advice." ]
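Several of the perturbations in the record above are pure string-level rewrites of templated questions. A minimal sketch of two of them, order swapping (ORDER-INV, answer preserved) and template negation (NEGATION-DIR, answer flipped); the template pairing shown is hypothetical:

```python
def perturb_order(question_args, template):
    """ORDER-INV: swap the two arguments of a conjunctive/disjunctive
    template; the gold answer is unchanged by construction."""
    obj1, obj2 = question_args
    original = template.format(obj1=obj1, obj2=obj2)
    perturbed = template.format(obj1=obj2, obj2=obj1)
    return original, perturbed  # same expected answer

# NEGATION-DIR: each affirmative template is paired with a negated
# one, and the expected answer flips (a1 != a2).
NEGATION_PAIRS = {
    "Are there any {obj} in this picture?":
        "Are there no {obj} in this picture?",
}

def perturb_negation(obj, template, answer):
    negated = NEGATION_PAIRS[template]
    flipped = "no" if answer == "yes" else "yes"
    return ((template.format(obj=obj), answer),
            (negated.format(obj=obj), flipped))

print(perturb_order(("van", "truck"),
                    "Is the black vehicle a {obj1} or a {obj2}?"))
print(perturb_negation("apples", "Are there any {obj} in this picture?",
                       "yes"))
```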
[ "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "objective", "other", "method", "other", "method", "other", "objective", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses.", "In this paper, we propose a new dialog pre-training framework called DialogVED, which introduces continuous latent variables into the enhanced encoder-decoder pre-training framework to increase the relevance and diversity of responses.", "With the help of a large dialog corpus (Reddit), we pre-train the model using the following 4 tasks, used in training language models (LMs) and Variational Autoencoders (VAEs) literature:", "1) masked language model;", "2) response generation;", "3) bag-of-words prediction; and", "4) KL divergence reduction.", "We also add additional parameters to model the turn structure in dialogs to improve the performance of the pre-trained model.", "We conduct experiments on PersonaChat, DailyDialog, and DSTC7-AVSD benchmarks for response generation.", "Experimental results show that our model achieves the new state-of-the-art results on all these datasets.", "Pre-trained language models (PLMs) have been widely explored both in natural language understanding (NLU) and generation (NLG) in recent years, this pre-training and fine-tuning paradigm sheds light on various downstream tasks in natural language processing (NLP).", "Compared with general pre-trained models, task-oriented pre-trained models (such as Summarization , Dialog and etc.), which is designed in line with task characteristics, may achieve better performance and be more robust.", "In this paper, we proposes a novel pre-trained dialog response generation model based on previous research.", "Dialogue Response Generation (DSG) in open domain is a challenging task with a wide range of Worked during the internship at Microsoft Research Asia.", "application scenarios.", "Recent advances in DSG utilize pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) and GPT2 (Radford et al., 2019) in two major categories.", "The first one focuses on how to fine-tune PLMs in downstream tasks and address the various application-specific needs and challenges (Lin et al., 2020).", "The second one augments dialog specific tasks into the PLM training (Zhang et al., 2020; Bao et al., 2020) and then fine-tunes the new pre-trained model in downstream tasks.", "We study the latter in this paper.", "There is a proverbial one-to-many problem in DSG, i.e., a single dialog context could be followed by multiple reasonable responses.", "Existing works introduce latent variables to model this problem.", "For example, VHRED (Serban et al., 2017) incorporates latent continuous variable into the sequence-to-sequence (Seq2Seq) RNN model to improve the diversity of generated responses.", "VAE-Seq2Seq (Bahuleyan et al., 2017) proposes variational attention to replace the vanilla encoder-decoder attention (Luong et al., 2015), to avoid attention to bypass the latent space and invalidate the latent variable.", "For controllability and interpretability, some discrete VAEs have also been proposed, such as (Oord et al., 2017; Vahdat et al., 2018).", "Recently, PLATO (Bao et al., 2020) firstly introduces latent variables into their pre-training dialog model, where the authors introduce a K -way ( K = 20 ) categorical latent variable, and the pretrained model shows significant gains in multiple downstream response generation tasks.", "Continuous latent variables besides discrete latent variables is popularly used for modeling one-to-many mapping in dialog system, but the potential of 
"In this paper, we propose a pre-trained latent Variable Encoder-Decoder model for Dialog generation, called DialogVED.", "In this model, we introduce a continuous latent variable into the enhanced encoder-decoder pre-training framework, and we adopt optimization techniques from the VAE literature to learn the model with continuous latent variables.", "More specifically, we conduct the pre-training by optimizing the following 4 pre-training objectives simultaneously:", "1) a masked language span loss to enhance the encoder's understanding of context,", "2) response generation with an n-gram loss to improve the decoder's planning ability,", "3) a Kullback-Leibler divergence loss to minimize the difference between the posterior and prior distributions of the latent variables, and", "4) a bag-of-words loss to reduce posterior distribution collapse.", "In addition, we also explore the effect of absolute and relative position embeddings specific to conversational data on model performance.", "We conduct experiments on three different kinds of conversation tasks: chit-chat, knowledge-grounded conversation, and conversational question answering.", "Experimental results verify the effectiveness and superiority of our model compared with the previous state-of-the-art methods.", "We further carry out an ablation study to better understand the impact of different components of DialogVED on model performance, including latent space sizes, different decoding strategies, and position embeddings for turns and roles.", "The main contributions of this paper can be summarized as follows:", "1) We propose a pre-trained dialog model, which incorporates continuous latent variables into the enhanced encoder-decoder pre-training framework;", "2) We explore the impact of latent variable sizes, different decoding strategies, and position embeddings for turns and roles in our model;", "3) Extensive experiments show that the proposed model achieves the new state-of-the-art (SOTA) in multiple downstream tasks, and our model has better performance in both relevance and diversity than the previous SOTA in response generation.", "In response generation, there are three elements: the dialogue context c, the response r and the latent variable z.", "The dialogue context c may consist of several history utterances (i.e., multiple turns), and the response r is an appropriate reply to the given context.", "Additionally, the latent variable z in the latent space represents many unobserved factors associating the context and the response.", "We assume the latent variable z is continuous, which is different from PLATO (Bao et al., 2020), and portrays a certain conditional probability distribution related to the response given the context.", "We then define the conditional distribution $p(r, z \mid c) = p(r \mid c, z)\, p(z \mid c)$; our goal is to use encoder-decoder models (parameterized by $\theta$) to approximate $p(r \mid c, z)$ and a multi-layer perceptron (parameterized by $\phi$) to estimate $p(z \mid c)$, which is called the prior network in the VAE literature.", "We call the final pre-trained model DialogVED, which is a transformer-based encoder-decoder model with an extra prior network for modeling the latent space.", "Figure 1 gives an overview of our model.", "We use a multi-layer Transformer-based encoder (Vaswani et al., 2017) to encode the dialogue context.", "First, an input sequence of tokens is mapped to a sequence of embeddings, which are then passed into the encoder.",
"The encoder consists of a stack of blocks, each of which comprises two subcomponents: a self-attention layer followed by a small feed-forward network.", "Compared to the vanilla transformer encoder, our encoder has slight differences in the position embeddings and the self-attention layer in the fine-tuning phase, which contain richer location information and will be introduced in Section 2.7.", "The future-prediction strategy has drawn attention in recent research (Qi et al., 2020; Xiao et al., 2020): instead of predicting only the next token at each time step, a decoder using future prediction predicts n future tokens simultaneously.", "Specifically, the original Seq2Seq model aims to optimize the conditional likelihood $P(r_t \mid r_{<t}, c)$, while the future-prediction strategy changes the optimization of predicting the next single token to $P(r_{t:t+n-1} \mid r_{<t}, c)$ at each time step t, where $r_{t:t+n-1}$ denotes the next n continuous future tokens.", "The future n-gram prediction loss can explicitly encourage the model to plan for future token prediction and prevent over-fitting on strong local correlations (Qi et al., 2020).", "We adopt the n-stream self-attention proposed in ProphetNet (Qi et al., 2020) in our decoder.", "The n-stream self-attention mechanism incorporates n extra self-attention predicting streams besides the main stream to predict the next n continuous future tokens at each time step.", "Memory Scheme. To incorporate the latent variable into the decoder, we adopt a memory scheme similar to OPTIMUS (Li et al., 2020), where the latent variable $z \in \mathbb{R}^P$ is mapped to an additional memory vector, denoted as $h_{Mem}$, which serves as an additional key-value pair for the decoder to attend to. We have the memory vector $h_{Mem} = \begin{bmatrix} z_{key} \\ z_{value} \end{bmatrix} = W_M z$ (1), where $W_M \in \mathbb{R}^{2H \times P}$ is a weight matrix, and the memory vector is shared and propagated across all layers in the decoder as $H^{(k+1)} = \text{MultiHead}(H^{(k)},\; h^{(k)}_{Mem} \oplus H^{(k)},\; h^{(k)}_{Mem} \oplus H^{(k)})$, where $H^{(k)}$ refers to the hidden state of the k-th layer of the decoder and $\oplus$ denotes concatenation.", "The memory vector is equivalent to adding a virtual token during decoding to participate in the calculation of the self-attention main stream, and the predicting streams are implicitly affected by $h_{Mem}$ through interaction with the main stream.", "The latent variable guides the generation of each step of the decoder through the memory vector.", "Intuitively, introducing latent variables provides a hierarchical generation procedure:", "1) sample a latent variable $z$ from the prior network $p(z \mid c)$; 2) generate $r$ through the decoder network $p(r \mid c, z)$.", "From previous research (Zhao et al., 2017a), $z \sim p(z \mid c)$ may determine the high-level semantics, and auto-regressive decoding follows to produce the output sentences with low-level syntactic and lexical details.", "Similar to Variational Autoencoders (VAEs), we learn the parameters by maximizing the marginal log likelihood $\log p(r \mid c) = \log \int p(z \mid c)\, p(r \mid c, z)\, dz$, which involves an intractable marginalization over the latent variable $z$.", "Following (Kingma et al., 2016; Li et al., 2020), we optimize its lower bound, which is equivalent to minimizing the two terms below: the reconstruction loss (negative log-likelihood) $\mathcal{L}_{rc} = -\mathbb{E}_{q(z)}[\log p(r \mid c, z)] = -\mathbb{E}_{q(z)}[\log \prod_t p(r_{t:t+n-1} \mid r_{<t}, c)]$ (2) and the K-L regularization term $\mathcal{L}_{kl} = \text{KL}(q(z) \,\|\, p(z \mid c))$ (3).", "Here $q(z)$ is a multivariate normal distribution with mean $\mu \in \mathbb{R}^P$ and diagonal variance matrix with diagonal values $\sigma^2 \in \mathbb{R}^P$, denoted as $\text{diag}(\sigma^2)$.",
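The memory scheme of Equation (1) above amounts to projecting z into one extra key-value pair for the decoder's self-attention. A hedged PyTorch sketch under the stated dimensions (P = 64, H = 1024); the module and shapes are illustrative, not the released implementation:

```python
import torch
import torch.nn as nn

class LatentMemory(nn.Module):
    """Sketch of the memory scheme: map z to one extra key-value pair
    that every decoder layer can attend to."""

    def __init__(self, latent_dim=64, hidden_dim=1024):
        super().__init__()
        # W_M maps z in R^P to the stacked [z_key; z_value] in R^{2H}.
        self.w_m = nn.Linear(latent_dim, 2 * hidden_dim, bias=False)

    def forward(self, z, hidden_states):
        # z: (batch, P); hidden_states: (batch, seq, H)
        z_key, z_value = self.w_m(z).chunk(2, dim=-1)
        # Prepend the memory as a virtual first token for keys/values.
        keys = torch.cat([z_key.unsqueeze(1), hidden_states], dim=1)
        values = torch.cat([z_value.unsqueeze(1), hidden_states], dim=1)
        return keys, values  # queries remain hidden_states

mem = LatentMemory()
keys, values = mem(torch.randn(2, 64), torch.randn(2, 10, 1024))
print(keys.shape)  # torch.Size([2, 11, 1024])
```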
"To connect to the hidden space, we add a special classification token ([CLS]) to the beginning of the context, and the first hidden state in the last layer, denoted as $h_{[CLS]} \in \mathbb{R}^H$, is used to represent the global dialog context.", "We assume $\begin{bmatrix} \mu \\ \log(\sigma^2) \end{bmatrix} = \text{MLP}_h(h_{[CLS]})$ (4), where $\text{MLP}_h$ is a multilayer perceptron; this multilayer perceptron is called the prior network in the VAE literature.", "We can then sample P random variables, each from a standard normal distribution, and via transformation obtain samples of $z \in \mathbb{R}^P$ from $\mathcal{N}(\mu, \text{diag}(\sigma^2))$, which are fed to the decoder.", "To improve the understanding ability of the encoder and its robustness to noise, we randomly mask part of the context before encoding.", "Recent research (Joshi et al., 2020; Lewis et al., 2020) on masked language models shows the advantages of masking spans over masking individual words or subword units.", "We adopt a simple method to mask spans:", "1) randomly select n tokens in the context, denoted as S; 2) for each token $t \in S$, extend it to a text span with a fixed length of m; 3) mask all selected tokens after sorting, deduplication and boundary checking.", "Following BERT (Devlin et al., 2019), the total number of masked tokens in the context accounts for approximately 15%, and we replace each masked token with:", "1) the [MASK] token 80% of the time;", "2) a random token 10% of the time;", "3) the unchanged masked token 10% of the time.", "Then, the last-layer hidden state $h_x \in \mathbb{R}^H$ of each masked token x is used to predict the original token, and the encoder is trained to optimize the cross-entropy loss $\mathcal{L}_M = -\sum_x \text{LSM}(W_2 \tanh(W_1 h_x + b_1))(x)$ (5), where $W_1 \in \mathbb{R}^{H \times H}$, $b_1 \in \mathbb{R}^H$ and $W_2 \in \mathbb{R}^{H \times |V|}$ denote the weight matrices of one fully-connected layer, $|V|$ is the vocabulary size, LSM is the log softmax function, and LSM(...)(x) means taking the log probability value corresponding to token x.",
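The span-masking recipe above (anchor selection, fixed-length extension, boundary checking, then BERT's 80/10/10 replacement rule) can be prototyped in a few lines. A sketch with illustrative parameter values:

```python
import random

def mask_spans(tokens, mask_ratio=0.15, span_len=3, mask_token="[MASK]",
               vocab=None, rng=random):
    """Sketch of the span-masking recipe: pick anchors, extend each to
    a fixed-length span, deduplicate with boundary checking, then apply
    BERT's 80/10/10 replacement rule."""
    n_anchors = max(1, int(len(tokens) * mask_ratio / span_len))
    anchors = rng.sample(range(len(tokens)), n_anchors)
    positions = sorted({min(i, len(tokens) - 1)
                        for a in anchors for i in range(a, a + span_len)})
    out = list(tokens)
    for pos in positions:
        r = rng.random()
        if r < 0.8:
            out[pos] = mask_token          # [MASK] 80% of the time
        elif r < 0.9 and vocab:
            out[pos] = rng.choice(vocab)   # random token 10% of the time
        # else: keep the original token (remaining 10%)
    return out, positions

tokens = "do you want to grab a coffee after the meeting today ?".split()
masked, positions = mask_spans(tokens, vocab=["tea", "lunch", "movie"])
print(masked, positions)
```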
"In this paper, we share the parameters of $W_2$ with the parameters of the embedding layers in the encoder and decoder.", "Note that we mask the context only in the pre-training stage.", "DialogVED allows the decoder to attend to the hidden states of the context (i.e., the output of the encoder); thus direct training will cause the decoder to ignore the latent variable z, the KL loss will rapidly decrease to 0, and the latent space will lose its expressive power, which is called posterior collapse or KL-vanishing (Bowman et al., 2016).", "This paper adopts two methods developed in the VAE literature to reduce posterior collapse: Free Bits (Kingma et al., 2016), which replaces the K-L regularization term in (3) with a hinge loss term that clamps each component of the original K-L term at a constant $\lambda$: $\mathcal{L}_{kl} = \sum_i \max(\lambda, \text{KL}(q(z_i) \,\|\, p(z_i \mid c)))$ (6); and the Bag-of-words Loss (Zhao et al., 2017b), which is used to encourage the latent variable to predict the words in the response r in a non-autoregressive way: $\mathcal{L}_{BOW} = -\sum_{t=1}^{T} \log f_{r_t}$ (7), where T is the number of tokens in the response r, and $f_{r_t}$ denotes the estimated probability of word $r_t$.", "More specifically, f is the function outputting the probability of words within the target response: $f = \text{softmax}(\text{MLP}_z([z; h_{[CLS]}])) \in \mathbb{R}^{|V|}$ (8), where $\text{MLP}_z$ is a multilayer perceptron and V refers to the whole vocabulary.", "Absolute Position Embeddings. Besides the token-level learned position embeddings used in the original Transformer, we also consider turn-level and speaker-level position embeddings like PLATO (Bao et al., 2020).", "To better model the meaning of a turn in a dialog, we introduce embeddings for turn position and role position in one conversation; the final input embedding of each token is the sum of the corresponding turn, role and token embeddings.", "Relative Position Embeddings. It has recently become more common to use relative position embeddings, which produce a different learned embedding according to the offset between the key and query being compared in the self-attention mechanism (Shaw et al., 2018; Raffel et al., 2019).", "We extend each element of the original relative distance matrix in T5 (Raffel et al., 2019) to a two-tuple.", "In the mapping function f, we consider both the token relative distance $d_{token}$ and the turn relative distance $d_{turn}$, where these tuples are mapped through a bucket function, and then an embedding $a^K_{ij}$ is queried from predefined embedding layers.", "Combining the losses detailed in Equations (2), (5), (6) and (7), we have the pre-training objective $\mathcal{L} = \mathcal{L}_{rc} + \mathcal{L}_M + \mathcal{L}_{kl} + \mathcal{L}_{BOW}$, which we use to pre-train DialogVED on the large-scale conversation corpus.", "To sum up, we mask text spans in the context c, sample a latent variable z from the prior network, and then let the encoder and decoder predict the masked spans and the response r respectively, with the guidance of the latent variable z.",
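The two anti-collapse devices, free bits (Equation 6) and the bag-of-words loss (Equation 7), are easy to express in PyTorch. A sketch that, for simplicity, computes the free-bits KL against a standard normal prior; the paper's prior network p(z|c) would supply its own mean and variance, and all sizes are illustrative:

```python
import torch
import torch.nn.functional as F

def free_bits_kl(mu, logvar, lam=0.5):
    """Hinge KL between q(z) = N(mu, diag(sigma^2)) and a standard
    normal prior, clamped per dimension at a floor lambda (free bits)."""
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return torch.clamp(kl_per_dim, min=lam).sum(dim=-1).mean()

def bow_loss(z, h_cls, mlp_z, response_token_ids):
    """Bag-of-words loss: predict every response token from [z; h_cls]
    in a non-autoregressive way (mlp_z maps to vocab logits)."""
    logits = mlp_z(torch.cat([z, h_cls], dim=-1))   # (batch, |V|)
    log_probs = F.log_softmax(logits, dim=-1)
    # Sum the log-probabilities of the gold response tokens.
    return -log_probs.gather(1, response_token_ids).sum(dim=-1).mean()

# Toy usage with hypothetical sizes (P=64, H=1024, |V|=100, T=5).
mlp_z = torch.nn.Linear(64 + 1024, 100)
z, h_cls = torch.randn(2, 64), torch.randn(2, 1024)
ids = torch.randint(0, 100, (2, 5))
print(free_bits_kl(torch.randn(2, 64), torch.randn(2, 64)).item())
print(bow_loss(z, h_cls, mlp_z, ids).item())
```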
"In this section, we first introduce the pre-training datasets and fine-tuning benchmarks in 3.1, and implementation details in 3.2.", "Then we present the main results in 3.3.", "Lastly, we analyze the influence of parameters and position embeddings in 3.4.", "The large-scale Reddit comments dataset (Zhou et al., 2018; Galley et al., 2019) is employed for pre-training our dialog language model.", "This dataset has proven helpful in various downstream conversation tasks (Bao et al., 2020; Zhang et al., 2020).", "We use the script provided by DialoGPT (Zhang et al., 2020) to obtain the latest Reddit comment data.", "We obtain 215 million training samples (42GB in total) for pre-training. (Footnote 1: Given an instance containing multiple turns of dialogue $\{t_1, t_2, ..., t_n\}$, we extract $n-1$ samples (i.e., context-response pairs), where the context c is $\{t_1, t_2, ..., t_{i-1}\}$ and the response r is $\{t_i\}$, for $i = 2, 3, ..., n$.)", "To accelerate the training process and accommodate GPU memory limitations, we adopt two methods.", "First, we sort the samples according to the length of the context.", "Samples with similar length (i.e., number of tokens in the context) are assembled into a batch to minimize the amount of padding.", "Second, due to the uneven distribution of sample lengths, we divide the Reddit corpus into two sub-datasets, Reddit-Short and Reddit-Long, according to the length of context and response, with some statistics in Table 1, and optimize the batch size for each sub-dataset to avoid reserving a large amount of memory for a few long-response samples during the training process.", "Within an epoch, we first pre-train on Reddit-Short with a larger batch size, and then pre-train on Reddit-Long with a smaller batch size.", "We split the Reddit comment dataset here mainly for efficiency.", "Following PLATO (Bao et al., 2020), we select three datasets as our benchmarks: DailyDialog (Li et al., 2017), a chit-chat dataset, which contains high-quality human conversations about daily life.", "Persona-Chat (Zhang et al., 2018), a knowledge-grounded conversation dataset.", "It provides both manually annotated conversations and corresponding persona profiles (background knowledge), where two participants chat naturally and try to get to know each other.", "DSTC7-AVSD (Alamri et al., 2019a), a conversational question answering dataset, short for Audio Visual Scene-aware Dialog of the DSTC7 challenge.", "The system needs to generate an answer given the dialogue context and background knowledge.", "There are multiple reference responses for each context in the DSTC7-AVSD test set.", "For evaluation, we use the same metrics as used in PLATO, except for knowledge-related metrics, since this paper does not focus on utilizing knowledge.", "So we focus on the following metrics: BLEU-1/2 (Papineni et al., 2002), which measures the relevance of generated text to the reference text by calculating the 1/2-gram overlap between them.", "Distinct-1/2 (Li et al., 2016a), which measures the diversity of a generated sentence by focusing on the number of distinct 1/2-grams of a sentence.", "[Table 2: Experimental results on DailyDialog and PersonaChat with automatic evaluations (BLEU-1/BLEU-2/Distinct-1/Distinct-2); highest values were bolded in the original. DailyDialog: Seq2Seq (Vinyals and Le, 2015) 0.336/0.238/0.030/0.128; iVAE_MI (Fang et al., 2019) 0.309/0.249/0.029/0.250; PLATO w/o latent (Bao et al., 2020) 0.405/0.322/0.046/0.246; PLATO (Bao et al., 2020) 0.397/0.311/0.054/0.291; ProphetNet (Qi et al., 2020) 0.443/0.392/0.039/0.211; DialogVED w/o latent 0.461/0.407/0.041/0.222; DialogVED Greedy 0.459/0.410/0.045/0.265; DialogVED Sampling 0.431/0.370/0.058/0.372; DialogVED 0.481/0.421/0.042/0.232. PersonaChat: Seq2Seq 0.448/0.353/0.004/0.016; LIC (Golovanov et al., 2019) 0.405/0.320/0.019/0.113; PLATO w/o latent 0.458/0.357/0.012/0.064; PLATO 0.406/0.315/0.021/0.121; ProphetNet 0.466/0.391/0.013/0.075; DialogVED w/o latent 0.459/0.380/0.010/0.062; DialogVED Greedy 0.470/0.387/0.016/0.103; DialogVED Sampling 0.428/0.357/0.032/0.273; DialogVED 0.482/0.399/0.015/0.094. iVAE_MI reports no PersonaChat results and LIC no DailyDialog results.]", "Other word-overlap-based metrics, METEOR, ROUGE-L, and CIDEr, are also reported for the DSTC7-AVSD dataset, same as the DSTC7 review (Alamri et al., 2019b).",
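Distinct-1/2 as defined above is a simple ratio of unique to total n-grams. A minimal sketch (computed here over a corpus of generated responses; per-sentence variants of the metric also exist):

```python
def distinct_n(responses, n=1):
    """Distinct-n: number of unique n-grams divided by the total number
    of n-grams across the generated responses."""
    total, unique = 0, set()
    for response in responses:
        tokens = response.split()
        ngrams = [tuple(tokens[i:i + n])
                  for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

responses = ["i do not know", "i do not think so"]
print(distinct_n(responses, 1), distinct_n(responses, 2))
```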
"Vanilla sequence-to-sequence (Seq2Seq) models, dialog pre-training models, and general natural language pre-training models are used as our baselines: Seq2Seq (Vinyals and Le, 2015) is a sequence-to-sequence model with attention.", "iVAE_MI (Fang et al., 2019) is an implicit deep latent variable model based on the Variational Autoencoder, for better latent representations and diverse responses.", "LIC (Golovanov et al., 2019), a transformer-based generation method, obtained the best performance during the contest.", "PLATO (Bao et al., 2020) utilizes a discrete latent variable for dialog generation pre-training to address the one-to-many problem.", "ProphetNet (Qi et al., 2020) is a pre-trained LM whose pre-training objective is to predict more than one future token.", "We fine-tune the ProphetNet-Large model released in (Qi et al., 2020) with the downstream training data directly.", "For the DSTC7-AVSD benchmark, we include the AVSD Baseline (Alamri et al., 2019a) system provided by the challenge organizer, as well as the best-performing model developed by the CMU Sinbad's team (Sanabria et al., 2019).", "DialogVED is composed of a 12-layer encoder and a 12-layer decoder, with 1024 embedding/hidden size and 4096 feed-forward filter size.", "The dimension P of the latent variable z is set to 64, and we analyze the effect of P in 3.4.1.", "We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of $3 \times 10^{-4}$ for pre-training.", "We set the n-gram n as 2, following ProphetNet (Qi et al., 2020).", "The pre-training of dialogue generation is carried out on 32 Nvidia Tesla V100 32G GPUs (4 nodes) for 6 epochs, taking about 5 days to reach convergence.", "Mixed precision training is also adopted for efficient training and inference, and we use the Fairseq (Ott et al., 2019) framework to conduct all experiments.", "We use the BERT-uncased dictionary, and replace some unused tokens with custom special symbols (such as [SOT], denoting the beginning of the conversation, which is suitable for conversation datasets containing knowledge, like PersonaChat and DSTC7-AVSD).", "We used the WordPiece package (Devlin et al., 2019) for tokenization.", "For fine-tuning, we use exactly the same hyperparameter settings on all three datasets, and they are slightly different from the hyperparameters in pre-training.", "The learning rate is set to $1 \times 10^{-4}$ and the batch size is fixed to 512.", "We also adopt an additional warmup strategy where we linearly increase the learning rate from the initial learning rate ($1 \times 10^{-7}$); the number of warmup updates is set to 2000.", "For each dataset, we train 10 epochs, and select the checkpoint with the lowest validation loss for inference.", "In Table 2, we compare several DialogVED variants with baseline models.", "DialogVED denotes inference with beam search.", "Compared with DialogVED, DialogVED w/o latent is not equipped with the latent variable; thus its loss function does not include the bag-of-words loss and the K-L loss.", "DialogVED Greedy means DialogVED inference with greedy search.", "For DialogVED Sampling, we sample from the top K tokens with the highest output probability at each decoding step.", "For the latent space, we always sample each latent variable from the prior distribution (standard normal distribution).", "Here, the beam size is set to 5 and K is set to 100.",
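The DialogVED Sampling variant draws each token from the top K candidates. A minimal sketch of one top-K decoding step in PyTorch (batching assumptions and the surrounding decode loop are omitted):

```python
import torch

def top_k_sample(logits, k=100, temperature=1.0):
    """Sample the next token from the k highest-probability candidates
    (one decoding step; logits has shape (batch, |V|))."""
    topk_logits, topk_ids = torch.topk(logits / temperature, k, dim=-1)
    probs = torch.softmax(topk_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_ids.gather(-1, choice).squeeze(-1)

logits = torch.randn(2, 30000)  # toy vocabulary of 30k tokens
print(top_k_sample(logits, k=100))
```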
"As shown in Table 2 and Table 3, our model DialogVED is very competitive compared with PLATO and other models.", "In particular, decoding using top-K (K = 100) sampling with DialogVED beats PLATO in BLEU-1/2 and Distinct-1/2 on DailyDialog and PersonaChat (see Table 2).", "In fact, as K increases, the overlap of n-grams decreases and the diversity increases.", "Based on our observations, K = 100 is a good balance; Table 4 shows more detailed results.", "On DSTC7-AVSD, the diversity of the responses is not as important as the accuracy.", "From Table 3, we observe that DialogVED w/o latent variable performs the best on overall metrics.", "However, DialogVED, equipped with beam search or greedy search, can still easily beat PLATO even though the latter has a post-generation ranking component.", "There are two essential components that contribute greatly to the success of our model: Firstly, we adopt a newly developed pre-trained LM as the initializer and further continue its pre-training pipeline on our dialog dataset (Reddit), and thus we have a really powerful encoder-decoder.", "This is demonstrated by the fact that our model (DialogVED w/o latent variable) beats PLATO (w/o latent variable) in all metrics on all three datasets.", "Secondly, we incorporate continuous latent variables into the enhanced encoder-decoder pre-training framework.", "Compared to general VAEs, DialogVED allows encoder-decoder interaction in the decoding, which avoids insufficient representation by a low-dimensional latent variable.", "At the same time, compared with the Seq2Seq model, predicting the bag of words pushes the latent variable to give extra guidance to the decoder.", "This is demonstrated by the additional gains in terms of both accuracy and diversity when compared with DialogVED w/o latent variable (see Table 2).", "Overall, our DialogVED achieves new state-of-the-art results in all three downstream tasks of dialogue response generation.", "We investigate the effect of the latent space size P, defined as the dimension of the latent variable z, and of different K in sampling.", "The results in Table 4 show that a smaller latent size (P = 32) is more dominant in n-gram based metrics (BLEU-1/2), while a larger latent size generates more diverse texts.", "From the results of top-K sampling, we see that the two metrics (BLEU-1/2 and Distinct-1/2) have a negative correlation.", "We can flexibly choose the decoding strategy depending on the specific scenario.", "We study the impact of position embeddings as described in Section 2.7; we define two types of position embeddings: absolute position embeddings (APE) and relative position embeddings (RPE).", "We report the metrics of their different combinations; these independent components are TurnAPE (turn absolute embedding), RoleAPE (role absolute embedding), TokenRPE (token relative embedding) and TurnRPE (turn relative embedding), respectively.", "As the results in Table 5 show, the combination of TurnAPE and RoleAPE achieves the best performance.", "Both absolute and relative position embeddings improve model performance; nevertheless, including them at the same time can be harmful.",
have limitations for evaluating open-domain dialog tasks.", "To make the evaluation more convincing, we also conduct a human evaluation.", "Specifically, we randomly select 100 dialogue contexts and generate responses with the following methods: PLATO, DialogVED and DialogVED-Sampling.", "Following PLATO, annotators are asked to compare response quality (win, tie or lose) on four aspects: fluency, coherence, informativeness and overall.", "The results of the human comparison are shown in Table 6, where the average Cohen's kappa (Kraemer, 2014) of groups 1 and 2 is 0.729 and 0.743 respectively, indicating annotators have reached moderate agreement.", "It can be seen that most of the time they are tied, and the three models sometimes generate exactly the same response.", "For DialogVED, it beats PLATO more in coherence but with close informativeness, while DialogVED-Sampling wins more in informativeness. [Table 6: Human evaluation of DialogVED vs. PLATO (Group 1) and DialogVED-Sampling vs. PLATO (Group 2); cells are Win/Lose rates. Group 1: Fluency 0.16/0.11, Coherence 0.38/0.16, Informativeness 0.13/0.11, Overall 0.26/0.14. Group 2: Fluency 0.19/0.13, Coherence 0.22/0.24, Informativeness 0.31/0.14, Overall 0.24/0.17.]", "In general, DialogVED can generate both relevant and diverse responses; we show some case studies in Appendix A to help illustrate the effectiveness of our model. 4 Related Work. Encoder-Decoder dialog models: Unlike retrieval-based dialogue systems (Boussaha et al., 2019; Chen et al., 2021), encoder-decoder models are widely used in dialog response generation, but they tend to generate generic and dull responses (e.g., I don't know).", "To enhance encoder-decoder models and generate diverse responses, researchers have tried different approaches: using diversity-promoting objectives (Li et al., 2016a), using different decoding algorithms (Li et al., 2016b), adding additional content (Xu et al., 2019), or introducing large-scale knowledge graphs into dialog generation (Liu et al., 2018; Wu et al., 2020).", "Another class of methods uses latent variables to address the one-to-many problem in response generation.", "These models introduce discourse-level diversity and are able to generate diverse dialog responses (Serban et al., 2017; Zhao et al., 2017a, 2018; Gao et al., 2019).", "In this paper, we also adopt this approach, and we further incorporate the latent variables in both pre-training and fine-tuning.", "Pre-trained Dialog Models: Pre-trained language models have been successfully used in NLG and NLU tasks (Devlin et al., 2019; Radford et al., 2019).", "Recently, various new language models have been pre-trained, including BART (Lewis et al., 2020), ProphetNet (Qi et al., 2020), and T5 (Raffel et al., 2020).", "These papers demonstrate that better performance can be obtained by fine-tuning PLMs than by training from scratch.", "Because there are many important applications in the dialog domain and dialog corpora have linguistic features different from general documents, pre-training dialog models on open-domain dialog data such as Reddit is very important.", "DialoGPT (Zhang et al., 2020) continues to pre-train the GPT-2 model directly on Reddit comment data, and the new pre-trained model achieves better performance on downstream tasks including several dialog response generation benchmarks.", "PLATO (Bao et al., 2020) proposes a new model specifically for dialog generation, which introduces a discrete variable for one-to-many relationship modeling.", "The pre-trained model helps to achieve state-of-the-art results on several response generation tasks.", "This is the closest work
in the literature to ours.", "However, in our paper, we introduce continuous latent variables during pre-training on a dialog corpus, instead of a discrete latent variable.", "This paper proposes a new pre-training framework for dialogue response generation called DialogVED.", "The latent variable is incorporated into a Transformer-based sequence-to-sequence framework, and a robust and diverse response generation model is obtained through four training objectives.", "Our pre-trained model has achieved new state-of-the-art results on multiple downstream tasks of dialogue response generation.", "Extensive experiments demonstrate the effectiveness of our model.", "An additional human evaluation demonstrates the advantages of our proposed model.", "This work is partially supported by the Natural Science Foundation of China (No. 6217020551, No. 61906176), the Science and Technology Commission of Shanghai Municipality (Grants No. 20dz1200600, 21QA1400600, GWV-1.1, 21511101000) and Zhejiang Lab (No. 2019KD0AD01).", "In this paper, several ethical considerations deserve discussion.", "All data used in our pre-training are available online, and the other dialog corpora in this paper come from publicly available sources.", "We strictly followed the platforms' policies and rules when crawling data from web platforms.", "We did not employ any author-specific information in our research.", "Our corpus may include some biases, such as political bias and social bias, and our model might have inherited some forms of these biases.", "In order to limit these biases as much as possible, we filtered controversial articles and removed data with offensive information when possible." ]
[ "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "other", "abstain", "method", "method", "method", "abstain", "method" ]
[ "End-to-end speech translation poses a heavy burden on the encoder because it has to transcribe, understand, and learn cross-lingual semantics simultaneously.", "To obtain a powerful encoder, traditional methods pre-train it on ASR data to capture speech features.", "However, we argue that pre-training the encoder only through simple speech recognition is not enough, and high-level linguistic knowledge should be considered.", "Inspired by this, we propose a curriculum pre-training method that includes an elementary course for transcription learning and two advanced courses for understanding the utterance and mapping words in two languages.", "The difficulty of these courses is gradually increasing.", "Experiments show that our curriculum pre-training method leads to significant improvements on En-De and En-Fr speech translation benchmarks.", "Speech-to-Text translation (ST) is essential to breaking the language barrier for communication.", "It aims to translate a segment of source language speech to the target language text.", "To perform this task, prior works either employ a cascaded method, where an automatic speech recognition (ASR) model and a machine translation (MT) model are chained together, or an end-to-end approach, where a single model converts the source language audio sequence to the target language text sequence directly (Berard et al., 2016).", "Due to the alleviation of error propagation and lower latency, the end-to-end ST model has been a hot topic in recent years.", "However, large paired data of source audios and target sentences are required to train such a model, which is not easy to satisfy for most language pairs.", "To address this Works are done during internship at Microsoft", "issue, previous works resort to pre-training technique (Berard et al., 2018; Bansal et al., 2019), where they leverage the available ASR and MT data to pre-train an ASR model and an MT model respectively, and then initialize the ST model with the ASR encoder and the MT decoder.", "This strategy can bring faster convergence and better results.", "The end-to-end ST encoder has three essential roles: transcribe the speech, extract the syntactic and semantic knowledge of the source sentence and then map it to a semantic space, based on which the decoder can generate the correct target sentence.", "These pose a heavy burden to the encoder, which can be alleviated by pre-training.", "However, we argue that the current pre-training method restricts the power of pre-trained representations.", "The encoder pre-trained on the ASR task mainly focuses on transcription, which learns the alignment between the acoustic feature with phonemes or words.", "It cannot capture linguistic knowledge or understand the semantics, which is essential for translation.", "In order to teach the model to understand the sentence and incorporate the required knowledge, extra courses should be taken before learning translation.", "Motivated by this, we propose a curriculum pre-training method for end-to-end ST. 
As shown in Figure 1, we first teach the model transcription through the ASR task.", "After that, we design two tasks, named the frame-based masked language model (FMLM) task and the frame-based bilingual lexicon translation (FBLT) task, to enable the encoder to understand the meaning of a sentence and to map words between two languages.", "Finally, we fine-tune the model on ST data to obtain the translation ability.", "For the FMLM task, we mask several segments of the input speech features, each of which corresponds to a complete word.", "Then we let the encoder predict the masked word.", "This task aims to force the encoder to recognize the content of the utterance and understand the inner meaning of the sentence.", "In FBLT, for each speech segment that aligns with a complete word, whether or not it is masked, we ask the encoder to predict the corresponding target word.", "In this task, we give the model more explicit and strong cross-lingual training signals.", "Thus, the encoder has the ability to perform simple word translation, and the burden on the ST decoder is largely reduced.", "Besides, we adopt a hierarchical manner where different layers are guided to perform different tasks (the first 8 layers for ASR and FMLM pre-training, and another 4 layers for FBLT pre-training).", "This is mainly because the three pre-training tasks have different requirements for language understanding and different output spaces.", "The hierarchical pre-training method makes the division of labor clearer and separates the incorporation of source semantic knowledge from that of cross-lingual alignments.", "We conduct experiments on the LibriSpeech En-Fr and IWSLT18 En-De speech translation tasks, demonstrating the effectiveness of our pre-training method.", "The contributions of our paper are as follows: (1) We propose a novel curriculum pre-training method with three courses: transcription, understanding and mapping, forcing the encoder to have the ability to generate the necessary features for the decoder.", "(2) We propose two new tasks to learn linguistic features, FMLM and FBLT, which explicitly teach the encoder to do source-language understanding and target-language meaning mapping.", "(3) Experiments show that both of the proposed courses are helpful for speech translation, and our proposed curriculum pre-training leads to significant improvements.", "Early work on speech translation used a cascade of an ASR model and an MT model (Ney, 1999; Matusov et al., 2005; Mathias and Byrne, 2006), which exposes the MT model to ASR errors.", "Recent successes of end-to-end models in the MT field (Bahdanau et al., 2015; Luong et al., 2015; Vaswani et al., 2017) and the ASR field (Chan et al., 2016; Chiu et al., 2018) inspired research on end-to-end speech-to-text translation systems, which avoid error propagation and high-latency issues.", "In this research line, Berard et al. (2016) give the first proof of the potential of an end-to-end ST model.", "After that, pre-training, multi-task learning, attention-passing and knowledge distillation have been applied to improve ST performance (Anastasopoulos et al., 2016; Duong et al., 2016; Berard et al., 2018; Weiss et al., 2017; Bansal et al., 2018, 2019; Sperber et al., 2019; Liu et al., 2019; Jia et al., 2019).", "However, none of them attempt to guide the encoder to learn linguistic knowledge explicitly.", "Recently, Wang et al.
(2019b) propose to stack an ASR encoder and an MT encoder as a new ST encoder, which incorporates acoustic and linguistic knowledge respectively.", "However, the gap between these two encoders is hard to bridge by simply concatenating them.", "Kano et al. (2017) propose structured-based curriculum learning for English-Japanese speech translation, where they use a new decoder to replace the ASR decoder and learn the output from the MT decoder (fast track) or encoder (slow track).", "They formalize learning strategies from easier networks to more difficult network structures.", "In contrast, we focus on curriculum learning in pre-training and increase the difficulty of the pre-training tasks.", "Curriculum learning is a learning paradigm that starts from simple patterns and gradually increases", "to more complex patterns.", "This idea is inspired by the human learning process and was first applied in the context of machine learning by Bengio et al. (2009).", "The study shows that this training approach results in better generalization and speeds up convergence.", "Its effectiveness has been verified in multiple tasks, including shape recognition (Bengio et al., 2009), object classification (Gong et al., 2016), question answering (Graves et al., 2017), etc.", "However, most studies focus on how to control the difficulty of the training samples and organize the order of the learning data in the context of single-task learning.", "Our method differs from previous works in two ways: (1) We leverage the idea of curriculum learning for pre-training.", "(2) We do not train the model on the ST task directly with more and more difficult training examples or with more and more complicated structures.", "Instead, we design a series of tasks with increasing difficulty to teach the encoder to incorporate diverse knowledge.", "The overview of our training process is shown in Figure 2.", "It can be divided into three steps: first, we train the model towards the ASR objective $\mathcal{L}_{\text{ASR}}$ to learn transcription.", "We note this as the elementary course.", "Next, we design two advanced courses (tasks) to teach the model to understand a sentence and to map words between two languages, named the Frame-based Masked Language Model (FMLM) task and the Frame-based Bilingual Lexicon Translation (FBLT) task.", "In the FMLM task, we mask some speech segments and ask the encoder to predict the masked words.", "In the FBLT task, we ask the encoder to predict the target word for each speech segment which corresponds to a complete source word.", "In this stage, the encoder is updated by $\mathcal{L}_{\text{ADV}}$.", "We adopt a hierarchical training manner where $N$ encoder blocks are used to perform the ASR and FMLM tasks, as they both require outputs in the source word space, and $N_e$ blocks are used in the FBLT task.", "After the two-phase pre-training, the encoder is finally combined with a new decoder or a pre-trained MT decoder to perform the ST task towards $\mathcal{L}_{\text{ST}}$.", "Problem Formulation: The speech translation corpus usually contains speech-transcription-translation triples, denoted as $S = \{(x, y^s, y^t)\}$.", "Specifically, $x = (x_1, \dots, x_{T_x})$ is a sequence of acoustic features extracted from the speech signals.", "$y^s = (y^s_1, \dots, y^s_{T_s})$ and $y^t = (y^t_1, \dots, y^t_{T_t})$ represent the corresponding transcription in the source language and the translation in the target language respectively.", "To pre-train the encoder, an extra ASR dataset $A = \{(x, y^s)\}$ can be leveraged.", "Finally, the data for encoder pre-training is denoted as $\{(x, y^s)$
$\mid (x, y^s) \in A \lor (x, y^s, y^t) \in S\}$. After the encoder is pre-trained, we fine-tune the model using only $S$, to enable it to generate $y^t$ from $x$ directly.", "The model is updated using the cross-entropy loss $\mathcal{L}_{\text{ST}} = -\log P(y^t \mid x)$.", "2019).", "The encoder is a stack of two $3 \times 3$ 2D CNN layers with stride 2 and $N_e$ Transformer encoder blocks.", "The CNN layers result in downsampling by a factor of 4.", "The decoder is a stack of $N_d$ Transformer decoder blocks.", "In the elementary course, we train an end-to-end ASR model, which has a similar architecture to the ST model.", "The ASR encoder consists of $N$ blocks, and these blocks are used to initialize the bottom $N$ blocks of the ST encoder.", "For the ASR task, we follow Karita et al. (2019) in employing a multi-task learning strategy; that is, both the E2E decoder and a CTC module predict the source sentence.", "Offline experiments indicate that the CTC objective is crucial for attentional encoder-decoder based ASR models.", "The final objective combines the CTC loss $\mathcal{L}_{\text{CTC}}$ and the cross-entropy loss $\mathcal{L}_{\text{CE}}$: $\mathcal{L}_{\text{ASR}} = \alpha \mathcal{L}_{\text{CTC}} + (1-\alpha) \mathcal{L}_{\text{CE}} = -\alpha \log P_{\text{ctc}}(y^s \mid x) - (1-\alpha) \log P_{\text{s2s}}(y^s \mid x)$ (1). In this work, we set $\alpha$ to 0.3.", "The CTC loss works on the encoder output, and it pushes the encoder to learn a frame-wise alignment between speech and words.", "With the ability of transcription in place, we further propose two new tasks for the advanced courses.", "The design of the Frame-based Masked Language Model task is inspired by the Masked Language Model (MLM) objective of BERT (Devlin et al., 2019) and the semantic mask for the ASR task (Wang et al., 2019a).", "This task enables the encoder to understand the inner meaning of a segment of speech.", "As shown in Figure 2, we first perform forced alignment between the speech and the transcript sentence to determine where in time particular words occur in the speech segment.", "For each word $y^s_i$, we obtain its corresponding start position $s_i$ and end position $e_i$ in the sequence $x$ according to the forced alignment results.", "At each training iteration, we randomly sample some percentage of the words in $y^s$ and denote the selected word set as $\hat{y}^s$.", "Next, for each selected token $y^s_j$ in $\hat{y}^s$, we mask the corresponding speech piece $[x_{s_j} : x_{e_j}]$.", "The masked utterance is denoted as $\hat{x}$ and used as input to the encoder: $h = \text{Enc}(\hat{x})$ (2). After that, for a masked piece $[x_{s_j} : x_{e_j}]$, we average the corresponding output hidden states $[h_{\lfloor s_j/4 \rfloor} : h_{\lceil e_j/4 \rceil}]$ (the position indexes are divided by 4 due to downsampling) and compute the probability distribution over source words as follows: $\bar{h}_j = \text{mean}([h_{\lfloor s_j/4 \rfloor} : h_{\lceil e_j/4 \rceil}])$ (3), $p(y^s_j \mid \hat{x}) = \text{softmax}(\bar{h}_j W)$ (4). In practice, the sentence is represented in BPE tokens and $W \in \mathbb{R}^{d_{\text{model}} \times |V^s|}$, where $|V^s|$ is the size of the source vocabulary.", "In this way, a speech piece can be aligned with one or more tokens.", "We compute the KL-divergence loss as: $\mathcal{L}_{\text{FMLM}} = -\sum_{y^s_j \in \hat{y}^s} \sum q(y^s_j) \log \frac{p(y^s_j \mid \hat{x})}{q(y^s_j)}$ (5), where $q(y^s_j) \in \mathbb{R}^{|V^s|}$ is a distribution over all BPE tokens in the source vocabulary $V^s$, defined as $q(y^s_j)(pos) = 1/n_j$ if $V^s[pos] \in y^s_j$, and $0$ otherwise (6).", "In this work, we use a mask ratio of 15% following BERT, and the masked speech piece is filled with the mean value of the whole utterance following Park et al.
(2019).", "Because FMLM focuses on the understanding of the source language, we compute its loss at the $N$-th layer of the encoder (the same layer as the ASR loss), in the hope that the bottom $N$ layers are concerned only with the source language.", "Aside from predicting masked source words, we go further to leverage cross-lingual information.", "Specifically, for each segment of speech features $[x_{s_i} : x_{e_i}]$ that is aligned with a source word $y^s_i$, we assume we can obtain its target counterpart $y^t_i$.", "Similar to FMLM, we average the output hidden states from position $\lfloor s_i/4 \rfloor$ to $\lceil e_i/4 \rceil$, and then compute the probability distribution over the target vocabulary.", "The alignment between speech segments and target", "words is a many-to-many correspondence, so there are cases where $y^t_i$ contains nothing or contains multiple foreign words.", "For the former case, we set the loss to zero, and for the latter case, we again compute a KL-divergence loss: $\mathcal{L}_{\text{FBLT}} = -\sum_{y^t_i} \sum q(y^t_i) \log \frac{p(y^t_i \mid x)}{q(y^t_i)}$ (7), where $q(y^t_i)$ is defined as the length-normalized distribution over all tokens appearing in $y^t_i$.", "Note that the loss is computed on every speech segment, whether or not it is masked.", "The only remaining question is how to obtain $y^t_i$ for each speech segment.", "Since there are two types of data for pre-training, $(x, y^s, y^t) \in S$ and $(x, y^s) \in A$, we use two methods to get the alignment: for training examples $(x, y^s, y^t) \in S$, we use a reference-supervised method.", "In particular, we simply run Moses scripts to establish word alignments.", "It begins by running GIZA++ to get source-to-target and target-to-source alignments, and then runs the heuristic grow-diag-final algorithm to get the final results, which means that for each $y^s_i \in y^s$, we choose one word from its translation sentence as the corresponding word $y^t_i \in y^t$ s.t.
$y^t_i$ is aligned with $y^s_i$.", "For training examples $(x, y^s) \in A$, we apply a dictionary-supervised method.", "Through the above alignment process, we can calculate a bilingual lexical translation table $T$ from $\{(y^s, y^t) \mid (x, y^s, y^t) \in S\}$, which estimates the translation probability between a source word $w^s_i$ and a target word $w^t_j$, denoted as $T = (w^s_i, w^t_j, p(w^s_i, w^t_j))$.", "After that, we compute a $y^t_i$ for each $y^s_i$ in $y^s$ according to $y^t_i = \arg\max_{w^t_j} p(y^s_i, w^t_j)$.", "We compute $\mathcal{L}_{\text{FBLT}}$ at the top layer of the encoder, indicating that the top $N_e - N$ layers are responsible for bilingual word mapping.", "The final training objective in the advanced course combines the FMLM and FBLT losses: $\mathcal{L}_{\text{ADV}} = \mathcal{L}_{\text{FMLM}} + \mathcal{L}_{\text{FBLT}}$ (8). 4 Experiments. 4.1 Data and Preprocessing. We conduct experiments on two publicly available speech translation datasets: the LibriSpeech En-Fr Corpus (Kocabiyikoglu et al., 2018) and the IWSLT En-De Corpus (Niehues et al., 2018). (Tools referenced above: Moses, http://www.statmt.org/moses; GIZA++, https://github.com/moses-smt/giza-pp.)", "LibriSpeech En-Fr: This corpus is a subset of the LibriSpeech ASR corpus (Panayotov et al., 2015) aligned with French e-books, and contains 236 hours of speech in total.", "Following previous works, we use the 100-hour clean training set and double the ST data size by concatenating the aligned references with the provided Google Translate references, resulting in 90k training instances.", "We validate on the dev set and report results on the test set (2048 utterances).", "IWSLT En-De: The corpus contains 271 hours of data, with English audio, English transcription, and German translation in each example.", "We follow Inaguma et al. (2019) in removing utterances of low alignment quality, resulting in 137k utterances.", "We sample 2k segments from the ST-TED corpus as the dev set, and tst2013 is used as the test set (993 utterances).", "Data Preprocessing: We run ESPnet (Watanabe et al., 2018) recipes to perform data preprocessing.", "For both tasks, our acoustic features are 80-dimensional log-Mel filterbanks stacked with 3-dimensional pitch features, extracted with a step size of 10ms and a window size of 25ms.", "The features are normalized by the mean and the standard deviation of each training set.", "Utterances of more than 3000 frames are discarded.", "We perform speed perturbation with factors 0.9 and 1.1.", "The alignments between speech and transcriptions are obtained with the Montreal Forced Aligner (McAuliffe et al., 2017).", "For reference pre-processing, we tokenize and lowercase all the text with the Moses scripts.", "For the pre-training tasks, the vocabulary is generated using sentencepiece (Kudo and Richardson, 2018) with a fixed size of 5k tokens for all languages, and the punctuation is removed.", "For the ST task, we normalize the punctuation using Moses and use a character-level vocabulary due to its better performance (Berard et al., 2018).", "Since there is no human-annotated segmentation provided for IWSLT tst2013, we use two methods to segment the audio:", "1) Following ESPnet, we segment each audio file with the LIUM SpkDiarization tool (Meignier and Merlin, 2010).", "For evaluation, the hypotheses and references are aligned using the MWER method with the RWTH toolkit (Bender et al., 2004).", "ESPnet: https://github.com/espnet/espnet", "2) We perform sentence-level forced alignment between audio and transcription using the aeneas tool (https://www.readbeyond.it/aeneas) and segment the audio according to the alignment results.", "Experiments are conducted in two settings: the base setting and the expanded setting.", "In the base
setting, only the corpus described in Section 4.1 is used for each task.", "In the expanded setting, additional ASR and/or MT data can be used.", "All results are reported as case-insensitive BLEU computed with the multi-bleu.perl script unless noted.", "We mainly compare our method with the conventional encoder pre-training method, which uses only the ASR task to pre-train the encoder.", "Besides, we also compare with the results of other works in the literature by copying their numbers.", "LibriSpeech: In the context of the base setting, Berard et al. (2018) and ESPnet have reported results on an LSTM-based ST model with a pre-training and/or multi-task learning strategy.", "Liu et al. (2019) use a Transformer ST model and a knowledge distillation method.", "Wang et al. (2019b) stack an ASR encoder and an MT encoder for the final ST task, named TCEN.", "Regarding the expanded setting, Bahar et al. (2019) apply SpecAugment to the ST task.", "They use the total 236h of speech for ASR pre-training.", "Inaguma et al. (2019) combine three ST datasets (LibriSpeech En-Fr, IWSLT En-De and Fisher-CallHome Es-En) with 472h of training data to train a multilingual ST model.", "In our work, we use the LibriSpeech ASR corpus as additional pre-training data, including 960h of speech.", "As the dev and test sets of the LibriSpeech ST task are extracted from the 960h corpus, we exclude all training utterances from the same speakers that appear in the dev or test sets.", "IWSLT: Since previous works use different segmentation methods and BLEU-scoring scripts, it would be unfair to copy their numbers.", "In our work, we choose the ESPnet results as the base setting baseline, and the multilingual model and the TCEN-LSTM model as expanded baselines.", "Inaguma et al. (2019) use the same multilingual model as described in the LibriSpeech baselines.", "And Wang et al. (2019b) use an additional 272h TEDLIUM2 (Rousseau et al., 2014) ASR corpus and 41M parallel sentences from WMT18 and WIT3.", "All of them use the ESPnet code, the LIUM segmentation method and the multi-bleu.perl script.", "We follow Wang et al. (2019b) in using another 272h of ASR data for encoder pre-training and a subset of WMT18 for decoder pre-training.", "We use the same processing method for the MT data, resulting in 4M parallel sentences in total.", "We also re-implement the CL-fast track of Kano et al. (2017) using our model architecture and data as another baseline.", "For the LibriSpeech ST task, we use the results of Berard et al. (2018), Inaguma et al. (2019) and Liu et al. (2019) as base cascaded baselines.", "The first two use LSTM models for ASR and MT, while the last trains Transformer ASR and MT models.", "We build an expanded cascaded system with the pre-trained Transformer ASR model and an LSTM MT model with the default settings in the ESPnet recipe.", "For the IWSLT ST task, we use Inaguma et al.
(2019) as the base cascaded baseline, which is based on an LSTM architecture.", "And we implement a Transformer-based baseline using our pre-trained ASR and MT models in the expanded setting.", "All our models are implemented based on ESPnet.", "We set the model dimension $d_{\text{model}}$ to 256, the number of heads $H$ to 4, and the feed-forward layer size $d_{\text{ff}}$ to 2048.", "For the LibriSpeech expanded setting, $d_{\text{model}} = 512$ and $H = 8$.", "For all the ST models, we set the number of encoder blocks $N_e = 12$ and the number of decoder blocks $N_d = 6$.", "Unless noted, we use $N = 8$ encoder blocks to perform the ASR and FMLM pre-training tasks.", "For the MT model used in the IWSLT expanded setting, we use the Transformer architecture of Vaswani et al. (2017) with $N_e = 6$, $N_d = 6$, $H = 4$, $d_{\text{model}} = 256$.", "We train the model on 4 Tesla P40 GPUs, and the batch size is set to 64 per GPU.", "The pre-training takes 50 and 20 epochs for the two phases respectively, and the final ST task takes another 50 epochs (a total of 120 epochs).", "We use the Adam optimizer with 25000 warmup steps in each phase.", "The learning rate decays proportionally to the inverse square root of the step number after 25000 steps.", "We", "save checkpoints every epoch and average the last 5 checkpoints as the final model.", "To avoid over-fitting, the SpecAugment strategy (Park et al., 2019) is used in ASR pre-training with frequency masking ($F = 30$, $m_F = 2$) and time masking ($T = 40$, $m_T = 2$).", "The decoding process uses a beam size of 10 and a length penalty of 0.2.", "LibriSpeech En-Fr: The results on the LibriSpeech En-Fr test set are listed in Table 1.", "In the base setting, our method improves over the Transformer+ASR pre-train baseline by 1.7 BLEU and beats all previous works, even though we do not pre-train the decoder.", "This indicates that, through a well-designed learning process, the encoder has strong potential to incorporate a large amount of knowledge.", "Our method beats a knowledge distillation baseline, where an MT model is utilized to teach the ST model.", "The reason, we believe, is that our method gives the model more training signals and makes it easier to learn.", "We also outperform a TCEN baseline which includes two encoders.", "Compared to them, our method is more flexible and incorporates all information into a single encoder, which avoids the representation gap between two encoders.", "As the ASR data size increases, the model performs better.", "In the expanded setting, we find the FBLT task performs poorly compared with the base setting.", "This is because the target word prediction task is dictionary-supervised in the expanded setting rather than reference-supervised as in the base setting.", "However, our method still outperforms the simple pre-training method by a large margin.", "Besides, it is surprising to find that the end-to-end ST model is approaching the performance of an MT model, which is the upper bound of the ST model since it accepts the gold source sentence without any ASR errors.", "This further verifies the effectiveness of our method.", "IWSLT En-De: The results on IWSLT tst2013 are listed in Table 2, showing a similar trend to the LibriSpeech dataset.", "We find that the segmentation methods have a big influence on the final results.", "In the base setting, our method improves over the ASR pre-training baseline by 0.9 to 2.2 BLEU, depending on the segmentation method.", "In the expanded setting, we find that, when combined with decoder pre-training, performance is further improved and beats the other expanded baselines.", "Table 3 shows a comparison
with cascaded ST systems.", "For the base setting of the two tasks, our end-to-end model achieves comparable or better results than the cascaded methods.", "This shows the end-to-end model has powerful learning capabilities and combines the functions of two models.", "In the LibriSpeech expanded setting, when more ASR data is available, we also obtain competitive performance.", "This indicates our method can make good use of the ASR corpus and learn valuable linguistic knowledge beyond simple acoustic information.", "However, when additional MT data is used, there is still a gap between the end-to-end method and the cascaded method.", "How to utilize bilingual parallel sentences to improve the E2E ST model is worth further study. [Table 2: ST results on the IWSLT En-De tst2013 set (BLEU, by segmentation method LIUM / aeneas). Base setting: ESPnet 12.50; +enc pre-train 13.12; +enc dec pre-train 13.54; Transformer+ASR pre-train 15.35 / 17.10; Transformer+curriculum pre-train 16.27 / 19.29. Expanded setting: Multilingual ST+pre-train (Inaguma et al., 2019; 472h speech) 14.6; TCEN-LSTM (Wang et al., 2019b; 479h speech, 40M text) 17.65; CL-fast (Kano et al., 2017; re-implemented; 479h speech) 14.33 / 16.23; Transformer+curriculum pre-train+dec pre-train (479h speech, 4M text) 18.15 / 20.35.]", "Ablation Study: To better understand the contribution of each component, we perform an ablation study in the LibriSpeech expanded setting.", "The results are shown in Table 4.", "On the one hand, we show that both of our proposed pre-training tasks are beneficial: in the -FMLM task and -FBLT task settings, we perform single-task pre-training for the advanced course.", "The performance drops when we remove either one of them.", "On the other hand, we show the two-phase pre-training paradigm is necessary: the -phase 2 experiment degenerates to the simple ASR pre-training baseline.", "In the -phase 1 setting, we find that without the ASR pre-training, the training accuracy on the FMLM and FBLT tasks drops a lot, which further hurts the ST performance.", "This means the ASR task is necessary for both the advanced courses and ST. In the Multi3
(footnote 9: for a fair comparison, we use a 12-layer encoder for ASR and FMLM pre-training)", "setting, we pre-train the model on the ASR, FMLM and FBLT tasks in one phase.", "In this setting, we observe that multi-task learning also decreases the individual task performance (ASR, FMLM and FBLT) compared to curriculum learning.", "One reasonable explanation is that it is hard to train the FMLM and FBLT tasks, which take masked input, from randomly initialized parameters, and this also leads to performance degradation on the ST task.", "Hyper-parameter $N$: during pre-training, the layer at which the ASR pre-training and FMLM loss are computed is an important hyper-parameter.", "We conduct experiments in the LibriSpeech base setting to explore the influence of different choices.", "We keep $N_e = 12$ unchanged and always use the top layer to perform the FBLT task.", "Then we alter the hyper-parameter $N$.", "We find that if $N = 6$, the model finds it difficult to converge during ST training.", "That may be because the distance between the decoder and the bottom 6 encoder layers is too large, so the valuable source linguistic knowledge cannot be well utilized.", "Moreover, the model performs undesirably if the choice is 10 or 12, which results in 16.47 and 16.14 BLEU respectively, since the number of blocks for the FBLT task is not enough.", "The model achieves the best performance when we choose $N = 8$.", "Thus, we use this setting in our main experiments.", "Unlabeled Speech Data: In this work, we also explore how to utilize unlabeled speech data in pre-training, but we obtain only negative results.", "We conduct exploratory experiments on the LibriSpeech ST task.", "We treat the $(x, y^s)$ pairs from the 100h ST corpus as labeled pre-training data and the $(x)$ from the 960h LibriSpeech ASR corpus as unlabeled data.", "Following Jiang et al. (2019), we design an unsupervised pre-training task for the elementary course, in which we randomly mask 15% of the fbank features and let the bottom 4 encoder layers predict the masked part.", "We compute the L1 loss between the prediction and the ground-truth filterbanks.", "However, we find that this method is not helpful for the final ST task, resulting in a 16.85 BLEU score, lower than our base setting model (without extra-data pre-training).", "How to use unlabeled speech data thus remains an open question.", "This paper investigates the end-to-end method for ST. We propose a curriculum pre-training method, consisting of an elementary course with an ASR loss, and two advanced courses with a frame-based masked language model loss and a bilingual lexicon translation loss, in order to teach the model syntactic and semantic knowledge in the pre-training stage.", "Empirical studies have demonstrated that our model significantly outperforms the baselines.", "In the future, we will explore how to leverage unlabeled speech data and large bilingual text data to further improve performance.", "Besides, we expect the idea of curriculum pre-training to be adopted for other NLP tasks.", "This work was supported in part by the National Natural Science Foundation of China under Grant No. U1636116 and the Ministry of Education Humanities and Social Science project under Grant 16YJC790123." ]
[ "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "result", "result", "method", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "method", "method", "result", "abstain", "objective", "objective", "objective", "method", "other" ]
[ "We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents.", "We focus on clustering extracted glyph images into underlying templates in the presence of multiple confounding sources of variance.", "Our approach introduces a neural editor model that first generates well-understood printing phenomena like spatial perturbations from template parameters via interpertable latent variables, and then modifies the result by generating a non-interpretable latent vector responsible for inking variations, jitter, noise from the archiving process, and other unforeseen phenomena associated with Early Modern printing.", "Critically, by introducing an inference network whose input is restricted to the visual residual between the observation and the interpretably-modified template, we are able to control and isolate what the vector-valued latent variable captures.", "We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents.", "Scholars interested in understanding details related to production and provenance of historical documents rely on methods of analysis ranging from the study of orthographic differences and stylometrics, to visual analysis of layout, font, and printed characters.", "Recently developed tools like Ocular (Berg-Kirkpatrick et al., 2013) for OCR of historical documents have helped automate and scale some textual analysis methods for tasks like compositor attribution (Ryskina et al., 2017) and digitization of historical documents (Garrette et al., 2015).", "However, researchers often find the need to go beyond Figure 1: We desire a generative model that can be biased to cluster according to typeface characteristics (e.g. the length of the middle arm) rather than other more visually salient sources of variation like inking.", "textual analysis for establishing provenance of historical documents.", "For example, Hinman (1963)'s study of typesetting in Shakespeare's First Folio relied on the discovery of pieces of damaged or distinctive type through manual inspection of every glyph in the document.", "More recently, Warren et al. 
(2020) examine pieces of distinctive type across several printers of the early modern period to posit the identity of clandestine printers of John Milton's Areopagitica (1644).", "In such work, researchers frequently aim to determine whether a book was produced by a single printer or by multiple printers (Weiss (1992); Malcolm (2014); Takano (2016)).", "Hence, in order to aid these visual methods of analysis, we propose here a novel probabilistic generative model for analyzing extracted images of individual printed characters in historical documents.", "We draw from work on both deep generative modeling and interpretable models of the printing press to develop an approach that is both flexible and controllable, the latter being a critical requirement for such analysis tools.", "As depicted in Figure 1, we are interested in identifying clusters of subtly distinctive glyph shapes, as these correspond to distinct metal stamps in the type-cases used by printers.", "However, other sources of variation (inking, for example, as depicted in Figure", "1) are likely to dominate conventional clustering methods.", "For example, powerful models like the variational autoencoder (VAE) (Kingma and Welling, 2014) capture the more visually salient variance in inking rather than typeface, while more rigid models (e.g. the emission model of Ocular (Berg-Kirkpatrick et al., 2013)) fail to fit the data.", "The goal of our approach is to account for these confounding sources of variance, while isolating the variables pertinent to clustering.", "Hence, we propose a generative clustering model that introduces a neural editing process to add expressivity, but includes interpretable latent variables that model well-understood variance in the printing process: bi-axial translation, shear, and rotation of canonical type shapes.", "In order to make our model controllable and prevent deep latent variables from explaining all variance in the data, we introduce a restricted inference network.", "By only allowing the inference network to observe the visual residual of the observation after the interpretable modifications have been applied, we bias the posterior approximation on the neural editor (and thus the model itself) to capture residual sources of variance in the editor: for example, inking levels, ink bleeds, and imaging noise.", "This approach is related to recently introduced neural editor models for text generation (Guu et al., 2018).", "In experiments, we compare our model with rigid interpretable models (Ocular) and powerful generative models (VAE) on the task of unsupervised clustering of subtly distinct typefaces in scanned images of early modern documents sourced from Early English Books Online (EEBO).", "Our model reasons about the printed appearances of a symbol (say, majuscule F) in a document via a mixture model whose $K$ components correspond to different metal stamps used by a printer for the document.", "During various stages of printing, random transformations result in varying printed manifestations of a metal cast on the paper.", "Figure 2 depicts our model.", "We denote an observed image of the extracted character by $X$.", "We denote the choice of typeface by latent variable $c$ (the mixture component) with prior $\pi$.", "We represent the shape of the $k$-th stamp by template $T_k$, a square matrix of parameters.", "We denote the interpretable latent variables corresponding to spatial adjustment of [Figure 2: Proposed generative model for clustering images of a symbol by typeface.]", "the metal stamp by $\phi$, and the editor latent variable
responsible for residual sources of variation by $z$.", "As illustrated in Fig. 2, after a cluster component $c = k$ is selected, the corresponding template $T_k$ undergoes a transformation to yield $\hat{T}_k$.", "This transformation occurs in two stages: first, the interpretable spatial adjustment variables ($\phi$) produce an adjusted template (Section 2.1), $\bar{T}_k = \text{warp}(T_k, \phi)$, and then the neural latent variable transforms the adjusted template (Section 2.2), $\hat{T}_k = \text{filter}(\bar{T}_k, z)$.", "The marginal probability under our model is $p(X) = \sum_k \pi_k \int p(X \mid \phi, z; T_k)\, p(\phi)\, p(z)\, dz\, d\phi$, where $p(X \mid \phi, z; T_k)$ refers to the distribution over the binary pixels of $X$ in which each pixel has a Bernoulli distribution parametrized by the value of the corresponding pixel entry in $\hat{T}_k$.", "Early typesetting was noisy, and the metal pieces were often arranged with slight variations, which resulted in the printed characters being positioned with small amounts of offset, rotation and shear.", "These real-valued spatial adjustment variables are denoted by $\phi = (r, o, s, a)$, where $r$ represents the rotation variable, $o = (o_h, o_v)$ represents offsets along the horizontal and vertical axes, and $s = (s_h, s_v)$ denotes shear along the two axes.", "A scale factor, $a = 1.0 + \Delta a$, accounts for minor scale variations arising from the archiving and extraction processes.", "All variables in $\phi$ are generated from a Gaussian prior with zero mean and fixed variance, as the transformations due to these variables tend to be subtle.", "In order to incorporate these deterministic transformations in a differentiable manner, we map $\phi$ to a template-sized attention map $H^{ij}$ for each output pixel position $(i, j)$ in $\bar{T}$, as depicted in Figure 3.", "The attention map for each output pixel is formed in order to attend to the corresponding shifted (or scaled, or sheared) portion of the input template, and is shaped according to a Gaussian distribution with mean determined by an affine transform.", "This approach allows for a strong inductive bias, which contrasts with related work on spatial-VAE (Bepler et al., 2019) that learns arbitrary transformations.", "Apart from spatial perturbations, other major sources of deviation in early printing include random inking perturbations caused by inconsistent application of the stamps, unpredictable ink bleeds, and noise associated with digital archiving of the documents.", "Unlike spatial perturbations, which could be handled by deterministic affine transformation operators, it is not possible to analytically define a transformation operator for these sources of variation.", "Hence we propose to introduce a non-interpretable real-valued latent vector $z$, with a Gaussian prior $\mathcal{N}(0, I)$, that transforms $\bar{T}$ into a final template $\hat{T}$ via the neurally-parametrized function $\text{filter}(\bar{T}, z; \theta)$ with neural network parameters $\theta$.", "This function is a convolution over $\bar{T}$ whose kernel is parametrized by $z$, followed by non-linear operations.", "Intuitively, parametrizing the filter by $z$ results in the latent variable appropriately accounting for variations like inking, because convolution filters capture local variations in appearance.", "Srivatsan et al.
(2019) also observed the effectiveness of using $z$ to define a deconvolutional kernel. [Figure 4: Inference network for $z$: it conditions on the mixture component and only the residual image left after subtracting the $\phi$-transformed template from the image; $R_c = X - \bar{T}_c$, $z = \text{InferNet}(R_c, c)$, giving the posterior approximation $q(z \mid X, c; \psi)$.]", "Our aim is to maximize the log likelihood of the", "observed data $\{X_d \mid d \in \mathbb{N}, d < n\}$ of $n$ images wrt.", "model parameters: $LL(T_{1,\dots,K}, \theta) = \max_{T,\theta} \sum_d \log \big[ \sum_k \pi_k \int p(X_d \mid \phi_d, z_d; T_k, \theta)\, p(\phi_d)\, p(z_d)\, dz_d\, d\phi_d \big]$. During training, we maximize the likelihood wrt. $\phi$ instead of marginalizing, which is an approximation inspired by iterated conditional modes (Besag, 1986): $\max_{T,\theta} \sum_d \log \sum_k \max_{\phi_{k,d}} \pi_k \int p(X_d \mid \phi_d = \phi_{k,d}, z_d; T_k, \theta)\, p(\phi_d = \phi_{k,d})\, p(z_d)\, dz_d$. However, marginalizing over $z$ remains intractable.", "Therefore we perform amortized variational inference to define and maximize a lower bound on the above objective (Kingma and Welling, 2014).", "We use a convolutional inference neural network, parametrized by $\psi$ (Fig. 4), that takes as input the mixture component $k$ and the residual image $R_k = X - \bar{T}_k$, and produces mean and variance parameters for an isotropic Gaussian proposal distribution $q(z \mid R_k, k; \psi)$.", "This results in the final training objective: $\max_{T,\theta,\psi} \sum_d \log \sum_k \mathbb{E}_{q(z_d \mid R_{d,k}, k; \psi)} \big[ \max_{\phi_{k,d}} \big( \pi_k\, p(X_d \mid \phi = \phi_{k,d}, z_d; T_k, \theta)\, p(\phi = \phi_{k,d}) \big) \big] - \mathrm{KL}\big( q(z_d \mid R_{d,k}, k; \psi) \,\|\, p(z) \big)$. We use stochastic gradient ascent to maximize this objective with respect to $T$, $\theta$, and $\psi$.", "We train our models on printed occurrences of 10 different uppercase character classes that scholars have found useful for bibliographic analysis (Warren et al., 2020) because of their distinctiveness.", "As a preprocessing step, we ran Ocular (Berg-Kirkpatrick et al., 2013) on the grayscale scanned images of historical books in the EEBO dataset and extracted the estimated image segments for the letters of interest.", "We show that our model is superior to strong baselines at clustering subtly distinct typefaces (using realistic synthetic data), as well as in terms of fitting the real data from historical books.", "Ocular: Based on the emission model of Ocular, which uses discrete latent variables for the vertical/horizontal offset and inking variables, and hence has limited expressivity.", "$\phi$-only: This model only has the interpretable continuous latent variables pertaining to spatial adjustment.", "VAE-only: This model is expressive but doesn't have any interpretable latent variables for explicit control.", "It is an extension of Kingma et al.
(2014)'s model for semi-supervised learning with a continuous latent variable vector, in which we obtain tighter bounds by marginalizing over the cluster identities explicitly.", "For fair comparison, the encoder and decoder convolutional architectures are the same as the ones in our full model.", "The corresponding training objective for this baseline is: $\max_{T,\theta,\psi} \sum_d \log \sum_k \mathbb{E}_{q(z_d \mid X_d, k; \psi)} \big[ \pi_k\, p(X_d \mid z_d; T_k, \theta) \big] - \mathrm{KL}\big( q(z_d \mid X_d, k; \psi) \,\|\, p(z) \big)$", "No-residual: The only difference from the full model is that the encoder for the inference network conditions the variational distribution $q(z)$ on the entire input image $X$ instead of just the residual image $X - \bar{T}$.", "Early modern books were frequently composed from two or more type cases, resulting in documents with mixed fonts.", "We aim to learn the dif- [Table 1: results per method (V-measure / Mutual Info / F&M / NLL): Ocular 0.42 / 0.45 / 0.61 / 379.21; $\phi$-only 0.49 / 0.51 / 0.70 / 322.04; VAE-only 0.22 / 0.29 / 0.38 / 263.45; No-residual 0.54 / 0.58 / 0.73 / 264.27; Our Model 0.73 / 0.74 / 0.85 / 257.92.]", "ferent shapes of the metal stamps that were used as templates for each cluster component in our model.", "Data: In order to quantitatively evaluate our model's performance, we experiment with a synthetically generated realistic dataset for which we know the ground-truth cluster identities, constructed in the following manner: for each character of interest, we pick three distinct images from scanned segmented EEBO images, corresponding to three different metal casts.", "Then we randomly add spatial perturbations related to scale, offset, rotation and shear.", "To incorporate varying inking levels and other distortions, we randomly perform either erosion, dilation, or a combination of these warpings using OpenCV (Bradski, 2000) with randomly selected kernel sizes.", "Finally, we add small Gaussian noise to the pixel intensities and generate 300 perturbed examples per character class.", "Results: We report macro-averaged results across all the character classes on three different clustering measures: V-measure (Rosenberg and Hirschberg, 2007), Mutual Information, and the Fowlkes and Mallows Index (Fowlkes and Mallows, 1983).", "In Table 1, we see that our model significantly outperforms all other baselines on every metric.", "The Ocular and $\phi$-only models fail because they lack the expressiveness to explain the variations due to random jitters, erosions and dilations.", "The VAE-only model, while very expressive, performs poorly because it lacks the inductive bias needed for successful clustering.", "The No-residual model performs decently, but our model's superior performance emphasizes the importance of designing a restrictive inference network such that $z$ only focuses on extraneous sources of variation.", "For the analysis of real books, we selected three books from the EEBO dataset printed by different printers.", "We modeled each character class for each book separately and report the macro-aggregated upper bounds on the negative log likelihood (NLL) in Table 1.", "We observe that adding a small amount of expressiveness makes our $\phi$-only model better than Ocular.", "The upper bounds of the other inference-network-based models are much better than the tight bounds (see footnote 1) of both interpretable models.", "Our model has the lowest upper bound of all the models while retaining interpretability and control.", "We provide visual evidence of the desirable behavior of our model on collections of character extractions from historical books with mixed fonts.", "Specifically, we discuss the
performance of our model on the mysterious edition of Thomas Hobbes' Leviathan known as the 25 Ornaments edition.", "(Hobbes, 1651 [really 1700?]).", "The 25 Ornaments Leviathan is an interesting test case for several reasons.", "While its title page indicates a publisher and year of publication, both are fabricated (Malcolm, 2014).", "The identities of its printer(s) remain speculative, and the actual year of publication is uncertain.", "Further, the 25 Ornaments exhibits two distinct fonts.", "Our model is successful in discovering distinctly shaped typefaces in the 25 Ornaments Leviathan.", "We focus on the case study of the majuscule letters F and R, each of which has two different typefaces mixed in throughout.", "The two typefaces for F differ in the length of the middle arm (Fig. 1), and the two typefaces for R have differently shaped legs.", "In Fig. 5, we show that our model successfully learns the two desired templates $T_1$ and $T_2$ for both characters, which indicates that the clusters in our (footnote 1: for the Ocular and $\phi$-only models, we report the upper bound obtained via maximization over the interpretable latent variables.", "Intuitively, these latent variables are likely to have unimodal posterior distributions with low variance, hence this approximation is likely tight.)", "model mainly focus on subtle differences in underlying glyph shapes.", "We also illustrate how the latent variables transform the model templates $T$ to $\hat{T}$ for four example F images.", "The model learns complex functions to transform the templates which go beyond simple affine and morphological transformations, in order to account for inking differences, random jitter, contrast variations, etc. 3.2.2 Interpretable variables ($\phi$) and Control.", "Finally, we visualize the ability of our model to appropriately separate the responsibility for modeling variation between the interpretable and non-interpretable variables.", "We use the inferred values of the interpretable ($\phi$) variables for each image in the dataset to adjust the corresponding image.", "Since the templates represent the canonical shape of the letters, the variables which shift the templates to explain the images can be reverse-applied to the input images themselves in order to align them, accounting for offset, rotation, shear and minor size variations.", "In Fig. 6, we see that the input images (top row) are uneven and vary in size and orientation.", "By reverse-applying the inferred $\phi$ values, we are able to project the images to a fixed size such that they are aligned and any remaining variations in the data are caused by other sources of variation.", "Moreover, this alignment method would be crucial for automating certain aspects of bibliographic studies that focus on comparing specific imprints.", "Beyond applications to typeface clustering, the general approach we take might apply more broadly to other clustering problems, and the model we developed might be incorporated into OCR models for historical text.", "This project is funded in part by the NSF under grants 1618044 and 1936155, and by the NEH under grant HAA256044-17." ]
[ "objective", "method", "objective", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "other" ]
[ "A key challenge of dialog systems research is to effectively and efficiently adapt to new domains.", "A scalable paradigm for adaptation necessitates the development of generalizable models that perform well in few-shot settings.", "In this paper, we focus on the intent classification problem which aims to identify user intents given utterances addressed to the dialog system.", "We propose two approaches for improving the generalizability of utterance classification models: (1) observers and (2) example-driven training.", "Prior work has shown that BERT-like models tend to attribute a significant amount of attention to the [CLS] token, which we hypothesize results in diluted representations.", "Observers are tokens that are not attended to, and are an alternative to the [CLS] token as a semantic representation of utterances.", "Example-driven training learns to classify utterances by comparing to examples, thereby using the underlying encoder as a sentence similarity model.", "These methods are complementary; improving the representation through observers allows the example-driven model to better measure sentence similarities.", "When combined, the proposed methods attain state-of-the-art results on three intent prediction datasets ( BANKING 77, CLINC 150, HWU 64) in both the full data and few-shot (10 examples per intent) settings.", "Furthermore, we demonstrate that the proposed approach can transfer to new intents and across datasets without any additional training.", "Task-oriented dialog systems aim to satisfy a user goal in the context of a specific task such as booking flights (Hemphill et al., 1990), providing transit information (Raux et al., 2005), or acting as a tour guide (Budzianowski et al., 2018).", "Task-oriented dialog systems must first understand the user's goal Work done while Shikib was at Amazon by extracting meaning from a natural language utterance.", "This problem is known as intent prediction and is a vital component of task-oriented dialog systems (Hemphill et al., 1990; Coucke et al., 2018).", "Given the vast space of potential domains, a key challenge of dialog systems research is to effectively and efficiently adapt to new domains (Rastogi et al., 2019).", "Rather than adapting to new domains by relying on large amounts of domain-specific data, a scalable paradigm for adaptation necessitates the development of generalizable models that perform well in few-shot settings (Casanueva et al., 2020; Mehri et al., 2020).", "The task of intent prediction can be characterized as a two step process: (1) representation (mapping a natural language utterance to a semantically meaningful representation) and (2) prediction (inferring an intent given a latent represen-tation).", "These two steps are complementary and interdependent, thereby necessitating that they be jointly improved.", "Therefore, to enhance the domain adaptation abilities of intent classification systems we propose to (1) improve the representation step through observers and (2) improve the prediction step through example-driven training .", "While BERT (Devlin et al., 2018) is a strong model for natural language understanding tasks (Wang et al., 2018), prior work has found a significant amount of BERT's attention is attributed to the [CLS] and [SEP] tokens, though these special tokens do not attribute much attention to the words of the input until the last layer (Clark et al., 2019; Kovaleva et al., 2019).", "Motivated by the concern that attending to these tokens is causing a dilution of 
"Rather than using the latent representation of the [CLS] token, we instead propose to have tokens which attend to the words of the input but are not attended to.", "In this manner, we disentangle BERT's attention with the objective of improving the semantic content captured by the utterance representations.", "A universal goal of language encoders is that inputs with similar semantic meanings have similar latent representations (Devlin et al., 2018).", "To maintain consistency with this goal, we introduce example-driven training wherein an utterance is classified by measuring similarity to a set of examples corresponding to each intent class.", "While standard approaches implicitly capture the latent space to intent class mapping in the learned weights (i.e., through a classification layer), example-driven training makes the prediction step an explicit nonparametric process that reasons over a set of examples.", "By maintaining consistency with the universal goal of language encoders and explicitly reasoning over the examples, we demonstrate improved generalizability to unseen intents and domains.", "By incorporating both observers and example-driven training on top of the CONVBERT model (Mehri et al., 2020), we attain state-of-the-art results on three intent prediction datasets: BANKING77 (Casanueva et al., 2020), CLINC150 (Larson et al., 2019), and HWU64 (Liu et al., 2019), in both full data and few-shot settings.", "To measure the generalizability of our proposed models, we carry out experiments evaluating their ability to transfer to new intents and across datasets.", "By simply modifying the set of examples during evaluation and without any additional training, our example-driven approach attains strong results on both transfer to unseen intents and across datasets.", "This speaks to the generalizability of the approach.", "Further, to demonstrate that observers mitigate the problem of diluted representations, we carry out probing experiments and show that the representations produced by observers capture more semantic information than the [CLS] token.", "The contributions of this paper are as follows: (1) we introduce observers in order to avoid the potential dilution of BERT's representations, by disentangling the attention, (2) we introduce example-driven training which explicitly reasons over a set of examples to infer the intent, (3) by combining our proposed approaches, we attain state-of-the-art results across three datasets on both full data and few-shot settings, and (4) we carry out experiments demonstrating that our proposed approach is able to effectively transfer to unseen intents and across datasets without any additional training.", "In this section, we describe several methods for the task of intent prediction.", "We begin by describing two baseline models: a standard BERT classifier (Devlin et al., 2018) and CONVBERT with task-adaptive masked language modelling (Mehri et al., 2020).", "The proposed model extends the CONVBERT model of Mehri et al. (2020) through observers and example-driven training.",
"Given the aforementioned two-step characterization of intent prediction, observers aim to improve the representation step while example-driven training improves the prediction step.", "Across many tasks in NLP, large-scale pre-training has resulted in significant performance gains (Wang et al., 2018; Devlin et al., 2018; Radford et al., 2018).", "To leverage the generalized language understanding capabilities of BERT for the task of intent prediction, we follow the standard fine-tuning paradigm.", "Specifically, we take an off-the-shelf BERT-base model and perform end-to-end supervised fine-tuning on the task of intent prediction.", "Despite the strong language understanding capabilities exhibited by pre-trained models, modelling dialog poses challenges due to its intrinsically goal-driven, linguistically diverse, and often informal/noisy nature.", "To this end, recent work has proposed pre-training on open-domain conversational data (Henderson et al., 2019; Zhang et al., 2019b).", "Furthermore, task-adaptive pre-training, wherein a model is trained in a self-supervised manner on a dataset prior to fine-tuning on the same dataset, has been shown to help with domain adaptation (Mehri et al., 2019; Gururangan et al., 2020; Mehri et al., 2020).", "Our models extend the CONVBERT model of Mehri et al. (2020), which (1) pre-trained the BERT-base model on a large open-domain dialog corpus and (2) performed task-adaptive masked language modelling (MLM) as a mechanism for adapting to specific datasets.",
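Task-adaptive MLM, as described above, simply continues masked-language-model training on the target dataset's own utterances before supervised fine-tuning. The following is a minimal sketch using the HuggingFace transformers library; the bert-base-uncased checkpoint and the toy utterances are stand-ins (the paper starts from a dialog-pretrained CONVBERT checkpoint).

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Stand-in checkpoint; the paper's models start from CONVBERT instead.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Task-adaptive MLM trains on the target dataset's own utterances.
utterances = ["what is my bank balance", "transfer money to savings"]
features = [tokenizer(u, truncation=True) for u in utterances]

# The collator randomly masks 15% of tokens and builds the MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapt", num_train_epochs=1),
    train_dataset=features,
    data_collator=collator,
)
trainer.train()  # afterwards, fine-tune the adapted encoder on intents
```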
"The pooled representation of BERT-based models is computed using the [CLS] token.", "[Figure 1: A visualization of the observers.]", "Analysis of BERT's attention patterns has demonstrated that a significant amount of attention is attributed to the [CLS] and [SEP] tokens (Clark et al., 2019; Kovaleva et al., 2019).", "It is often the case that over half of the total attention is to these tokens (Clark et al., 2019).", "Furthermore, the [CLS] token primarily attends to itself and [SEP] until the final layer (Kovaleva et al., 2019).", "It is possible that attending to these special BERT tokens, in combination with the residual connections of the BERT attention heads, is equivalent to a no-op operation.", "However, it is nonetheless a concern that this behavior of attending to tokens with no inherent meaning (since [CLS] does not really attend to other words until the final layer) results in the latent utterance-level representations being diluted.", "We posit that a contributing factor of this behavior is the entangled nature of BERT's attention: i.e., the fact that the [CLS] token attends to words of the input and is attended to by the words of the input.", "This entangled behavior may inadvertently cause the word representations to attend to [CLS] in order to better resemble its representation and therefore make it more likely that the [CLS] token attends to the word representations.", "In an effort to mitigate this problem and ensure the representation contains more of the semantic meaning of the utterance, we introduce an extension to traditional BERT fine-tuning called observers.", "Observers, pictured in Figure 1, attend to the tokens of the input utterance at every layer of the BERT-based model; however, they are never attended to.", "The representation of the observers in the last layer is then used as the final utterance-level representation.", "In this manner, we aim to disentangle the relationship between the representation of each word in the input and the final utterance-level representation.", "By removing this bi-directional relationship, we hope to avoid the risk of diluting the representations (by inadvertently forcing them to attend to a meaningless [CLS] token) and therefore capture more semantic information in the final utterance-level representation.", "Throughout our experiments we use 20 observer tokens (which are differentiated only by their position embeddings) and average their final representations.", "The positions of the observer tokens are consistent across all utterances (the last 20 tokens in the padded sequence).", "Specifically, the concept of observers modifies F in Equations 1 and 2. While we maintain the BERT-based model architecture, we instead produce the utterance-level representation by averaging the representations of the observer tokens and using that for classification rather than the [CLS] token.",
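One way to read the observer mechanism above is as an attention-mask constraint: observer tokens may attend to the utterance, but no token may attend to them. The sketch below builds such an additive mask; the layout (observers appended at the end of the padded sequence) follows the description above, while letting observers attend to themselves is our own assumption.

```python
import numpy as np

def observer_attention_mask(n_tokens: int, n_observers: int) -> np.ndarray:
    """Additive attention mask (0 = allowed, -inf = blocked) for a sequence
    laid out as [utterance tokens ..., observer tokens]. Rows are queries,
    columns are keys: observers can attend to the utterance, but nothing
    attends to observers, so the word representations stay disentangled
    from the final utterance-level representation."""
    total = n_tokens + n_observers
    mask = np.zeros((total, total), dtype=np.float32)
    mask[:, n_tokens:] = -np.inf               # no one attends to observers
    idx = np.arange(n_tokens, total)
    mask[idx, idx] = 0.0                       # allow observer self-attention
    return mask

# Add this mask to the attention logits before the softmax at every layer;
# the utterance vector is the average of the final-layer observer states.
mask = observer_attention_mask(n_tokens=6, n_observers=2)
```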
"A universal goal of language encoders is for inputs with similar semantic meanings to have similar latent representations.", "BERT (Devlin et al., 2018) has been shown to effectively identify similar sentences (Reimers and Gurevych, 2019) even without additional fine-tuning (Zhang et al., 2019a).", "[Figure 2: A visualization of the three-step process of computing a probability distribution over the set of intents in our example-driven formulation.]", "Through example-driven training, we aim to reformulate the task of intent prediction to be more consistent with this universal goal of language encoders.", "Using a BERT-like encoder, we train an intent classification model to (1) measure the similarity of an utterance to a set of examples and (2) infer the intent of the utterance based on the similarity to the examples corresponding to each intent.", "Rather than implicitly capturing the latent space to intent class mapping in our learned weights (i.e., through a classification layer), we make this mapping an explicit non-parametric process that reasons over a set of examples.", "Our formulation, similar to metric-based meta learning (Koch et al., 2015), only performs gradient updates for the language encoder, which is trained for the task of sentence similarity.", "Through this example-driven formulation, we hypothesize that the model will better generalize in few-shot scenarios, as well as to rare intents.", "We are given (1) a language encoder F that encodes an utterance to produce a latent representation, (2) a natural language utterance utt, and (3) a set of n examples {(x_1, y_1), ..., (x_n, y_n)} where x_1,...,n are utterances and y_1,...,n are their corresponding intent labels.", "With F being a BERT-like model, the following equations describe example-driven intent classification: u = F(utt) (1); X_i = F(x_i) (2); α = softmax(u^T X) (3); P(c) = Σ_{i : y_i = c} α_i (4). The equations above describe a non-parametric process for intent prediction; no explicit classification weights are learned.", "Instead, through the example-driven formulation (visualized in Figure 2), the underlying language encoder (e.g., BERT) is trained for the task of sentence similarity.", "A universal goal of language encoders is that inputs with similar semantic meaning should have similar latent representations.", "By formulating intent prediction as a sentence similarity task, we are adapting BERT-based encoders in a way that is consistent with this universal goal.", "We hypothesize that in contrast to the baseline models, this formulation facilitates generalizability and has the potential to better transfer to new intents and domains.",
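Equations (1)-(4) above translate directly into a few lines of code. The sketch below uses random vectors as stand-ins for the encoder outputs F(utt) and F(x_i); everything else follows the stated formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def example_driven_probs(u, X, y, n_classes):
    """u: encoded utterance, Eq. (1); X: stacked encoded examples, Eq. (2);
    y: example intent labels. Computes alpha = softmax(u^T X), Eq. (3),
    then pools the attention weights per intent class, Eq. (4)."""
    alpha = softmax(X @ u)
    probs = np.zeros(n_classes)
    for a, label in zip(alpha, y):
        probs[label] += a
    return probs

# Toy usage: 4-dim stand-in encodings for 5 examples over 3 intents.
rng = np.random.default_rng(0)
u = rng.normal(size=4)                # stand-in for F(utt)
X = rng.normal(size=(5, 4))           # stand-ins for F(x_1..x_5)
y = np.array([0, 0, 1, 2, 2])
print(example_driven_probs(u, X, y, n_classes=3))  # sums to 1 over classes
```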
"At training time, we populate the set of examples in a two-step process:", "(i) for each intent class that exists in the training batch, we sample one different utterance of the same intent class from the training set, and", "(ii) we randomly sample utterances from the training set until we have a set of examples that is double the size of the training batch (128 example utterances).", "During inference, our example set comprises all the utterances in the training data.", "We evaluate our methods on three intent prediction datasets: BANKING77 (Casanueva et al., 2020), CLINC150 (Larson et al., 2019), and HWU64 (Liu et al., 2019).", "These datasets span several domains and consist of many different intents, making them more challenging and more reflective of commercial settings than commonly used intent prediction datasets like SNIPs (Coucke et al., 2018).", "BANKING77 contains 13,083 utterances related to banking with 77 different fine-grained intents.", "CLINC150 contains 23,700 utterances spanning 10 domains (e.g., travel, kitchen/dining, utility, small talk, etc.) and 150 different intent classes.", "HWU64 includes 25,716 utterances for 64 intents spanning 21 domains (e.g., alarm, music, IoT, news, etc.).", "Casanueva et al. (2020) forego a validation set for these datasets and instead only use a training and testing set.", "We instead follow the setup of Mehri et al. (2020), wherein a portion of the training set is designated as the validation set.", "We evaluate in two experimental settings following prior work (Casanueva et al., 2020; Mehri et al., 2020): (1) using the full training set and (2) using 10 examples per intent, or approximately 10% of the training data.", "In both settings, we evaluate on the validation set at the end of each epoch and perform early stopping with a patience of 20 epochs for a maximum of 100 epochs.", "Since the few-shot experiments are more sensitive to initialization and hyperparameters, we repeat the few-shot experiments 5 times and take an average over the experimental runs.", "For the few-shot settings, our models use only the few-shot training data for both masked language modelling and as examples at inference time in the example-driven models (i.e., they do not see any additional data).", "Our experiments with observers all use 20 observers; however, we include an ablation in the appendix (Table 6; see supplementary materials).", "Our experimental results, as well as the results obtained by Casanueva et al. (2020) and Mehri et al. (2020), are shown in Table 1.", "[Table 1: Accuracy scores (×100%) on all three intent detection datasets with varying numbers of training examples (Few: 10 training utterances per intent; Full: full training data); columns are BANKING77 Few/Full, CLINC150 Few/Full, HWU64 Few/Full. Prior work: USE* (Casanueva et al., 2020) 84.23/92.81, 90.85/95.06, 83.75/91.25; CONVERT* (Casanueva et al., 2020) 83.32/93.01, 92.62/97.16, 82.65/91.24; USE+CONVERT* (Casanueva et al., 2020) 85.19/93.36, 93.26/97.16, 85.83/92.62; BERT-BASE (Mehri et al., 2020) 79.87/93.02, 89.52/95.93, 81.69/89.97; CONVBERT (Mehri et al., 2020) 83.63/92.95, 92.10/97.07, 83.77/90.43; CONVBERT+MLM (Mehri et al., 2020) 83.99/93.44, 92.75/97.11, 84.52/92.38. Proposed models: CONVBERT+MLM+Example 84.09/94.06, 92.35/97.11, 83.44/92.47; CONVBERT+MLM+Observers 83.73/92.83, 92.47/96.76, 85.06/92.10; CONVBERT+MLM+Example+Observers 85.95/93.83, 93.97/97.31, 86.28/93.03.]", "Combining observers and example-driven training results in (1) SoTA results across the three datasets and (2) a significant improvement over the BERT-base model, especially in the few-shot setting (+5.02% on average).", "Furthermore, the results show that the use of observers is particularly conducive to the example-driven training setup.", "Combining these two approaches yields strong improvements over the CONVBERT+MLM model (few-shot: +4.98%, full data: +0.41%).", "However, when we consider the two proposed approaches independently, there is no consistent improvement for either example-driven training (few-shot: -0.46%, full data: +0.24%) or observers (few-shot: +0%, full data: -0.42%).", "The fact that these two methods are particularly conducive to each other signifies the importance of using them jointly.", "The representation step of intent prediction is tackled by observers, which aim to better capture the semantics of an input by disentangling the attention and therefore avoiding the dilution of the representations.", "The prediction step is improved through example-driven training, which uses the underlying BERT-based model to predict intents by explicitly reasoning over a set of examples.", "This characterization highlights the importance of jointly addressing both steps of the process simultaneously.", "Using observers alone does not lead to significant improvements because the linear classification layer cannot effectively leverage the improved representations.", "Using example-driven training alone does not lead to significant improvements because the [CLS] representations do not capture enough of the underlying utterance semantics.", "The enhanced semantic representation of observers is necessary for example-driven training: by improving the latent representations of utterances, it is easier to measure similarity in the set of examples.", "This section describes several experiments that were carried out to show the unique benefits of observers and example-driven training, as well as to validate our hypothesis regarding the two methods.", "First, we show that with the example-driven formulation for intent prediction, we can attain strong performance on intents unseen during training.", "Next, we show that the generalization to new intents transfers across datasets.", "Next, we carry out a probing experiment that demonstrates that the latent representation of the observers contains greater semantic information about the input.", "Finally, we discuss an ablation over the number of observers used which demonstrates that the benefit of observers is primarily a consequence of the disentangled attention.", "By formulating intent prediction as a sentence similarity task, the example-driven formulation allows for the potential to predict intents that are unseen at training time.", "We carry out experiments in the few-shot setting for each dataset by (1) randomly removing 4-10 intent classes when training in an example-driven manner, (2) adding the removed intents back to the set of examples during evaluation, and (3) reporting results only on the unseen intents.",
"We repeat this process 30 times for each dataset and the results are reported in Table 2. It should be noted that we do not perform MLM training on the utterances corresponding to the unseen intents.", "These results demonstrate that the example-driven formulation generalizes to new intents, without having to re-train the model.", "The performance on the unseen intents approximately matches the performance of the best model which has seen all intents (denoted BEST FULLY TRAINED MODEL in Table 2).", "These results highlight a valuable property of the proposed formulation: namely, that new intent classes can be added in an online manner without having to re-train the model.", "While the off-the-shelf BERT-base and CONVBERT models, which are not at all fine-tuned on the datasets, are able to identify similar sentences to some extent, training in an example-driven manner drastically improves performance.", "The addition of observers, in combination with example-driven training, significantly improves performance in this experimental setting (+18.42%).", "This suggests that the observers generalize better to unseen intents, potentially because the observers are better able to emphasize words that are key to differentiating between intents (e.g., turn the volume up vs. turn the volume down).", "While transferring to unseen intents is a valuable property, the unseen intents in this experimental setting are still from the same domain.", "To further evaluate the generalizability of our models, we carry out experiments evaluating the ability of models to transfer to other datasets.", "Using the full data setting with 10 training utterances per intent, we (1) train a model on a dataset and (2) evaluate the models on a new dataset, using the training set of the new dataset as examples during inference.", "In this manner, we evaluate the ability of the models to transfer to unseen intents and domains without additional training.", "The results in Table 3 demonstrate the ability of the model with observers and example-driven training to transfer to new datasets, which consist of both unseen intents and unseen domains.", "These results show that the example-driven model performs reasonably well even when transferring to domains and intents that were not seen at training time.", "These results, in combination with the results shown in Table 2, speak to the generalizability of the proposed methods.", "Specifically, by formulating intent prediction as a sentence similarity task through example-driven training, we are maintaining consistency with a universal goal of language encoders (i.e., that utterances with similar semantic meanings have similar latent representations) that effectively transfers to new settings.", "We hypothesized that by disentangling the attention in BERT-based models, the observers would avoid the dilution of representations (which occurs because words attend to a meaningless [CLS] token) and therefore better capture the semantics of the input.", "We validate this hypothesis through the experimental evidence presented in Table 2, wherein the use of observers results in a significant performance improvement on unseen intents.",
"To demonstrate that observers better capture the semantics of an input, we carry out a probing experiment using the word-content task of Conneau et al. (2018).", "We generate a latent representation of each utterance using models with and without observers.", "We then train a classifier layer on top of the frozen representations to reproduce the words of the input.", "Similar to Conneau et al. (2018), we avoid using the entire vocabulary for this probing experiment and instead use only the most frequent 1,000 words for each dataset.", "With infrequent words, there would be uncertainty about whether the performance difference is a consequence of (1) the semantic content of the representation or (2) the quality of the probing model.", "Since we are concerned with measuring the former, we only consider the most frequent words to mitigate the effect of the latter.", "Table 4 shows the micro-averaged F-1 score for the task of reproducing the words in the utterance, given the different latent representations.", "A latent representation that better captures the semantics of the input utterance will be better able to reproduce the specific words of the utterance.", "The results in Table 4 show that the use of observers results in latent representations that better facilitate the prediction of the input words (+1.50, or a 5% relative improvement).", "These results further validate the hypothesis that the use of observers results in better latent representations.",
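The word-content probe above can be approximated with per-word linear probes over frozen representations. The sketch below uses random vectors in place of the frozen [CLS]/observer states and macro-averages per-word F1 rather than the paper's micro-averaged score; all sizes are toy values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_utts, dim, vocab = 200, 16, 50              # toy sizes (paper: 1k words)
reps = rng.normal(size=(n_utts, dim))         # frozen encoder outputs
labels = rng.integers(0, 2, size=(n_utts, vocab))  # word-occurrence targets

# One binary probe per frequent word, fit on the frozen representations.
# (The paper evaluates on held-out data; predicting on the training split
# here just keeps the sketch short.)
f1s = []
for w in range(vocab):
    clf = LogisticRegression(max_iter=200).fit(reps, labels[:, w])
    pred = clf.predict(reps)
    tp = int(((pred == 1) & (labels[:, w] == 1)).sum())
    prec = tp / max(int(pred.sum()), 1)
    rec = tp / max(int(labels[:, w].sum()), 1)
    f1s.append(0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec))
print(float(np.mean(f1s)))  # higher means more word content is recoverable
```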
"To further understand the performance of the observers, we carry out an ablation study over the number of observers.", "The results shown in Table 6 (in the Appendix) demonstrate that while multiple observers help, even a single observer provides benefit.", "This suggests that the observed performance gain is primarily a consequence of the disentangled attention rather than averaging over multiple observers.", "This ablation provides further evidence that the use of observers mitigates the dilution of the utterance-level representations.", "Intent prediction is the task of converting a user's natural language utterance into one of several pre-defined classes, in an effort to describe the user's intent (Hemphill et al., 1990; Coucke et al., 2018).", "Intent prediction is a vital component of pipeline task-oriented dialog systems, since determining the goals of the user is the first step to producing an appropriate response (Raux et al., 2005; Young et al., 2013).", "Prior to the advent of large-scale pre-training (Devlin et al., 2018; Radford et al., 2018), approaches for intent prediction utilized task-specific architectures and training methodologies (e.g., multi-tasking, regularization strategies) that aim to better capture the semantics of the input (Bhargava et al., 2013; Hakkani-Tür et al., 2016; Gupta et al., 2018; Niu et al., 2019).", "The large-scale pre-training of BERT makes it more effective for many tasks within natural language understanding (Wang et al., 2018), including intent prediction (Chen et al., 2019a; Castellucci et al., 2019).", "However, recent work has demonstrated that leveraging dialog-specific pre-trained models, such as ConveRT (Henderson et al., 2019; Casanueva et al., 2020) or CONVBERT (Mehri et al., 2020), obtains better results.", "In this paper, we build on a strong pre-trained conversational encoder (CONVBERT) (1) by enhancing its ability to effectively capture the semantics of the input through observers and (2) by re-formulating the problem of intent prediction as a sentence similarity task through example-driven training, in an effort to better leverage the strengths of language encoders and facilitate generalizability.", "Analysis of BERT's attention weights shows that a significant amount of attention is attributed to special tokens, which have no inherent meaning (Clark et al., 2019; Kovaleva et al., 2019).", "We address this problem by disentangling BERT's attention through the use of observers.", "There have been several avenues of recent work that have explored disentangling the attention mechanism in Transformers.", "Chen et al. (2019b) explore disentangling the attention heads of a Transformer model conditioned on dialog acts to improve response generation.", "He et al. (2020) disentangle the attention corresponding to the words and to the position embeddings to attain performance gains across several NLP tasks.", "Guo et al. (2019) propose an alternative to fully-connected attention, wherein model complexity is reduced by replacing the attention connections with a star-shaped topology.", "Recent efforts in NLP have shown relying on an explicit set of nearest neighbors to be effective for language modelling (Khandelwal et al., 2019), question answering (Kassner and Schütze, 2020) and knowledge-grounded dialog (Fan et al., 2020).", "However, these approaches condition on examples only during inference or in a non end-to-end manner.", "In contrast, we train the encoder to classify utterances by explicitly reasoning over a set of examples.", "The core idea of example-driven training is similar to that of metric-based meta learning, which has been explored in the context of image classification, wherein the objective is to learn a kernel function (which in our case is BERT) and use it to compute similarity to a support set (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017).", "In addition to being the first to extend this approach to the task of intent prediction, the key difference of example-driven training is that we use a pre-trained language encoder (Mehri et al., 2020) as the underlying sentence similarity model (i.e., kernel function).", "Ren and Xue (2020) leverage a triplet loss for intent prediction, which ensures that their model learns similar representations for utterances with the same intent.", "We go beyond this by performing end-to-end prediction in an example-driven manner.", "Our non-parametric approach for intent prediction allows us to attain SoTA results and facilitate generalizability to unseen intents and across datasets.", "In order to enhance the generalizability of intent prediction models, we introduce (1) observers and (2) example-driven training.", "We attain SoTA results on three datasets in both the full data and few-shot settings.", "Furthermore, our proposed approach exhibits the ability to transfer to unseen intents and across datasets without any additional training, highlighting its generalizability.", "We carry out a probing experiment that shows the representations produced by observers to better capture the semantic information in the input.", "There are several avenues for future work.", "(1) Observers and example-driven training can be extended beyond intent prediction to tasks like slot filling and dialog state tracking.", "(2) Since observers are disentangled from the attention graph, it is worth exploring whether it is possible to force each of the
observers to capture a different property of the input (i.e., intent, sentiment, domain, etc.).", "(3) Our mechanism for measuring sentence similarity in our example-driven formulation can be improved.", "Our paper presents several approaches for improving performance on the task of intent prediction in task-oriented dialogs.", "We believe that neither our proposed approaches nor the resulting models have cause for ethical concerns.", "There is limited potential for misuse.", "Given the domain of our data (i.e., task-oriented dialogs), failure of the models will not result in harmful consequences.", "Our paper relies on significant experimentation, which may have resulted in a higher carbon footprint; however, this is unlikely to be drastically higher than that of the average NLP paper." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "objective", "objective", "objective", "abstain", "objective", "abstain", "objective", "result", "objective", "result", "abstain", "objective", "other", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "method", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Natural language inference (NLI) has been widely used as a task to train and evaluate models for language understanding.", "However, the ability of NLI models to perform inferences requiring understanding of figurative language such as idioms and metaphors remains understudied.", "We introduce the IMPLI ( I diomatic and M etaphoric P aired L anguage I nference) dataset, an English dataset consisting of paired sentences spanning idioms and metaphors.", "We develop novel methods to generate 24k semiautomatic pairs as well as manually creating 1.8k gold pairs.", "We use IMPLI to evaluate NLI models based on RoBERTa fine-tuned on the widely used MNLI dataset.", "We then show that while they can reliably detect entailment relationship between figurative phrases with their literal counterparts, they perform poorly on similarly structured examples where pairs are designed to be non-entailing.", "This suggests the limits of current NLI models with regard to understanding figurative language and this dataset serves as a benchmark for future improvements in this direction.", "1 1 Introduction Understanding figurative language (i.e., that in which the intended meaning of the utterance differs from the literal compositional meaning) is a particularly di cult area in NLP (Shutova, 2011; Veale et al., 2016), but is essential for proper natural language understanding.", "We consider here two types of figurative language: idioms and metaphors.", "Idioms can be viewed as non-compositional multiword expressions (Jochim et al., 2018), and have been historically di cult for NLP systems.", "For instance, sentiment systems struggle with multiword expressions in which individual words do not directly contribute to the sentiment (Sag et al., 2002).", "Metaphors involve linking conceptual properties of two or more domains, and are known to be pervasive in everyday language (Lako and Johnson, 1980; Stefanowitsch and Gries, 2008; Steen et al., 2010).", "Recent work has shown that these types of figurative language are impactful across a broad array of NLP tasks (see 2.1).", "Large-scale pre-training and transformer-based architectures have yielded increasingly powerful language models (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019).", "However, relatively little work has explored these models' representations of figurative and creative language.", "NLI datasets have widely been used for evaluating the performance of language models (Dagan et al., 2006; Bowman et al., 2015a; Williams et al., 2018), but there are insu cient figurative language datasets in which a literal sentence is linked to a corresponding figurative counterpart that are large enough to be suitable for evaluating NLI.", "Due to the creative nature of human language, creating a dataset of diverse, high-quality literal / figurative pairs is time-consuming and di cult.", "To address this gap, we build a new English dataset of paired expressions designed to be leveraged to explore model performance via NLI.", "Our dataset, IMPLI ( I diomatic / M etaphoric P aired L anguage I nference), is comprised of both silver pairs, which are built using semi-automated 5375 methods (3.1), as well as hand-written gold pairs (3.4), crafted to reflect both entailment and non-entailment scenarios.", "Each pair consists of a sentence containing a figurative expression (id-ioms / metaphors) and a literal counterpart, designed to be either entailed or non-entailed by the figurative expression (Table 1 shows some examples).", "Our contribution thus consists 
of three key parts: We create a new IMPLI dataset consisting of 24,029 silver and 1,831 gold sentence pairs built from idiomatic and metaphoric phrases, covering both entailment and non-entailment relationships (see Table 2).", "We evaluate language models in an NLI setup, showing that metaphoric language is surprisingly easy, while non-entailing idiomatic relationships remain extremely difficult.", "We evaluate model performance in a number of experiments, showing that incorporating idiomatic expressions into the training data is less helpful than expected, and that idioms that can occur in more flexible syntactic contexts tend to be easier to classify.", "Figurative language includes idioms, metaphors, metonymy, hyperbole, and more.", "Critically, figurative language is that in which speaker meaning (what the speaker intends to accomplish through an utterance) differs from the literal meaning of that utterance.", "This leads to problems in NLP systems if they are trained mostly on literal data, as their representations for particular words and/or phrases will not reflect their figurative intended meanings.", "Figurative language has a significant impact on many NLP tasks.", "Metaphoric understanding has been shown to be necessary for proper machine translation (Mao et al., 2018; Mohammad et al., 2016).", "Sentiment analysis also relies critically on figurative language: irony and sarcasm can reverse the polarity of a sentence, while metaphors and idioms may make more subtle changes in the speaker meaning (Ghosh et al., 2015).", "Political discourse tasks including bias, misinformation, and political framing detection benefit from joint learning with metaphoricity (Huguet Cabot et al., 2020).", "Figurative language engendered by creativity on social media also poses difficulty for many NLP tasks, including identifying depression symptoms (Yadav et al., 2020; Iyer et al., 2019) and hate speech detection (Lemmens et al., 2021).", "We are here focused on idioms and metaphors.", "There is currently a gap in diagnostic datasets for idioms, and our work fills this gap.", "There exist some relevant metaphoric resources (see 2.2); metaphors are known to be extremely common and important to understanding figurative language, and our resource serves to build upon this work.", "Natural language inference is the task of predicting, given two fragments of text, whether the meaning of one (premise) entails the other (hypothesis) (Dagan et al., 2006).", "The task is formulated as a 3-way classification problem, in which the premise and hypothesis pairs are labeled as entailment, contradiction, or neutral if their relationship cannot be directly inferred (Bowman et al., 2015b).", "NLI has been widely used as an evaluation task for language understanding, and there have been a large number of challenging datasets, which have been used to further our understanding of the capabilities of language models (Wang et al., 2018, 2019).", "Paired data for figurative language is relatively sparse, and there is a gap in the diagnostic datasets used for NLI in this area.", "Previous work includes the literal/metaphoric paraphrases of Mohammad et al. (2016) and Bizzoni and Lappin (2018), although both contain only hundreds of samples, insufficient for proper model training and evaluation.", "With regard to NLI, early work proposed the task of textual entailment as a way of understanding metaphor processing capabilities (Agerri et al., 2008; Agerri, 2008).", "Poliak et al.
(2018) build a dataset for diverse NLI, which includes some creative language such as puns, albeit making no claims with regard to figurativeness.", "Zhou et al. (2021) build a dataset consisting of paired idiomatic and literal expressions.", "They begin with a set of 823 idiomatic expressions yielding 5,170 sentences, and had annotators manually rewrite sentences containing these idioms as literal expressions.", "We expand on this methodology by having annotators only correct definitions for the idioms themselves and use these definitions to automatically generate the literal interpretations of the idioms by replacing them into appropriate contexts: this allows us to scale up to over 24k silver sentences.", "We also expand beyond paraphrasing by incorporating both entailment and non-entailment pairs.", "Similar to this work, Chakrabarty et al. (2021a) build a dataset for NLI based on figurative language.", "Their dataset consists of figurative/literal pairs recast from previously developed simile and metaphor datasets, along with a parallel dataset between ironic and non-ironic rephrasings.", "This sets the groundwork for figurative NLI, but the dataset is relatively small outside of the irony domain, and the non-entailments are generated purely by replacing words with their antonyms, restricting the novelty of the hypotheses.", "Their dataset is relatively easy for NLI models; here we show that figurative language can be challenging, particularly with regard to non-entailments.", "Zhou et al. (2021) and Chakrabarty et al. (2021a) provide invaluable resources for figurative NLI; our work aims to cover gaps in a number of areas.", "First, we generate a large number of both entailment and non-entailment pairs, allowing for better evaluation of adversarial non-entailing examples.", "Second, our silver methods allow for rapid development of larger-scale data, allowing for model training and evaluation.", "We show that while entailment pairs are relatively easy (accuracy scores ranging from .86 to .89), the non-entailment pairs are exceedingly challenging, with the roberta-large model achieving accuracy scores ranging from .311 to .539.", "Our IMPLI dataset is built from idiomatic and metaphoric sentences paired with entailing and non-entailing counterparts, from both silver pairs (3.1) and manually written sentences (3.4).", "For our purposes, we follow McCoy et al.
(2019) in conflating the neutral and contradiction categories into a non-entailment label.", "We then label every pair as either entailment (⊨) or non-entailment (⊭).", "Due to the difficult nature of the task and to avoid issues with crowdsourcing (Bowman et al., 2020), we employed expert annotators.", "We used two fluent English speakers, both graduate students in linguistics with strong knowledge of figurative language, paid at a rate of $20/hr.", "For each method below, we ran pilot studies, incorporated annotator feedback, and iteratively assessed the viability of identifying and generating appropriate expressions.", "As the annotators were working on generating new expressions, agreement was not calculated: we instead assessed the quality of the resulting expressions (see Section 3.3).", "Table 2 contains an overview of the different entailment and non-entailment types collected (detailed examples are also provided in Appendix D).", "First, we explore a method for generating silver pairs using annotators to create phrase definitions which can be inserted automatically into relevant contexts, yielding a large number of possible entailment and non-entailment pairs that differ only with regard to the relevant phrase.", "Our procedure hinges on a key assumption: for any given figurative phrase, we can generate a contextually independent literal paraphrase.", "We then replace the original expression with the literal paraphrase, following the assumption that the figurative expression necessarily entails its literal paraphrase: He's stuck in bed, which is his hard cheese.", "He's stuck in bed, which is his bad luck.", "Conversely, in contexts where the original phrase is used literally, replacing it with the literal paraphrase should yield a non-entailment relation.", "To build idiomatic pairs, we use three corpora that contain sentences with idiomatic expressions (IEs) labelled as either figurative or literal.", "These are the MAGPIE Corpus (Haagsma et al., 2020), the PIE Corpus (Adewumi et al., 2021), and the SemEval 2013 Task 5 corpus (Korkontzelos et al., 2013).", "We collect the total set of IEs that are present in these corpora.", "We then extract definitions for these using freely available online idiom dictionaries.", "These definitions are often faulty, incomplete, or improperly formatted.", "We employed annotators to make manual corrections.", "The annotators were given the original IE as well as the definition extracted from the dictionary.", "The annotators were asked to ensure that the dictionary definition given was (1) a correct literal interpretation and (2) fit syntactically in the same environments as the original IE.", "If the definition met both of these criteria, the IE can be replaced by its definition to yield an entailment pair.", "If either criterion was not met, annotators were asked to minimally update the definition so that it satisfied the requirements.", "In total, this process yielded 697 IE definitions.", "We then used the above corpora, replacing these definitions into the original sentences (see Figure 1).", "(We here use \"idiomatic expression\" or \"IE\" to refer to the specific idiom in question, i.e.,
\"kick the bucket\", \"spill the beans\"), as opposed to the sentence / context containing it.", "original corpora: replacing them into figurative contexts yields entailment relations, while replacing them into contexts where the phrase is meant literally then yields non-entailments.", "As a second method for generating non-entailment pairs, we asked annotators to write novel, adversarial definitions for IEs.", "Given a particular phrase, they were instructed to invent a new meaning for the IE that was not entailed by the true meaning, but which seemed reasonable presuming they had never heard the original IE.", "Some examples of this process are shown in Table", "3. We then replace these adversarial definitions into figurative sentences from the corpora.", "This yields pairs where the premise is an idiom used figura-tively, and the hypothesis is a sentence that attempts to rephrase the idiom literally, but does so incorrectly, thus yielding non-entailments (Figure 2).", "Metaphors are handled in a similar way: we start with a collection of minimal metaphoric expressions (MEs).", "These are subject-verb-object and adjective-noun constructions from Tsvetkov et al. (2014).", "Each is annotated as being either literal or metaphoric, along with an example sentence.", "We passed these MEs directly to annotators, who were then instructed to replace a word in the ME so that it would be considered literal in a neutral context.", "These can then be replaced in a similar fashion: we start with the original figurative sentence, replace the ME with the literal replacements, and the result is an entailing pair with the metaphoric sentence entailing the literal.", "We apply this procedure to the dataset of Tsvetkov et al. (2014), yielding 100 metaphoric / literal NLI entailment pairs.", "We then take a portion of the Common Crawl dataset 4 , and identify sentences that contain these original MEs.", "We identify sentences that contain the words from the metaphoric phrase, and replace the metaphoric word itself with its literal counterpart.", "This yields 645 additional silver pairs.", "For all silver methods, we also employ syntactic postprocessing to overcome a number of hurdles.", "First, phrases used idiomatically often follow different syntactic patterns than when used literally.", "Original: These point out of this world , but where to is not made clear.", "Replaced: *These point wonderful , but where to is not made clear.", "This phrase in literal contexts functions syntactically as a prepositional phrase, while idiomatically it is used as an adjective.", "When replaced with the definition \"wonderful\" in a literal context, we get a grammatically incoherent sentence.", "Second, phrases in their literal usage often do not form full constituents, due to the string-matching approach of the original datasets.", "Many literal usages of 4 https://commoncrawl.org/ these phrases are thus incompatible with the defined replacement.", "I think [this one has to die ] for the other one to live.", "Turn in [ the raw edges] of both seam allowances towards each other and match the folded edges.", "To avoid these issues, we ran syntactic parsing on the definition and the expression within each context, requiring that the expression in context begins with the same part of speech as the definition and that it does not end inside of another phrase.", "Additionally, for each replacement, we ensured that the verb conjugation matched the context.", "For this, we identified the conjugation in the context, and used a 
"In implementing and analyzing this procedure, we noted a number of practical issues.", "First, a large number of the MEs provided are actually idiomatic or proverbial: the focus word does not actually contribute to the metaphor, but rather the entire expression is necessary.", "Similarly, we found that replacing individual parts of MEs is often insufficient to fully remove the metaphoric meaning.", "We iterated over possible solutions to circumvent these issues and found that it is best to simply skip instances for which a replacement does not yield a feasible literal interpretation.", "In order for these automatically created pairs to be useful for NLI-based evaluation, they need to be of sufficiently high quality.", "As the annotators were generating novel definitions and pairs, rather than computing inter-annotator agreement we instead evaluate the quality of the resulting pairs by testing whether the automatically generated pairs contain the appropriate entailment relation.", "For this task, each annotator was given 100 samples for each general category of silver generations (idiomatic entailments, idiomatic non-entailments, and metaphoric entailments).", "They were asked if the entailment relation between the two sentences was as expected.", "An expert then adjudicated disagreements to determine the final percentage of valid pairs.", "To evaluate the syntactic validity of the generated pairs, we additionally ran the Stanford PCFG dependency parser (Klein and Manning, 2003) on the pairs.",
"encouraged to keep as much of the original utterance as possible, ensuring high lexical overlap, while removing the main figurative element of the sentence.", "For idioms, this comes from adding or adjusting words to force a literal reading of the idiom: The old girl finally kicked the bucket.", "For metaphors, this typically involves keeping the same phrasing while adapting the sentence to have a di erent, non-metaphoric meaning.", "Previous work in NLI has employed the technique of replacing words in the literal sentences with their antonyms to yield non-entailing pairs (Chakrabarty et al., 2021a).", "We replicate this process for idioms: for the manually elicited definitions, we replace key words as determined by annotators with their antonyms.", "This yields sentences which negate the original figurative meaning and are thus suitable non-entailment pairs.", "Previous work found this antonym replacement for figurative language remains relatively easy for NLI systems, which we can additionally explore with regard to idioms.", "These manual annotations provide a number of concrete benefits.", "First, they are not restricted to individual words or phrases (excluding antonyms): the figurative components can be rewritten freely, allowing for diverse, interesting pairs.", "Second, they are written by experts, ensuring higher quality than the automatic annotations, which may be noisy.", "Using the IMPLI dataset, we aim to answer a series of questions via NLI pertaining to language models' ability to understand and represent figurative language accurately.", "These questions are: R1: How well do pre-trained models perform on figurative entailments and non-entailments?", "R2: Does adding idiomatic pairs into the training data a ect model performance?", "R3: Does the flexibility of idiomatic expressions a ect model performance?", "in previous work: it contains a large number of both entailments and non-entailments and is large enough to be used for training the models.", "We obtain baseline NLI models by fine-tuning roberta-base and roberta-large models on the MNLI dataset (Williams et al., 2018), with entailments as the positive class and all others as the negative and evaluate them on their original test sets as well as IMPLI .", "5 Due to variance in neural model performance (Reimers and Gurevych, 2017), we take the mean score over 5 runs using di erent seeds.", "We report results in Table 5.", "We observe that idiomatic entailments are relatively easy to classify, with accuracy scores over .84.", "Non-entailments were much more challenging.", "Silver pairs generated through adversarial definitions were especially di cult: the pairs contain high lexical overlap, and in many cases the premise and hypotheses are semantically similar.", "The replacement into literal samples were easier, as the idiomatic definition clashes more starkly with the original premise, making non-entailment predictions more likely.", "Consistent with Chakrabarty et al. 
(2021a)'s work in metaphors, non-entailment through antonym replacement is easiest for idioms: the antonymic relationship can be a marker for non-entailment, despite the high word overlap.", "With regard to metaphors, silver entailment pairs are relatively easy.", "Manual pairs are more challenging but are still much easier than idioms.", "This is supported by the fact that metaphors are common in everyday language: these models have likely seen the same (or similar) metaphors in training.", "Our findings show that in fact metaphoricity may not be particularly challenging for deep pre-trained models, as they are able to effectively capture the metaphoric entailment relations.", "The roberta-large model performs better for metaphoric expressions than roberta-base, but the difference on other partitions is relatively small (we found minimal differences between these models across R1-R3).", "We also find that lexical overlap plays a significant role here, as noted by previous work (McCoy et al., 2019): sentences with high overlap tend to be classified as entailments regardless of the true label (for more, see Appendix B).", "We note that the manual pairs tend to be more difficult for both idioms and metaphors: these pairs can be more flexible and creative, whereas the silver pairs are restricted to more regular patterns.", "R2: Incorporating Idioms into Training To evaluate incorporating idioms into training, we then split the idiom data by idiomatic phrase types, keeping a set of IEs separate as test data to assess whether the model can learn to correctly handle novel, unseen phrases.", "Our goal is to assess whether poor performance is due to models not having seen these expressions in training, or because their ability to represent figurative language is inherently limited.", "We hypothesize that the non-compositional nature of these types of figuration should lead to poor performance on unseen phrases, even if the model is trained on other idiomatic data.", "For each task, we split the data into 10 folds by IE and incrementally incorporate these folds into the original MNLI data for training, leaving one fold out for testing.", "We experiment with incorporating all training data for both labels, as well as using only entailment or non-entailment samples.", "We then evaluate our results on the entire test set, as well as the entailment and non-entailment partitions.", "Figure 4 shows the results, highlighting that additional training data yields only small improvements.", "Pairs with non-entailment relations remain exceedingly difficult, with performance capping out at only slightly better than chance.", "As hypothesized, additional training data is only somewhat effective in improving language models' idiomatic capabilities; it is not sufficient to overcome difficulties from literal usages of idiomatic phrases and adversarial definitions, indicating that the models' ability to represent idiomatic language is inherently limited rather than merely a matter of missing training data.", "R3: Syntactic Flexibility Finally, we assess models' representation of idiomatic compositionality.", "Nunberg et al.
(1994) indicate that there are two general types of idioms: \"idiomatic phrases\", which exhibit limited flexibility and generally occur only in a single surface form, and \"idiomatically combining expressions\" or ICEs, in which the constituent elements of the idiom carry semantic meaning which can influence their syntactic properties, allowing them to be more syntactically flexible.", "For example, in the idiom spill the beans, we can map the spilling activity to the divulging of information, and the beans to the information.", "Because this expression has semantic mappings to figurative meaning for its syntactic constituents, Nunberg et al. (1994) argue that it can be more syntactically flexible, allowing for expressions like the beans that were spilled by Martha to maintain idiomatic meaning.", "For fixed expressions such as kick the bucket, no syntactic constituents map directly to the figurative meaning (\"die\").", "We then expect less syntactic flexibility, and thus the bucket that was kicked by John loses its idiomatic meaning.", "We hypothesize that model performance will be correlated with the degree to which a given idiom type is flexible: more fixed expressions may be easier, as they are seen in regular, fixed patterns that the models can memorize, while more flexible ICEs will be more difficult, as they can appear in different patterns, cases, and word orders, often even mixing in with other constituents.", "To test this, we define an ICE score as the percentage of times a phrase occurs in our test data in a form that does not match its original base form (sketched after this record).", "Higher percentages mean the phrase occurs more frequently in a non-standard form, acting as a measure of the syntactic flexibility of the expression.", "We assessed the performance of the roberta-base model for each idiom type, evaluating Spearman correlations between performance and idioms' ICE scores.", "We found no correlation between ICE scores and performance for entailments, nor for adversarial definition non-entailments (r = .004/.45, p = .921/.399; see Appendix C).", "However, we do see a weak but significant correlation (r = .188, p = .016)
with non-entailments from literal contexts: the model performs better when the phrases are more flexible, contrary to our initial hypothesis.", "One possible explanation is that the model memorizes a specific figurative meaning for each fixed expression, disregarding the possibility of these words being used literally.", "When the expression is used in a literal context, the model then still assumes the figurative meaning, resulting in errors on non-entailment samples.", "The ICEs are more fluid, and thus the model is less likely to have a concrete representation for the given phrase: it is better able to reason about the context and interacting words within the expression, making it easier to distinguish the entailing and non-entailing samples.", "In this work, we introduce the IMPLI dataset, which we then use to evaluate NLI models' capabilities on figurative language.", "We show that while widely used MNLI models handle entailment admirably and metaphoric expressions are relatively easy, non-entailment idiomatic relationships are more difficult.", "Additionally, adding idiom-specific training data fails to alleviate poor performance for non-entailing pairs.", "This highlights how current language models are inherently limited in representing some figurative phenomena, and it provides a target for future model improvements.", "For future work, we aim to expand our data collection processes to new data sources.", "Our dataset creation procedure relies on annotated samples and definitions: as more idiomatic and metaphoric resources become available, this process is broadly extendable to create new figurative/literal pairs.", "Additionally, we only explore this data for evaluating NLI systems: this data could also be used for other parallel data tasks such as figurative language interpretation (Shutova, 2013; Su et al., 2017) and figurative paraphrase generation.", "As natural language generation often relies on training or fine-tuning models with paired sentences, this data could be a valuable resource for figurative language generation systems." ]
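The ICE score defined in this record (the percentage of occurrences of an idiom that deviate from its base form) and the associated correlation analysis could be computed roughly as follows; the surface forms and per-idiom accuracies below are invented placeholders, not the authors' data:

from scipy.stats import spearmanr

def ice_score(occurrences, base_form):
    """Percentage of occurrences that do not exactly match the base form;
    higher values indicate a more syntactically flexible expression."""
    non_standard = sum(1 for occ in occurrences if occ != base_form)
    return 100.0 * non_standard / len(occurrences)

# Hypothetical inputs: observed surface forms per idiom, and per-idiom
# accuracy on non-entailments from literal contexts.
forms = {
    "spill the beans": ["spill the beans", "the beans were spilled", "spilled the beans"],
    "kick the bucket": ["kick the bucket", "kick the bucket", "kicked the bucket"],
    "break the ice": ["break the ice", "broke the ice", "ice was broken"],
}
accuracy = {"spill the beans": 0.7, "kick the bucket": 0.4, "break the ice": 0.6}

scores = [ice_score(occ, idiom) for idiom, occ in forms.items()]
r, p = spearmanr(scores, [accuracy[idiom] for idiom in forms])
print(f"Spearman r={r:.3f}, p={p:.3f}")

With the full test data in place of these toy inputs, this reproduces the correlation test reported above (r = .188, p = .016 for literal-context non-entailments).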
[ "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "objective", "abstain", "abstain" ]
[ "Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models.", "Common methods encode the source sentence, conditioned on the entity mentions, before classifying the relation.", "However, the complexity of the task makes it difficult to understand how encoder architecture and supporting linguistic knowledge affect the features learned by the encoder.", "We introduce 14 probing tasks targeting linguistic properties relevant to RE, and we use them to study representations learned by more than 40 different encoder architecture and linguistic feature combinations trained on two datasets, TACRED and SemEval 2010 Task 8.", "We find that the bias induced by the architecture and the inclusion of linguistic features are clearly expressed in the probing task performance.", "For example, adding contextualized word representations greatly increases performance on probing tasks with a focus on named entity and part-of-speech information, and yields better results in RE.", "In contrast, entity masking improves RE, but considerably lowers performance on entity type related probing tasks.", "Relation extraction (RE) is concerned with extracting relationships between entities mentioned in text, where relations correspond to semantic categories such as org:founded by , person:spouse , or org:subsidiaries (Figure 1).", "Neural models have shown impressive results on this task, achieving state-of-the-art performance on standard datasets like SemEval2010 Task 8 (dos Santos et al., 2015; Wang et al., 2016; Lee et al., 2019), TACRED (Zhang et al., 2018; Alt et al., 2019b; Peters et al., 2019; Joshi et al., 2019), and NYT (Lin et al., 2016; Vashishth et al., 2018; Alt et al., 2019a).", "The majority of models implement an encoder architec-[...] included Aerolineas's domestic subsidiary, Austral.", "ture to learn a fixed size representation of the input, e.g. a sentence, which is passed to a classification layer to predict the target relation label.", "These good results suggest that the learned representations capture linguistic and semantic properties of the input that are relevant to the downstream RE task, an intuition that was previously discussed for a variety of other NLP tasks by Conneau et al. (2018).", "However, it is often unknown which exact properties the various models have learned.", "Our aim is to pinpoint the information a given RE model is relying on, in order to improve model performance as well as to diagnose errors.", "A general approach to model introspection is the use of probing tasks .", "Probing tasks (Shi et al., 2016; Adi et al., 2017), or diagnostic classifiers, are a well established method to analyze the presence of specific information in a model's latent representations, e.g. in machine-translation (Belinkov et al., 2017), language modeling (Giulianelli et al., 2018), and sentence encoding (Conneau et al., 2018).", "For each probing task, a classifier is trained on a set of representations, and its performance measures how well the information is encoded.", "The probing task itself is typically selected in accordance with the downstream task, e.g. 
an encoder trained on RE may be probed for the entity type of a relation argument.", "If the classifier correctly predicts the type, it implies the encoder retains entity type information in the representations, which also directly informs the relation prediction.", "The simplicity of this approach makes it easier to pinpoint the information a model is relying on, as opposed to probing the downstream task directly.", "Our goal in this paper is to understand which features of the input a model conditioned on relation extraction has learned as useful for the task, in order to be able to better interpret and explain model predictions.", "Relation extraction literature is rich with information about useful features for the task (Zhou et al., 2005; Mintz et al., 2009; Surdeanu et al., 2011).", "Consequently, our initial question is whether and how well the sentence representations learned by state-of-the-art neural RE models encode these well-known features, such as argument entity types, dependency path, or argument distance features.", "Another question is how the prior imposed by different encoding architectures, e.g. CNN, RNN, Graph Convolutional Network, and Self-Attention, affects the features stored in the learned sentence representations.", "Finally, we would like to understand the effect of additional input features on the learned sentence representations.", "These include explicit semantic and syntactic knowledge like entity information and grammatical role, and, as recently proposed, contextualized word representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018).", "We therefore significantly extend earlier work on probing tasks as follows: Following the framework of Conneau et al. (2018), we propose a set of 14 probing tasks specifically focused on linguistic properties relevant to relation extraction.", "We evaluate four encoder architectures, also in combination with supporting linguistic knowledge, on two datasets, TACRED (Zhang et al., 2017) and SemEval 2010 Task 8 (Hendrickx et al., 2010), for a total of more than 40 variants.", "We follow up on this analysis with an evaluation on the proposed probing tasks to establish a connection between task performance and captured linguistic properties.", "To facilitate further research and wider adoption, we open-source our relation extraction framework based on AllenNLP (Gardner et al., 2018), and REval, a framework extending the SentEval toolkit (Conneau and Kiela, 2018) with our probing tasks.", "This section introduces the probing tasks we use to evaluate the learned sentence representations.", "We base our work on the setup and tasks introduced by Conneau et al.
(2018), but focus on probing tasks related to relation extraction.", "We therefore adopt some of the tasks they propose, and introduce new probing tasks specifically designed for RE.", "As in their work, the probing task classification problem requires only single sentence embeddings as input (as opposed to, e.g., sentence and word embeddings, or multiple sentence representations).", "This fits the standard RE setup quite well, where the task is typically to classify the relation(s) expressed between a pair of entity mentions in a single sentence.", "While we focus on supervised relation extraction, this setup is also applicable in a distantly supervised RE setting, where state-of-the-art approaches are often based on passing sentence representations to a bag-level classifier that computes classification label(s) over all sentences for a given entity pair (Mintz et al., 2009; Lin et al., 2016).", "Similar to Conneau et al. (2018), we also aim to address a set of linguistic properties related to relation extraction, ranging from simple surface phenomena (e.g. relation argument distance) to syntactic information (e.g. parse tree depth and argument ordering) and semantic information (e.g. the entity types of relation arguments).", "We use the standard training, validation, and test split of the original TACRED dataset for RE and probing task experiments.", "For SemEval we reuse the test set and use 10% of the training set for validation.", "For TACRED we use the provided named entity, part-of-speech, and dependency parsing information, and parse SemEval with the Stanford Parser (2018-10-05 version) (Manning et al., 2014).", "Surface information These tasks test whether sentence embeddings capture simple surface properties of the sentences they encode.", "The sentence length (SentLen) task, introduced by Adi et al. (2017), predicts the number of tokens in a sentence.", "We group sentences into n = 10 bins (TACRED; 7 bins for SemEval) by length, selecting bin widths so that training sentences are distributed approximately uniformly across bins (a sketch of this binning follows this record), and treat SentLen as an n-way classification task.", "Our next probing task, argument distance (ArgDist), predicts the number of tokens between the two relation arguments.", "Similar to SentLen, we group sentences into 10 bins (5 for SemEval) by relative distance.", "Inspired by a common feature in classical RE (Surdeanu et al., 2011), we also test if any named entity exists between the two relation arguments (EntExist), treating it as a binary classification problem.", "Addressing this task requires the encoder to produce a sentence embedding that (at least partially) represents the inner context of the relation arguments.", "Syntactic information Syntactic information is highly relevant for relation extraction.", "Many RE approaches utilize e.g. dependency path information (Bunescu and Mooney, 2005; Krause et al., 2012; Mintz et al., 2009), or part-of-speech tags (Zhou et al., 2005; Surdeanu et al., 2011).", "We therefore include the tree depth task (TreeDepth) described by Conneau et al.
(2018).", "This task tests whether an encoder can group sentences by the depth of the longest path from root to any leaf.", "We group tree depth values into 10 (TACRED, SemEval 7) approximately uniformly distributed classes, ranging from from depth 1 to depth 15.", "To account for shortest dependency path (SDP) information, we include an SDP tree depth task ( SDPTreeDepth ), which tests if the learned sentence embedding stores information about the syntactical link between the relation arguments.", "Again, we group SDP tree depth values into bins, in this case only 6 (4) classes, since the SDP trees are generally more shallow than the original sentence dependency parse tree.", "The argument ordering task ( ArgOrd ) tests if the head argument of a relation occurs before the tail argument in the token sequence.", "An encoder that successfully addresses this challenge captures some information about syntactic structures where the order of a relation's arguments is inverted, e.g. in constructions such as The acquisition of Monsanto by Bayer, as compared to default constructions like Bayer acquired Monsanto.", "We also include 4 tasks that test for the part-of-speech tag of the token directly to the left or right of the relation's arguments: PosHeadL , PosHeadR , PosTailL , PosTailR .", "These tasks test whether the encoder is sensitive to the immediate context of an argument.", "Some relation types, e.g. per:nationality or org:top member , can often be identified based on the immediate argument context, e.g. US president-NN Donald Trump, or Google 's-POSS CEO-NN Larry Page.", "Representing this type of information in the sentence embedding should be useful for the relation classification.", "Argument information Finally, we include probing tasks that require some understanding of what each argument denotes.", "The argument entity type tasks ( TypeHead, TypeTail ) ask for the entity tag of the head, and respectively the tail, argument.", "Entity type information is highly relevant for relation extraction systems since it strongly constrains the set of possible relation labels for a given argument pair.", "We treat these tasks as multi-class classification problems over the set of possible argument entity tags (see Section 3.3).", "Our last task concerns the grammatical function of relation arguments.", "The grammatical role tasks ( GRHead, GRTail ) ask for the role of each argument, as given by the dependency label connecting the argument and its syntactic head token.", "The motivation is that the subject and object of verbal constructions often correspond to relation arguments for some relation types, e.g. 
Bayer acquired Monsanto.", "We currently test for four roles, namely nsubj, nsubjpass, dobj, and iobj, and group all other dependency labels into the other class.", "Note that there are other grammatical relations that may be of interest for relation extraction, for example possessive modifiers (Google's Larry Page), compounds (Google CEO Larry Page), and appositions (Larry Page, CEO of Google).", "This section first introduces the four sentence encoding architectures we consider for evaluation (§3.1), followed by a description of the supporting linguistic knowledge we evaluate: entity masking and contextualized word representations (§3.2).", "We also introduce the two datasets we use for training the relation extraction models and probing the sentence representations (§3.3).", "Generally, methods in relation extraction follow the sequence-to-vector approach, encoding the input (often a single sentence) into a fixed-size representation, before applying a fully connected relation classification layer (Figure 2: the sentence encoder feeds a fixed-size representation to both the relation classifier and the probing classifier).", "A single input is represented as a sequence of $T$ tokens $\{w_t\}_{t=1,\dots,T}$, and the spans $(head_{start}, head_{end})$ and $(tail_{start}, tail_{end})$ of the two entity mentions in question.", "We focus our evaluation on four widely used approaches that have been shown to perform well on RE.", "For all architectures we signal the position of head and tail by the relative offset to each", "token $w_i$ via positional embeddings $p^h_i \in \mathbb{R}^c$ and $p^t_i \in \mathbb{R}^c$ concatenated to the input token representation $e_i = [e^w_i, p^h_i, p^t_i]$, where $e^w_i \in \mathbb{R}^d$ is the token embedding.", "CNN We follow the work of Zeng et al. (2014) and Nguyen and Grishman (2015), who both use a convolutional neural network for relation extraction.", "Their models encode the input token sequence $\{w_t\}_{t=1,\dots,T}$ by applying a series of 1-dimensional convolutions of different filter sizes, yielding a set of output feature maps $M_f$, followed by a max-pooling operation that selects the maximum values along the temporal dimension of $M_f$ to form a fixed-size representation.", "Bi-LSTM max Similar to Zhang and Wang (2015) and Zhang et al. (2017), we use a Bi-LSTM to encode the input sequence.", "A Bi-LSTM yields a sequence of hidden states $\{h_t\}_{t=1,\dots,T}$, where $h_t$ is a concatenation $[h^f_t, h^b_t]$ of the states of a forward LSTM $h^f$ and a backward LSTM $h^b$.", "Similar to the CNN, we use max pooling across the temporal dimension to obtain a fixed-size representation (we considered taking the final hidden state but found max pooling to perform better).", "GCN Graph convolutional networks (Kipf and Welling, 2016) adapt convolutional neural networks to graphs.", "Following the approach of Zhang et al. (2018), we treat the input token sequence $\{w_t\}_{t=1,\dots,T}$ as a graph consisting of $T$ nodes, with an edge between $w_i$ and $w_j$ if there exists a dependency edge between the two tokens.", "We", "convert the dependency tree into a $T \times T$ adjacency matrix, after pruning the graph to the shortest dependency path between head and tail.", "An $L$-layer GCN applied to $\{w_t\}_{t=1,\dots,T}$ yields a sequence of hidden states $\{h_t\}_{t=1,\dots,T}$ contextualized on neighboring tokens with a graph distance of at most $L$.
Forming a fixed-size representation is done by max pooling over the temporal dimension and local max pooling over the tokens $\{w_t\}$ for $t \in [head_{start}, \dots, head_{end}]$, and similarly for $t \in [tail_{start}, \dots, tail_{end}]$.", "Multi-Headed Self-Attention Similar to the Transformer (Vaswani et al., 2017), we compute a sequence of contextualized representations $\{h_t\}_{t=1,\dots,T}$ by applying $L$ layers of multi-headed self-attention to the input token sequence $\{w_t\}_{t=1,\dots,T}$.", "The representation $h_t$ of $w_t$ is computed as a weighted sum of a projection $V$ of the input tokens, with respect to the scaled, normalized dot product of $Q$ and $K$, which are both also linear projections of the input; the procedure is repeated for each attention head.", "A fixed-size representation is obtained by taking the final state $h_T$ at the last layer $L$. 3.2 Supporting Linguistic Knowledge Adding additional lexical, syntactic, and semantic input features to neural RE approaches has been shown to considerably improve performance (Zeng et al., 2014; Zhang et al., 2017, 2018).", "Features include e.g. casing, named entity, part-of-speech, and dependency information.", "Most recently, pre-learned contextualized word representations (deep language representations) emerged, capturing syntactic and semantic information useful to a wide range of downstream tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018).", "We therefore evaluate the effect of adding explicit named entity and grammatical role information (through entity masking) on our pre-learned sentence representations, and compare it to adding contextualized word representations computed by ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) as additional input features.", "Entity Masking Entity masking has been shown to provide a significant gain in RE performance on the TACRED dataset (Zhang et al., 2017) by replacing each entity mention with a combination of its entity type and grammatical role (subject or object).", "It limits the information about entity mentions available to a model, possibly preventing overfitting to specific mentions and forcing the model to focus more on the context.", "ELMo Embeddings from Language Models, as introduced by Peters et al.
(2018), are an approach to compute contextualized word representations by applying a pre-learned, two-layer Bi-LSTM neural network to an input token sequence $\{w_t\}_{t=1,\dots,T}$.", "ELMo operates on the character level and is pre-trained with the forward and backward directions as separate unidirectional language models.", "It yields a representation $h_i = [h^f_i, h^b_i]$ for each token $w_i$, with $h^f_i$ conditioned on the preceding context $\{w_t\}_{t=1,\dots,i-1}$ and, independently, $h^b_i$ conditioned on the succeeding context $\{w_t\}_{t=i+1,\dots,T}$.", "BERT Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) improves upon methods such as ELMo and the OpenAI Generative Pre-trained Transformer (GPT) (Radford et al., 2018) by using a masked language model that allows for jointly training forward and backward directions.", "Compared to ELMo, BERT operates on word-piece input and is based on the self-attentive Transformer architecture (Vaswani et al., 2017).", "It computes a representation for a token $w_i$ jointly conditioned on the preceding context $\{w_t\}_{t=1,\dots,i-1}$ and the succeeding context $\{w_t\}_{t=i+1,\dots,T}$.", "Table 1 shows key statistics of the TACRED and SemEval datasets.", "TACRED is approximately 10x the size of SemEval 2010 Task 8, but contains a much higher fraction of negative training examples, making classification more challenging.", "TACRED The TAC Relation Extraction Dataset (Zhang et al., 2017) contains 106k sentences with entity mention pairs collected from the TAC KBP evaluations.", "Sentences are annotated with person- and organization-oriented relation types, e.g. per:title, org:founded, and no relation for negative examples.", "In contrast to the SemEval dataset, the entity mentions are typed, with subjects classified into person and organization and objects categorized into 16 fine-grained classes (e.g., date, location, title).", "As per convention, we report our results as micro-averaged F1 scores.", "SemEval 2010 Task 8 The SemEval 2010 Task 8 dataset (Hendrickx et al., 2010) is a standard benchmark for binary relation classification, and contains 8,000 sentences for training and 2,717 for testing.", "Sentences are annotated with a pair of untyped nominals and one of 9 directed semantic relation types, such as Cause-Effect and Entity-Origin, as well as the undirected Other type to indicate no relation, resulting in 19 distinct types in total.", "We follow the official convention and report macro-averaged F1 scores with directionality taken into account.", "Table 2 and Table 3 report the accuracy scores of the probing task experiments for models trained on the TACRED and SemEval datasets.", "We did not include the ArgOrd and EntExist tasks in the SemEval evaluation, since SemEval relation arguments are always ordered in the sentence as indicated by the relation type, and entity types recognizable by standard tools such as Stanford CoreNLP that might occur between head and tail are not relevant to the dataset's entity types and relations.", "Dataset links: TACRED: https://catalog.ldc.upenn.edu/LDC2018T24; TAC KBP: https://tac.nist.gov/2017/KBP/index.html;", "SemEval 2010 Task 8: http://www.kozareva.com/downloads.html.", "[Table 2 (TACRED): probing task accuracies (TypeHead, TypeTail, SentLen, ArgDist, ArgOrd, EntExist, PosLHead, PosRHead, PosLTail, PosRTail, TreeDep, SDPDep, GRHead, GRTail) and RE F1 scores for the baselines (majority vote, Length, ArgDist, BoE) and for the CNN, Bi-LSTM, GCN, and S-Att. encoders, each with ELMo and BERT variants and with and without entity masking.]",
"Baseline performances are reported in the top section of Table 2 and Table 3.", "Length and ArgDist are both linear classifiers, which use sentence length and the distance between head and tail argument as the only feature, respectively.", "BoE computes a representation of the input sentence by summing over the embeddings of all tokens it contains.", "Generally, there is a large gap between top baseline performance and that of a trained encoder.", "While SentLen and ArgDist are trivially solved by the respective linear classifier, BoE shows surprisingly good performance on SentLen and ArgOrd, and a clear improvement over the other baselines for named entity- and part-of-speech-related probing tasks.", "Encoder Architecture For most probing tasks, except SentLen and ArgOrd, a proper encoder clearly outperforms bag-of-embeddings (BoE), which is coherent with the findings of Adi et al. (2017) and Conneau et al.
(2018).", "Similarly, the results indicate that the prior imposed by the encoder architecture preconditions the information encoded in the learned embeddings.", "Models with a local or recency bias (CNN, BiLSTM) perform well on probing tasks with local focus, such as PosHead { L,R } and PosTail { L,R } and distance related tasks (ArgDist, ArgOrd).", "Similarly, models with access to dependency information (GCN) perform well on tree related tasks (SDPTreeDepth).", "Due to the graph pruning step (Zhang et al., 2018), the GCN is left with a limited view of the depen-TypeHead TypeTail SentLen ArgDist PosLHead PosRHead PosLTail PosRTail TreeDep SDPDep GRHead GRTail F1score Majority vote 22.0 21.3 25.7 42.1 62.1 39.3 38.3 34.0 25.4 67.2 37.3 80.9 Length 25.8 24.7 100.0 42.1 62.1 39.1 38.3 46.3 44.3 67.2 40.6 80.9 ArgDist 23.6 22.3 25.7 100.0 62.1 43.7 37.9 35.3 26.2 67.8 45.4 80.9 BoE 58.5 58.0 82.4 84.8 65.1 66.1 49.2 72.5 44.1 69.8 65.4 83.6 55.7 CNN 76.1 76.2 34.9 87.5 66.0 85.8 74.2 73.1 34.1 72.1 70.3 89.1 80.2 + ELMo 81.3 81.8 38.1 88.5 70.0 89.0 79.5 76.4 35.5 71.8 75.1 90.9 84.4 + BERT 83.9 84.1 55.9 90.2 74.0 89.3 81.2 84.6 41.3 73.1 76.8 90.6 86.3 + BERT 83.4 83.7 54.3 90.4 74.4 89.4 82.0 82.8 42.0 73.0 78.3 90.8 86.0 Bi-LSTM 77.1 77.0 50.5 74.9 63.8 75.9 61.8 68.5 41.3 70.3 69.2 87.7 80.1 + ELMo 81.5 81.8 41.1 66.6 62.8 71.8 59.3 64.5 37.5 70.1 70.0 87.6 83.7 + BERT 83.6 83.7 41.8 61.5 62.7 68.9 57.9 63.0 37.1 70.8 67.4 86.7 85.6 + BERT 82.5 82.8 41.8 66.0 63.1 70.8 58.6 64.3 37.7 71.0 68.9 87.5 85.1 GCN 75.4 75.5 35.0 81.5 68.5 87.5 71.2 55.5 35.5 80.3 76.3 91.7 79.6 + ELMo 80.7 80.8 32.2 68.1 68.3 83.4 65.8 53.2 34.4 75.8 80.0 91.1 84.2 + BERT 82.5 83.0 42.5 66.5 73.6 84.7 69.2 66.3 38.9 77.2 82.1 91.0 85.7 + BERT 81.5 81.9 42.7 67.3 73.8 85.1 69.6 67.8 39.6 77.6 84.2 91.9 84.3 S-Att.", "dency tree, which explains the low performance on TreeDepth.", "Surprisingly, while Self-Attention exhibits superior performance on the RE task, it consistently performs lower on the probing tasks compared to the other encoding architectures.", "This could indicate Self-Attention encodes deeper linguistic information into the sentence representation, not covered by the current set of probing tasks.", "Probing Tasks Compared to the baselines, all proper encoders exhibit consistently high performance on TypeHead and TypeTail, clearly highlighting the importance of entity type information to RE.", "In contrast, encoders trained on the downstream task perform worse on SentLen, which intuitively makes sense, since sentence length is mostly irrelevant for RE.", "This is consistent with Conneau et al. (2018), who found SentLen performance to decrease for models trained on more complex downstream tasks, e.g. 
neural machine translation, strengthening the assumption that, as a model captures deeper linguistic properties, it will tend to forget about this superficial feature.", "With the exception of the CNN, all encoders consistently show low performance on the argument distance (ArgDist) task.", "A similar performance pattern can be observed for ArgOrd, where models that are biased towards locality (CNN and Bi-LSTM) perform better, while models that are able to efficiently model long-range dependencies, such as GCN and", "S-Att., show lower performance.", "The superior RE task performance of the latter indicates that their bias may allow them to learn deeper linguistic features.", "The balanced performance of CNN, Bi-LSTM, and GCN encoders across part-of-speech-related tasks (PosHeadL, PosHeadR, PosTailL, PosTailR) highlights the importance of part-of-speech-related features to RE, again with the exception of", "S-Att., which performs just slightly above the baselines.", "On TreeDepth and SDPTreeDepth (with GCN as the exception), average performance in many cases ranges just slightly above baseline performance, suggesting that TreeDepth requires more nuanced syntactic information, which the models fail to acquire.", "The good performance on the grammatical role tasks (GRHead, GRTail) once more emphasizes the relevance of this feature to RE, with the GCN exhibiting the best performance on average.", "This is unsurprising, because the GCN focuses on token-level information along the dependency path connecting the arguments, and hence seems to be able to capture grammatical relations among tokens more readily than the other encoders (even though the GCN also does not have access to the dependency labels themselves).", "Entity Masking Perhaps most interestingly, masking entity mentions with their respective named entity and grammatical role information considerably lowers the performance on entity-type-related tasks (TypeHead and TypeTail).", "This indicates that masking forces the encoder's focus away from the entity mentions, which is confirmed by the performance decrease on probing tasks with a focus on argument position and distance, e.g. ArgDist, ArgOrd, and SentLen.", "CNN and Bi-LSTM encoders exhibit the greatest decrease in performance, suggesting severe overfitting to specific entity mentions when no masking is applied.", "In comparison, the GCN shows less tendency to overfit.", "Surprisingly, with entity masking the self-attentive encoder (S-Att.) increases its focus on entity mentions and their surroundings, as suggested by the performance increase on the distance- and argument-related probing tasks.", "Word Representations Adding contextualized word representations computed by ELMo or BERT greatly increases performance on probing tasks with a focus on named entity and part-of-speech information.", "This indicates that contextualized word representations encode useful syntactic and semantic features relevant to RE, which is coherent with the findings of Peters et al. (2018) and Radford et al.
(2018), who both highlight the effectiveness of linguistic features encoded in contextualized word representations (deep language representations) for downstream tasks.", "The improved performance on syntactic and semantic probing tasks is also reflected in an overall improvement in RE task performance.", "Compared to ELMo, encoders with BERT generally exhibit an overall better and more balanced performance on the probing tasks.", "This is also reflected in a superior RE performance, suggesting that a bidirectional language model encodes linguistic properties of the input more effectively.", "Somewhat surprisingly, BERT without casing performs equally well or better on the probing tasks focused on entity and part-of-speech information, compared to the cased version.", "While this intuitively makes sense for SemEval, as the dataset focuses on semantic relations between concepts, it is surprising for TACRED, which contains relations between proper entities, e.g. person and company names, where casing information is more important for identifying the entity type.", "Probing vs. Relation Extraction One interesting observation is that encoders that perform better on probing tasks do not necessarily perform better on the downstream RE task.", "For example, CNN+ELMo scores highest on most of the probing tasks, but has an F1 score 8.1 points lower than the best model on this dataset,", "S-Att.+BERT cased with masking.", "Similarly, all variants of the self-attentive encoder (S-Att.) show superior performance on RE but consistently come up last on the probing tasks, occasionally performing just above the baselines.", "Conneau et al. (2018) observed a similar phenomenon for encoders trained on neural machine translation.", "Relation Extraction The relation extraction task performance on the TACRED dataset ranges between 55.3 (Bi-LSTM) and 57.6 F1 (S-Att.), with performance improving to around 58.8-64.7 F1 when adding pre-learned, contextualized word representations (see the Appendix for more details on RE task performance, training, and model hyperparameters).", "As observed in previous work (Zhang et al., 2017), masking helps the encoders to generalize better, with gains of around 4-8 F1 when compared to the vanilla models.", "This is mainly due to better recall, which indicates that without masking, models may overfit, e.g. by memorizing specific entity names.", "The best-performing model achieves a score of 66.9 F1 (S-Att.+BERT cased with masking).", "On the SemEval dataset, the performance of the vanilla models is around 80.0 F1.", "Adding contextualized word representations significantly improves the performance of all models, by 3.5-6 F1.", "The best-performing model on this dataset is a CNN with uncased BERT embeddings, with an F1 score of 86.3, which is comparable to state-of-the-art models (Wang et al., 2016; Cai et al., 2016).", "Shi et al. (2016) introduced probing tasks to probe syntactic properties captured in encoders trained on neural machine translation.", "Adi et al. (2017) extended this concept of auxiliary prediction tasks, proposing SentLen, word count, and word order tasks to probe general sentence encoders, such as bag-of-vectors, auto-encoder, and skip-thought.", "Conneau et al.
(2018) considered 10 probing tasks, including SentLen and TreeDepth, and an extended set of encoders, such as Seq2Tree and encoders trained on NMT and NLI, for general text classification.", "Their setup, however, is not directly applicable to relation extraction, because the RE task requires not only the input sentence, but also the entity arguments.", "We therefore extend their framework to accommodate the RE setting.", "Another difference from their work is that while their probing tasks focus on linguistic properties of general sentence encoders, we specifically focus on relation extraction.", "To that end, we extend the evaluation to relation extraction by introducing a set of 14 probing tasks, including SentLen and TreeDepth, specifically designed to probe linguistic properties relevant to relation extraction.", "We introduced a set of probing tasks to study the linguistic features captured in sentence encoder representations trained on relation extraction.", "We conducted a comprehensive evaluation of common RE encoder architectures, and studied the effect of explicitly and implicitly provided semantic and syntactic knowledge, uncovering interesting properties about the architectures and input features.", "For example, we found self-attentive encoders to be well suited for RE on sentences of different complexity, though they consistently perform lower on probing tasks, hinting that these architectures capture deeper linguistic features.", "We also showed that the bias induced by different architectures clearly affects the learned properties, as suggested by probing task performance, e.g. for distance- and dependency-related probing tasks.", "In future work, we want to extend the probing tasks to also cover specific linguistic patterns such as appositions, and to investigate a model's ability to generalize to specific entity types, e.g. company and person names.", "We would like to thank all reviewers for their helpful comments and feedback.", "This work has been supported by the German Federal Ministry of Education and Research as part of the projects DEEPLEE (01IW17001) and BBDC2 (01IS18025E), and by the German Federal Ministry for Economic Affairs and Energy as part of the project PLASS (01MD19003E)." ]
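A minimal sketch of the probing protocol described in this record: freeze a trained RE sentence encoder, extract fixed-size representations, and fit a simple diagnostic classifier per probing task (here predicting an entity-type label; the encode function is an assumed placeholder for any of the encoders above, and logistic regression stands in for the probing classifier):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def run_probing_task(encode, train, test):
    """Each example is (sentence, head_span, tail_span, probing_label);
    `encode` returns the frozen encoder's fixed-size representation."""
    X_tr = np.stack([encode(s, h, t) for s, h, t, _ in train])
    y_tr = [label for *_, label in train]
    X_te = np.stack([encode(s, h, t) for s, h, t, _ in test])
    y_te = [label for *_, label in test]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Probing accuracy measures how well the property is encoded.
    return accuracy_score(y_te, clf.predict(X_te))

Any of the encoders in the record (CNN, Bi-LSTM, GCN, Self-Attention) could supply encode; only the frozen representations are visible to the probing classifier.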
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "method", "objective", "method", "objective", "result", "result", "objective", "other", "other" ]
[ "We review motivations, definition, approaches, and methodology for unsupervised crosslingual learning and call for a more rigorous position in each of them.", "An existing rationale for such research is based on the lack of parallel data for many of the world's languages.", "However, we argue that a scenario without any parallel data and abundant monolingual data is unrealistic in practice.", "We also discuss different training signals that have been used in previous work, which depart from the pure unsupervised setting.", "We then describe common methodological issues in tuning and evaluation of unsupervised cross-lingual models and present best practices.", "Finally, we provide a unified outlook for different types of research in this area (i.e., cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation) and argue for comparable evaluation of these models.", "The study of the connection among human languages has contributed to major discoveries including the evolution of languages, the reconstruc-tion of proto-languages, and an understanding of language universals (Eco and Fentress, 1995).", "In natural language processing, the main promise of multilingual learning is to bridge the digital language divide, to enable access to information and technology for the world's 6,900 languages (Ruder et al., 2019).", "For the purpose of this paper, we define multilingual learning as learning a common model for two or more languages from raw text, without any downstream task labels.", "Common use cases include translation as well as pretraining multilingual representations.", "We will use the term interchangeably with cross-lingual learning .", "Recent work in this direction has increasingly focused on purely unsupervised cross-lingual learning (UCL)i.e., cross-lingual learning without any parallel signal across the languages.", "We provide an overview in 2.", "Such work has been motivated by the apparent dearth of parallel data for most of the world's languages.", "In particular, previous work has noted that data encoding cross-lingual equivalence is often expensive to obtain (Zhang et al., 2017a) whereas monolingual data is much easier to find (Lample et al., 2018a).", "Overall, it has been argued that unsupervised cross-lingual learning opens up opportunities for the processing of extremely low-resource languages and domains that lack parallel data completely (Zhang et al., 2017a).", "We challenge this narrative and argue that the scenario of no parallel data and sufficient monolingual data is unrealistic and not reflected in the real world (3.1).", "Nevertheless, UCL is an important research direction and we advocate for its study based on an inherent scientific interest (to better understand and make progress on general language understanding), usefulness as a lab setting, and simplicity (3.2).", "Unsupervised cross-lingual learning permits no supervisory signal by definition.", "However, previous work implicitly includes monolingual and cross-lingual signals that constitute a departure from the pure setting.", "We review existing training signals as well as other signals that may be of interest for future study (4).", "We then discuss methodological issues in UCL (e.g., validation, hyperparameter tuning) and propose best evaluation practices (5).", "Finally, we provide a unified outlook of established research areas (cross-lingual word embeddings, deep multilingual models and unsupervised machine translation) in UCL (6), and conclude with a summary 
of our recommendations (§7).", "In this section, we briefly review existing work on UCL, covering cross-lingual word embeddings (§2.1), deep multilingual pre-training (§2.2), and unsupervised machine translation (§2.3).", "Cross-lingual word embedding methods traditionally relied on parallel corpora (Gouws et al., 2015; Luong et al., 2015).", "Nonetheless, the amount of supervision required was greatly reduced via cross-lingual word embedding mappings, which work by separately learning monolingual word embeddings in each language and mapping them into a shared space through a linear transformation (a sketch of such a mapping is given later in this record).", "Early work required a bilingual dictionary to learn such a transformation (Mikolov et al., 2013a; Faruqui and Dyer, 2014).", "This requirement was later reduced with self-learning (Artetxe et al., 2017), and ultimately removed via unsupervised initialization heuristics (Artetxe et al., 2018a; Hoshen and Wolf, 2018) and adversarial learning (Zhang et al., 2017a; Conneau et al., 2018a).", "Finally, several recent methods have formulated cross-lingual embedding alignment as an optimal transport problem (Zhang et al., 2017b; Grave et al., 2019; Alvarez-Melis and Jaakkola, 2018).", "Following the success in learning shallow word embeddings (Mikolov et al., 2013b; Pennington et al., 2014), there has been an increasing interest in learning contextual word representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018).", "Recent research has been dominated by BERT (Devlin et al., 2019), which uses a bidirectional transformer encoder trained on masked language modeling and next sentence prediction, leading to impressive gains on various downstream tasks.", "While the above approaches are limited to a single language, a multilingual extension of BERT (mBERT; https://github.com/google-research/bert/blob/master/multilingual.md) has been shown to also be effective at learning cross-lingual representations in an unsupervised way.", "The main idea is to combine monolingual corpora in different languages, upsampling those with less data, and training a regular BERT model on the combined data.", "Conneau and Lample (2019) follow a similar approach but perform a more thorough evaluation and report substantially stronger results, which was further scaled up by Conneau et al. (2019).", "Several recent studies (Wu and Dredze, 2019; Pires et al., 2019; Artetxe et al., 2020b; Wu et al., 2019) analyze mBERT to get a better understanding of its capabilities.", "Early attempts to build machine translation systems using monolingual data alone go back to statistical decipherment (Ravi and Knight, 2011; Dou and Knight, 2012, 2013).", "However, this approach was only shown to work in limited settings, and the first convincing results on standard benchmarks were achieved by Artetxe et al. (2018c) and Lample et al.
(2018a) on unsupervised Neural Machine Translation (NMT).", "Both approaches rely on cross-lingual word embeddings to initialize a shared encoder, and train it in conjunction with the decoder using a combination of denoising autoencoding, back-translation, and optionally adversarial learning.", "Subsequent work adapted these principles to unsupervised phrase-based Statistical Machine Translation (SMT), obtaining large improvements over the original NMT-based systems (Lample et al., 2018b; Artetxe et al., 2018b).", "This alternative approach uses cross-lingual n-gram embeddings to build an initial phrase table, which is combined with an n-gram language model and a distortion model, and further refined through iterative back-translation.", "There have been several follow-up attempts to combine NMT- and SMT-based approaches (Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019b).", "More recently, Conneau and Lample (2019), Song et al. (2019), and Liu et al. (2020) obtain strong results using deep multilingual pretraining rather than cross-lingual word embeddings to initialize unsupervised NMT systems.", "In this section, we challenge the narrative of motivating UCL based on a lack of parallel resources.", "We argue that the strict unsupervised scenario cannot be motivated from an immediate practical perspective, and elucidate what we believe should be the true goals of this research direction.", "(This refers to Conneau and Lample (2019) above: the full version of their model, XLM, requires parallel corpora for the translation language modeling objective, but the authors also explore an unsupervised variant using masked language modeling alone.)", "Monolingual resources subsume parallel resources.", "For instance, each side of a parallel corpus effectively serves as a monolingual corpus.", "From this argument, it follows that monolingual data is cheaper to obtain than parallel data, so unsupervised cross-lingual learning should in principle be more generally applicable than supervised learning.", "However, we argue that the common claim that the requirement for parallel data may not be met for many language pairs in the real world (Xu et al., 2018) is largely inaccurate.", "For instance, the JW300 parallel corpus covers 343 languages with around 100,000 parallel sentences per language pair on average (Agić and Vulić, 2019), and the multilingual Bible corpus collected by Mayer and Cysouw (2014) covers 837 language varieties (each with a unique ISO 639-3 code).", "Moreover, the PanLex project aims to collect multilingual lexica for all human languages in the world, and already covers 6,854 language varieties with at least 20 lexemes, 2,364 with at least 200 lexemes, and 369 with at least 2,000 lexemes (Kamholz et al., 2014).", "While 20 or 200 lexemes might seem insufficient, weakly supervised cross-lingual word embedding methods already proved effective with as few as 25 word pairs (Artetxe et al., 2017).", "More recent methods have focused on completely removing this weak supervision (Conneau et al., 2018a; Artetxe et al., 2018a), which can hardly be justified from a practical perspective given the existence of such resources and additional training signals stemming from a (partially) shared script (§4.2).", "Finally, given the availability of sufficient monolingual data, noisy parallel data can often be obtained by mining bitext (Schwenk et al., 2019a,b).", "In addition, large monolingual data is difficult to obtain for low-resource languages.", "For instance, recent work on cross-lingual word embeddings has mostly used
Wikipedia as its source for monolingual corpora (Gouws et al., 2015; Vulić and Korhonen, 2016; Conneau et al., 2018a).", "However, as of November 2019, Wikipedia exists in only 307 languages (https://en.wikipedia.org/wiki/List_of_Wikipedias), of which nearly half have fewer than 10,000 articles.", "While one could hope to overcome this by taking the entire web as a corpus, as facilitated by Common Crawl (https://commoncrawl.org/) and similar initiatives, this is not always feasible for low-resource languages.", "First, the presence of less resourced languages on the web is very limited, with only a few hundred languages recognized as being used in websites (https://w3techs.com/technologies/overview/content_language).", "This situation is further complicated by the limited coverage of existing tools such as language detectors (Buck et al., 2014; Grave et al., 2018), which only cover a few hundred languages.", "Alternatively, speech could also serve as a source of monolingual data (e.g., by recording public radio stations).", "However, this is an unexplored direction within UCL, and collecting, processing, and effectively capitalizing on speech data is far from trivial, particularly for low-resource languages.", "All in all, we conclude that the alleged scenario involving no parallel data and sufficient monolingual data is not met in the real world in the terms explored by recent UCL research.", "Needless to say, effectively exploiting unlabeled data is important in any low-resource setting.", "However, refusing to use an informative training signal (which parallel data is) when it does indeed exist cannot be justified from a practical perspective if one's goal is to build the strongest possible model.", "For this reason, we believe that semi-supervised learning is a more suitable paradigm for truly low-resource languages, and UCL should not be motivated from an immediate practical perspective.", "Despite not being an entirely realistic setup, we believe that UCL is an important research direction for the reasons we discuss below.", "Inherent scientific interest.", "The extent to which two languages can be aligned based on independent samples, without any cross-lingual signal, is an open and scientifically relevant problem per se.", "In fact, it is not entirely obvious that UCL should be possible at all, as humans would certainly struggle to align two unknown languages without any grounding.", "Exploring the limits of UCL could help to understand the limits of the principles that the corresponding methods are based on, such as the distributional hypothesis.", "Moreover, this research line could bring new insights into the properties and inner workings of both language acquisition and the underlying computational models that ultimately make UCL possible.", "Finally, such methods may be useful in areas where supervision is impossible to obtain, such as when dealing with unknown or even non-human languages.", "Useful as a lab setting.", "The strict unsupervised scenario, although not practical, allows us to isolate and better study the use of monolingual corpora for cross-lingual learning.", "We believe lessons learned in this setting can be useful in the more practical semi-supervised scenario.", "In a similar vein, monolingual language models, although hardly useful on their own, have contributed to large improvements in other tasks.", "From a research methodology perspective, unsupervised systems also set a competitive baseline, which any semi-supervised method should improve upon.",
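As a concrete illustration of the embedding-mapping approach reviewed at the start of this record: given a (possibly tiny) seed dictionary, the best orthogonal map between two monolingual embedding spaces has a closed-form solution via SVD; the fully unsupervised methods cited above replace the seed dictionary with an initialization heuristic or adversarial learning. A sketch, with random matrices standing in for real embeddings:

import numpy as np

def procrustes(X, Y):
    """Solve min_W ||XW - Y||_F with W orthogonal, where the rows of X and
    Y are embeddings of translation pairs from a seed dictionary (weakly
    supervised methods reportedly work with as few as 25 such pairs)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt  # maps source-language embeddings into the target space

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(25, 300)), rng.normal(size=(25, 300))
W = procrustes(X, Y)
assert np.allclose(W @ W.T, np.eye(300), atol=1e-6)

The self-learning variant cited in this record roughly iterates this step, re-inducing the dictionary from the mapped embeddings at each round instead of relying on a fixed one.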
"Simplicity as a value.", "As we discussed previously, refusing to use an informative training signal when it does exist can hardly be beneficial, so we should not expect UCL to perform better than semi-supervised learning.", "However, simplicity is a value in its own right.", "Unsupervised approaches could be preferable to their semi-supervised counterparts if the performance gap between them is small enough.", "For instance, unsupervised cross-lingual embedding methods have been reported to be competitive with their semi-supervised counterparts in certain settings (Glava et al., 2019), while being easier to use in the sense that they do not require a bilingual dictionary.", "In its most general sense, unsupervised crosslingual learning can be seen as referring to any method relying exclusively on monolingual text data in two or more languages.", "However, there are different training signalsstemming from common assumptions and varying amounts of linguistic knowledgethat one can potentially exploit under such a regime.", "This has led to an inconsistent use of this term in the literature.", "In this section, we categorize different training signals available both from a monolingual and a cross-lingual perspective and discuss additional scenarios enabled by multiple languages.", "From a computational perspective, text is modeled as a sequence of discrete symbols.", "In UCL, the training data consists of a set of such sequences in each of the languages.", "In principle, without any knowledge about the languages, one would have no prior information of the nature of such sequences or the possible relations between them.", "In practice, however, sets of sequences are assumed to be independent, and existing work differs whether they assume document-level sequences (Conneau and Lample, 2019) or sentence-level sequences (Artetxe et al., 2018c; Lample et al., 2018a).", "Nature of atomic symbols.", "A more important consideration is the nature of the atomic symbols in such sequences.", "To the best of our knowledge, previous work assumes some form of word segmentation or tokenization (e.g., splitting by whitespaces or punctuation marks).", "Early work on cross-lingual word embeddings considered such tokens as atomic units.", "However, more recent work (Hoshen and Wolf, 2018; Glava et al., 2019) has primarily used fastText embeddings (Bojanowski et al., 2017) which incorporate subword information into the embedding learning, although the vocabulary is still defined at the token level.", "In addition, there have also been approaches that incorporate character-level information into the alignment learning itself (Heyman et al., 2017; Riley and Gildea, 2018).", "In contrast, most work on contextual word embeddings and unsupervised machine translation operates with a subword vocabulary (Devlin et al., 2019; Conneau and Lample, 2019).", "While the above distinction might seem irrelevant from a practical perspective, we think that it is important from a more fundamental point of view (e.g. in relation to the distributional hypothesis as discussed in 3.2).", "Moreover, some of the underlying assumptions might not generalize to different writing systems (e.g. 
"For instance, subword tokenization has been shown to perform poorly on reduplicated words (Vania and Lopez, 2017).", "In relation to that, one could also consider the text in each language as a stream of discrete character-like symbols without any notion of tokenization.", "Such a tabula rasa approach is potentially applicable to any arbitrary language, even when its writing system is not known, but has so far only been explored for a limited number of languages in a monolingual setting (Hahn and Baroni, 2019).", "Linguistic information.", "Finally, one can exploit additional linguistic knowledge through linguistic analysis such as lemmatization, part-of-speech tagging, or syntactic parsing.", "For instance, before the advent of unsupervised NMT, statistical decipherment was already shown to benefit from incorporating syntactic dependency relations (Dou and Knight, 2013).", "For other tasks such as unsupervised POS tagging (Snyder et al., 2008), monolingual tag dictionaries have been used.", "While such approaches could still be considered unsupervised from a cross-lingual perspective, we argue that the interest of this research direction is greatly limited by two factors:", "(i) from a theoretical perspective, it assumes some fundamental knowledge that is not directly inferred from the raw monolingual corpora; and", "(ii) from a more practical perspective, it is not reasonable to assume that such resources are available in the less resourced settings where this research direction has more potential for impact.", "Pure UCL should not use any cross-lingual signal by definition.", "When we view text as a sequence of discrete atomic symbols (either characters or tokens), a strict interpretation of this principle would consider the set of atomic symbols in different languages to be disjoint, without prior knowledge of the relationship between them.", "Needless to say, any form of learning requires making assumptions, as one needs some criterion to prefer one mapping over another.", "In the case of UCL, such assumptions stem from the structural similarity across languages (e.g., semantically equivalent words in different languages are assumed to occur in similar contexts).", "In practice, these assumptions weaken as the distribution of the datasets diverges, and some UCL models have been reported to break under a domain shift (Søgaard et al., 2018; Guzmán et al., 2019; Marchisio et al., 2020).", "Similarly, approaches that leverage linguistic features such as syntactic dependencies may assume that these are similar across languages.", "In addition, one can also assume that the sets of symbols that are used to represent different languages have some commonalities.", "This departs from the strict definition of UCL above, establishing some prior connections between the sets of symbols in different languages.", "Such an assumption is reasonable from a practical perspective, as there are a few scripts (e.g., Latin, Arabic or Cyrillic) that cover a large fraction of languages.", "Moreover, even when two languages use different writing systems or scripts, there are often certain elements that are still shared (e.g., Arabic numerals, named entities written in a foreign script, URLs, certain punctuation marks, etc.).",
"In relation to that, several models have relied on identically spelled words (Artetxe et al., 2017; Smith et al., 2017; Søgaard et al., 2018) or string-level similarity across languages (Riley and Gildea, 2018; Artetxe et al., 2019b) as training signals.", "Other methods use a joint subword vocabulary for all languages, indirectly exploiting the commonalities in their writing system (Lample et al., 2018b; Conneau and Lample, 2019).", "However, past work greatly differs on the nature and relevance that is attributed to such a training signal.", "The reliance on identically spelled words has been considered as a weak form of supervision in the cross-lingual word embedding literature (Søgaard et al., 2018; Ruder et al., 2018), and significant effort has been put into developing strictly unsupervised methods that do not rely on such a signal (Conneau et al., 2018a).", "In contrast, the unsupervised machine translation literature has not paid much attention to this factor, and has often relied on identical words (Artetxe et al., 2018c), string-level similarity (Artetxe et al., 2019b), or a joint subword vocabulary (Lample et al., 2018b; Conneau and Lample, 2019) under the unsupervised umbrella.", "The same is true for unsupervised deep multilingual pretraining, where a shared subword vocabulary has been a common component (Pires et al., 2019; Conneau and Lample, 2019), although recent work shows that it is not important to share vocabulary across languages (Artetxe et al., 2020b; Wu et al., 2019).", "Our position is that making assumptions about linguistic universals is acceptable and ultimately necessary for UCL.", "However, we believe that any connection stemming from a (partly) shared writing system belongs to a different category, and should be considered a separate cross-lingual signal.", "Our rationale is that a given writing system pertains to a specific form to encode a language, but cannot be considered to be part of the language itself (as a matter of fact, languages existed well before writing was invented, and a given language can have different writing systems, or new ones can be designed).", "While most work in unsupervised cross-lingual learning considers two languages at a time, there have recently been some attempts to extend these methods to multiple languages (Duong et al., 2017; Chen and Cardie, 2018; Heyman et al., 2019), and most work on unsupervised cross-lingual pretraining is multilingual (Pires et al., 2019; Conneau and Lample, 2019).", "When considering parallel data across a subset of the language pairs, multilinguality gives rise to additional scenarios.", "For instance, the scenario where two languages have no parallel data between each other but are well connected through a third (pivot) language has been explored by several authors in the context of machine translation (Cheng et al., 2016; Chen et al., 2017).", "However, given that the languages in question are still indirectly connected through parallel data, this scenario does not fall within the unsupervised category, and is instead commonly known as zero-resource machine translation.",
"An alternative scenario explored in the contemporaneous work of Liu et al. (2020) is where a set of languages are connected through parallel data, and there is a separate language with monolingual data only.", "We argue that, when it comes to the isolated language, such a scenario should still be considered UCL, as it does not rely on any parallel data for that particular language nor does it assume any previous knowledge of it.", "This scenario is easy to justify from a practical perspective given the abundance of parallel data for high-resource languages, and can also be interesting from a more theoretical point of view.", "This way, rather than considering two unknown languages, this alternative scenario would assume some knowledge of how one particular language is connected to other languages, and attempt to align it to a separate unknown language.", "As discussed throughout the section, there are different training signals that we can exploit depending on the available resources of the languages involved and the assumptions made regarding their writing system, which are summarized in Table 1.", "Many of these signals are not specific to work on UCL but have been observed in the past in allegedly language-independent NLP approaches, as discussed by Bender (2011).", "Others, such as a reliance on subwords or shared symbols, are more recent phenomena.", "While we do not aim to open a terminological debate on what UCL encompasses, we advocate for future work being more aware and explicit about the monolingual and cross-lingual signals they employ, what assumptions they make (e.g., regarding the writing system), and the extent to which these generalize to other languages.", "In particular, we argue that it is critical to consider the assumptions made by different methods when comparing their results.", "Otherwise, the blind chase for state-of-the-art performance may benefit models making stronger assumptions and exploiting all available training signals, which could ultimately conflict with the eminently scientific motivation of this research area (see Section 3.2).", "In this section, we describe methodological issues that are commonly encountered when training and evaluating unsupervised cross-lingual models and propose measures to ameliorate them.", "In conventional supervised or semi-supervised settings, we use a separate validation set for development and hyperparameter tuning.", "However, this becomes tricky in unsupervised cross-lingual learning, where we ideally should not use any parallel data other than for testing purposes.", "Previous work has not paid much attention to this aspect, and different methods are evaluated with different validation schemes.", "For instance, Artetxe et al. (2018b,c) use a separate language pair with a parallel validation set to make all development and hyperparameter decisions.", "They test their final system on other language pairs without any parallel data.", "This approach has the advantage of being strictly unsupervised with respect to the test language pairs, but the optimal hyperparameter choice might not necessarily transfer well across languages.",
"In contrast, Conneau et al. (2018a) and Lample et al. (2018a) propose an unsupervised validation criterion that is defined over monolingual data and shown to correlate well with test performance.", "This enables systematic tuning on the language pair of interest, but still requires parallel data to guide the development of the unsupervised validation criterion itself.", "A parallel validation set has also been used for systematic tuning in the context of unsupervised machine translation (Marie and Fujita, 2018; Marie et al., 2019; Stojanovski et al., 2019).", "While this is motivated as a way to abstract away the issue of unsupervised tuning, which the authors consider to be an open problem, we argue that any systematic use of parallel data should not be considered UCL.", "Finally, previous work often does not report the validation scheme used.", "In particular, unsupervised cross-lingual word embedding methods have almost exclusively been evaluated on bilingual lexicons that do not have a validation set, and presumably use the test set to guide development to some extent.", "Our position is that a completely blind development process without any parallel data is unrealistic.", "Some cross-lingual signals to guide development are always needed.", "However, this factor should be carefully controlled and reported with the necessary rigor as a part of the experimental design.", "We advocate for using one language pair for development and evaluating on others when possible.", "If parallel data in the target language pair is used, the test set should be kept blind to avoid overfitting, and a separate validation set should be used.", "In any case, we argue that the use of parallel data in the target language pair should be minimized if not completely avoided, and it should under no circumstances be used for extensive tuning.", "Instead, we recommend using unsupervised validation criteria for systematic tuning in the target language (an illustrative sketch of one such criterion follows this passage).",
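The recommendation above can be made concrete. One criterion defined purely over monolingual data, in the spirit of Lample et al. (2018a), is round-trip BLEU: translate held-out monolingual sentences into the other language and back, then score the reconstructions against the originals. The sketch below is illustrative only; translate_s2t and translate_t2s are hypothetical handles to the model's two translation directions, not part of any released implementation.

```python
# Illustrative sketch (assumed design, not a released implementation) of an
# unsupervised model-selection criterion computed from monolingual data only.
import sacrebleu

def round_trip_bleu(mono_sentences, translate_s2t, translate_t2s):
    """Translate source -> target -> source and score the reconstructions
    against the original monolingual sentences; no parallel data is used."""
    reconstructions = [translate_t2s(translate_s2t(s)) for s in mono_sentences]
    return sacrebleu.corpus_bleu(reconstructions, [mono_sentences]).score

# Model selection: keep the checkpoint with the highest criterion value,
# typically averaged over both translation directions.
```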
"Evaluation on favorable conditions.", "Most work on UCL has focused on relatively close languages with large amounts of high-quality parallel corpora from similar domains.", "Only recently have approaches considered more diverse languages as well as language pairs that do not involve English (Glavaš et al., 2019; Vulić et al., 2019), and some existing methods have been shown to completely break in less favorable conditions (Guzmán et al., 2019; Marchisio et al., 2020).", "In addition, most approaches have focused on learning from similar domains, often involving Wikipedia and news corpora, which are unlikely to be available for low-resource languages.", "We believe that future work should pay more attention to the effect of the typology and linguistic distance of the languages involved, as well as the size, noise and domain similarity of the training data used.", "Over-reliance on translation tasks.", "Most work on UCL focuses on translation tasks, either at the word level (where the problem is known as bilingual lexicon induction) or at the sentence level (where the problem is known as unsupervised machine translation).", "While translation can be seen as the ultimate application of cross-lingual learning and has a strong practical interest on its own, it only evaluates a particular facet of a model's cross-lingual generalization ability.", "In relation to that, Glavaš et al. (2019) showed that bilingual lexicon induction performance does not always correlate well with downstream tasks.", "In particular, they observe that some mapping methods that are specifically designed for bilingual lexicon induction perform poorly on other tasks, showing the risk of relying excessively on translation benchmarks for evaluating cross-lingual models.", "Moreover, existing translation benchmarks have been shown to have several issues on their own.", "In particular, bilingual lexicon induction datasets have been reported to misrepresent morphological variations, overly focus on named entities and frequent words, and have pervasive gaps in the gold-standard targets (Czarnowska et al., 2019; Kementchedjhieva et al., 2019).", "More generally, most of these datasets are limited to relatively close languages and comparable corpora.", "Lack of an established cross-lingual benchmark.", "At the same time, there is no de facto standard benchmark to evaluate cross-lingual models beyond translation.", "Existing approaches have been evaluated in a wide variety of tasks including dependency parsing (Schuster et al., 2019), named entity recognition (Rahimi et al., 2019), sentiment analysis (Barnes et al., 2018), natural language inference (Conneau et al., 2018b), and document classification (Schwenk and Li, 2018).", "XNLI (Conneau et al., 2018b) and MLDoc (Schwenk and Li, 2018) are common choices, but they have their own problems: MultiNLI, the dataset from which XNLI was derived, has been shown to contain superficial cues that can be exploited (Gururangan et al., 2018), while MLDoc can be solved by keyword matching (Artetxe et al., 2020b).", "There are non-English counterparts for more challenging tasks such as question answering (Cui et al., 2019; Hsu et al., 2019), but these only exist for a handful of languages.", "[Table 2: Methodological issues pertaining to validation and hyperparameter tuning and evaluation practices in current work on unsupervised cross-lingual learning. Validation and hyperparameter tuning: systematic tuning with parallel data or on test data. Evaluation on favorable conditions: typologically similar languages; always including English; training on the same domain. Over-reliance on translation tasks: overfitting to bilingual lexicon induction; known issues with existing datasets. Lack of an established benchmark: evaluation on many different tasks; problems with common tasks (MLDoc and XNLI).]", "More recent datasets such as XQuAD (Artetxe et al., 2020b), MLQA (Lewis et al., 2019) and TyDi QA (Clark et al., 2020) cover a wider set of languages, but a comprehensive benchmark that evaluates multilingual representations on a diverse set of tasks, in the style of GLUE (Wang et al., 2018), and languages has been missing until very recently.", "The contemporaneous XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020) benchmarks try to close this gap, but they are still restricted to languages where existing labelled data is available.", "Finally, an additional issue is that a large part of these benchmarks were created through translation, which was recently shown to introduce artifacts (Artetxe et al., 2020a).", "The three categories of UCL (Section 2) have so far been treated as separate research topics by the community.", "In particular, cross-lingual word embeddings have a long history (Ruder et al., 2019), while deep multilingual pretraining has emerged as a separate line of research with its own best practices and evaluation standards.",
"At the same time, unsupervised machine translation has been considered a separate problem in its own right, where cross-lingual word embeddings and deep multilingual pretraining have just served as initialization techniques.", "While each of these families has its own defining features, we believe that they share a strong connection that should be considered from a more holistic perspective.", "In particular, both cross-lingual word embeddings and deep multilingual pretraining share the goal of learning (sub)word representations, and essentially differ on whether such representations are static or context-dependent.", "Similarly, in addition to being a downstream application of the former, unsupervised machine translation can also be useful to develop other multilingual applications or learn better cross-lingual representations.", "This has previously been shown for supervised machine translation (McCann et al., 2017; Siddhant et al., 2019) and recently for bilingual lexicon induction (Artetxe et al., 2019a).", "In light of these connections, we call for a more holistic view of UCL, both from an experimental and theoretical perspective.", "Evaluation.", "Most work on cross-lingual word embeddings focuses on bilingual lexicon induction.", "In contrast, deep multilingual pretraining has not been tested on this task, and is instead typically evaluated on zero-shot cross-lingual transfer.", "We think it is important to evaluate both families, cross-lingual word embeddings and deep multilingual representations, in the same conditions to better understand their strengths and weaknesses.", "In that regard, Artetxe et al. (2020b) recently showed that deep pretrained models are much stronger in some downstream tasks, while cross-lingual word embeddings are more efficient and sufficient for simpler tasks.", "However, this could partly be attributed to a particular integration strategy, and we advocate for using a common evaluation framework in future work to allow a direct comparison between the different families.", "Theory.", "From a more theoretical perspective, it is still not well understood in what ways cross-lingual word embeddings and deep multilingual pretraining differ.", "While one could expect the latter to be learning higher-level multilingual abstractions, recent work suggests that deep multilingual models might mostly be learning a lexical-level alignment (Artetxe et al., 2020b).", "For that reason, we believe that further research is needed to understand the relation between both families of models.", "To summarize, we make the following practical recommendations for future cross-lingual research:", "Be explicit about the monolingual and cross-lingual signals used by your approach and the assumptions it makes, and take them into consideration when comparing different models.", "Report the validation scheme used.", "Minimize the use of parallel data by preferring an unsupervised validation criterion and/or using only one language pair for development.", "Always keep the test set blind.", "Pay attention to the conditions in which you evaluate your model.", "Consider the impact of typology, linguistic distance, and the domain similarity, size and noise of the training data.", "Be aware of known issues with common benchmarks, and favor evaluation on a diverse set of tasks.", "Keep a holistic view of UCL, including cross-lingual word embeddings, deep multilingual pretraining and unsupervised machine translation.", "To the extent possible, favor a common evaluation framework for these different families.", "In this position paper, we review the status quo of unsupervised cross-lingual learning, a relatively recent field.", "UCL is typically motivated by the lack of cross-lingual signal for many of the world's languages, but available resources indicate that a scenario with no parallel data and sufficient monolingual data is not realistic.", "Instead, we advocate for the importance of UCL for scientific reasons.", "We also discuss different monolingual and cross-lingual training signals that have been used in the past, and advocate for carefully reporting them to enable a meaningful comparison across different approaches.", "In addition, we describe methodological issues related to the unsupervised setting and propose measures to ameliorate them.", "Finally, we discuss connections between cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation, calling for an evaluation on an equal footing.", "We hope that this position paper will serve to strengthen research in UCL, providing a more rigorous look at the motivation, definition, and methodology.", "In light of the unprecedented growth of our field in recent times, we believe that it is essential to establish a rigorous foundation connecting past and present research, and an evaluation protocol that carefully controls for the use of parallel data and assesses models in diverse, challenging settings.", "This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017-91692-EXP MCIU/AEI/FEDER, UE) and Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018)." ]
[ "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "method", "other" ]
[ "Despite significant interest in developing general purpose fact checking models, it is challenging to construct a large-scale fact verification dataset with realistic real-world claims.", "Existing claims are either authored by crowdworkers, thereby introducing subtle biases that are difficult to control for, or manually verified by professional fact checkers, causing them to be expensive and limited in scale.", "In this paper, we construct a large-scale challenging fact verification dataset called FAVIQ, consisting of 188k claims derived from an existing corpus of ambiguous information-seeking questions.", "The ambiguities in the questions enable automatically constructing true and false claims that reflect user confusions (e.g., the year of the movie being filmed vs. being released).", "Claims in FAVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.", "Our experiments show that the state-of-the-art models are far from solving our new task.", "Moreover, training on our data helps in professional fact-checking, outperforming models trained on the widely used dataset FEVER or on in-domain data by up to 17% absolute.", "Altogether, our data will serve as a challenging benchmark for natural language understanding and support future progress in professional fact checking (data available at https://faviq.github.io).", "Fact verification, the task of verifying the factuality of a natural language claim, is an important NLP application (Cohen et al., 2011) and has also been used to evaluate the amount of external knowledge a model has learned (Petroni et al., 2021).", "However, it is challenging to construct fact verification data with claims that contain realistic and implicit misinformation.", "Crowdsourced claims from prior work such as FEVER (Thorne et al., 2018a) are written with minimal edits to reference sentences, leading to strong lexical biases such as the overuse of explicit negation and unrealistic misinformation that is less likely to occur in real life (Schuster et al., 2019).", "On the other hand, data constructed by professional fact-checkers are expensive and are typically small-scale (Hanselowski et al., 2019).", "In this paper, we show it is possible to use information-seeking questions (Kwiatkowski et al., 2019) and their ambiguities (Min et al., 2020) to construct a large-scale, challenging, and realistic fact verification dataset.", "Information-seeking questions are inherently ambiguous because users do not know the answers to the questions they are posing.", "For example, in Figure 1, the question is ambiguous because the filming of the movie and the release of the movie can both be seen as the creation time.", "We introduce a new dataset, FAVIQ (FAct Verification derived from Information-seeking Questions), which uses such ambiguities to generate challenging fact verification problems.", "For instance, the claim in Figure 1 requires the model to identify that the movie released in 2001 was in fact filmed in 2000 and to return refute.", "In this way, claims generated by crossing over the disambiguations of information-seeking questions are likely to contain misinformation that real users are easily confused by.", "We automatically generate such claims by composing valid and invalid question-answer pairs and transforming them into textual claims using a neural model.",
"The data is further augmented by claims from regular question-answer annotations.", "In total, FAVIQ consists of 188k claims.", "We manually verified a subset of claims to ensure that they are as natural as human-written claims.", "Our analysis shows that the claims have significantly lower lexical bias than existing crowdsourced claims; claims involve diverse types of distinct entities, events, or properties that are semantically close, being more realistic and harder to verify without a complete understanding of the evidence text.", "Our experiments show that a model with no background knowledge performs only slightly better than random guessing, and the state-of-the-art model achieves an accuracy of 65%, leaving significant room for improvement.", "Furthermore, training on FAVIQ improves the accuracy of verification of claims written by professional fact checkers, outperforming models trained on the target data only or pretrained on FEVER by up to 17% absolute.", "Together, our experiments demonstrate that FAVIQ is a challenging benchmark as well as a useful resource for professional fact checking.", "To summarize, our contributions are three-fold:", "1. We introduce FAVIQ, a fact verification dataset consisting of 188k claims.", "By leveraging information-seeking questions and their natural ambiguities, our claims require the identification of entities, events, or properties that are semantically close but distinct, making the fact verification problem very challenging and realistic.", "2. Our experiments show that the state-of-the-art fact verification models are far from solving FAVIQ, indicating significant room for improvement.", "3. Training on FAVIQ significantly improves the verification of claims written by professional fact checkers, indicating that FAVIQ can support progress in professional fact checking.", "as a benchmark to evaluate the knowledge in a model (Petroni et al., 2021).", "One line of work has studied professional fact checking, dealing with claims collected by professional fact checkers in specific domains (Vlachos and Riedel, 2014; Ferreira and Vlachos, 2016; Augenstein et al., 2019; Hanselowski et al., 2019).", "While such data contains realistic claims that have occurred in the real world, it is expensive to construct as it requires labor from professional fact checkers.", "Moreover, it is less suitable as a benchmark due to the lack of a standard evidence corpus such as Wikipedia (for this reason, prior work on professional fact checking assumes gold evidence documents) and ambiguities in labels.", "Other fact verification datasets are collected through crowdsourcing (e.g., FEVER (Thorne et al., 2018a) and its variants (Thorne et al., 2018b; Thorne and Vlachos, 2019)) by altering a word or negating the reference text to intentionally make true or false claims.", "This process leads to large-scale datasets but with strong artifacts and unrealistic claims (Schuster et al., 2019; Thorne and Vlachos, 2019; Eisenschlos et al., 2021).",
"Consequently, a trivial claim-only baseline with no evidence achieves nearly 80% (Petroni et al. (2021); verified in Section 4.1).", "While more recent work proposes new crowdsourcing methods that alleviate artifacts (Schuster et al., 2021; Eisenschlos et al., 2021), their claims are still written given particular evidence text, being vulnerable to subtle lexical biases that can be hard to explicitly measure.", "We construct a fact verification dataset from highly ambiguous information-seeking questions.", "Our claims have significantly less lexical bias than other crowdsourced ones (Figure 3), contain realistic misinformation that people are likely to be confused about (Table 4), and are challenging to current state-of-the-art models (Section 4.1).", "Moreover, training a model on our data improves professional fact checking (Section 4.2).", "QA to Verification Task Prior work has also used QA data to create entailment or fact verification benchmarks.", "Most make use of synthetic or annotated questions (Demszky et al., 2018; Jiang et al., 2020; Pan et al., 2021; Chen et al., 2021),", "while we use questions posed by real users to reflect confusions that naturally occur while seeking information.", "Thorne et al. (2021) use information-seeking questions, by converting yes/no questions to support/refute claims, but at a small scale and with unambiguous questions.", "Instead, our work converts large-scale information-seeking questions (with no restriction on answers) to claims.", "We are also unique in using highly ambiguous QA pairs to obtain claims that are more challenging to verify and have significantly fewer lexical cues (quantitative comparisons in Section 3.3).", "We construct FAVIQ (FAct Verification derived from Information-seeking Questions), where the model is given a natural language claim and predicts support or refute with respect to the English Wikipedia.", "The key idea to construct the data is to gather a set of valid and invalid question-answer pairs (Section 3.1.2) from annotations of information-seeking questions and their ambiguities (Section 3.1.1), and then convert each question-answer pair (q, a) to a claim (Section 3.1.3).", "Figure 2 presents an overview of this process.",
"We use QA data from Natural Questions (NQ; Kwiatkowski et al. (2019)) and AmbigQA (Min et al., 2020).", "NQ is a large-scale dataset consisting of English information-seeking questions mined from Google search queries.", "AmbigQA provides disambiguated question-answer pairs for NQ questions, thereby highlighting the ambiguity that is inherent in information-seeking questions.", "Given an ambiguous question, it provides a set of multiple distinct answers, each paired with a new disambiguated question that uniquely has that answer.", "FAVIQ is constructed from ambiguous questions and their disambiguations (the A set) and is further augmented using unambiguous question-answer pairs (the R set).", "From ambiguous questions (A set): We use the data consisting of a set of tuples (q, {q1, a1}, {q2, a2}), where q is an information-seeking question that has a1 and a2 as multiple distinct answers.", "q1 and q2 are disambiguated questions for the answers a1 and a2 (if q has more than two distinct answers, we sample two), i.e., q1 has a1 as a valid answer and a2 as an invalid answer.", "We use (q1, a1) and (q2, a2) as valid question-answer pairs, and (q1, a2) and (q2, a1) as invalid question-answer pairs (see the illustrative sketch after this passage).", "This data is particularly well suited to fact checking because individual examples require identification of entities, events, or properties that are semantically close but distinct: the fact that a user asked an ambiguous question q without realizing the difference between (q1, a1) and (q2, a2) indicates that the distinction is non-trivial and is hard to notice without sufficient background knowledge about the topic of the question.", "From regular questions (R set): We use the QA data consisting of a set of pairs (q, a): an information-seeking question q and its answer a.", "We then obtain an invalid answer to q, denoted as a_neg, from an off-the-shelf QA model, for which we use the model from Karpukhin et al. (2020): DPR followed by a span extraction model.", "We choose a_neg with heuristics to obtain hard negatives while avoiding false negatives; details are provided in Appendix A. We use (q, a) and (q, a_neg) as a valid and an invalid question-answer pair, respectively.", "We can think of (q, a_neg) as a hard negative pair chosen adversarially from the QA model.", "This data can be obtained on a much larger scale than the A set because annotating a single valid answer is easier than annotating disambiguations.", "We transform question-answer pairs to claims by training a neural model which maps (q, a) to a claim that is support if and only if a is a valid answer to q, and refute otherwise.", "We first manually convert 250 valid and invalid question-answer pairs obtained through Section 3.1.2 to claims.", "We then train a T5-3B model (Raffel et al., 2020), using 150 claims for training and 100 claims for validation.", "The model is additionally pretrained on data from Demszky et al. (2018); see Appendix A.",
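To make the A-set construction concrete, the following is a minimal sketch of the pair-composition step described above. The record structure and function name are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of composing valid and invalid QA pairs from AmbigQA-style
# disambiguations (the A set); the input record format is hypothetical.

def compose_a_set_pairs(annotation):
    """Given (q, {q1, a1}, {q2, a2}), return labeled question-answer pairs.

    Crossing over the disambiguations yields hard invalid pairs: (q1, a2)
    and (q2, a1) pair each question with the answer to the *other* reading.
    """
    (q1, a1), (q2, a2) = annotation["disambiguations"]
    valid = [(q1, a1), (q2, a2)]    # each question with its own answer
    invalid = [(q1, a2), (q2, a1)]  # crossover: semantically close but wrong
    return ([(q, a, "support") for q, a in valid]
            + [(q, a, "refute") for q, a in invalid])

example = {
    "question": "when was the movie made?",
    "disambiguations": [
        ("when was the movie filmed?", "2000"),
        ("when was the movie released?", "2001"),
    ],
}
for q, a, label in compose_a_set_pairs(example):
    print(label, "|", q, "->", a)
```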
"We obtain silver evidence passages for FAVIQ by (1) taking the question that was the source of the claim during the data creation (either a user question from NQ or a disambiguated question from AmbigQA), (2) using it as a query for TF-IDF over the English Wikipedia, and (3) taking the top passage that contains the answer.", "Based on our manual verification of 100 random samples, the precision of the silver evidence passages is 70%.", "We provide silver evidence passages primarily for supporting training of the model, and do not explicitly evaluate passage prediction; more discussion is in Appendix A. Future work may use human annotations on top of our silver evidence passages in order to further improve the quality, or evaluate passage prediction.", "In order to evaluate the quality of claims and labels, three native English speakers were given 300 random samples from FAVIQ, and were asked to: (1) verify whether the claim is as natural as a human-written claim, with three possible ratings (perfect; minor issues but comprehensible; incomprehensible), and (2) predict the label of the claim (support or refute).", "Validators were allowed to use search engines, and were encouraged to use the English Wikipedia as a primary source.", "Validators found 80.7% of the A set and 89.3% of the R set to be natural, and 0% to be incomprehensible.", "The rest have minor grammatical errors or typos, e.g., a missing 'the'.", "In most cases, the errors actually come from the original NQ questions, which were human-authored, indicating that these grammatical errors and typos occur in real life.", "Lastly, validators achieved an accuracy of 95.0% (92.7% on A and 97.3% on R) when evaluated against gold labels in the data; this indicates the high quality of the data and high human performance.", "This accuracy level is slightly higher than that of FEVER (91.2%).", "Data statistics for FAVIQ are listed in Table 1.", "It has 188k claims in total, with balanced support and refute labels.", "We present quantitative and qualitative analyses showing that claims on FAVIQ contain much less lexical bias than other crowdsourced datasets and include misinformation that is realistic and harder to identify.", "Comparison of size and claim length Table 2 compares statistics of a variety of fact verification datasets: SNOPES (Hanselowski et al., 2019), SCIFACT (Wadden et al., 2020), FEVER (Thorne et al., 2018a), FM2 (Eisenschlos et al., 2021), BOOLQ-FV (Thorne et al., 2021) and FAVIQ.", "FAVIQ is as large-scale as FEVER, while its distribution of claim lengths is much closer to that of claims authored by professional fact checkers (SNOPES and SCIFACT).", "FM2 is smaller scale, due to the difficulty of scaling the multi-player games used for data construction, and has claims that are slightly longer than professional claims, likely because they are intentionally written to be difficult.", "BOOLQ-FV is smaller, likely due to relative difficulties in collecting naturally-occurring yes/no questions.", "Lexical cues in claims We further analyze lexical cues in the claims on FEVER, FM2, BOOLQ-FV and FAVIQ by measuring local mutual information (LMI; Schuster et al., 2019; Eisenschlos et al., 2021).",
"LMI measures whether a given bigram correlates with a particular label.", "More specifically, LMI is defined as LMI(w, c) = P(w, c) log [ P(w, c) / (P(w) P(c)) ], where w is a bigram, c is a label, and the probabilities P(·) are estimated by counting (Schuster et al., 2019); a small computational sketch of this follows this passage.", "The distributions of the LMI scores for the top-100 bigrams are shown in Figure 3.", "The LMI scores of FAVIQ are significantly lower than those of FEVER, FM2, and BOOLQ-FV, indicating that FAVIQ contains significantly less lexical bias.", "Table 3 shows the top six bigrams with the highest LMI scores for FEVER and FAVIQ.", "As highlighted, all of the top bigrams in refute claims of FEVER contain negative expressions, e.g., 'is only', 'incapable of', 'did not'.", "In contrast, the top bigrams from FAVIQ do not include obvious negations and mostly overlap across different labels, strongly suggesting the task has fewer lexical cues.", "Although there are still top bigrams from FAVIQ causing bias (e.g., related to time, such as 'on October'), their LMI values are significantly lower compared to those from other datasets.", "Qualitative analysis of the refute claims We also analyzed 30 randomly sampled refute claims from FAVIQ and FEVER, respectively.", "We categorized the causes of misinformation as detailed in Appendix B, and show the three most common categories for each dataset as a summary in Table 4.", "On FAVIQ, 60% of the claims involve entities, events or properties that are semantically close, but still distinct.", "For example, they are specified with conjunctions (e.g., 'was foreign minister and signed the treaty of versailles from germany'), or share key attributes (e.g., films with the same title).", "This means that relying on lexical overlap or partially understanding the evidence text would lead to incorrect predictions; one must read the full evidence text to realize that the claim is false.", "Furthermore, 16.7% involve events, e.g., from filing for bankruptcy for the first time to completely ceasing operations (Table 4).", "This requires full understanding of the underlying event and tracking of state changes (Das et al., 2019; Amini et al., 2020).", "(2021); many of the claims contain explicit negations (30%) and antonyms (13%), with misinformation that is less likely to occur in the real world (20%).", "We first evaluate state-of-the-art fact verification models on FAVIQ in order to establish baseline performance levels (Section 4.1).", "We then conduct experiments on professional fact-checking datasets to measure the improvements from training on FAVIQ (Section 4.2).", "We experiment with two settings: a zero-shot setup where models are trained on FEVER, and a standard setup where models are trained on FAVIQ.", "For FEVER, we use the KILT (Petroni et al., 2021) version following prior work; we randomly split the official validation set into equally sized validation and test sets, as the official test set is hidden.", "For instance, consider the claim 'Mutiny on the Bounty is Dutch' in Table 4.",
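As referenced above, here is a small self-contained sketch of the count-based LMI computation; the input format (pre-extracted bigrams per claim) is an illustrative assumption rather than the authors' released code.

```python
# Minimal sketch of LMI(w, c) = P(w, c) * log(P(w, c) / (P(w) * P(c))),
# with all probabilities estimated by counting bigram occurrences.
from collections import Counter
from math import log

def lmi_scores(examples):
    """examples: list of (bigram_list, label) pairs; returns {(w, c): LMI}."""
    joint, word, label = Counter(), Counter(), Counter()
    total = 0
    for bigrams, c in examples:
        for w in bigrams:
            joint[(w, c)] += 1
            word[w] += 1
            label[c] += 1
            total += 1
    scores = {}
    for (w, c), n in joint.items():
        p_wc = n / total
        p_w, p_c = word[w] / total, label[c] / total
        scores[(w, c)] = p_wc * log(p_wc / (p_w * p_c))
    return scores

toy = [(["did not", "not play"], "refute"), (["was born", "born in"], "support")]
print(sorted(lmi_scores(toy).items(), key=lambda x: -x[1])[:3])
```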
"There is no Dutch producer, director, writer, actor, or actress in the film; we were not able to find a potential reason that one would believe that the film is Dutch.", "All models are based on BART (Lewis et al., 2020), a pretrained sequence-to-sequence model which we train to generate either support or refute.", "We describe three different variants which differ in their input, along with their accuracy on FEVER in our own experiments (a schematic sketch of the retrieve-then-verify pipeline follows this passage).", "Claim only BART takes a claim as the only input.", "Although this is a trivial baseline, it achieves an accuracy of 79% on FEVER.", "TF-IDF + BART takes a concatenation of a claim and k passages retrieved by TF-IDF from Chen et al. (2017).", "It achieves 87% on FEVER.", "We choose TF-IDF over other sparse retrieval methods like BM25 (Robertson and Zaragoza, 2009) because Petroni et al. (2021) report that TF-IDF outperforms BM25 on FEVER.", "DPR + BART takes a concatenation of a claim and k passages retrieved by DPR (Karpukhin et al., 2020), a dual encoder based model.", "It is the state-of-the-art on FEVER based on Petroni et al. (2021) and Maillard et al. (2021), achieving an accuracy of 90%.", "Implementation details We use the English Wikipedia from 08/01/2019 following KILT (Petroni et al., 2021).", "We take the plain text and lists provided by KILT and create a collection of passages where each passage has up to 100 tokens.", "This results in 26M passages.", "We set the number of input passages k to 3, following previous work (Petroni et al., 2021; Maillard et al., 2021).", "Baselines on FAVIQ are jointly trained on the A set and the R set.", "Training DPR requires a positive and a negative passage, i.e., a passage that supports the verdict and one that does not, respectively.", "We use the silver evidence passage associated with FAVIQ as a positive, and the top TF-IDF passage that is not the silver evidence passage as a negative.", "More training details are in Appendix C. Experiments are reproducible from https://github.com/faviq/faviq/tree/main/codes.", "Table 5 reports results on FAVIQ.", "The overall accuracy of the baselines is low, despite their high performance on FEVER.", "The zero-shot performance is barely better than random guessing, indicating that the model trained on FEVER is not able to generalize to our more challenging data.", "[Table 5: Fact verification accuracy on FAVIQ, reported as dev (A, R) and test (A, R). Training on FEVER (zero-shot): Claim only BART 51.6, 51.0, 51.9, 51.1; TF-IDF + BART 55.8, 58.5, 54.4, 57.2; DPR + BART 56.0, 62.3, 55.7, 61.2. Training on FAVIQ: Claim only BART 51.0, 59.5, 51.3, 59.4; TF-IDF + BART 65.1, 74.2, 63.0, 71.2; DPR + BART 66.9, 76.8, 64.9, 74.6.]", "When the baselines are trained on FAVIQ, the best model achieves an accuracy of 65% on the A set, indicating that existing state-of-the-art models do not solve our benchmark.", "Impact of retrieval The performance of the claim only baseline that does not use retrieval is almost random on FAVIQ, while achieving nearly 80% accuracy on FEVER.", "This result suggests significantly less bias in the claims, and the relative importance of using background knowledge to solve the task.", "When retrieval is used, DPR outperforms TF-IDF, consistent with the finding from Petroni et al. (2021).",
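The following is a schematic sketch of the retrieve-then-verify pipeline described above (claim plus top-k retrieved passages fed to a BART verdict generator). The retrieve argument is a hypothetical stand-in for TF-IDF or DPR over the Wikipedia passage collection, and the checkpoint shown is the generic pretrained BART; in the actual experiments the model would be finetuned to emit the verdict tokens.

```python
# Schematic sketch of the retrieve-then-verify baselines; assumes a finetuned
# checkpoint in practice, and `retrieve` is a hypothetical retrieval handle.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def verify(claim, retrieve, k=3):
    """Concatenate the claim with the top-k retrieved passages and let the
    sequence-to-sequence model generate the verdict as text."""
    passages = retrieve(claim, k)  # e.g., TF-IDF or DPR over 26M passages
    source = claim + " " + " ".join(passages)
    inputs = tokenizer(source, return_tensors="pt",
                       truncation=True, max_length=1024)
    output_ids = model.generate(input_ids=inputs["input_ids"],
                                attention_mask=inputs["attention_mask"],
                                max_length=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```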
"A set vs. R set The performance of the models on the R set is consistently higher than that on the A set by a large margin, implying that claims based on ambiguity arising from real users are more challenging to verify than claims generated from regular question-answer pairs.", "This indicates a clearer contrast with prior work that converts regular QA data to declarative sentences (Demszky et al., 2018; Pan et al., 2021).", "Error Analysis We randomly sample 50 error cases from DPR + BART on the A set of FAVIQ and categorize them, as shown in Table 6.", "Retrieval error is the most frequent type of error.", "DPR typically retrieves a passage with the correct topic (e.g., about Lie to Me) but that is missing more specific information (e.g., the end date).", "We think the claim having less lexical overlap with the evidence text leads to low recall@k of the retrieval model (k = 3).", "We additionally show and discuss the model trained on FAVIQ and tested on FEVER in Appendix D; it achieves non-trivial performance (67%), although it is worse than FEVER-trained models that exploit bias in the data.", "28% of error cases involve events.", "In particular, 14% involve procedural events, and 6% involve distinct events that share similar properties but differ in location or time frame.", "In 18% of error cases, retrieved evidence is valid but not notably explicit, which is naturally the case for claims occurring in real life.", "FAVIQ has this property likely because it is derived from questions that are gathered independently from the evidence text, unlike prior work (Thorne et al., 2018a; Schuster et al., 2021; Eisenschlos et al., 2021) with claims written given the evidence text.", "16% of the failure cases require multi-hop inference over the evidence.", "Claims in this category usually involve procedural events or compositions (e.g., 'is Seth Curry's brother and played for Davidson in college').",
"This indicates that we can construct a substantial portion of claims requiring multi-hop inference without having to make data that artificially encourages such reasoning (Yang et al., 2018; Jiang et al., 2020).", "Finally, 10% of the errors were made due to a subtle mismatch in properties; e.g., in the example in Figure 6, the model makes a decision based on the required minimum number rather than the exact number of a particular brand.", "SNOPES (Hanselowski et al., 2019) consists of 6,422 claims, authored and labeled by professional fact-checkers, gathered from the Snopes website.", "We use the official data split.", "SCIFACT (Wadden et al., 2020) consists of 1,109 claims based on scientific papers, annotated by domain experts.", "As the official test set is hidden, we use the official validation set as the test set, and separate a subset of the training data, equal in size to the test set, as the validation set.", "For both datasets, we merge not enough info (NEI) into refute, following prior work that converts the 3-way classification into a 2-way classification (Wang et al., 2019; Sathe et al., 2020; Petroni et al., 2021).", "The model takes a concatenation of a claim and the evidence text and is trained to generate either support or refute.", "For SNOPES, the evidence text is given in the original data.", "For SCIFACT, the evidence text is retrieved by TF-IDF over the corpus of abstracts from scientific papers, provided in the original data.", "We use TF-IDF over DPR because we found DPR works poorly when the training data is very small.", "We consider two settings.", "In the first setting, we assume the target training data is unavailable and compare models trained on FEVER and FAVIQ in a zero-shot setup.", "In the second setting, we allow training on the target data and compare the model trained on the target data only and the model with transfer learning, pretrained on either FEVER or FAVIQ and finetuned on the target data.", "To explore models pretrained on NEI labels, we add a baseline that is trained on a union of the KILT version of FEVER and NEI data from the original FEVER from Thorne et al. (2018a).", "For FAVIQ, we also conduct an ablation that includes the R set only or the A set only.", "Implementation details When using TF-IDF for SCIFACT, we use a sentence as a retrieval unit, and retrieve the top 10 sentences, whose average length approximates that of 3 passages from Wikipedia.", "When using the model trained on either FEVER or FAVIQ, we use DPR + BART by default, which gives the best result in Section 4.1.", "As an exception, we use TF-IDF + BART on SCIFACT for a more direct comparison with the model trained on the target data only, which uses TF-IDF.",
"When the models trained on FEVER or FAVIQ are used for professional fact checking, we find the models are poorly calibrated, likely due to a domain shift, as also observed by Kamath et al. (2020) and Desai and Durrett (2020).", "We therefore use a simplified version of Platt scaling, a post-hoc calibration method (Platt et al., 1999; Guo et al., 2017; Zhao et al., 2021).", "Given normalized probabilities of support and refute, denoted as p_s and p_r, modified probabilities p'_s and p'_r are obtained via [p'_s, p'_r] = Softmax([p_s + α, p_r]), where −1 < α < 1 is a hyperparameter tuned on the validation set (a small illustrative sketch follows this passage).", "Impact of transfer learning We find that transfer learning is effective: pretraining on large, crowdsourced datasets (either FEVER or FAVIQ) and finetuning on the target datasets always helps.", "Improvements are especially significant on SCIFACT, likely because its data size is smaller.", "Using the target data is still important: models finetuned on the target data outperform zero-shot models by up to 20%.", "This indicates that crowdsourced data cannot completely replace professional fact checking data, but transfer learning from crowdsourced data leads to significantly better professional fact checking performance.", "FAVIQ vs. FEVER Models that are trained on FAVIQ consistently outperform models trained on FEVER, both with and without the target data, by up to 4.8% absolute.", "This demonstrates that FAVIQ is a more effective resource than FEVER for professional fact-checking.", "The model on FEVER is more competitive when NEI data is included, by up to 3% absolute.", "While the models on FAVIQ outperform models on FEVER even without NEI data, future work can possibly create NEI data in FAVIQ for further improvement.", "Impact of the A set in FAVIQ The performance of the models that use FAVIQ substantially degrades when the A set is excluded.", "Moreover, models trained on the A set (without the R set) perform moderately well despite its small scale, e.g., on SNOPES, achieving the second best performance following the model trained on the full FAVIQ.", "This demonstrates the importance of the A set created based on ambiguity in questions.", "SNOPES benefits more from the A set than the R set, while SCIFACT benefits more from the R set than the A set.", "This is likely because SCIFACT is much smaller-scale (1k claims) and thus benefits more from the larger data like the R set.", "This suggests that having both the R set and the A set is important for performance.", "information-seeking questions.", "We incorporate facts that real users were unaware of when posing the question, leading to false claims that are more realistic and challenging to identify without fully understanding the context.", "Our extensive analysis shows that our data contains significantly less lexical bias than previous fact checking datasets, and includes refute claims that are challenging and realistic.", "Our experiments showed that the state-of-the-art models are far from solving FAVIQ, and models trained on FAVIQ lead to improvements in professional fact checking.", "Altogether, we believe FAVIQ will serve as a challenging benchmark as well as support future progress in professional fact-checking.", "We suggest that future work improve the FAVIQ model with respect to our analysis of the model predictions in Section 4.1.2, such as by improving retrieval, modeling multi-hop inference, and better distinguishing between entities, events and properties.", "Moreover, future work may investigate using other aspects of information-seeking questions that reflect facts that users are unaware of or easily confused by.",
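To make the calibration step concrete, here is a minimal sketch of the simplified Platt scaling defined above; the grid-search tuning loop is an illustrative assumption rather than the authors' released code.

```python
# Minimal sketch of the alpha-shift calibration: [p'_s, p'_r] =
# softmax([p_s + alpha, p_r]), with -1 < alpha < 1 tuned on validation data.
import numpy as np

def calibrate(p_s, p_r, alpha):
    logits = np.array([p_s + alpha, p_r])
    exps = np.exp(logits - logits.max())  # numerically stable softmax
    return exps / exps.sum()

def tune_alpha(val_probs, val_labels, grid=np.linspace(-0.99, 0.99, 199)):
    """val_probs: list of (p_s, p_r); val_labels: 'support' or 'refute'.
    Picks the alpha maximizing validation accuracy (assumed tuning rule)."""
    def accuracy(a):
        preds = ["support" if calibrate(ps, pr, a)[0] >= 0.5 else "refute"
                 for ps, pr in val_probs]
        return float(np.mean([p == y for p, y in zip(preds, val_labels)]))
    return max(grid, key=accuracy)
```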
can incorporate false presuppositions in questions that arise when users have limited background knowledge (Kim et al., 2021).", "As another example, one can explore generating NEI claims by leveraging unanswerable information-seeking questions.", "Furthermore, FAVIQ can potentially be a challenging benchmark for claim correction, a task recently studied by Thorne and Vlachos (2021) that requires a model to correct refute claims.", "We thank Dave Wadden, James Thorne and Jinhyuk Lee for discussion and feedback on the paper.", "We thank James Lee, Skyler Hallinan and Sourojit Ghosh for their help in data validation.", "This work was supported by NSF IIS-2044660, ONR N00014-18-1-2826, an Allen Distinguished Investigator Award, a Sloan Fellowship, the National Research Foundation of Korea (NRF-2020R1A2C3010638) and the Ministry of Science and ICT, Korea, under the ICT Creative Consilience program (IITP-2022-2020-0-01819)." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "result", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "method", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Healthcare predictive analytics aids medical decision-making, diagnosis prediction and drug review analysis.", "Therefore, prediction accuracy is an important criteria which also necessitates robust predictive language models.", "However, the models using deep learning have been proven vulnerable towards in-significantly perturbed input instances which are less likely to be misclassified by humans.", "Recent efforts of generating adversaries using rule-based synonyms and BERT-MLMs have been witnessed in general domain, but the ever-increasing biomedical literature poses unique challenges.", "We propose BBAEG (Biomedi-cal BERT-based Adversarial Example Genera-tion), a black-box attack algorithm for biomedical text classification, leveraging the strengths of both domain-specific synonym replacement for biomedical named entities and BERT-MLM predictions, spelling variation and number replacement.", "Through automatic and human evaluation on two datasets, we demonstrate that BBAEG performs stronger attack with better language fluency, semantic coherence as compared to prior work.", "Recent studies have exposed the importance of biomedical NLP in the well-being of human-beings, analyzing the critical process of medical decision-making.", "However, the dialogue managing tools targeted for medical conversations (Zhang et al., 2020), (Campillos Llanos et al., 2017), (Kazi and Kahanda, 2019) between patients and healthcare providers in assisting diagnosis may generate certain insignificant perturbations (spelling errors, paraphrasing), which when fed to the classifier to determine the type of diagnosis required/detecting adverse drug effects/drug recommendation, might provide unreasonable performance.", "Insignificant The work started when the author was a student at IIT Kharagpur, India.", "perturbations might also creep in from the casual language expressed in the tweets (Zilio et al., 2020).", "Thus, the classifier needs to be robust towards these perturbations.", "Generating adversarial examples in text is challenging compared to computer vision tasks because of", "(i) discrete nature of input space and", "(ii) preservation of semantic coherence with original text.", "Initial works for attacking text models relied on introducing errors at the character level or manipulating words (Feng et al., 2018) to generate adversarial examples.", "But due to grammatical disflu-ency, these seem very unnatural.", "Some rule-based synonym replacement strategies (Alzantot et al., 2018), (Ren et al., 2019) have lead to more natural looking examples.", "(Jin et al., 2019) proposed TextFooler, as a baseline to generate adversaries for text classification models.", "But, the adversarial examples created by TextFooler rely heavily on word-embedding based word similarity replacement technique, and not overall sentence semantics.", "Recently, (Garg and Ramakrishnan, 2020) proposed BERT-MLM-based (Devlin et al., 2019) word replacements to create adversaries to better fit the overall context.", "Despite these advancements, there is much less attention towards making robust predictions in critical domains like biomedical, which comes with its unique challenges.", "(Araujo et al., 2020) has proposed two types of rule-based adversarial attacks inspired by natural spelling errors and typos made by humans and synonym replacement in the biomedical domain.", "Some challenges include: 1) Biomedical named entities are usually multi-word phrases such as colorectal adenoma .", "During token replacement, we need the entire 
entity to be replaced, but the MLM model (token-level replacement) fails to generate a correct synonym of the entity fitting the context.", "So, we need a BioNER + Entity Linker (Martins et al., 2019; Mondal et al., 2019) to link the entity to an ontology for generating correct synonyms.", "2) Due to several variations of representing medical entities, such as Type I Diabetes being expressed as 'Type One Diabetes', we explore numeric entity expansion strategies for generating adversaries.", "3) Spelling variations (keyboard swap, modification).", "While we evaluate on two benchmark datasets, our method is general and is applicable to any biomedical classification dataset.", "In this paper, we present BBAEG (Biomedical BERT-based Adversarial Example Generation; code at https://github.com/Ishani-Mondal/BBAEG.git), a novel black-box attack algorithm for the biomedical text classification task, leveraging the BERT-MLM model for non-named entity replacements combined with NER-linked synonyms for named entities to better fit the overall context.", "In addition to replacing words with synonyms, we explore the mechanism of generating adversarial examples using typographical variations and numeric entity modification.", "Our BBAEG attack beats the existing baselines by a wide margin on both automatic and human evaluation across datasets and models.", "To the best of our knowledge, we are the first to introduce a novel algorithm for generating adversarial examples for biomedical text whose attack success is higher than existing baselines like TextFooler and BAE (Garg and Ramakrishnan, 2020; Li et al., 2020).", "The overall contributions of the paper include: 1) We explore several challenges of biomedical adversarial example generation.", "2) We propose BBAEG, a biomedical adversarial example generation technique for text classification combining the power of several perturbation techniques.", "3) We introduce three types of attacks for this purpose on two biomedical text classification datasets.", "4) Through human evaluation, we show that BBAEG yields adversarial examples with improved naturalness.", "Problem Definition: Given a set of n inputs (D, Y) = [(D_1, y_1), ..., (D_n, y_n)] and a trained classifier M: D → Y, we assume the soft-label black-box setting where the attacker can only query the classifier for output probabilities on a given input, and has no access to the model parameters, gradients or training data.", "For an input of length l consisting of words w_i, where 1 ≤ i ≤ l, (D_i = [w_1, ..., w_l], y), we want to generate an adversarial example D_adv such that M(D_adv) ≠ y.", "We would like D_adv to be grammatically correct and semantically similar to D (Sim(D, D_adv) ≥ θ), where θ denotes the similarity threshold. Algorithm 1 (BBAEG). Input: D = [w_1, ..., w_l], label y, target classification model M. Output: adversarial example D_adv. 1: Initialization: D_adv ← D; tag the entities in D, placing named entities in S_NE and the rest in S_NNE. 2: Compute token importance I_i for all w_i ∈ D. 3: for i in descending order of I_i do 4: L = {}. 5: if w_i ∈ S_NE and (w_i−t..w_i+t) is a NE then 6: Syns = synonyms of the NE. 7: for s ∈ Syns do 8: L[s] = D_adv[1:i−t−1] + [s] + D_adv[i+t+1:l]. 9: end for. 10: else if w_i ∈ S_NNE then 11: D_adv = D_adv[1:i−1] + [MASK] + D_adv[i+1:l]. 12: T = top-K filtered and semantically similar tokens for the mask. 13: for t ∈ T do 14: L[t] = D_adv[1:i−1] + [t] + D_adv[i+1:l]. 15: end for. 16: end if. 17: if there exists t ∈ T such that M(L[t]) ≠ y then 18: return D_adv ← L[t'] where M(L[t']) ≠ y and L[t'] has maximum similarity with D. 19: else 20: N1 = rotate p characters in w_i (p ≤ l). 21: N2 = random insertion of symbols at the beginning/end of w_i. 22: Noise = N1 + N2. 23: for t ∈ Noise do 24: L[t] = D_adv[1:i−1] + [t] + D_adv[i+1:l]. 25: end for. 26: if there exists t such that M(L[t]) ≠ y then 27: return D_adv ← L[t'] where M(L[t']) ≠ y and L[t'] has maximum similarity with D. 28: else if w_i contains a numeric entity then 29: t = replace w_i by num2words. 30: L[t] = D_adv[1:i−1] + [t] + D_adv[i+1:l]. 31: return D_adv ← L[t] if M(L[t]) ≠ y. 32: else 33: return D_adv ← L[t'] where L[t'] causes the maximum reduction in the probability of y. 34: end if. 35: end if. 36: end for. 37: return D_adv ← None.", "Our proposed BBAEG algorithm consists of four steps: 1) Tagging the biomedical entities in D and preparing two classes, NE (named entities) and non-NE (non-named entities); 2) Ranking the important words for perturbation; 3) Choosing perturbation schemes; 4) Final adversary generation.", "1) Tagging biomedical entities: We use sciSpacy with en-ner-bc5cdr-md to extract biomedical named entities (drugs and diseases), followed by its Entity Linker (drugs to DrugBank (Wishart et al., 2017), diseases to MeSH).", "After linking the NEs to their respective ontologies, we use pyMeshSim (for diseases) and DrugBank (for drugs) to obtain synonyms.", "In each D_i of size l (w_1, w_2, ..., [w_i...w_{i+2}], ..., w_l), multi-word expressions (w_i...w_{i+2}) are named entities.", "We put them in the named entity set (S_NE) and the other words in the non-named entity set (S_NNE).", "2) Ranking of important words: We estimate the token importance I_i of each w_i ∈ D by deleting w_i from D and computing the decrease in the probability of predicting the correct label y (Line 2), similar to Jin et al. (2019).", "Thus, we receive the tokens in decreasing order of their importance.", "3) Choosing perturbation schemes: Considering the input D_i, we describe a sieve-based approach to perturbing D_i.", "Sieves are ordered by precision, with the most precise sieve appearing first.", "Sieve 1: In the first sieve, we propose to alter the synonyms of the tokens in S_NE (Lines 5-9) using ontology linking and the words in S_NNE (Lines 10-15) using BERT-MLM predicted tokens.", "This stems from the fact that synonym replacement of the non-named entities using BERT-MLM generates reasonable predictions considering the surrounding context (Garg and Ramakrishnan, 2020).", "If the token is part of S_NE, we replace it with the domain-specific synonyms one by one, but if the token is part of S_NNE, then we replace it with the top-K BERT-MLM predictions.", "To achieve high semantic similarity with the original text, we filter the set of top K tokens (K is a pre-defined constant) (Line 12) predicted by BERT-MLM for the masked token, using a Sentence-Transformer (Reimers and Gurevych, 2019) based sentence similarity scorer.", "Additionally, we filter out predicted tokens that do not belong to the same part of speech as the original token.", "If this sieve generates adversaries for D_i, then D_adv is returned.", "Sieve 2 (Lines 20-28): If the first sieve does not generate an adversary, we introduce two types of typographical noise into the input: 1) Spelling Noise N1: rotating random p characters (Line 20); 2) Spelling Noise N2: insertion of symbols at the beginning or end (Line 21).", "If this sieve generates adversaries for D_i, then D_adv is returned.", "Sieve 3 (Lines 29-31): If Sieve 2 does not generate an adversary, we replace the numeric entities by expanding the numeric digits.", "For example, PMD1 can be rewritten as PMD One, and Covid19 as Covid nineteen.", "If this sieve generates adversaries for D_i, then D_adv is returned.", "4) Final adversary generation: For each of the three sieves, among all the winning adversaries, the one that is most similar to the original text as measured by Reimers and Gurevych (2019) is returned.", "If the sieves do not generate adversaries, we return the perturbed example that causes the maximum reduction in the output probability.", "Datasets and Experimental Details: We evaluate BBAEG on two different biomedical text classification datasets: 1) Adverse Drug Event (ADE) Detection (Gurulingappa et al., 2012) and 2) the Twitter ADE dataset (Rosenthal et al., 2017), for the task of classifying whether a sentence contains a mention of an ADE (binary).", "We use 6 classification models as M: the Hierarchical Attention Network (Yang et al., 2016), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), BioBERT (Lee et al., 2019), ClinicalBERT (Huang et al., 2019) and SciBERT (Beltagy et al., 2019).", "We fine-tune these models on the training data (of each corpus) using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.00002 and 10 epochs, and perform the adversarial attack on the test data.", "For the BBAEG non-NER synonym attacks, we use BERT-base-uncased
MLM to predict the masked tokens.", "We consider the top K = 10 synonyms from the BERT-MLM predictions and set a threshold of 0.75 for the cosine similarity between Reimers and Gurevych (2019) embeddings of the adversarial and input text; we set p = 2 characters for rotation to introduce noise into the input.", "For more details refer to the appendix.", "Automatic Evaluation Results: We examine the success of the adversarial attack using two criteria: (1) Performance Drop (Adrop): the difference between the original accuracy (on the original test set) and the after-attack accuracy (on the perturbed test set); (2) Perturbation of input (%): the percentage of perturbed words in the generated adversary.", "Attack success is directly proportional to criterion 1 and inversely proportional to criterion 2.", "Effectiveness: Table 1 shows the results of the BBAEG attack on two datasets across all the models.", "During our experiments with HAN (a general deep learning model), we observe that the attack is the most successful compared to the BERT variants, RoBERTa and the existing baselines, in terms of both criteria (1 and 2).", "Also, using BioBERT and SciBERT (35-45% and 40-50% accuracy drop respectively), the attack is the most successful.", "This stems from the fact that the vocabularies used in the datasets have already been explored during pre-training by the contextual embeddings, making them more sensitive to small perturbations.", "Moreover, it is clearly observed that, unlike BERT and HAN, RoBERTa is much less susceptible to adversarial attacks (10-20% accuracy drop), with 20-25% of the words in the input space perturbed.", "We also observe that BERT-MLM-based synonym replacement for non-NER tokens, combined with multi-word NER synonym replacement using entity linking, outperforms the TextFooler (TF) and BAE-based approaches in terms of accuracy drop.", "Ablation Analysis: In Table 3, we perform an ablation analysis of the different perturbation schemes and the effect of the attack using each of the sieves, using two fine-tuned contextual embedding models as the target model for ADE classification.", "Synonym replacement (S1) (average 35% accuracy drop) and character rotation (S2-1) (average 38% accuracy drop) appear to be the most promising approaches for successful attacks on biomedical text classification.", "Moreover, we conduct a deeper analysis to gain insight into how much the synonyms of NER vs. non-NER entities contribute towards prediction change.", "Table 3 (ablation analysis of the sieves S1-S3 on accuracy drop, with average semantic similarity between adversaries and original text in parentheses; columns: Twitter ADE / ADE): BioBERT-BBAEG (best variation) 0.43 (0.893) / 0.42 (0.906); w/o Synonym Replacement (S1) 0.39 (0.899) / 0.40 (0.919); w/o Spelling Noise N1 (S2-1) 0.37 (0.901) / 0.35 (0.912); w/o Spelling Noise N2 (S2-2) 0.34 (0.913) / 0.31 (0.891); w/o Number Replacement (S3) 0.30 (0.920) / 0.27 (0.915); SciBERT-BBAEG (best variation) 0.45 (0.879) / 0.38 (0.881); w/o S1 0.42 (0.901) / 0.35 (0.912); w/o S2-1 0.39 (0.915) / 0.36 (0.901); w/o S2-2 0.31 (0.891) / 0.31 (0.847); w/o S3 0.32 (0.911) / 0.36 (0.903). We have found that the multi-", "word NERs during replacement generate natural-looking examples (compared to MLM-based entity replacement): for example, pulmonary eosinophilia is replaced by Loeffler Syndrome (for BBAEG) by normalizing to the MeSH vocabulary, while it is replaced by disease in the BAE predictions, as shown in Table 2, and
they seem very unnatural.", "This proves that high semantic similarity does not always ensure the generation of properly grammatical adversaries.", "Human Evaluation: Apart from automatic evaluation, we also perform a human evaluation of our BBAEG attacks on the BERT classifier.", "We perform a similar kind of human evaluation with two biomedical domain experts on 100 randomly selected generated adversarial examples (from each of the different attack algorithms) on each of the two datasets.", "For each sample, 50 annotations were collected.", "A similar setup was used by Garg and Ramakrishnan (2020) during evaluation.", "The two main criteria for the evaluation of the perturbed samples are as follows: 1) Naturalness: how semantically similar are the generated adversaries to the original text content, preserving grammatical correctness, on a Likert scale (1-5)?", "To evaluate the naturalness of the adversarial examples, we first present the annotators with 50 different sets of original data samples to understand the data distribution.", "2) Accuracy of generated instances: on the binary classification of the presence of an Adverse Drug Reaction (ADR) in the adversarial examples.", "We enumerate the average scores of the two annotators (for TextFooler (TF), BAE and our BBAEG) and present them in Table 4.", "During the ablation analysis, we observe that the synonym-replaced perturbed samples looked more natural to the human evaluators compared to the spelling-perturbed samples and number-replaced entities.", "When considered jointly, the number-replaced and synonym-replaced samples seemed more natural to the annotators compared to the spelling-perturbed samples.", "This arises from the fact that annotators could easily interpret the meaning of number-replaced entities correctly when given in combination with the original sample.", "For instance, in the examples shown in Table 2, the number-replaced samples (21-year old → twenty-one-year old) look more natural and easily interpretable compared to spelling-perturbed samples (clozapine → clpazoine).", "In this paper, we propose a new technique for generating adversarial examples combining contextual perturbations based on BERT-MLM, synonym replacement of biomedical entities, typographical errors and numeric entity expansion.", "We explore several classification models to demonstrate the efficacy of our method.", "Experiments conducted on two benchmark biomedical datasets demonstrate the strength and effectiveness of our attack.", "As future work, we would like to explore retraining the models with the perturbed samples in order to improve model robustness.", "The author would like to thank the annotators for their hard work, and also the anonymous reviewers for their insightful comments and feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "method", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "result", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other" ]
[ "Deep Learning-based NLP systems can be sensitive to unseen tokens and hard to learn with high-dimensional inputs, which critically hinder learning generalization.", "We introduce an approach by grouping input words based on their semantic diversity to simplify input language representation with low ambiguity.", "Since the semantically diverse words reside in different contexts, we are able to substitute words with their groups and still distinguish word meanings relying on their contexts.", "We design several algorithms that compute diverse groupings based on random sampling, geometric distances, and entropy maximization, and we prove formal guarantees for the entropy-based algorithms.", "Experimental results show that our methods generalize NLP models and demonstrate enhanced accuracy on POS tagging and LM tasks and significant improvements on medium-scale machine translation tasks, up to +6.5 BLEU points.", "Our source code is available at https://github.com/abdulrafae/dg.", "Natural Language Understanding has seen remarkable success with the rise of Deep Learning.", "However, human languages' variety and richness result in high-dimensional inputs to NLP models, increasing learning complexity and error rates.", "First, open-vocabulary inputs inevitably bring rare and out-of-Vocabulary words (OOVs).", "Second, network complexity increases with input dimension, specifically the curse of dimensionality makes learning difficult on medium and small datasets.", "This paper addresses these limitations by introducing new grouping methods to compute alternative language representations that simplify textual inputs.", "We currently have alternative language representations, such as Pinyin, Metaphone, logogram, Author ordered alphabetically.", "and Emoji, that exist in natural languages and that have been shown to improve various NLP applications (Du and Way, 2017; Liu et al., 2018; Khan et al., 2020).", "While these representations can help, they are not developed for NLP performance.", "Our goal is to design algorithms for computing new language representations specifically to enhance NLP performance in this work.", "We ask: Can we compute a generalized language representation to improve NLP applications?", "An intuitive approach to answering this question is to group similar words in training and test sets and replace each word with its group.", "A word grouping viewed as a many-to-one mapping function can significantly reduce the vocabulary size that lowers the input feature dimensions leading to a generalized NLP model learning.", "For example, let us take two sentences:", "(a) you ask me.;", "(b) she tells me.", "There are five words ask, she, tells, me, and you in the vocabulary.", "Grouping words into A and B will reduce the vocabulary size to two, resulting in a simplified language representation.", "We can apply conventional word clustering to group words after embedding words into a vector space and measuring their distances with cosine similarity.", "However, clustering can map different sentences to the same sequence of groups, making them indistinguishable.", "In our example, we cluster similar pronouns you, she and me into one group indicated by A, and verbs ask, tells by B.", "Then both sentences are rewritten as A B A.", "The distinct meanings of the two original sentences are lost.", "However, if we group diverse semantic words, namely, you, tell as A; she, me, ask as B, then we maintain two samples of", "(a) A B B. 
and", "(b) B A B.", "So, the distinct meanings of the two sentences are retained.", "This example illustrates the need to group words so that each sentence is uniquely represented.", "Now, to generalize this idea, How can we design an algorithm that simplifies language representation while preserving meaning expressiveness? Our key observation is that the context of semantically diverse words varies more than that of semantically close words.", "In our approach, we measure semantic similarity using the cosine of word embeddings, learned based on context, see (Mikolov et al., 2013; Bojanowski et al., 2017; Pennington et al., 2014).", "Thus, similar contexts indicate semantic similarity and vice versa.", "In this way, our diverse grouping uses context to distinguish words from the same group, leading to a more expressive representation.", "In this paper, we introduce five novel algorithms in three types that group semantically diverse words together.", "We develop novel theoretical methods for diverse grouping and port them in our NLP context.", "We begin by considering random sampling grouping.", "Next, we develop a grouping algorithm based on geometric distances by designing an algorithm that computes a partition of a set of points in some metric space to maximize the sum of intra-group distances.", "This approach is essentially the opposite of the objectives used in clustering problems, such as k -means (Forgy, 1965) and k -medians (Jain and Dubes, 1988), where one seeks to minimize a monotonic function of intra-group distances.", "Finally, we present a grouping algorithm to maximize diversity by maximizing the unigram entropy of the representation.", "We show that the unigram entropy algorithm is C 1 4 C +4 C -approximations of the optimal solutions of maximizing the entropy of the new representation, where C is the number of groups, and is a small positive real number.", "This bound means that in the worst case, our algorithm is about 1/4 away from the optimal, while in typical cases, it could be very close to the optimal.", "Importantly, our theoretical results' outcomes show their usefulness in NLP tasks after we appropriately adjust them.", "In our experiments, each of the above methods significantly enhances the NMT accuracy by up to 6.5 BLEU points (36.9% relatively).", "Our contribution can be summarized as follows: 1. Diversity Grouping Algorithms.", "We introduce various algorithms that group semantically diverse words together based on random sampling, geometric distances, and entropy maximization (3).", "2. Formal guarantees.", "We provide provable guarantees for our entropy-based algorithm (4).", "3. 
Applications in NLP.", "Importantly, we apply the above algorithms to NLP applications, and we show that they significantly enhance prediction accuracy.", "(5).", "While typical word clustering (Baker and McCallum, 1998; Martin et al., 1998; Feng et al., 2020) (or word class (Halteren et al., 2001)) methods collect similar words together, while our method groups semantically diverse words together.", "However, unlike the common use of clustering to smooth unseen words, our goal is to deduce the input sentence's dimension by grouping diverse words so that a word-group sequence uniquely represents a word sequence.", "Our diverse grouping approach is also close to the sparse representation (Wright et al., 2008), which makes the network parameter matrices sparse without changing its dimension.", "Our methods reduce the Neural Network (NN) input dimension.", "Such a dimension reduction can be seen as a kind of regularization on NNs.", "There have been many types of NN regularization methods.", "(Louizos et al., 2018) adds a parameter norm penalty to the objective function, (Bertsekas, 2014) adds constrained optimization, and many works exploit the sub-structure of network models, such as dropout, early stopping, and weight decay.", "Those approaches are very popular but may limit the capacity of models, while our methods benefit from in-domain linguistic knowledge.", "Work such as (Wang et al., 2018) adds augmented data (e.g., noisy data, pseudo data, etc.) but is a domain-dependent approach that inventively increases the training time.", "Our work is perpendicular to the successful research in word embedding, whereby a word is mapped one-one onto a real number vector trying to preserve word pair distances.", "In contrast, our methods map many words into one group in a discrete space.", "Also, our systems build on BPE, but we do not decompose and recombine words.", "Therefore, our methods are additive to any improved word embedding (May et al., 2019), or BPE (Provilkov et al., 2020) versions.", "Different from the inspiring work that uses Soundex, NYSIIS, Metaphone, logogram (Khan et al., 2020), Pinyin (Du and Way, 2017; Liu et al., 2018), skip-ngram (Bojanowski et al., 2017), and Huffman coding (Chitnis and DeNero, 2015; Khan Algorithm 1 Random Grouping Input : Vocabulary of words V , a phonetic encoding Parameter : Output : Grouping 1: Perform a phonetic encoding (e.g. Metaphone) as baseline 2: Initialize the current group i 1 3: for each unique phonetic encoding do 4: k how many words are mapped 5: Sample { v j } kj =1 from V uniformly at random 6: Set ( v 1 ) i , . . . , ( v k ) i 7: Set V V \\ { v 1 . . . , v k } 8: i i + 1 9: end for 10: return et al., 2020), our study aims to develop new artificial algorithms that lower the dimensions of the textual inputs with smaller vocabularies.", "We now present our algorithms.", "We denote the set of words by V , and the set of groups by V , i.e., V is a subset of the powerset of V .", "Each grouping can be encoded as a function that maps each word w V to some group ( w ) V .", "Our first approach computes a random grouping, as shown in Algorithm 1. 
This algorithm's complexity is O(|V|), where |V| is the vocabulary size.", "We map each word to a group chosen uniformly at random.", "C is a hyperparameter indicating the total number of groups.", "Because it is expensive to tune C with exhaustive search, we set C as the total number of Metaphones in English, inspired by previous work (Du and Way, 2017; Liu et al., 2018) in which phonetics improves NMT in specific languages.", "Furthermore, each group's size follows the natural phonetic encoding distribution (e.g., Metaphone (Philips, 1990)) by considering each phonetic encoding as a group.", "For example, each Metaphone is considered as a group, and the number of groups in the random grouping is set to the number of unique Metaphones.", "Algorithm 2 extends Algorithm 1 by learning a Poisson or Gaussian model for the distribution of group sizes.", "We fit the distribution of the Metaphone group sizes into a Poisson or Gaussian distribution. Algorithm 2 (Poisson/Gaussian-based Random Grouping). Input: vocabulary of words V, a phonetic encoding. Parameter: groups [1, ..., C], C ∈ N. Output: grouping g. 1: for 1 ≤ i ≤ C do 2: randomly sample the group size k from a Poisson/Gaussian distribution (trained on the English Metaphone distribution). 3: Sample {v_j}_{j=1}^{k} from V uniformly at random. 4: Set g(v_1) ← i, ..., g(v_k) ← i. 5: Set V ← V \ {v_1, ..., v_k}. 6: end for. 7: return g.", "Then, we sample the group size according to this Poisson or Gaussian distribution.", "Finally, we sample words for each group uniformly at random.", "We now introduce our grouping algorithm which uses distances on the vector representations of words.", "The complexity of this algorithm is O(|V|^2).", "Our approach is described in Algorithm 3, which is inspired by the classical 2-approximation algorithm for k-center clustering (Gonzalez, 1985).", "Our algorithm works as follows: randomly pick a word from the vocabulary and add it to the list L.", "Pick the second word that is the furthest from the first word, pick the third word which is furthest from the closest of the two selected words, and so on.", "Finally, for each group size k that follows a Metaphone encoding size distribution, group the top k words into group one and remove those k words from the list.", "This process is performed iteratively until all words are assigned.", "We use cosine similarity to measure the pairwise distance between words.", "Figure 1 illustrates the work of the algorithm.", "We now present our grouping algorithm which maximizes unigram entropy.", "Our ultimate goal is to maximize the information kept (or reduce the information loss) from the original input sentences in the newly coded sentences with a reduced vocabulary. Algorithm 3 (Distance-Based Diverse Grouping). Input: vocabulary of words V. Parameter: groups [1, ..., C] with sizes k_i. Output: grouping g. 1: Embed V in R^N using e.g. word2vec. 2: W ← resulting embedding. 3: Randomly pick w_0 ∈ W, append w_0 to the ranked list L. 4: for 1 ≤ j ≤ |V| do 5: maxmin ← 0. 6: for all w_i ∈ W \ L do 7: find mindist_i ← min_{v ∈ L} ||w_i − v||_2. 8: if mindist_i > maxmin then 9: set maxmin ← mindist_i, w* ← w_i. 10: end if. 11: end for. 12: Append w* to L. 13: end for. 14: Perform a phonetic encoding (e.g., Metaphone). 15: i ← 0. 16: for each encoding (incrementing i) do 17: assign the encoding size as k_i. 18: end for. 19: for 1 ≤ i ≤ C do 20: set g(v) ← i for the top k_i points of L. 21: remove the top k_i points from L. 22: end for. 23: return g. (Figure 1: an example of Distance-Based Diverse Grouping.)", "Distance-based diverse grouping does not consider the probability (relative frequency) of each element (original word), i.e., the input distribution.", "For example, if a word occurs very frequently, e.g., the, which can be followed by many words (different nouns), then the context of the cannot help much to distinguish its meaning.", "Therefore, it is less ambiguous to assign a frequent word (the) than an infrequent word to a unique codeword without sharing the codeword.", "Shannon entropy provides the quantitative measure of information considering such an input distribution.", "Importantly, Maximum Entropy-Based Unigram Diverse Grouping (Entropy) is a more efficient algorithm, with a complexity of N^{O(1)}, where N is the number of running words in training.", "Furthermore, we provide a provable guarantee of about a 1/4-approximation.", "The entropy-based diverse grouping aims to maximize the diversity of group assignments in the given text with respect to its entropy.", "Because the entropy is maximal when the underlying distribution is as close to uniform as possible, this objective captures the diversity requirement.", "As an illustration, consider the following text: (1) she is running very fast.; (2) he is running very fast.; (3) running is very popular today.", "We want to form three groups.", "This text has length fourteen; thus, a grouping with high entropy aims to keep the frequency of each group around 5/14.", "Therefore, frequent words like running and is are likely to be grouped apart; infrequent words will be spread among groups uniformly.", "For instance, the grouping {1: running, fast, late}, {2: is, she, today}, {3: he, very, popular} has group frequencies of 5/14, 5/14, 4/14, hence achieving high entropy.", "Furthermore, the grouped words appear to be diverse enough so that each pair of groups {11, 12, ..., 33} appears exactly once or twice after we perform the grouping.", "We consider the entropy with respect to a distribution induced by the relative frequencies of group unigrams.", "Formally, for any group i we can define the relative frequency of a group as c_i = Σ_{w ∈ V: g(w) = i} F_w, where F_w is the relative frequency of a word w.", "If the group is empty, its relative frequency is 0. The unigram entropy of a grouping g with G = [1, ..., C] is H(g) = −Σ_{i=1}^{C} c_i log c_i (1). We are interested in a grouping that maximizes (1).", "Our initial grouping builds on an algorithm for submodular maximization under matroid constraints due to (Lee et al., 2009).", "In our terminology, their algorithm applies three operations to all possible pairs (w, i) where w ∈ V and i ∈ G.", "It terminates when for every (w, i) each operation is either impossible to perform or the resulting entropy gain is below (1 + ε/(C|V|)^4) · H_old, where H_old denotes the entropy of the grouping before the operation.", "These operations are: 1. Put a word w into a group i. 2. Remove a word w from a group i. 3.
Remove a word w from a group i and then put another word v into a group j (we allow either w = v or i = j).", "After we find the initial grouping, some of the words may remain unassigned.", "We note that, in general, adding new words to a grouping may decrease the entropy.", "As an example, assume that we have two groups 1, 2 with c_1 = 0.25, c_2 = 0.5 and an ungrouped word w with F_w = 0.25.", "The current entropy of the grouping is −0.25 log 0.25 − 0.5 log 0.5.", "Setting g(w) = 2 means that the contribution of c_2 is now −0.75 log 0.75 < −0.5 log 0.5, hence the total entropy decreased.", "To minimize the potential entropy loss, we map ungrouped words to a group j with the smallest partial entropy G(c_j) = −c_j log(c_j).", "Algorithm 5 explains the details of the unigram entropy diverse grouping algorithm.", "We give proofs for this algorithm, with the main result stated as follows: Theorem 1. Given any precision parameter ε > 0, Algorithm 5 runs in polynomial time and computes a grouping that is a (C-1)/(4C+4εC)-approximation to the maximum unigram entropy.", "Roughly speaking, our algorithms are about 1/4 away from the optimum of maximizing the entropy.", "In typical cases, our algorithms could be very close to the optimal.", "Section 4 describes the details of the proofs.", "In order to apply the optimization techniques from (Lee et al., 2009), which we briefly describe in Section 3.3.1, we need to use a different representation of a grouping.", "We view a grouping as the set of all pairs (w, i) where g(w) = i.", "For a vocabulary V and groups G, we denote the set of all possible pairs (w, i) as V × G. Algorithm 4 (Initial grouping (Lee et al., 2009)). Input: vocabulary of words V, relative frequencies F_w. Parameter: groups [1, ..., C], C ∈ N, precision parameter ε. Output: grouping g: V → G. 1: Brute-force search for w_0 with the biggest partial entropy, w_0 ← argmax_{w ∈ V} {−F_w log F_w}. 2: Assign g(w_0) ← 1. 3: Set threshold t ← 1 + ε/(C|V|)^4. 4: until no update is possible do: 5: try all possible updates on all pairs (w, i): 6: Update 1: add w to i, compute the entropy H_1 of the update. 7: Update 2: remove w from i, compute the entropy H_2 of the update. 8: Update 3: remove an arbitrary v from g(v), add w to i, compute the entropy H_3 of the update. 9: if Update j can be used on (w, i) then 10: if the updated entropy H_j > t · H(g) then 11: perform update j. 12: end if. 13: end if. 14: return g.", "For instance, let V = {she, tells, me} and G = [1, 2].", "Then V × G = {(she, 1), (she, 2), (tells, 1), (tells, 2), (me, 1), (me, 2)}.", "Consider a grouping g such that g(she) = g(tells) = 1 and g(me) = 2.", "Then g can be described as the set {(she, 1), (tells, 1), (me, 2)}.", "We say that such a set defines a grouping and refer to the family of all such sets as the grouping set family.", "Note that an arbitrary set in V × G may not define a grouping.", "For instance, the set {(she, 1), (tells, 1), (tells, 2), (me, 2)} does not, as it maps tells to more than one group.", "To apply the results of (Lee et al., 2009), we need to show that the grouping set family forms a matroid (Lee et al., 2009) on V × G.", "Lemma 1. The grouping set family defines a matroid on V × G.", "Algorithm 5 (Unigram Entropy Diverse Grouping). Input: vocabulary of words V, relative frequencies", "F_w. Parameter: groups [1, ..., C], C ∈ N, precision parameter ε. Output: grouping g: V → G.", "1: Compute the initial grouping using Algorithm 4. 2: if g(w) is undefined for some w then 3: let W ← {w ∈ V: g(w) is undefined}. 4: Create a new grouping. 5: Find a group i_0 with the lowest partial unigram entropy.", "To show that the grouping set family is a matroid, we need to verify two properties.", "As an example, consider V and G as in Figure 2.", "Firstly, let Q ⊆ V × G be a set that defines a grouping of V.", "For instance, Q = {(point, 1), (graph, 1), (noun, 1), (text, 2), (science, 2)}.", "Then every R ⊆ Q, such as R = {(point, 1), (graph, 1), (text, 2)}, must define a grouping as well.", "Secondly, take two sets S, T ⊆ V × G that both define groupings.", "Then if |T| < |S|, we should always be able to find a pair (w, i) ∈ S \ T such that adding (w, i) to T results in a grouping.", "In Figure 2, this pair is (point, 1) in S; T ∪ {(point, 1)} does define a new grouping.", "Moreover, the algorithm from (Lee et al., 2009) requires the objective function to be submodular (Lee et al., 2009).", "Intuitively, submodularity means that the function value changes less for larger inputs.", "To see that H is submodular, consider V and G as pictured in Figure 2.", "Consider the groupings π_R and π_Q induced by R and Q from Figure 2.", "Assume that we add (word, 1) to π_R and π_Q.", "The relative frequencies of group 2 remain unchanged for π_R and π_Q.", "Then the entropy gain for π_R and π_Q depends only on the relative frequency of group 1. (The full proof is provided in the Appendix.)", "Because the function L(x) = −(x + F_word) log(x + F_word) + x log x is monotone decreasing for all real non-negative values x, we have L(c_1(R)) > L(c_1(Q)).", "Hence, the larger grouping gains less in entropy than the smaller one.", "Now we give a sketch of the proof of Theorem 1. Proof:", "We claim that H(g) ≥ (C-1)/(4C+4εC) · H*, where H* is the largest unigram entropy among all groupings.", "We only need to consider the case H(g) < H(π), where π denotes the initial grouping from Algorithm 4 and g the final grouping.", "The groupings π and g differ only in index i_0.", "Thus the difference H(π) − H(g) is equal to the difference in the partial entropies G(c_{i_0}) − G(c'_{i_0}), where c'_{i_0} is the relative frequency of i_0 in g.", "We note that the group i_0 with the smallest partial entropy contributes at most H(π)/C to the total entropy of π.", "Moreover, the partial entropy of i_0 is always non-negative.", "We obtain H(g) ≥ H(π) − H(π)/C.", "Our bound follows by plugging in the estimation H(π) ≥ H*/(4 + 4ε), which is the approximation guarantee for Algorithm 4 from (Lee et al., 2009).", "Combination Methods: Below, we discuss how to incorporate our new representation using any of our grouping methods in NLP tasks.", "Firstly, we group each word independently.", "Applying a grouping function g(·) from Section 3 to each word x_1, x_2, x_3, ..., x_i, ..., x_I in an input sentence one by one generates a sequence of word groups g(x_1), g(x_2), g(x_3), ..., g(x_i), ..., g(x_I) of the same length I.", "Note that we use the term word loosely here; it can mean a word or a subword (of a BPE token), or even a character.", "The first combination method is concatenation, see Figure 3a.", "We apply this method in NMT.", "First, we concatenate two input sources.", "Next, we apply Byte-Pair Encoding (BPE) (Sennrich et al., 2015) and word embeddings implemented by Rehurek and Sojka (2010) to each word (e(x)) and its codeword (e'(g(x))).", "We separately train word embeddings on groups and on words.", "Thus, e(·) and e'(·) are different functions.", "As shown in Figure 3, the input to the NLP system is the embedded words of a sentence, ψ(x_1), ψ(x_2), ψ(x_3), ..., ψ(x_i), ..., ψ(x_I), where ψ(x_i) is the concatenation of the embedded word e(x_i) and its embedded group e'(g(x_i)): ψ(x_i) = [e(x_i); e'(g(x_i))] (2)", "The second method is a linear combination on encoder outputs, see Figure 3b.", "We use this method in part-of-speech (POS) tagging.", "The input to the linear combiner is the grouped sentence, represented by a sequence of hidden states h^1(g(x_I)), ..., h^j(g(x_I)), ..., h^J(g(x_I)) of the last position I in each of the encoder layers j ∈ [1, 2, ..., J].", "J is the number of nodes at each decoder layer.", "Recall that each hidden state is a real vector in R^d, which is why we can use vector space operations such as addition on it.", "For convenience, we denote the combined last hidden state of the j-th encoder layer, which we take as the input to the decoder, by h^j; the last hidden state of the j-th encoder layer of the original textual sentence by h^j_I; and the last hidden state of the j-th encoder layer of the grouped sentence by h'^j_I.", "The combined encoder hidden state h^j is a linear interpolation of the hidden states of the textual input and its group input: h^j = (1 − λ) h^j_I + λ h'^j_I (3)", "As shown in Figure 3b, each layer's combined last hidden state is fed into the baseline decoder via the + operator.", "λ is the encoder weight of the grouped sentence, and here, λ = 0.5.", "In the following, we show our methods' evaluation results on three representative NLP tasks: (1) machine translation as a recognition and generation problem; (2) language modeling as a regression problem; and (3) POS tagging as a typical sequence labeling problem.", "We will show that our methods have the potential to improve any NLP application with textual inputs.", "Dataset: We empirically verify our method on the IWSLT'17 dataset containing 226 thousand sentences.", "Table 1 shows the vocabulary statistics before and after the pre-processing on the original and the concatenated data.", "We carry out experiments on the English-to-French (EN-FR) language direction.", "We also carry out experiments on additional medium and small NMT tasks.", "For medium-sized tasks, we use the IWSLT'17 dataset with language directions including English to German (EN-DE), German to English (DE-EN), and English to Chinese (EN-ZH).", "We use the MTNT'18 dataset with language directions English to French (MTNT EN-FR) and French to English (MTNT FR-EN) for the small-sized task.", "Baseline and Setup: As a filter in pre-processing, every sentence is restricted to 250 characters and a 1.5 length ratio between source and target sentences using the Moses tokenizer (Koehn et al., 2007).", "The byte-pair encoding model (with 16K BPE operations) is jointly trained on the source textual word inputs, cluster ID inputs, and target outputs.", "The baseline NMT model is the Convolutional Sequence to Sequence model (Gehring et al., 2017) (ConvS2S), with the following parameter setting: the embedding dimension is 512, the learning rate is 0.25, the gradient clipping is 0.1, the dropout ratio is 0.2, and the optimizer is NAG.", "The training is terminated when the validation loss does not decrease for five consecutive epochs.", "For Chinese translations, we use the IWSLT post-processing script (IWSLT, 2021).", "Finally, the translation accuracy is measured with the BLEU score using SacreBLEU (Post, 2018).", "Distance Measure: To empirically compare across random and distance-based grouping algorithms, we
measure the intra-group average distance of group pairs as follows: for each group in the source-side vocabulary of the training and test set, compute the sum of the cosine distance 1 − (A · B)/(||A|| ||B||) of the embeddings of each word pair, then divide it by the total number of word pairs in this group to get the group diversity.", "Then, average the distance over all groups in the vocabulary.", "Each algorithm generates the same number (63,992) of word groups.", "The average distance of Poisson/Gaussian-based Random Grouping is 0.1286, and that of Rank-based Diverse Grouping is 0.1291.", "This finding is consistent with the translation BLEU scores in Table 2: the greater the intra-group distance, the higher the accuracy.", "Maximum entropy approaches cannot be compared with this measure because they take entropy as the objective function.", "We have provided their provable guarantee in Section 4.", "Results: For the IWSLT'17 task, Table 2 shows the improvement when applying each of our methods on the ConvS2S baseline.", "All of our methods significantly enhance the accuracy of the NMT systems.", "Among them, the entropy-based diverse grouping achieves the greatest improvement, i.e., +6.5 BLEU points, which is a +36.9% relative improvement.", "Analysis: Figure 4 compares entropy, distance-based D.G. and the baseline method with respect to the sentence-level BLEU score in a histogram (Neubig et al., 2019).", "The baseline method generates", "almost double the low-quality translations (347) compared to the distance D.G. and entropy methods (178 and 179), while the latter two methods generate many more high-quality translations with BLEU above 20%.", "Table 4 shows the FR-EN baseline and entropy translation outputs, respectively.", "We observe that our entropy method is particularly better than the baseline when the baseline fails in: (1) performing a reasonable translation; (2) missing phrases; (3) mis-translating phrases.", "We evaluate our approach on POS tagging on the Brown Corpus (Francis and Kucera, 1979).", "The Brown Corpus is a well-known English dataset for POS and contains 57,341 samples.", "We uniformly sample 64% of the data as the training set, 16% as the validation set, and 20% as the test set.", "Our baseline is a Keras (Chollet, 2015) implementation (Joshi, 2018) of a Bi-LSTM POS tagger (Wang et al., 2015).", "We train word embeddings (Mikolov et al., 2013) implemented by Rehurek and Sojka (2010) with 100 dimensions.", "Each of the forward and backward LSTMs has 64 dimensions.", "We use a categorical cross-entropy loss and the RMSProp optimizer.", "We also use early stopping based on the validation loss.", "We train and evaluate on the English part of the EN-FR IWSLT'17 dataset.", "We use 256 embedding dimensions, six layers, and eight heads for efficiency.", "We set dropouts to 0.1, the learning rate to 0.0001, and BPE operations to 32k.", "We used the Adam optimizer with betas of 0.9.", "As shown in Table 6, entropy-based diverse grouping reduces the PPL of the baseline system by 3.76% relative.", "We introduce a novel approach that generalizes Deep Learning models by grouping input words to maximize their semantic diversity.", "To this end, we design a family of algorithms based on random sampling, geometric distance, and entropy, and provide provable guarantees for the entropy-based diverse grouping.", "Our methods reduce the number of low-quality translation outputs (< 10% in BLEU) to half and greatly increase the high-quality translation (> 20%
in BLEU) ratio.", "Experiments show that our approach significantly improves over state-of-the-art baselines in Neural Machine Translation (i.e., up to +6.5 BLEU points) and achieves higher accuracy in POS tagging and Language Modeling.", "We appreciate National Science Foundation (NSF) Award No. 1747728 and National Science Foundation of China (NSFC) Award No. 61672524 for funding this research.", "We are also thankful for the support of the Google Cloud Research Program." ]
[ "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "method", "objective", "objective", "method", "objective", "abstain", "method", "objective", "abstain", "result", "abstain", "objective", "abstain", "objective", "method", "objective", "result", "objective", "method", "objective", "abstain", "method", "other", "other", "other", "method", "other", "method", "objective", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "other", "other" ]
[ "Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think.", "It is challenging to specify the level of education, experience, relevant skills per the company information and job description.", "To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions.", "To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA.", "Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels.", "At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results.", "The proposed approach is evaluated on real-world job posting data.", "Experimental results clearly demonstrate the effectiveness of the proposed method 1 .", "Writing high-quality job postings is the crucial first step to attract and filter the right talents in the recruiting process of human resource management.", "Given job descriptions and basic company information, the key to the job posting is to write job requirements, which requires to specify professional skills properly.", "Both too many or few requirements may lead to negative impacts on talent recruiting.", "Because of the extremely large number of job positions and varieties of professional skills, a lot of Corresponding Author 1 https://github.com/NKU-IIPLab/SAMA Basic Information Job Description Position: Market Researcher company scale: 1000 ~ 10000", "1. Assist the General Manager in sourcing travel industry news and in conducting product research and analysis.", "2. Facilitate effective communication between the market research and user experience teams.", "3. Translate key industry texts and compose newsletters for internal communication.", "Job Requirement", "1. 3+ years of research experience at investment banks.", "2. Strong research, data analysis and communication skills.", "3. Proficient user of Microsoft Suite/G Suite.", "companies have to pay much cost in this step to win in the war of talents.", "To this end, we propose the task of Job Posting Generation (JPG) in this paper, and we cast it as a novel conditional text generation task that generates the job requirement paragraph.", "Exploiting the ubiquitous job posting data, we aim to automatically specify the level of necessary skills and generate fluent job requirements in a data-driven manner, as shown in Figure", "1. 
"Although the JPG task is of great significance, its complexity poses several key challenges: 1) Generating job requirements requires not only producing overall fluent text but also precisely organizing key content such as skills and other information, which is very difficult for current neural systems.", "In particular, long-text-to-long-text generation easily leads to information loss (Shen et al., 2019).", "2) The key points of job descriptions and the skills of job requirements form complex many-to-many relations, which makes learning the mapping very difficult.", "3) How to exploit the global information among the heterogeneous relations between basic company information and professional skills across the whole dataset is of great importance for generating high-quality job requirements.", "To address these challenges, we focus on the richness and accuracy of skills in generated job requirements and propose a global Skill-Aware Multi-Attention (SAMA) model for the JPG task.", "Specifically, we devise a two-pass decoder to generate informative, accurate, and fluent job requirement paragraphs.", "The first-pass decoder predicts multiple skills according to the job description, which is a multi-label classification task (Zhang and Zhou, 2014).", "The second-pass decoder generates the complete text according to the predicted skill labels and the input text.", "Moreover, we build a skill knowledge graph to capture the global information in the whole job posting dataset, in addition to the local information provided by the input.", "Through the skill knowledge graph, our model obtains global prior knowledge that alleviates the misuse of skills.", "Extensive experiments are conducted to evaluate our model on real-world job posting data.", "The results demonstrate the effectiveness of the proposed method.", "We propose a novel task of job posting generation, defined as conditional generation that produces a job requirement given a job description and basic company information.", "A data-driven generation approach, SAMA, is proposed to model the complex mapping relationships and generate informative and accurate job requirements.", "We build a real-world job posting dataset and conduct extensive experiments to validate the effectiveness and superiority of our proposed approach.", "We collect a job posting dataset from a well-known Chinese online recruiting market, covering a period of 19 months from 2019 to 2020.", "There are 107,616 job postings in total.", "After removing repetitive and overly short job postings, 11,221 records are selected.", "This dataset is collected from 6 different industry domains.", "The detailed statistics of the dataset are illustrated in Table 1.",
"Considering the importance of skills for JPG, we select 2000 records and manually tag the skills in these records.", "Then we train a word-level LSTM-CRF model (Huang et al., 2015) to recognize the skills in the whole dataset.", "Table 1: The statistics of the dataset (training / validation / testing): Internet 2055/509/687; Consumer goods 1153/292/356; Real Estate 969/220/276; Finance 1477/366/463; Automobile 997/282/296; Medical 397/94/115.", "We also keep the basic information, i.e., the job position and company scale, because they are critical attributes of job postings that affect the required level of skills.", "In order to capture the global prior knowledge of skills, we construct a skill knowledge graph according to the semantic relations of entities in the job postings.", "As shown in Figure 2, there are three types of entities, i.e., skill, company scale, and job position.", "The skill entities are divided into two types, generic skills (denoted by G) and professional skills (denoted by P), according to their number of occurrences.", "The relation N.T.M. (need-to-master) exists between a job position entity and a skill entity.", "Besides, the relation IN exists between a company scale entity and a skill entity.", "For example, if a job seeker seeking a programmer position in a company of 10 to 100 people needs to master the professional skill C++, then there exist three triplets: (programmer, N.T.M., C++), ([10, 100], IN, C++), and (C++, type, P).", "Let $D = \{(B_i, X_i, Y_i)\}_{i=1}^{N}$ denote the dataset, where $X_i = (x_{i,1}, x_{i,2}, ..., x_{i,m})$ is the word sequence of the job description paragraph.", "$Y_i = (y_{i,1}, y_{i,2}, ..., y_{i,n})$ is the word sequence of the job requirement paragraph, $B_i = (b_i^p, b_i^s)$ is the basic information, $b^p$ and $b^s$ are the job position and company scale information, $N$ is the size of the dataset, and $m$ and $n$ are the lengths of the sequences $X_i$ and $Y_i$, respectively.", "The target of the JPG task is to estimate $P(Y_i | X_i, B_i)$, the conditional probability of a job requirement $Y_i$ given a job description $X_i$ and basic information $B_i$.", "Figure 3: An illustration of the architecture of SAMA, which consists of three parts, i.e., a skill prediction part, a skill refinement part, and a job requirement generation part. The skills $S_i$ are predicted given the job description. To consider the global prior knowledge of skills, the skill knowledge graph gives another set of skills $O_i$, which plays the role of refinement. Finally, SAMA fuses multiple attentions to generate the final job requirement paragraph $Y_i$.", "To tackle the JPG task, we propose a global Skill-Aware Multi-Attention model, named SAMA.", "Figure 3 shows the overall architecture of SAMA.", "Firstly, considering the importance of skill prediction in JPG, we decompose the probability $P(Y_i | X_i, B_i)$ into a two-stage generation process, including skill prediction and job requirement paragraph generation: $P(Y_i | X_i, B_i) = P(Y_i | X_i, S_i, B_i)\, P(S_i | X_i, B_i)$ (1), where $S_i = (s_{i,1}, s_{i,2}, ..., s_{i,l})$ is the skill word sequence of the corresponding job requirement and $l$ is the length of $S_i$.", "Since $S_i$ and $B_i$ are conditionally independent given $X_i$, we can derive that $P(S_i | X_i, B_i) = P(S_i | X_i)$.",
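To make the knowledge-graph construction and the query function $f$ described above concrete, here is a minimal Python sketch. The function names, the record format, and the threshold default are illustrative assumptions, not the authors' released code (that code is at the repository linked above).

```python
from collections import defaultdict

def build_skill_kg(records, freq_threshold=100):
    """Build (head, relation, tail) triplets from job postings.

    Each record is assumed to be a dict with keys 'position',
    'scale', and 'skills' (the tagged skill mentions).
    """
    triplets, freq = set(), defaultdict(int)
    for r in records:
        for skill in r["skills"]:
            triplets.add((r["position"], "N.T.M.", skill))
            triplets.add((r["scale"], "IN", skill))
            freq[skill] += 1
    # Generic (G) vs. professional (P) skills, split by occurrence count.
    for skill, n in freq.items():
        triplets.add((skill, "type", "G" if n > freq_threshold else "P"))
    return triplets

def query_skills(triplets, b_p, b_s):
    """One reading of the query function f: intersect the N.T.M. and IN
    neighbourhoods, then keep only professional (type P) skills."""
    ntm = {t for h, rel, t in triplets if h == b_p and rel == "N.T.M."}
    within = {t for h, rel, t in triplets if h == b_s and rel == "IN"}
    prof = {h for h, rel, t in triplets if rel == "type" and t == "P"}
    return ntm & within & prof
```

On the toy example above, query_skills(kg, "programmer", "[10, 100]") would return the professional skills shared by that position and company scale, mirroring the intersection-and-filter procedure spelled out later in the skill refinement section.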
"Secondly, to refine the skills, we leverage global prior information via the skill knowledge graph $G_s = (E_1, R, E_2)$, where $E_1$ and $E_2$ are the sets of head and tail entities and $R$ is the set of relations.", "Given the basic information $B_i$ and the skill knowledge graph $G_s$, we obtain a set of skills $O_i = (o_{i,1}, o_{i,2}, ..., o_{i,k}) = f(B_i, G_s)$ (2), where $f$ is an invertible query function, which ensures a one-to-one mapping between $B_i$ and $O_i$.", "Thirdly, to fuse the local and global information, the probability $P(Y_i | X_i, S_i, B_i)$ during the text generation process is calculated as:", "$P(Y_i | X_i, S_i, B_i) = (1 - \lambda)\, P_{local}(Y_i | X_i, S_i, B_i) + \lambda\, P_{global}(Y_i | X_i, S_i, B_i)$ (3), where $\lambda$ is a weighting hyperparameter.", "The input job description word sequence $X_i$ is first transformed into a sequence of word embeddings.", "To obtain long-term dependency vector representations, we use a bi-directional LSTM (Schuster and Paliwal, 1997) as the text encoder.", "The input sequence is transformed into a hidden state sequence $H = (h_1, h_2, ..., h_m)$ by concatenating the forward and backward hidden states, $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_{m-t+1}]$.", "Specifically, the initial encoder hidden state $h_0$ is a zero vector, and the last encoder hidden state $h_m$ is used to initialize the skill decoder.", "Intuitively, the process of skill prediction is a Multi-Label Classification (MLC) task, which aims to assign multiple skills to each job description.", "To capture the correlations between skills, inspired by Yang et al. (2018), we view this MLC task as a sequence generation problem.", "Formally, the skill decoder layer first takes the hidden state $h_m$ of the encoder as input, then derives a context vector $C^{st}$ via an attention mechanism (Luong et al., 2015) to help predict the skill labels: $\alpha_{ti} = \frac{\exp(g'^{T}_{t-1} W_1 h_i)}{\sum_{i'} \exp(g'^{T}_{t-1} W_1 h_{i'})}; \quad C^{st}_t = \sum_{i=1}^{m} \alpha_{ti} h_i$ (4), where $W_1 \in \mathbb{R}^{d \times d}$ is a trainable weight matrix and $d$ is the hidden vector size.", "Inspired by Yuan et al. (2018), the job description is labelled with multiple skills by generating a skill sequence that joins the skills with the delimiter <SEP> and has an unfixed number of skills (e.g., English <SEP> computer science <SEP> c++).", "The skill decoder is based on an LSTM, whose hidden vector is computed by $g'_t = \mathrm{LSTM}(g'_{t-1}, C^{st}_t)$ (5).", "Specifically, the last skill decoder hidden state $g'_l$ is used to initialize the text decoder.", "The skill sequence is finally obtained by a softmax classification over the vocabulary of skills, $V_{skill}$.", "In detail, a non-linear transformation is applied to form the skill decoder semantic representation $I^{st}$, and the probability $P(S_i | X_i, B_i)$ is then computed via: $I^{st}_j = \tanh(W_2 [g'_j; C^{st}_j])$, $P(s_{i,j} | X_i) = \mathrm{softmax}(W_3 I^{st}_j + b_3)$ (6), where $[;]$ denotes vector concatenation and $W_2 \in \mathbb{R}^{d \times 2d}$, $W_3 \in \mathbb{R}^{|V_{skill}| \times d}$, and $b_3 \in \mathbb{R}^{|V_{skill}|}$ are parameters.", "The process of skill prediction only considers local information, which results in some misuse of skills.", "To refine the skills of the generated job requirement, global information is taken into account through the skill knowledge graph.", "The skill entities are divided into G and P as described in Section 2.",
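All three attention contexts in SAMA ($C^{st}$, $C^{nd}$, $C^{th}$) share the bilinear form of Eqs. (4), (7) and (13). The following PyTorch sketch shows that shared pattern; the function name and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch

def attention_context(query, keys, W):
    """Bilinear attention as in Eqs. (4), (7) and (13).

    query: previous decoder state g_{t-1}, shape (batch, d)
    keys:  encoder states or skill embeddings, shape (batch, n, d_k)
    W:     trainable weight matrix, shape (d, d_k)
    Returns the context vector, shape (batch, d_k).
    """
    # scores_{ti} = g_{t-1}^T W k_i  ->  (batch, n)
    scores = torch.einsum("bd,dk,bnk->bn", query, W, keys)
    alpha = torch.softmax(scores, dim=-1)            # normalized weights
    return torch.einsum("bn,bnk->bk", alpha, keys)   # weighted sum
```

The same helper could serve the skill decoder (keys = encoder states $H$), the graph attention (keys = embedded KG skills $S'_i$), and the skill-guidance attention (keys = predicted skill embeddings), each with its own $W$.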
"Here, the basic assumption is that a generic skill appears more frequently than a professional skill among all the job postings, because a professional skill carries more domain-specific characteristics.", "We use a hyperparameter as a frequency threshold to divide the skill entities.", "Given the basic information $B_i = (b_i^p, b_i^s)$, the set of skills $O_i$ is obtained from the skill knowledge graph by the query function $f$.", "In detail, firstly, we obtain the set of entities that have the N.T.M. relation with $b_i^p$ and the set of entities that have the IN relation with $b_i^s$.", "Secondly, we take the intersection of the sets obtained in the first step.", "Finally, we keep the entities whose type is P.", "We embed $O_i$ as $S'_i = (s'_{i,1}, s'_{i,2}, ..., s'_{i,k})$ and linearly combine it into a skill graph context vector $C^{nd}_j$ by an attention mechanism: $\beta_{ji} = \frac{\exp(g^{T}_{j-1} W_4 s'_i)}{\sum_{i'} \exp(g^{T}_{j-1} W_4 s'_{i'})}; \quad C^{nd}_j = \sum_{i=1}^{k} \beta_{ji} s'_i$ (7), where $W_4 \in \mathbb{R}^{d \times d'}$ are parameters and $d'$ is the dimension of the word embeddings.", "Then a nonlinear transformation is applied to form the graph skill semantic representation $I^{nd}$.", "The probability $P_{global}(Y_i | X_i, S_i, B_i)$ over $V_{skill}$ is computed via: $I^{nd}_j = \tanh(W_5 [g_j; C^{nd}_j; C^{rd}_j])$ (8), $P_{global}(y_{i,j} = w | X_i, S_i, B_i) = \mathrm{softmax}(W_6 I^{nd}_j + b_6)_w$ if $w \in O_i$, and $0$ if $w \notin O_i$ (9), where $g$ and $C^{rd}$ will be introduced in the next section, and $W_5 \in \mathbb{R}^{d \times (2d + d')}$, $W_6 \in \mathbb{R}^{|V_{skill}| \times d}$, and $b_6 \in \mathbb{R}^{|V_{skill}|}$ are trainable parameters.", "Job requirement generation fuses multiple attention mechanisms over three sources: the job description, the predicted skills, and the skills from the skill knowledge graph.", "The text decoder, based on another LSTM, aims to generate the final word sequence.", "The hidden vector of the text decoder is computed by $g_t = \mathrm{LSTM}(e_{t-1}, g_{t-1})$, where $e_{t-1}$ is the word embedding of the generated target word at time step $t-1$.", "After obtaining $g$, a nonlinear transformation is applied to form the text decoder semantic representation $I^{rd}$.", "The probability $P_{local}(Y_i | X_i, S_i, B_i)$ is computed via: $I^{rd}_j = \tanh(W_7 [e_{j-1}; g_j; C^{rd}_j; C^{th}_j])$ (10), $P_{local}(y_{i,j} | X_i, S_i, B_i) = \mathrm{softmax}(W_8 I^{rd}_j + b_8)$ (11), where $W_7 \in \mathbb{R}^{d \times 2(d + d')}$, $W_8 \in \mathbb{R}^{|V_{text}| \times d}$, and $b_8 \in \mathbb{R}^{|V_{text}|}$ are parameters, $V_{text}$ is the vocabulary of job requirements, and $V_{skill}$ is a subset of $V_{text}$; both $C^{rd}$ and $C^{th}$ are context vectors generated by attention mechanisms.", "Specifically, $C^{rd}$ is computed in the same way as $C^{st}$, because both directly attend over the input sequence.", "In addition, the skills $S$ generated by the skill decoder are fed into the text decoder to guide the generation process.", "To obtain $C^{th}$, another attention model is leveraged: $\gamma_{ji} = \frac{\exp(g^{T}_{j-1} W_{10} s_i)}{\sum_{i'} \exp(g^{T}_{j-1} W_{10} s_{i'})}; \quad C^{th}_j = \sum_{i=1}^{l} \gamma_{ji} s_i$ (13), where $W_{10} \in \mathbb{R}^{d \times d'}$ are parameters.", "The generation probability $P(Y_i | X_i, S_i, B_i)$ is the weighted sum of $P_{local}(Y_i | X_i, S_i, B_i)$ and $P_{global}(Y_i | X_i, S_i, B_i)$, as in equation 3.",
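Eq. (9) restricts the global distribution to the KG-retrieved set $O_i$. One common way to realize this is a masked softmax, sketched below in PyTorch; whether the original implementation renormalizes inside $O_i$ exactly this way is an assumption, and the helper name is illustrative.

```python
import torch
import torch.nn.functional as F

def p_global(I_nd, W6, b6, O_ids, vocab_size):
    """Eq. (9), one reading: a distribution over V_skill restricted
    to the KG-retrieved skill set O_i (zero probability elsewhere).

    I_nd:  graph-skill representation, shape (batch, d)
    W6:    weight matrix, shape (|V_skill|, d); b6: shape (|V_skill|,)
    O_ids: LongTensor of skill ids retrieved from the KG, assumed non-empty
    """
    logits = I_nd @ W6.t() + b6                       # (batch, |V_skill|)
    mask = torch.full((vocab_size,), float("-inf"), device=logits.device)
    mask[O_ids] = 0.0                                 # allow only words in O_i
    return F.softmax(logits + mask, dim=-1)           # zero outside O_i
```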
"As shown in equations 8 and 10, the vector $C^{th}$ appears explicitly only in $P_{local}$, which implies that $P_{local}$ puts emphasis on the skill prediction, i.e., the local information, while the vector $C^{nd}$ appears explicitly only in $P_{global}$, which indicates that $P_{global}$ focuses on the skills given by the skill knowledge graph, i.e., the global prior knowledge.", "In this way, SAMA considers not only the local information from the job description but also the global information from the skill knowledge graph.", "The loss function of the model has two parts, the negative log-likelihood of the silver [3] skill labels, $\mathcal{L}_S$, and of the gold [4] job requirement text, $\mathcal{L}_Y$:", "$\mathcal{L}_S = -\sum_{i=1}^{l} \log P(S | X, B), \quad \mathcal{L}_Y = -\sum_{i=1}^{n} \log P(Y | X, S, B), \quad \mathcal{L} = \mathcal{L}_S + \mu \mathcal{L}_Y$ (14), where $\mu$ is a hyperparameter; we give more weight to the loss of the gold job requirement.", "During inference, the outputs of the skill decoder and the text decoder are predicted as follows: $\hat{S} = \arg\max_S P(S | X, B)$ (15), $\hat{Y} = \arg\max_Y P(Y | X, \hat{S}, B)$ (16).", "[3] The skill labels are silver standard, because they were not created by an expert but extracted by a trained model.", "[4] The job requirement text is gold standard, because it was written by humans and posted online.", "In this section, we conduct experiments to verify the effectiveness of SAMA.", "Job descriptions and job requirements are tokenized by the Pyltp word segmenter [5].", "[5] https://github.com/HIT-SCIR/pyltp", "Table 1 shows the split of the dataset.", "There are 468 position entities, 9 scale entities, 31,090 skill entities, and 310,413 relation edges in the skill knowledge graph.", "The vocabulary of job descriptions contains 14,189 words, the vocabulary of skills contains 3,523 words, and the vocabulary of job requirements contains 18,612 words.", "For a comprehensive comparative analysis of SAMA, we compare it with two kinds of representative models: standard generation models and hierarchical generation models.", "S2SA: Seq2Seq with attention (Luong et al., 2015) is a standard generation model.", "DelNet: The deliberation networks model (Xia et al., 2017) is a hierarchical generation model with a two-pass decoder that generates and then polishes the same target sequence.", "VPN: Vocabulary pyramid networks (Liu et al., 2019) is a hierarchical generation model with a multi-pass encoder and decoder that generates a multi-level target sequence.", "SAMA(w/o pred): a degraded version of SAMA that removes the skill prediction process, for the ablation test.", "SAMA(w/o graph): a degraded version of SAMA that removes the skill knowledge graph, for the ablation test.", "In all models, we pretrain word2vec (Mikolov et al., 2013) on the job posting dataset.", "We set the word embedding dimension as 100 and the hidden vector size as 400 in both encoding and decoding.", "Table 2: Word overlap based metrics (BLEU-1/-2/-3/-4 and ROUGE-1/-2/-3/-4): S2SA 44.78/29.96/20.33/13.11 and 44.43/20.02/8.87/3.62; DelNet 37.10/25.35/18.28/12.62 and 44.21/19.29/8.42/3.08; VPN 34.15/23.26/16.90/11.68 and 40.16/16.82/6.96/2.63; SAMA(w/o pred) 44.70/31.59/23.09/16.32 and 45.87/22.78/11.25/5.75; SAMA(w/o graph) 45.49/31.89/23.32/16.40 and 45.93/22.85/11.37/5.84; SAMA 46.15/32.44/23.77/16.83 and 46.37/23.27/12.17/6.16.", "We set the maximum number of words in each skill sequence and in each job requirement as 30 and 150, respectively.", "Also, the weighting parameters $\lambda$ and $\mu$ are set as 0.5 and 1.4, respectively.", "The frequency threshold is set as 100.", "We apply dropout (Zaremba et al., 2014) at a rate of 0.3.", "Models are trained for 15 epochs with the Adam optimizer (Kingma and Ba, 2015), and the batch size is 5.",
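A minimal sketch of the combined objective in Eq. (14), assuming token-level cross-entropy over the two decoders' outputs; the tensor shapes and the helper name are illustrative assumptions.

```python
import torch.nn.functional as F

def sama_loss(skill_logits, skill_targets, text_logits, text_targets, mu=1.4):
    """Eq. (14): L = L_S + mu * L_Y, both negative log-likelihoods.

    skill_logits: (batch, l, |V_skill|), skill_targets: (batch, l)
    text_logits:  (batch, n, |V_text|),  text_targets:  (batch, n)
    mu weights the gold job-requirement loss (1.4 in the paper).
    """
    # cross_entropy expects (batch, classes, positions), so transpose.
    loss_s = F.cross_entropy(skill_logits.transpose(1, 2), skill_targets)
    loss_y = F.cross_entropy(text_logits.transpose(1, 2), text_targets)
    return loss_s + mu * loss_y
```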
5.", "Word overlap based metrics: To evaluate the overall text generation quality, we employ BLEU-N (Papineni et al., 2002) and ROUGE-N (Lin, 2004) as evaluation metrics, in which BLEU-N is a kind of precision-based metric and ROUGE-N is a kind of recall-based metric.", "Skill prediction metrics: Since the correctness of generated skills is of great importance in JPG, we further evaluate the quality of skills in generated job requirements, using Precision, Recall, and F1 value.", "To achieve this, we extract skills in the ground truth and generated text by a matching method based on the skill vocabulary V skill .", "Human-based evaluation: Since it is difficult to measure the comprehensive quality of the generated texts, i.e., both fluency of the texts and accuracy of the skills, in addition to automatic metrics above, we conduct a subjective evaluation following.", "Three graduated student volunteers are asked to evaluate the generated paragraphs.", "We randomly sample 50 pieces of data from the testing set.", "The job requirements generated by different models are pooled and randomly shuffled for each volunteer.", "Each generated paragraph is evaluated as bad (ir-relevant skills or disfluent sentence), normal (basic relevant skills and fluent sentence), or good (rich and relevant skills and fluent sentence).", "Table 2 shows the results of word overlap based metrics.", "In terms of BLEU-N and ROUGE-N, 0 10 20 30 40 50 60 P R F1 S2SA DelNet VPN SAMA(w/o prediction) SAMA(w/o graph) SAMA Figure 4: Skill prediction metrics.", "SAMA performs the best in all word overlap based metrics, which suggests that our model obtains more overlapped words with the ground truth.", "SAMA(w/o graph) and SAMA(w/o pred) obtain competitive results, and both are significantly better than baselines, which demonstrates the effectiveness of skill prediction and prior knowledge of skills, respectively.", "In addition to the overall metrics, Figure 4 further demonstrates the skill-level metrics.", "Figure 4 demonstrates that the job requirements generated by skill aware models (SAMA(w/o pred), SAMA(w/o graph), and SAMA) consist of more accurate and richer skills than those generated by the baselines (S2SA, DelNet, and VPN).", "Among them, SAMA achieves the best performance.", "Besides, SAMA(w/o graph) obtains a higher recall rate, which demonstrates that it can enrich the skill information effectively.", "SAMA(w/o pred) obtains a higher precision rate, which demonstrates that it can refine the skill information effectively.", "Results of the human-based annotation are shown in Table", "3. 
"It can be seen that the skill-aware models obtain more relevant and informative results (good ratings) than the baselines, and SAMA obtains the most good ratings and the fewest bad ratings.", "These results are consistent with the automatic metric results.", "S2SA obtains the most normal ratings.", "This is because S2SA produces job requirements with less rich and accurate skills, although with good fluency.", "Table 3: Human-based evaluation results (bad / normal / good / Kappa): S2SA 0.34/0.50/0.16/0.44; DelNet 0.48/0.34/0.18/0.41; VPN 0.56/0.32/0.12/0.38; SAMA(w/o pred) 0.28/0.42/0.30/0.42; SAMA(w/o graph) 0.26/0.42/0.32/0.43; SAMA 0.22/0.40/0.38/0.42.", "Figure 5: Visualization of the three attention mechanisms, with weights over words such as 'interior', 'design', 'EA', 'undergraduate', 'real-estate', and 'design-institute'.", "DelNet and VPN obtain a large percentage of bad ratings, mainly because of repeated sentences.", "Besides, SAMA(w/o pred) and SAMA(w/o graph) are both much worse than SAMA on good ratings.", "This is because SAMA(w/o pred) misses some skills, and SAMA(w/o graph) misuses some skills.", "All models have kappa scores around 0.4, indicating that the evaluators reach high agreement.", "When the model generates the target sequence, different words contribute differently.", "SAMA can holistically select the most informative words by utilizing the three attention mechanisms.", "Figure 5 shows the visualization of the three attention mechanisms [6].", "[6] Due to the space limitation, we excerpt only parts of the texts.", "According to Figure 5, when SAMA generates the skill EA (Environmental Art), it automatically assigns larger weights to the more informative words in the three sources, e.g., 'interior' of $X$; 'interior, design, construction, matching' of $O$; 'interior, design, drawing, management' of $S$.", "This shows that SAMA can weigh the different contributions and capture the most informative words automatically from multiple sources.", "To illustrate the difference in quality between SAMA and the compared models, we give an example of the generated texts in Figure 6, where we compare SAMA with the strong baseline S2SA.", "Figure 6: An example of the generated texts. Input: 1. Responsible for completing the annual sales targets issued by the company. 2. Decompose annual indicators into quarters and months, then implement them. 3. Ensure that orders are repaid on time and that there are no overdue or bad debts. 4. Develop new customers and maintain old customers. Gold Output: 1. High school education or above, with some sales experience. 2. Sales experience in gift group-buying is preferred. 3. High loyalty, obedient to management, and teamwork spirit. SAMA Output: 1. High school education or above, more than 1 year of sales experience, sales management is preferred; 2. Working experience in a gift group-buying terminal customer service system, familiarity with gift sales is preferred; 3. Team spirit, can bear high working pressure. S2SA Output: 1. High school education or above, marketing and other related majors; 2.
 More than 2 years of working experience in sales; sales experience in the aluminum doors and windows or building materials industry is preferred.", "As shown in Figure 6, SAMA captures all three aspects of the ground truth, while S2SA misses the third aspect.", "Besides, in every aspect SAMA generates more correct and accurate skills, while S2SA clearly does not perform well enough and generates inaccurate skills.", "Generally, the main consideration of job seekers is the skills they need to master, such as Python, English, and Go Language.", "Therefore, although S2SA generates some right words, like 'preferred', this does not increase the quality of the generated text, because it generates inaccurate skills.", "We show how the two key hyperparameters of SAMA, $\lambda$ and $\mu$, influence the performance in Figure 7.", "The hyperparameter $\lambda$ adjusts the balance between the probabilities $P_{local}$ and $P_{global}$, and $\mu$ adjusts the balance between the two losses, the skill prediction loss $\mathcal{L}_S$ and the job requirement generation loss $\mathcal{L}_Y$.", "The value of $\lambda$ varies from 0.1 to 0.9, and a bigger value implies more global prior knowledge of skills.", "Figure 7 shows that the performance reaches a peak as $\lambda$ increases.", "It is intuitive that prior knowledge can help generate better job requirements.", "Figure 7: Parameter analysis, plotting BLEU-1 and BLEU-4 as the weighting parameters $\lambda$ (0.1 to 0.9) and $\mu$ (1.1 to 2.0) vary.", "The value of $\mu$ varies from 1.1 to 2.0.", "We give greater weight to the loss of job requirement generation because it is the target of the JPG task.", "As observed in Figure 7, a weight close to 1 may introduce noise from the skill labels.", "Besides, when the weight increases toward 2, the model becomes incapable of fully considering the skill labels.", "The related work falls into two categories: human resource management and generation models.", "Human Resource Management (HRM) is an appealing topic for applied researchers, and recruitment is a key part of HRM.", "With the explosive growth of recruiting data, many studies focus on efficient automatic HRM, e.g., person-organization fit, intelligent job interviews, and job skill ranking.", "Lee and Brusilovsky (2007) designed a job recommender system that considers the preferences of both employers and candidates.", "Qin et al. (2019) proposed a personalized question recommender system to better interview job candidates.", "Naim et al. (2015) analyzed interview videos to quantify verbal and nonverbal behaviors in the context of job interviews.", "Sun et al. (2019) studied the compatibility of persons and organizations.",
"Xu et al. (2018) proposed a data-driven approach for modeling the popularity of job skills.", "Besides, some augmented writing tools, such as Textio and TapRecruit, have been developed to assist HR in writing job postings, taking a draft as input and then polishing it.", "In this paper, we also consider improving the efficiency of HRM, from the perspective of job posting writing, which is the crucial first step in the recruitment process.", "Many practical applications are modeled as generation tasks, such as keyword extraction, headline generation, and response generation.", "Many generation tasks are formulated as Seq2Seq learning problems.", "Plenty of studies have focused on optimizing the Seq2Seq model.", "For example, Lopyrev (2015) trained a Seq2Seq model with attention for the headline generation task.", "Xing et al. (2017) incorporated topic information into Seq2Seq via a joint attention mechanism to generate informative responses for chatbots.", "Meng et al. (2017) applied a Seq2Seq model with a copy mechanism to a keyword extraction task.", "However, models that do not explicitly model sentence planning are greatly limited in generating complex argument structures that depend on hierarchy.", "Dong and Lapata (2018) decomposed the semantic parsing process into sketch generation and detail filling, and proposed a structure-aware neural architecture.", "Zhang et al. (2019) formulated the outline generation task as a hierarchical structured prediction problem and proposed HiStGen.", "Puduppully et al. (2019) proposed a two-stage model that incorporates content selection and planning for the data-to-text generation task.", "Similar to the above studies, we proposed a hierarchical generation model, namely SAMA, which first labels the job description with multiple skills and then generates the job requirement paragraph, to tackle the JPG task.", "Different from prior art, SAMA considers the global information across the whole dataset to generate high-quality job requirements.", "In this paper, we proposed the job posting generation (JPG) task and formalized it as a conditional text generation problem.", "Besides, we proposed a novel model, SAMA, for this task.", "The merits of SAMA come from three aspects.", "Firstly, it decomposes the long text generation into two stages, an MLC task and a multiple-skills-guided text generation task.", "Secondly, it considers both the local and the global information to generate accurate and rich skills.", "Last but not least, the learned mapping relationships can be applied to various downstream tasks, such as automatic resume screening and person-job fit.", "Extensive experiments conducted on real-world job posting data demonstrated the effectiveness and superiority of SAMA.", "This research was supported by the National Natural Science Foundation of China under grant No. 61976119, the Natural Science Foundation of Tianjin under grant No. 18JCYBJC15800, and the Major Program of Science and Technology of Tianjin under grant No. 18ZXZNGX00310." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "result", "method", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Recently BERT has been adopted for document encoding in state-of-the-art text summarization models.", "However, sentence-based extractive models often result in redundant or uninformative phrases in the extracted summaries.", "Also, long-range dependencies throughout a document are not well captured by BERT, which is pre-trained on sentence pairs instead of documents.", "To address these issues, we present a discourse-aware neural summarization model DISCOBERT 1 .", "DISCOBERT extracts sub-sentential discourse units (instead of sentences) as candidates for extractive selection on a finer granularity.", "To capture the long-range dependencies among discourse units, structural discourse graphs are constructed based on RST trees and coreference mentions, encoded with Graph Convolutional Networks.", "Experiments show that the proposed model outperforms state-of-the-art methods by a significant margin on popular summarization benchmarks compared to other BERT-base models.", "Neural networks have achieved great success in the task of text summarization (Nenkova et al., 2011; Yao et al., 2017).", "There are two main lines of research: abstractive and extractive.", "While the abstractive paradigm (Rush et al., 2015; See et al., 2017; Celikyilmaz et al., 2018; Sharma et al., 2019) focuses on generating a summary word-by-word after encoding the full document, the extractive approach (Cheng and Lapata, 2016; Zhou et al., 2018; Narayan et al., 2018) directly selects sentences from the document to assemble into a summary.", "The abstractive approach is more flexible Most of this work was done when the first author was an intern at Microsoft.", "1 Code, illustration and datasets are available at: https://github.com/jiacheng-xu/DiscoBERT.", "and generally produces less redundant summaries, while the extractive approach enjoys better factuality and efficiency (Cao et al., 2018).", "Recently, some hybrid methods have been proposed to take advantage of both, by designing a two-stage pipeline to first select and then rewrite (or compress) candidate sentences (Chen and Bansal, 2018; Gehrmann et al., 2018; Zhang et al., 2018; Xu and Durrett, 2019).", "Compression or rewriting aims to discard uninformative phrases in the selected sentences.", "However, most of these hybrid systems suffer from the inevitable disconnection between the two stages in the pipeline.", "Meanwhile, modeling long-range context for document summarization remains a challenge (Xu et al., 2016).", "Pre-trained language models (De-vlin et al., 2019) are designed mostly for sentences or a short paragraph, thus poor at capturing long-range dependencies throughout a document.", "Empirical observations (Liu and Lapata, 2019) show that adding standard encoders such as LSTM or Transformer (Vaswani et al., 2017) on top of BERT to model inter-sentential relations does not bring in much performance gain.", "In this paper, we present DISCOBERT , a discourse-aware neural extractive summarization model built upon BERT .", "To perform compression with extraction simultaneously and reduce redundancy across sentences, we take Elementary Discourse Unit (EDU), a sub-sentence phrase unit originating from RST (Mann and Thompson, 1988; Carlson et al., 2001) 2 as the minimal selection unit (instead of sentence) for extractive summarization.", "Figure 1 shows an example of discourse segmentation, with sentences broken down into EDUs (anno-tated with brackets).", "By operating on the discourse unit level, our model can discard redundant details in 
"Furthermore, we finetune the representations of discourse units with the injection of prior knowledge to leverage intra-sentence discourse relations.", "More specifically, two discourse-oriented graphs are proposed: the RST Graph $G_R$ and the Coreference Graph $G_C$.", "Over these discourse graphs, a Graph Convolutional Network (GCN) (Kipf and Welling, 2017) is imposed to capture long-range interactions among EDUs.", "The RST Graph is constructed from RST parse trees over the EDUs of the document.", "On the other hand, the Coreference Graph connects entities and their coreference clusters/mentions across the document.", "The path of coreference navigates the model from the core event to other occurrences of that event, and in parallel explores its interactions with other concepts or events.", "The main contributions are threefold: (i) We propose a discourse-aware extractive summarization model, DISCOBERT, which operates at the sub-sentential discourse unit level to generate concise and informative summaries with low redundancy.", "(ii) We propose to structurally model inter-sentential context with two types of discourse graph.", "[2] We adopt RST as the discourse framework due to the availability of existing tools, the suitability of the RST tree structure for compression, and the observations from Louis et al. (2010). Other alternatives include Graph Bank (Wolf and Gibson, 2005) and PDTB (Miltsakaki et al., 2004).", "(iii) DISCOBERT achieves a new state of the art on two popular newswire text summarization datasets, outperforming other BERT-base models.", "In this section, we first introduce Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), a linguistic theory for discourse analysis, and then explain how we construct the discourse graphs used in DISCOBERT.", "Two types of discourse graph are considered: the RST Graph and the Coreference Graph.", "All edges are initialized as disconnected, and connections are later added for a subset of nodes based on the RST discourse parse tree or coreference mentions.", "Discourse analysis focuses on inter-sentential relations in a document or conversation.", "In the RST framework, the discourse structure of text can be represented in a tree format.", "The whole document can be segmented into contiguous, adjacent and non-overlapping text spans called Elementary Discourse Units (EDUs).", "Each EDU is tagged as either Nucleus or Satellite, which characterizes its nuclearity or saliency.", "Nucleus nodes are generally more central, and Satellite nodes are more peripheral and less important in terms of content and grammatical reliance.", "There are dependencies among EDUs that represent their rhetorical relations.", "In this work, we treat the EDU as the minimal unit for content selection in text summarization.", "Figure 2 shows an example of discourse segmentation and the parse tree of a sentence.", "Among these EDUs, rhetorical relations represent the functions of the different discourse units.", "As observed in Louis et al. (2010), the RST tree structure already serves as a strong indicator for content selection.",
"On the other hand, the agreement between rhetorical relations tends to be lower and more ambiguous.", "Thus, we do not encode rhetorical relations explicitly in our model.", "In content selection for text summarization, we expect the model to select the most concise and pivotal concepts in the document, with low redundancy [3].", "[3] For example, in Figure 2, details such as the name of the suspected child in [3], the exact location of the photo in [5], and who was carrying the child in [4] are unlikely to be reflected in the final summary.", "However, in traditional extractive summarization methods, the model is required to select a whole sentence, even though some parts of the sentence are not necessary.", "Our proposed approach can select one or several fine-grained EDUs, rendering the generated summaries less redundant.", "This serves as the foundation of our DISCOBERT model.", "When selecting sentences as candidates for extractive summarization, we assume each sentence is grammatically self-contained.", "But for EDUs, some restrictions need to be considered to ensure grammaticality.", "For example, Figure 2 illustrates an RST discourse parse tree of a sentence, where [2] 'This iconic ... series' is a grammatical sentence but [3] 'and shows ... 8' is not.", "We need to understand the dependencies between EDUs to ensure the grammaticality of the selected combinations.", "The details of the derivation of the dependencies can be found in Section 4.3.", "The construction of the RST Graph aims to provide not only local paragraph-level but also long-range document-level connections among EDUs.", "We use the converted dependency version of the tree to build the RST Graph $G_R$, by initializing an empty graph and treating every discourse dependency from the $i$-th EDU to the $j$-th EDU as a directed edge, i.e., $G_R[i][j] = 1$.", "Text summarization, especially news summarization, usually suffers from the well-known 'position bias' issue (Kedzie et al., 2018), where most of the key information is described at the very beginning of the document.", "However, there is still a decent amount of information spread through the middle or at the end of the document, which is often ignored by summarization models.", "We observe that around 25% of oracle sentences appear after the first 10 sentences in the CNNDM dataset.", "Besides, in long news articles, there are often multiple core characters and events throughout the whole document.", "However, existing neural models are poor at modeling such long-range context, especially when there are multiple ambiguous coreferences to resolve.", "To encourage and guide the model to capture the long-range context in the document, we propose a Coreference Graph built upon discourse units.", "Algorithm 1 describes how to construct the Coreference Graph.", "We first use Stanford CoreNLP (Manning et al., 2014) to detect all the coreference clusters in an article.", "For each coreference cluster, all the discourse units containing a mention of that cluster are connected.", "This process is iterated over all the coreference mention clusters to create the final Coreference Graph.", "Figure 1 provides an example, where 'Pulitzer prizes' is an important entity that occurs multiple times in multiple discourse units.", "The constructed Coreference Graph is shown on the right side of the document [4].", "[4] We intentionally ignore other entities and mentions in this example for simplicity.", "When graph $G_C$ is constructed, edges among 1-1, 2-1, 20-1 and 22-1 are all connected due to the mentions of 'Pulitzer prizes'.",
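A minimal Python sketch of Algorithm 1 as described above; the mention format (each mention carrying the index of the EDU that contains it) and the function name are assumptions for illustration.

```python
from itertools import combinations

def build_coref_graph(n_edus, clusters):
    """Connect every pair of EDUs that mention the same
    coreference cluster (the Coreference Graph is undirected).

    clusters: iterable of mention lists; each mention is assumed
    to be a dict carrying the index of its containing EDU.
    """
    graph = [[0] * n_edus for _ in range(n_edus)]    # start disconnected
    for cluster in clusters:
        edu_ids = {m["edu_index"] for m in cluster}
        for i, j in combinations(sorted(edu_ids), 2):
            graph[i][j] = graph[j][i] = 1            # undirected edge
    return graph
```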
"Figure 3 provides an overview of the proposed model, consisting of a Document Encoder and a Graph Encoder.", "For the Document Encoder, a pretrained BERT model is first used to encode the whole document at the token level.", "Then, a self-attentive span extractor is used to obtain the EDU representations from the corresponding text spans.", "The Graph Encoder takes the output of the Document Encoder as input and updates the EDU representations with a Graph Convolutional Network based on the constructed discourse graphs; these representations are then used to predict the oracle labels.", "Assume that document $D$ is segmented into $n$ EDUs in total, i.e., $D = \{d_1, d_2, ..., d_n\}$, where $d_i$ denotes the $i$-th EDU.", "Following Liu and Lapata (2019), we formulate extractive summarization as a sequential labeling task, where each EDU $d_i$ is scored by neural networks, and decisions are made based on the scores of all EDUs.", "The oracle labels are a sequence of binary labels, where 1 stands for selected and 0 for not selected.", "We denote the labels as $Y = \{y_1, y_2, ..., y_n\}$.", "During training, we aim to predict the sequence of labels $Y$ given the document $D$.", "During inference, we need to further consider discourse dependencies to ensure the coherence and grammaticality of the output summary.", "BERT is a pre-trained deep bidirectional Transformer encoder (Vaswani et al., 2017; Devlin et al., 2019).", "Following Liu and Lapata (2019), we encode the whole document with BERT and finetune the BERT model for summarization.", "However, a document typically contains more than 500 words, hence we need to make some adaptations to apply BERT to document encoding.", "Specifically, we insert ⟨CLS⟩ and ⟨SEP⟩ tokens at the beginning and the end of each sentence, respectively [5].", "[5] We also tried inserting ⟨CLS⟩ and ⟨SEP⟩ at the beginning and the end of every EDU, and treating the corresponding ⟨CLS⟩ representation as the representation for each EDU, but the performance drops drastically.", "In order to encode long documents such as news articles, we also extend the maximum sequence length that BERT can take from 512 to 768 in all our experiments.", "The input document after tokenization is denoted as $D = \{d_1, ..., d_n\}$ and $d_i = \{w_{i1}, ..., w_{i\ell_i}\}$, where $\ell_i$ is the number of BPE tokens in the $i$-th EDU.", "If $d_i$ is the first EDU in a sentence, a ⟨CLS⟩ token is prepended to $d_i$; if $d_j$ is the last EDU in a sentence, a ⟨SEP⟩ token is appended to $d_j$ (see Figure 3).", "This schema of ⟨CLS⟩ and ⟨SEP⟩ insertion is the approach used in Liu and Lapata (2019).", "For simplicity, these two tokens are not shown in the equations.", "The BERT model is then used to encode the document: $\{h^B_{11}, ..., h^B_{n\ell_n}\} = \mathrm{BERT}(\{w_{11}, ..., w_{n\ell_n}\})$, where $\{h^B_{11}, ..., h^B_{n\ell_n}\}$ is the BERT output for the whole document, with the same length as the input.", "After the BERT encoder, the representation of the ⟨CLS⟩ token can be used as the sentence representation.", "However, this approach does not work in our setting, since we need to extract representations for EDUs instead.", "Therefore, we adopt a Self-Attentive Span Extractor (SpanExt), proposed in Lee et al. (2017), to learn EDU representations.",
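The document-preparation step described above can be sketched as follows, assuming a BERT-style tokenizer (such as the one in the transformers library); the function name and the max_len default mirror the 768-BPE limit stated in the text, but the sketch is illustrative rather than the released preprocessing.

```python
def prepare_bert_input(sentences, tokenizer, max_len=768):
    """Insert [CLS]/[SEP] around each sentence and truncate to
    max_len BPE tokens, per the adaptation described above.
    `tokenizer` is assumed to expose a BERT-style .tokenize()."""
    tokens = []
    for sent in sentences:
        tokens += ["[CLS]"] + tokenizer.tokenize(sent) + ["[SEP]"]
    return tokens[:max_len]
```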
"For the $i$-th EDU with $\ell_i$ words, given the output of the BERT encoder $\{h^B_{i1}, h^B_{i2}, ..., h^B_{i\ell_i}\}$, we obtain the EDU representation as follows: $\alpha_{ij} = W_2\,\mathrm{ReLU}(W_1 h^B_{ij} + b_1) + b_2$, $a_{ij} = \frac{\exp(\alpha_{ij})}{\sum_{k=1}^{\ell_i} \exp(\alpha_{ik})}$, $h^S_i = \sum_{j=1}^{\ell_i} a_{ij} h^B_{ij}$, where $\alpha_{ij}$ is the score of the $j$-th word in the EDU and $a_{ij}$ is the normalized attention of the $j$-th word w.r.t. all the words in the span.", "$h^S_i$ is a weighted sum of the BERT output hidden states.", "Throughout the paper, all the $W$ matrices and $b$ vectors are parameters to learn.", "We abstract the above Self-Attentive Span Extractor as $h^S_i = \mathrm{SpanExt}(h^B_{i1}, ..., h^B_{i\ell_i})$.", "After the span extraction step, the whole document is represented as a sequence of EDU representations $h^S = \{h^S_1, ..., h^S_n\} \in \mathbb{R}^{d_h \times n}$, which is sent to the graph encoder.", "Given the constructed graph $G = (\mathcal{V}, \mathcal{E})$, the nodes $\mathcal{V}$ correspond to the EDUs in a document, and the edges $\mathcal{E}$ correspond to either RST discourse relations or coreference mentions.", "We then use a Graph Convolutional Network to update the representations of all the EDUs, to capture the long-range dependencies missed by BERT for better summarization.", "To modularize the architecture design, we present a single Discourse Graph Encoder (DGE) layer.", "Multiple DGE layers are stacked in our experiments.", "Assume that the input to the $k$-th DGE layer is denoted as $h^{(k)} = \{h^{(k)}_1, ..., h^{(k)}_n\} \in \mathbb{R}^{d_h \times n}$, and the corresponding output is denoted as $h^{(k+1)} = \{h^{(k+1)}_1, ..., h^{(k+1)}_n\} \in \mathbb{R}^{d_h \times n}$.", "The $k$-th DGE layer is designed as follows: $u^{(k)}_i = W^{(k)}_4 \mathrm{ReLU}(W^{(k)}_3 h^{(k)}_i + b^{(k)}_3) + b^{(k)}_4$; $v^{(k)}_i = \mathrm{LN}(h^{(k)}_i + \mathrm{Dropout}(u^{(k)}_i))$; $w^{(k)}_i = \mathrm{ReLU}\big(\sum_{j \in \mathcal{N}_i} \frac{1}{|\mathcal{N}_i|} W^{(k)}_5 v^{(k)}_j + b^{(k)}_5\big)$; $h^{(k+1)}_i = \mathrm{LN}(\mathrm{Dropout}(w^{(k)}_i) + v^{(k)}_i)$, where $\mathrm{LN}(\cdot)$ represents Layer Normalization and $\mathcal{N}_i$ denotes the neighborhood of the $i$-th EDU node.", "$h^{(k+1)}_i$ is the output for the $i$-th EDU of the $k$-th DGE layer, and $h^{(1)} = h^S$, which is the output from the Document Encoder.", "After $K$ layers of graph propagation, we obtain $h^G = h^{(K+1)} \in \mathbb{R}^{d_h \times n}$, which is the final representation of all the EDUs after the stacked DGE layers.", "For different graphs, the parameters of the DGEs are not shared.", "If we use both graphs, their outputs are concatenated: $h^G = \mathrm{ReLU}(W_6 [h^{G_C}; h^{G_R}] + b_6)$.", "During training, $h^G$ is used for predicting the oracle labels.", "Specifically, $\hat{y}_i = \sigma(W_7 h^G_i + b_7)$, where $\sigma(\cdot)$ represents the logistic function, and $\hat{y}_i$ is the prediction probability, ranging from 0 to 1.",
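The four DGE update equations map naturally onto a small PyTorch module, sketched below. Applying $W_5$ after the neighborhood mean is valid because a linear map commutes with averaging; the layer name, dropout default, and layout are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class DGELayer(nn.Module):
    """One Discourse Graph Encoder layer, following the four
    update equations above (a sketch, not the authors' code)."""

    def __init__(self, d_h, dropout=0.1):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d_h, d_h), nn.ReLU(),
                                nn.Linear(d_h, d_h))            # W3/b3, W4/b4
        self.gcn = nn.Linear(d_h, d_h)                          # W5/b5
        self.ln1, self.ln2 = nn.LayerNorm(d_h), nn.LayerNorm(d_h)
        self.drop = nn.Dropout(dropout)

    def forward(self, h, adj):
        """h: (n, d_h) EDU representations; adj: (n, n) 0/1 float matrix."""
        v = self.ln1(h + self.drop(self.ff(h)))                 # u then v
        # w_i = ReLU( mean over neighbours j of W5 v_j + b5 )
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        w = torch.relu(self.gcn(adj @ v / deg))
        return self.ln2(self.drop(w) + v)                       # h^{(k+1)}
```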
"The training loss of the model is the binary cross-entropy loss given the predictions and the oracles: $\mathcal{L} = -\sum_{i=1}^{n} \big(y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i)\big)$.", "For DISCOBERT without graphs, the output of the Document Encoder, $h^S$, is used for prediction instead.", "The oracle is created at the EDU level.", "We greedily pick EDUs, together with their necessary dependencies, until the R-1 F1 score drops.", "During inference, given an input document, after obtaining the prediction probabilities of all the EDUs, i.e., $\hat{y} = \{\hat{y}_1, ..., \hat{y}_n\}$, we sort $\hat{y}$ in descending order and select EDUs accordingly.", "Note that the dependencies between EDUs are also enforced in prediction to ensure the grammaticality of the generated summaries.", "In this section, we present experimental results on two popular news summarization datasets.", "We compare our proposed model with state-of-the-art baselines and conduct detailed analyses to validate the effectiveness of DISCOBERT.", "We evaluate the models on two datasets: New York Times (NYT) (Sandhaus, 2008) and CNN and DailyMail (CNNDM) (Hermann et al., 2015).", "We use the script from See et al. (2017) to extract summaries from the raw data, and Stanford CoreNLP for sentence boundary detection, tokenization and parsing (Manning et al., 2014).", "Due to the limitation of BERT, we only encode up to 768 BERT BPEs.", "Table 1 provides statistics of the datasets.", "The edges in $G_C$ are undirected, while those in $G_R$ are directional.", "For CNNDM, there are 287,226, 13,368 and 11,490 samples for training, validation and test, respectively.", "We use the un-anonymized version, as in previous summarization work.", "NYT is licensed by LDC [6].", "[6] https://catalog.ldc.upenn.edu/LDC2008T19", "Following previous work (Zhang et al., 2019; Xu and Durrett, 2019), we use 137,778, 17,222 and 17,223 samples for training, validation, and test, respectively.", "Extractive Models: BanditSum treats extractive summarization as a contextual bandit problem, trained with policy gradient methods (Dong et al., 2018).", "NeuSum is an extractive model with a seq2seq architecture, where the attention mechanism scores the document and emits the index as the selection (Zhou et al., 2018).", "Compressive Models: JECS is a neural text-compression-based summarization model using a BLSTM as the encoder (Xu and Durrett, 2019).", "Its first stage selects sentences, and its second stage performs sentence compression by pruning the constituency parse tree.", "BERT-based Models: BERT-based models have achieved significant improvements on CNNDM and NYT compared with LSTM counterparts.", "BertSum is the first BERT-based extractive summarization model (Liu and Lapata, 2019).", "Our baseline model BERT is a re-implementation of BertSum.", "PNBert proposed a BERT-based model with various training strategies, including reinforcement learning and Pointer Networks (Zhong et al., 2019).", "HiBert is a hierarchical BERT-based model for document encoding, which is further pretrained with unlabeled data (Zhang et al., 2019).", "We use AllenNLP (Gardner et al., 2018) as the code framework.", "The implementation of the graph encoding is based on DGL (Wang et al., 2019).", "Table 2: Results on CNNDM (R-1 / R-2 / R-L): Lead3 40.42/17.62/36.67; Oracle (Sentence) 55.61/32.84/51.88; Oracle (Discourse) 61.61/37.82/59.27; NeuSum (Zhou et al., 2018) 41.59/19.01/37.98; BanditSum (Dong et al., 2018) 41.50/18.70/37.60; JECS (Xu and Durrett, 2019) 41.70/18.50/37.90; PNBERT (Zhong et al., 2019) 42.39/19.51/38.69; PNBERT w. ...", "Experiments are conducted on a single NVIDIA P100 card, and the mini-batch size is set to 6 due to GPU memory capacity.",
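Inference-time selection with the dependency constraint can be sketched as follows. Here deps and budget are assumed inputs (the per-EDU RST dependencies and the tuned average selection size), and the exact tie-breaking and stopping rule of the original system are not reproduced.

```python
def select_edus(scores, deps, budget):
    """Take EDUs by descending score, pulling in the EDUs they
    depend on (per the RST dependency tree) so the output stays
    grammatical. deps[i] lists the EDU indices required by EDU i."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen = set()
    for i in order:
        if len(chosen) >= budget:
            break
        closure, stack = set(), [i]
        while stack:                        # transitive dependency closure
            j = stack.pop()
            if j not in closure:
                closure.add(j)
                stack.extend(deps[j])
        chosen |= closure
    return sorted(chosen)                   # restore document order
```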
"The length of each document is truncated to 768 BPEs.", "We use the pre-trained 'bert-base-uncased' model and finetune it for all experiments.", "We train all our models for up to 80,000 steps.", "ROUGE (Lin, 2004) is used as the evaluation metric, and R-2 is used as the validation criterion.", "The realization of discourse units and structure is a critical part of the EDU pre-processing, which requires two steps: discourse segmentation and RST parsing.", "In the segmentation phase, we use a neural discourse segmenter based on the BiLSTM-CRF framework (Wang et al., 2018) [7].", "[7] https://github.com/PKU-TANGENT/NeuralEDUSeg", "The segmenter achieved a 94.3 F1 score on the RST-DT test set, on which human performance is 98.3.", "In the parsing phase, we use a shift-reduce discourse parser to extract relations and identify nuclearity (Ji and Eisenstein, 2014) [8].", "[8] https://github.com/jiyfeng/DPLP", "The dependencies among EDUs are crucial to the grammaticality of the selected EDUs.", "There are two steps to derive the dependencies: head inheritance and tree conversion.", "Head inheritance defines the head node for each valid non-terminal tree node.", "For each leaf node, the head is itself.", "We determine the head node(s) of non-terminal nodes based on their nuclearity [9].", "[9] If both children are N(ucleus), then the head of the current node inherits the head of the left child. Otherwise, when one child is N and the other is S, the head of the current node inherits the head of the N child.", "For example, in Figure 2, the heads of the text spans [1-5], [2-5], [3-5] and [4-5] need to be grounded to a single EDU.", "We propose a simple yet effective schema to convert an RST discourse tree into a dependency-based discourse tree [10].", "[10] If one child node is N and the other is S, the head of the S node depends on the head of the N node. If both children are N and the right child does not contain a subject in the discourse, the head of the right N node depends on the head of the left N node.", "We always consider the dependency restrictions, such as the reliance of a Satellite on its Nucleus, both when we create the oracle during preprocessing and when the model makes predictions.", "For the example in Figure 2, if the model selects [5] 'being carried ... Liberia.' as a candidate span, we will enforce the model to also select [3] 'and shows ... 8,' and [2] 'This ... series'.",
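A simplified Python sketch of head inheritance (footnote 9) and tree conversion (footnote 10). The node class is hypothetical, and the N-N "no subject" rule from footnote 10 is omitted, since applying it needs syntactic information about the discourse.

```python
class RSTNode:
    """Minimal binary RST tree node: leaves carry an EDU index;
    internal nodes carry the children's nuclearity, e.g. ("N", "S")."""
    def __init__(self, left=None, right=None, nuc=None, edu=None):
        self.left, self.right, self.nuc, self.edu = left, right, nuc, edu

def head(node):
    """Head inheritance: N-N inherits the left head; N-S / S-N
    inherit the head of the N child (footnote 9, simplified)."""
    if node.edu is not None:
        return node.edu
    take_left = node.nuc[0] == "N" or node.nuc[1] != "N"
    return head(node.left if take_left else node.right)

def to_dependencies(node, deps):
    """Tree conversion (footnote 10, simplified): the head of an
    S child depends on the head of its sibling N child."""
    if node.edu is not None:
        return
    if node.nuc == ("N", "S"):
        deps.setdefault(head(node.right), []).append(head(node.left))
    elif node.nuc == ("S", "N"):
        deps.setdefault(head(node.left), []).append(head(node.right))
    to_dependencies(node.left, deps)
    to_dependencies(node.right, deps)
```

The resulting deps mapping is exactly the structure the earlier inference sketch consumes when it enforces dependency closure.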
"The number of chosen EDUs depends on the average length of the reference summaries, the dependencies across EDUs mentioned above, and the length of the existing content.", "The optimal average number of selected EDUs is tuned on the development set.", "Results on CNNDM: Table 2 shows the results on CNNDM.", "The first section includes the Lead3 baseline, the sentence-based oracle, and the discourse-based oracle.", "The second section lists the performance of the baseline models, including non-BERT-based and BERT-based variants.", "The performance of our proposed model is listed in the third section.", "BERT is our implementation of the sentence-based BERT model.", "DISCOBERT is our discourse-based BERT model without the Discourse Graph Encoder.", "DISCOBERT w. $G_C$ and DISCOBERT w. $G_R$ are the discourse-based BERT models with the Coreference Graph and the RST Graph, respectively.", "DISCOBERT w. $G_R$ & $G_C$ is the fusion model encoding both graphs.", "The proposed DISCOBERT beats the sentence-based counterpart and all the competitor models.", "With the help of the Discourse Graph Encoder, the graph-based DISCOBERT beats the state-of-the-art BERT model by a significant margin (0.52/0.61/1.04 on R-1/-2/-L F1).", "The ablation study with individual graphs shows that the RST Graph is slightly more helpful than the Coreference Graph.", "Results on NYT: Results are summarized in Table 3.", "The proposed model surpasses the previous state-of-the-art BERT-based model by a significant margin.", "HIBERT-S and HIBERT-M used extra data for pre-training the model.", "We notice that on the NYT dataset, most of the improvement comes from the use of EDUs as minimal selection units.", "DISCOBERT provides a 1.30/1.29/1.82 gain on R-1/-2/-L over the BERT baseline.", "However, the use of discourse graphs does not help much in this case.", "Due to segmentation and the partial selection of sentences, the output of our model might not be as grammatical as the original sentences.", "We manually examined and automatically evaluated the model output, and observed that overall the generated summaries are still grammatical, given that the RST dependency tree constrains the rhetorical relations among EDUs.", "A set of simple yet effective post-processing rules helps to complete the EDUs in some cases.", "Automatic Grammar Checking: We followed Xu and Durrett (2019) in performing automatic grammar checking with Grammarly.", "Table 4 shows the grammar checking results, reporting the average number of errors per 10,000 characters on the CNNDM and NYT datasets.", "We compare DISCOBERT with the sentence-based BERT model.", "'All' shows the sum of the numbers of errors in all categories.", "Table 4: Number of errors per 10,000 characters based on automatic grammaticality checking with Grammarly on CNNDM and NYT (All / CR / PV / PT / O): CNNDM Sent 33.0/18.7/9.0/2.3/3.0; CNNDM Disco 34.0/18.3/8.4/2.6/4.7; NYT Sent 23.3/13.5/5.9/0.8/3.1; NYT Disco 23.8/13.9/5.7/0.8/3.4.", "As shown in the table, the two models make a comparable number of errors overall.", "Human Evaluation: We sampled 200 documents from the test set of CNNDM, and for each sample we asked two Turkers to grade three summaries from 1 to 5.",
"Results are shown in Table 5.", "The Sent-BERT model (the original BERTSum model) selects sentences from the document, hence providing the best overall readability, coherence, and grammaticality.", "In some cases, reference summaries are just long phrases, so their scores are slightly lower than those of the sentence model.", "The DISCOBERT model is slightly worse than the Sent-BERT model but is fully comparable to the other two variants.", "Examples & Analysis: We show some examples of model output in Table 6.", "We notice that a decent amount of irrelevant detail is removed from the extracted summaries.", "Despite this success, we further conducted an error analysis and found that the errors mostly originated from the RST dependency resolution and from upstream parsing errors of the discourse parser.", "The misclassification of RST dependencies and the hand-crafted rules for dependency resolution hurt the grammaticality and coherence of the generated outputs.", "Common punctuation issues include extra or missing commas, as well as missing quotation marks.", "Some of the coherence issues originate from missing or improper anaphora resolution.", "(Example: 'Clare Hines, who lives in Brisbane, was diagnosed with a brain tumour after suffering epileptic seizures.')", "In the example '[Johnny is believed to have drowned,]_1 [but actually he is fine,]_2 [the police say.]_3', selecting only the second EDU yields the sentence 'actually he is fine', where it is not clear who 'he' refers to.", "Neural Extractive Summarization: Neural networks have been widely used in extractive summarization.", "Various decoding approaches, including ranking (Narayan et al., 2018), index prediction (Zhou et al., 2018) and sequential labelling (Nallapati et al., 2017; Zhang et al., 2018; Dong et al., 2018), have been applied to content selection.", "Our model uses a similar configuration to encode the document with BERT as Liu and Lapata (2019) did, but we use the discourse graph structure and a graph encoder to handle the long-range dependency issue.", "Neural Compressive Summarization: Text summarization with compression and deletion has been explored in some recent work.", "Xu and Durrett (2019) presented a two-stage neural model for selection and compression based on constituency tree pruning.", "Dong et al. (2019) presented a neural sentence compression model with discrete operations including deletion and addition.", "Different from these studies, since we use EDUs as the minimal selection basis, sentence compression is achieved automatically in our model.", "The use of discourse theory for text summarization has been explored before.", "Louis et al. (2010) examined the benefit of the graph structure provided by discourse relations for text summarization.", "Hirao et al. (2013) and Yoshida et al. (2014) formulated the summarization problem as the trimming of the document discourse tree.", "Durrett et al. (2016) presented a system for sentence extraction and compression with ILP methods using discourse structure.", "Li et al. (2016) demonstrated that using EDUs as units of content selection leads to stronger summarization performance.",
"Compared with them, our proposed method is the first neural end-to-end summarization model using EDUs as the selection basis.", "Graph-based approaches have been explored in text summarization over decades.", "LexRank introduced a stochastic graph-based method for computing the relative importance of textual units (Erkan and Radev, 2004).", "Yasunaga et al. (2017) employed a GCN on relation graphs with sentence embeddings obtained from an RNN.", "Tan et al. (2017) also proposed graph-based attention in an abstractive summarization model.", "Fernandes et al. (2018) developed a framework to reason about long-distance relationships for text summarization.", "In this paper, we present DISCOBERT, which uses discourse units as the minimal selection basis to reduce summarization redundancy and leverages two types of discourse graphs as inductive bias to capture long-range dependencies among discourse units.", "We validate the proposed approach on two popular summarization datasets, and observe consistent improvement over baseline models.", "For future work, we will explore better graph encoding methods, and apply discourse graphs to other tasks that require long document encoding.", "Thanks to Junyi Jessy Li, Greg Durrett, Yen-Chun Chen, and to the other members of the Microsoft Dynamics 365 AI Research team for the proofreading, feedback and suggestions." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "other" ]
[ "Advances in the automated detection of offensive Internet postings make this mechanism very attractive to social media companies, who are increasingly under pressure to monitor and action activity on their sites.", "However, these advances also have important implications as a threat to the fundamental right of free expression.", "In this article, we analyze which Twitter posts could actually be deemed offenses under German criminal law.", "German law follows the deductive method of the Roman law tradition based on abstract rules as opposed to the inductive reasoning in Anglo-American common law systems.", "This allows us to show how legal conclusions can be reached and implemented without relying on existing court decisions.", "We present a data annotation schema, consisting of a series of binary decisions, for determining whether a specific post would constitute a criminal offense.", "This schema serves as a step towards an inexpensive cre-ation of a sufficient amount of data for automated classification.", "We find that the majority of posts deemed morally offensive actually do not constitute a criminal offense and still contribute to public discourse.", "Furthermore, laymen can provide sufficiently reliable data to an expert reference but are, for instance, more lenient in the interpretation of what constitutes a disparaging statement.", "The Internet is frequently used for discussing a variety of topics and an important medium for the exchange of opinions, considered crucial for healthy democratic societies.", "However, the rough tone in the Internet frequently leads to defamatory or abusive comments in these discussions.", "The EU has tried to tackle the problem by defining the Equal contribution.", "term illegal hate speech'.", "1 Additionally, in 2017, the European Commission published a communication entitled Tackling Illegal Content Online' aiming for enhanced responsibility of online platforms.", "2 Independently from these recent developments on the EU level, Germany adopted the Network Enforcement Act' 3 in 2017.", "The Act provides for a regulatory framework for illegal content' 4 on social network platforms like Twitter or Facebook.", "It imposes the obligation on these providers to delete illegal content upon notifica-tion within seven days; in case of evidently illegal content within 24 hours.", "5 From a practical point of view, given the number of statements on social media along with their possible notification, feasibility and accuracy of the required legal assessment becomes an important issue.", "Natural Language Processing might thus provide the necessary means to assist the legal assessment.", "In this work, we investigate at which point morally offensive statements in social media constitute defamatory offenses under the German Criminal Code (StGB) 6 , thus representing illegal content' according to the Network Enforcement Act and thereby triggering a deletion obligation for platform providers.", "7 We analyze the legal decision-making process to determine defam-1 Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it.", "whether these strict procedural requirements violate EU law, namely Art. 3, Art. 14 e-Commerce Directive (2000/31/EC) i.e. require acting expeditiously' after obtaining knowledge.", "6 Strafgesetzbuch v. 13.11.1998 (BGBl. I S. 
3322).", "7 It is not guaranteed that a judge would necessarily arrive at the same conclusion, but a lawyer's expertise serves as a strong indicator for potentially punishable conduct.", "atory offenses ( 185 to 187 StGB), which also clarifies the tension between the right to honor and the freedom of expression.", "Due to its additional complexity, we leave out incitement to hatred against a national, racial, religious or ethnic group or segments of the population ( 130 StGB) as an offense against public peace in this paper.", "Furthermore, we investigate automated detection of postings protected by the freedom of expression in order to assist social media moderators.", "We focus in particular on the process of inexpensive and scalable data annotation, as access to legal expertise is a major bottleneck for providing a sufficient amount of data for classifier training.", "An automated detection of Internet discourse in which individuals or groups are verbally attacked has been intensively investigated under a variety of names, for instance: abusive language (Waseem et al., 2017), ad hominem arguments (Habernal et al., 2018), aggression (Kumar et al., 2018), cyberbullying (Xu et al., 2012; Macbeth et al., 2013), hate speech (Warner and Hirschberg, 2012; Ross et al., 2016; Del Vigna et al., 2017), offensive language usage (Razavi et al., 2010), profanity (Schmidt and Wiegand, 2017), threats (Oostdijk and van Halteren, 2013) or socially unacceptable discourse (Fier et al., 2017).", "The majority of the work focuses on the English language with few exceptions for instance for German (Ross et al., 2016), Dutch (Oostdijk and van Halteren, 2013), Italian (Del Vigna et al., 2017) or Slovene (Fier et al., 2017).", "The dataset annotated in Fier et al. (2017) is the only one that includes a coarse-grained binary annotation category indicating if an utterance violates Slovene law.", "To the best of our knowledge, automatic determination as to whether the (textual) content of a posting constitutes a criminal offense has never been previously attempted.", "Previous work focused on detecting postings with socially unacceptable content but without considering actual legal implications for freedom of expression.", "Approaches that bring together Natural Language Processing with the legal perspective are in contrast significantly fewer, especially considering the fact that the legal evaluation depends on the applicable legal regime.", "Previous work focused on predicting the outcome of court trials, which all have in common that they derive their data from a rather large set of court-provided information.", "Bruninghaus and Ashley (2003) works on a combination of U.S. case law and normative rules: they experiment with clustering and regression models for predicting the outcome of U.S. cases.", "Katz et al. (2017) predicts U.S. supreme court rulings by using a random forest classifier; Kastellec (2010) investigates mappings from case facts to court decisions as outcomes.", "Waltl et al. (2017) predicts the outcome of decisions in German tax law.", "Aletras et al. 
(2016) predicts decisions of the European Court of Human Rights.", "Deriving data from court decisions might be an approach that is practical if relevant case law exists for the respective legal problem, which particularly makes sense from the perspective of the Anglo-American common law system.", "8 3 Operationalising Legal Assessment Unlike under Anglo-American common law, for legal systems based on Roman law (civil law' systems), the dogmatic perception of the respective legal disposition lies at the heart of legal decision-making.", "Our approach thus differs from the above-cited works by placing the focus on the abstract concept of an existing legal norm.", "The advantage of our approach is therefore that we pursue a solution to address legal problems by creating new data out of abstract legal rules, independently of whether they have been decided by a court.", "We rely solely on the Internet posting for this consideration, which is the same information available to moderators of social media platforms.", "To build the bridge from legal thinking to a technical implementation, we start by analyzing the legal requirements for social media content.", "We find that the decision-making process to determine criminal offenses can be formulated as a sequence of binary decisions when applying the legal dependencies between German criminal law and the fundamental rights of the individual as shown in Figure 1. The derived schema of binary decisions is shown in Figure 2, which we will use in the following section.", "We now turn to a discussion and analysis of the legal decision process to clarify how we derived this sequence of binary decisions.", "8 Common law' refers to the Anglo-American legal system that derives the law from judicial decisions, in contrast to the civil law' system of continental Europe that focuses on the abstraction of legal concepts in codified statutory law.", "See: B.A. Garner (2001) A Dictionary of Modern Legal Usage (2nd, revised ed.) 
New York: OUP.", "Scope: So what constitutes 'illegal content' that the Network Enforcement Act is targeting?", "The legal definition of the term 'illegal content' [9] refers to offenses stipulated in the German Criminal Code.", "These references include, inter alia, the defamatory offenses in §§ 185 to 187 StGB [10] that cover the criminal punishment of insulting or defamatory statements.", "Accordingly, if a statement posted on social media fulfills the required elements of these offenses, the provider has the above-described obligation based on the Network Enforcement Act to delete said statement upon notification. [11]", "For this paper, we exclude § 130 StGB [12], which covers incitement to hatred against a national, racial, religious group or a group defined by their ethnic origins, due to the additional complexity of the assessment.", "Footnote 9: See § 1(3), 'rechtswidrige Inhalte'.", "Footnote 10: § 185: 'insult' (Beleidigung), § 186: 'defamation' (üble Nachrede) and § 187: 'intentional defamation' (Verleumdung); the reference to these defamatory offenses has however been criticized in the literature, see: Erbs/Kohlhaas, Strafrechtliche Nebengesetze, 220. EL Juli 2018, § 1 NetzDG, Rn. 16-18.", "Footnote 11: See § 3(2)(2),(3) of the Act.", "Footnote 12: § 130, 'incitement to hatred' (Volksverhetzung).",
"To understand their elements in detail, it is crucial to refer to the more general legal concept behind these criminal offenses: as illustrated in Figure 1, the intention behind §§ 185 to 187 StGB leads to the protection of the victim's personality right, namely their right to honor under the German Constitution. [13]", "It is this right that is potentially at stake when social media users are disseminating statements with third parties as potential victims.", "Footnote 13: Derived from Art. 2(1) and Art. 1(1) of the German Constitution (Grundgesetz); BVerfGE 35, 202; E 54, 148, 155.", "Consequently, the scope of protection of §§ 185 to 187 StGB follows the respective interpretation of the right to honor.", "Thus, all three offenses share the approach to the possible victim as a holder of the right to honor: a living individual that might be addressed by a name, personal pronoun or user-mention, as shown in Example 1.", "A group of persons can be considered a potential victim if that group is distinguishable from the general public such that every member of that group could feel their honor is infringed, as shown in Example 2. [14]", "(a) My school's language teachers are all idiots.", "(b) The female students of this year's grad class are all dumb.", "Consequently, only certain groups qualify as potential defamatory objects.", "Example 3 illustrates groups that would be too broad to be distinguishable from the general public. [15]", "Example 3: Counterexamples for addressing too unspecific or large groups.", "Collective entities such as governments or press companies with a recognized social role and who act with a collective, single will are included in the right to honor, as shown in Example 4.
16", "We translate these conditions of 185 to 187 StGB into an either/or-question, respectively whether either a living individual or a specific group is an object of the respective statement.", "Disparaging Statement The next step in the legal assessment is then the existence of insulting or defamatory conduct with respect to the above-mentioned object, in the form of an expressed disparaging statement.", "This requirement is again shared by 185 to 187 StGB.", "It is already fulfilled by expressing contempt or disrespect through the allegation of shortcomings that could reduce the victim's social standing as shown in Example 5. 17", "Example 5: Disparaging statements From the perspective of the underlying fundamental rights, it is this disparaging statement which constitutes the interference with the potential victim's right to honor.", "The existence of a disparaging statement is implemented by a yes/no-question .", "18 Value Judgment or Factual Claim?", "As simplified in Figure 1, the legal assessment then varies between 185 StGB as general disposition and 186 and 187 StGB with special rules and an increased penalty range.", "For the different scope of these dispositions, the difference between the legal terms value judg-ment' and factual claim' (i.e. the assertion of facts, may they be true or untrue) is crucial.", "Value judgments constitute expressions of personal opinions as shown in Example 6: 19", "A factual claim can be clearly classified as true or untrue and is accordingly capable of proof as shown in Example 7. 20", "185 StGB, stipulating the insult (Beleidi-gung'), comprises value judgments and untrue factual claims, irrespective of their dissemination towards third parties.", "186 and 187 StGB on the other side provide for special rules for the assertion or dissemination of untrue facts, i.e. 
towards third parties.", "As the publication of statements on social media constitutes an assertion' or dissem-ination', untrue facts for our study are only treated by 186, 187 StGB.", "This reduces the scope of 185 StGB to value judgments only.", "From the perspective of the right to honor, only untrue factual claims may constitute a violation, while the assertion of true facts is always covered by the freedom of expression.", "21 The distinction has consequences on the procedural level: because only the assertion of untrue facts violates the right to honor, during criminal proceedings, the court has to assess the truth by taking evidence.", "A technical implementation of this assessment would therefore require access to unlimited knowledge that goes beyond the textual information on which we work.", "Accordingly, we stop our examination in case of a factual claim.", "22 3.4 Value Judgments: Balancing of Rights As the distinction between value judgment and factual claim is an alternative decision, 23 we continue our implementation for value judgments.", "In criminal proceedings, the court would have to consider at this point once more fundamental rights: value judgments being not classifiable as untrue or true generally fall under the scope of the freedom of expression of the potential offender.", "24 In the German Criminal Code, this is reflected by 193 StGB: even if a statement falls under the scope of said criminal offenses, it might still be 21 BVerfGE 99, 185, 197; E 97, 381, 403.", "22 Consequently, we do not implement subsequent conditions of 186, 187 StGB, as shown in Figure 1, respectively whether facts cannot be proven to be true ( 186 StGB) or whether the untruth was intended and known ( 187 StGB).", "23 Ambiguous statements that are based on facts, but are overall characterized by a valuation of these facts, fall under the category of value judgments'.", "24 Art. 5(1)1 of the German Constitution (Grundgesetz).", "According to Art. 
5(2) the freedom of expression then again is limited by the right to honor.", "justified based on 193 StGB as exercise of legitimate interests.", "The most prominent example of one of these conflicting interests is the offender's freedom of expression.", "On the constitutional level, then, the decision of whether a social media posting constitutes a punishable criminal offense and leads to the platform provider's deletion obligation can thus ultimately be perceived as a balancing between freedom of expression and the right to honor.", "Consequently, the court would have to balance these concurrent rights depending on the case at hand.", "But how could that balancing, usually comprising an evaluation of various factors, be carried over to a technical implementation?", "Over the years, German case law from the Federal Constitutional Court has developed guidelines for this balancing to be considered by the judge, which take the step of implying the typical outcome of the balancing.", "We implement these guidelines in three yes/no-questions: 25 Abusive Insult Statements that constitute breaking a taboo by themselves and intend only the defamation of the victim without any substantiated contribution are classified as abusive insult' (For-malbeleidigung) .", "According to settled case law, these statements are already excluded from the scope of freedom of expression.", "26 Consequently, a justification based on 193 StGB is, in this regard, denied and the elements of 185 StGB are fulfilled along with a violation of the right to honor.", "Given these severe consequences for free speech, the German Constitutional Court has so far only once approved a statement as constituting an abusive insult' as shown in Example 8: 27 A disabled person is called \"cripple\" Example 8: Abusive insult Topic of Public Interest For statements that contain a contribution to the public discourse with respect to a particular relevant topic of public interest, the settled case law of the German Federal Constitutional Court mandates a presumption in 25 As illustrated in Figure 2, the judge would perform the balancing freely based on all circumstances (which we do not implement) if there is no abusive insult' and if topic of public interest' and abusive criticism' are both yes or both no. 26 Maunz/Drig, Grundgesetz-Kommentar, 84.", "EL August 2018, Art. 
5 Abs.", "1, Rn.", "62.", "favor of free speech.", "28 Merkel prostitutes herself for the German car industry costing tax payers Example 9: Topic of public interest Example 9 comments on the right to stay of refugees, by this participating to the public debate in Germany about refugees from Syria.", "Accordingly, such statements usually outweigh the right to honor.", "They thus usually can be made, justified as having a legitimate interest based on 193 StGB, therefore usually not punishable .", "Abusive Criticism Finally, as abusive criticism' (Schmhkritik) settled case law has defined statements that go beyond plausible criticism by primarily intending to abusively offend the victim, hereby neglecting a substantiated contribution.", "29 In Example 10, the statement: Minister M, that asshole, is lying to all of us!!", "despite the word asshole', still contributes to the public discourse, which is why its primary purpose is not (only) to abusively offend.", "Abusive criticism thus usually leads to favoring the right to honor over freedom of expression.", "Without justification pursuant to 193 StGB, such statements are therefore usually punishable .", "In this section, we now use the schema in Figure 2 to annotate data and learn more about the reliability of an automated classification.", "In order to legally assess social media postings, we first need to annotate a corpus as a starting point for an analysis.", "Randomly sampling postings from the Internet is a possible strategy to collect data for an annotation, but we would not have any certainty that enough offending postings occur.", "Therefore, we decide to use an existing corpus that has already been annotated for moral offensiveness.", "We use the corpus provided by the GermEval shared 28 BVerfGE 7, 198, 212.", "task for detecting offensive language usage (Rup-penhofer et al., 2018).", "This dataset contains a mixture of German Twitter postings with a focus on German politics that are marked if the tweet is considered morally offensive from the subjective perception of the annotator.", "We work with a subset of 1,100 postings from this corpus, two-thirds of the postings (844) are marked as morally offensive.", "This enables us to investigate which statements commonly found in political debates are protected by the freedom of expression and which are not.", "Annotation The reference annotation of these postings is provided by a fully-qualified lawyer of German law applying the schema in Figure 2. We additionally received 200 postings from a second fully-qualified lawyer in order to compute an agreement score between the two legal experts, which is shown in Table 1. We report accuracy and Cohen's (Cohen, 1960) for each decision and show the agreement for a joint-decision where we treat all decisions for a posting as a single decision.", "The legal experts disagree slightly on the assumption of abusive criticism.", "This is not surprising as the evaluation of courts might differ in different instances, especially regarding the balancing of interests in the case at hand.", "Analysis Figure 3 shows the annotation results of the postings marked as morally offensive .", "We find that about half of the postings have to be categorized early on as not punishable for not containing a defamatory object, i.e. 
no living individual addressed or the addressed group is too unspecific.", "The remaining half is still to a large extent usually not punishable, mostly because the posts still contribute to a topic of public interest, despite being disparaging.", "(Figure 3: Legal categorization of annotated Tweets that were marked as containing an offense; of the 844 such Tweets, 50% (419) contain no third-party reference and are not punishable; of the remaining 50% (425), 39% (333) are usually not punishable, 8% (65) are factual claims, 1% (12) depend on individual circumstances, and 2% (15) are usually punishable.)", "A small number of cases are either factual claims that would require the taking of evidence by the court or value judgments that do not concern topics of public interest.", "Thus, despite containing statements that may be deemed morally offensive, the vast majority of statements are legally acceptable, i.e. protected by the freedom of expression.", "The punishable cases often contain insulting buzzwords such as 'slut', 'fat-ass' or 'scumbag' when directed at a private individual, not at a person of public interest.", "Furthermore, punishable statements addressing a specific group more frequently use offending comparisons or descriptions but no typical single or two-word insults.", "However, it is important to recall that the dataset has a focus on political debates.", "Accordingly, most statements tackle a topic of public interest, and are thus considered usually not punishable, granting a high degree of protection under the freedom of expression.", "This analysis also shows that shared tasks such as OffensEval (Zampieri et al., 2019) tackle essentially only one step in the legal assessment, namely whether a statement is disparaging.", "Thus, they fall short of valuing the freedom of expression, which is in particular a problem for public discourse such as political debates, where opinions are often accompanied by 'bad' language.",
"For automated detection, it would seem straightforward to distinguish between punishable and not punishable postings.", "This approach requires an extremely large amount of data for each of the two classes, which we do not have.", "The data distribution is skewed, with the punishable class being extremely small, which makes this direct approach infeasible.", "Instead, we investigate how well each of the binary decisions shown in Figure 2 can be learned independently, which has a less skewed distribution.", "We use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) for classification. [30]", "We use the 300-dimensional German pre-trained word embeddings provided by Grave et al. (2018), which are trained on the German common crawl.",
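As a concrete reading of the setup just described, one independent binary classifier per decision point, realized as a BiLSTM over pre-trained German fastText embeddings, a minimal PyTorch sketch follows. The hyperparameters, the `decisions` label names and the embedding-loading interface are illustrative assumptions, not the authors' released configuration.

```python
import torch
import torch.nn as nn

class BinaryDecisionClassifier(nn.Module):
    """BiLSTM binary classifier for one decision point of the schema."""
    def __init__(self, embedding_matrix: torch.Tensor, hidden: int = 128):
        super().__init__()
        # 300-dim German fastText vectors (Grave et al., 2018), kept frozen.
        self.emb = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(embedding_matrix.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)    # yes / no

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.emb(token_ids)                # (batch, seq, 300)
        _, (h, _) = self.lstm(x)               # final hidden states
        h = torch.cat([h[-2], h[-1]], dim=-1)  # concat both directions
        return self.out(h)                     # logits over {no, yes}

# One independent model per binary decision of Figure 2 (names illustrative):
decisions = ["individual_addressed", "specific_group", "disparaging",
             "factual_claim", "abusive_insult", "public_interest",
             "abusive_criticism"]
# models = {d: BinaryDecisionClassifier(emb_matrix) for d in decisions}
```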
"Table 2 shows averaged 10-fold CV results for each decision point.", "We observe that the accuracy is close to the underlying distribution of the two classes.", "The classification of the defamatory object has a mediocre performance.", "In particular, an insufficient coverage of group names and names of individuals in the dataset seems to be the main cause, as the 'no' classes usually perform considerably better than the 'yes' classes.", "The classification of the decisions under defamatory conduct follows a similar trend.", "The few positive instances of factual claim, abusive insult and abusive criticism prevent a reliable distinction of these cases.", "The next step would be to investigate how well the classification works in sequence, i.e. continuing the classification with the positively categorized instances of the previous step.", "However, the independent classification already shows that the amount of data is insufficient.", "Therefore, we turn next to the more pressing question of how to generate more data in a scalable way, especially without relying on expensive legal experts as annotators.",
"A scalable annotation of more data requires that laymen can be instructed in a way that enables them to solve the task at hand.", "Laymen are readily available, for instance via crowdsourcing, but also as student assistants who can be employed more cheaply than legal experts for annotating data.", "Setup: We compare the annotation performance of both random crowd workers and student assistants.", "The crowd workers and the student assistants were required to speak German.", "We have no information on the educational background of the crowd workers, but we ensured that the student assistants were not students of law-related subjects.", "We prepared a simplified manual [31] based on Figure 2, which is supplemented with text examples for each decision to guide the layman through the annotation of each decision.", "Footnote 31: github.com/Horsmann/NAACL-2019-legal", "We use the crowdsourcing platform figure-eight.com to let crowd workers and student assistants re-annotate the 1,100 postings for which we have a reference annotation by a legal expert.", "Each posting is annotated by three annotators.", "The annotation results are shown in Table 3.", "It is to be expected that some annotators will perform better than others, but distinguishing the 'good' from the 'bad' is an additional challenge, which we will not deal with here.", "Instead, we aggregate the annotations of all participants in a voting-like fashion, taking in each case the majority vote for each decision. [32]", "Footnote 32: We restrict the comparisons to postings for which we have three votes of the respective sub-group.", "This provides us with an approximation of the average layman performance on this task, which is the key information that we are interested in.",
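The voting-style aggregation just described, and the agreement scores of the kind reported in Tables 1 and 3, are straightforward to reproduce. A small sketch in pure Python follows, with a hand-rolled Cohen's kappa so no external dependency is assumed; the example labels are invented.

```python
from collections import Counter

def majority_vote(votes):
    """Aggregate one decision: the label chosen by most annotators."""
    return Counter(votes).most_common(1)[0][0]

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators over paired labels."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p_e = sum((a.count(lab) / n) * (b.count(lab) / n)      # chance agreement
              for lab in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# e.g., three layman votes on "disparaging" for one posting:
print(majority_vote(["yes", "no", "yes"]))                 # -> "yes"
print(round(cohens_kappa(["yes", "no", "yes", "no"],
                         ["yes", "no", "no", "no"]), 2))   # -> 0.5
```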
"Analysis: The results show that student assistants solve this task considerably better than crowd workers.", "In particular, the recognition of references to a specific group poses the biggest challenge for crowd workers, which also explains why this group performs much more poorly than the student assistants.", "As shown in Figure 2, the evaluation of a post ends if neither a living individual nor a specific group is addressed.", "If either of the first two decisions is incorrect, an annotator automatically makes up to five additional follow-up errors.", "The student assistants applied the manual considerably more consistently than the crowd workers, leading to fewer follow-up errors.", "Determining the referenced individual is also frequently challenging when several Twitter users are referenced by an at-mention, which introduces an uncertainty that the statement might refer to one of the linked users.", "We also find that the laymen tend to apply a more lenient interpretation of what is disparaging and consider many statements as non-disparaging, although in the legal sense already an allegation of shortcomings [33] that could reduce the victim's social standing is disparaging.", "Footnote 33: e.g. 'I am not sure whether John knows what he's doing.'", "The annotation results of the student assistants are encouraging for obtaining sufficient training data for a larger study on automated classification, i.e. a correct automated classification of the first two decisions would already be able to exclude many cases that do not have to be deleted based on the Network Enforcement Act.",
"We investigated which offenses found in German political Tweets constitute defamatory offenses under German criminal law that social media operators are obliged to delete under the Network Enforcement Act.", "Following the dogmatic approach of civil law systems, we started with an analysis of the legal framework for defamatory offenses in the German Criminal Code, along with its foundations in the balancing between the potential offender's freedom of expression and the potential victim's right to honor.", "We derived from this consideration a schema suited for data annotation, consisting of a sequence of binary decisions to determine if a statement constitutes a defamatory offense, which we used for annotating data.", "We find that the majority of the morally offensive postings in our dataset still contribute to the public discourse and are, hence, protected by the freedom of expression.", "We also investigated if laymen can be instructed to use this annotation schema to facilitate an inexpensive annotation of more data for classifier training.", "We find that laymen suited to the task can be found, but in particular the legal notions of a specific group of persons and the scope of what is considered disparaging are challenging for them.", "In future work, we will investigate the usefulness of layman-annotated data for an automated classification.", "Furthermore, we will expand our work by additionally investigating the criminal offense of incitement to hatred (§ 130 StGB) and its implications for the freedom of expression.", "We would like to thank Tatiana Günster for taking the time to provide us with a second legal opinion.", "Furthermore, we would like to thank Emilie Mathieu for helpful corrections and Michael Wojatzki for the helpful discussions about the user-study design." ]
[ "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "objective", "objective", "result", "objective", "objective", "other", "other" ]
[ "As a broad and major category in machine reading comprehension (MRC), the generalized goal of discriminative MRC is answer prediction from the given materials.", "However, the focuses of various discriminative MRC tasks may be diverse enough: multi-choice MRC requires model to highlight and integrate all potential critical evidence globally; while extractive MRC focuses on higher local boundary preciseness for answer extraction.", "Among previous works, there lacks a unified design with pertinence for the overall discriminative MRC tasks.", "To fill in above gap, we propose a lightweight PO S-Enhanced I terative Co-Attention Net work ( POI-Net ) as the first attempt of unified modeling with pertinence, to handle diverse discriminative MRC tasks synchronously.", "Nearly without introducing more parameters, our lite unified design brings model significant improvement with both encoder and decoder components.", "The evaluation results on four discriminative MRC benchmarks consistently indicate the general effectiveness and applicability of our model, and the code is available at https://github.", "com/Yilin1111/poi-net .", "Machine reading comprehension (MRC) as a challenging branch in NLU, has two major categories: generative MRC which emphasizes on answer generation (Kocisk et al., 2018), and discriminative MRC which focuses on answer prediction from given contexts (Baradaran et al., 2020).", "Among them, discriminative MRC is in great attention of researchers due to its plentiful application scenarios, such as extractive and multi-choice MRC two major subcategories.", "Given a question with corresponding passage, extractive MRC asks for precise answer span extraction in passage (Joshi et al., Corresponding author.", "Multi-choice MRC Example ...", "In addition, Lynn's pioneering efforts also provide public educational forums via Green Scenes a series of three hour events , each focusing on specific topics teaching Hoosiers how to lead greener lifestyles .", "...", "Q: What can we learn about Green Scenes ?", "A. It is a scene set in a three-hour film .", "B. It is a series of events focusing on green life .", "( Golden )", "C. It is a film set in Central Indiana.", "D. 
"Table 1: Different focuses of the multi-choice MRC task (RACE) and the extractive MRC task (SQuAD 2.0).", "Multi-choice MRC example: '... In addition, Lynn's pioneering efforts also provide public educational forums via Green Scenes, a series of three-hour events, each focusing on specific topics teaching Hoosiers how to lead greener lifestyles. ...' Q: What can we learn about Green Scenes? A. It is a scene set in a three-hour film. B. It is a series of events focusing on green life. (Golden) C. It is a film set in Central Indiana. D. It is a forum focusing on green lifestyle.", "Extractive MRC example: '... Early versions were in use by 1851, but the most successful indicator was developed for the high speed engine inventor and manufacturer Charles Porter by Charles Richard and exhibited at London Exhibition in 1862. ...' Q: Where was the Charles Porter steam engine indicator shown? Golden answer: London Exhibition. Imprecise answer 1: London Exhibition in 1862. Imprecise answer 2: exhibited at London Exhibition.",
"Except for the only common goal shared by different discriminative MRCs, the focuses of extractive and multi-choice MRC differ to a large extent due to the diversity in the styles of the predicted answers: multi-choice MRC usually requires highlighting and integrating all potential critical information across the whole passage, while extractive MRC pays more attention to precise span boundary extraction at the local level, since the rough scope of the answer span can be located relatively easily, as shown in Table 1.", "In the MRC field, several previous works perform general-purpose language modeling with considerable computing cost at the encoding aspect (Devlin et al., 2019; Clark et al., 2020; Zhang et al., 2020c), or simply splice texts among diverse MRC tasks to expand the training dataset (Khashabi et al., 2020), without a delicate and specialized design for subcategories in discriminative MRC.", "(Figure 1: Overview of POI-Net: a POS-Enhanced Embedding Layer, a Preliminary Interaction Layer, T iterative Interaction Layers and an Attention Integration Layer feeding an Output Layer, organized into four stages: Encoding, Interaction, Integration and Output.)",
global level, with especial effect for multi-choice MRC.", "To handle compound questions with limited attention, human will highlight critical information in turns, and update recognition and attention distribution iteratively.", "Inspired by above reconsideration strategy, we design Iterative Co-Attention Mechanism with no additional parameter, which iteratively executes the interaction between passage and question-option ( Q O ) pair globally in turns.", "In the multi-choice example from Table 1, during the first interaction, model may only focus on texts related to rough impression of Q O pair such as Green Scenes , ignoring plentiful but scattered critical information.", "But with sufficient iterative interaction, model can ultimately collect all detailed evidence (bold in Table 1).", "Furthermore, we explore a series of attention integration strategies for captured evidence among interaction turns.", "We combine two above methods and propose a novel model called POI-Net ( PO S-Enhanced I terative Co-Attention Net work), to alleviate the gap between machines and humans on discriminative MRC.", "We evaluate our model on two multi-choice MRC benchmarks, RACE (Lai et al., 2017) and DREAM (Sun et al., 2019a); and two extractive MRC benchmarks, SQuAD 1.1 (Rajpurkar et al., 2016) and SQuAD 2.0 (Rajpurkar et al., 2018), obtaining consistent and significant improvements, with nearly zero additional parameters.", "We aim to design a lightweight, universal and effective model architecture for various subcategories of discriminative MRC, and the overview of our model is shown in Figure 1, which consists of four main processes: Encoding (2.1), Interaction (2.2), Integration (2.3) and Output (2.4).", "Based on pre-trained contextualized encoder ALBERT (Lan et al., 2020), we encode input tokens with an additional POS embedding layer, as Figure 2 shows.", "Since the input sequence will be tokenized into subwords in the contextualized encoder, we tokenize sequences in word-level with nltk tokenizer (Bird et al., 2009) additionally and implement POS-Enhanced Encoder , where each subword in a complete word will share the same POS tag.", "In detail, input sequences are fed into nltk POS tagger to obtain the POS tag of each word such as JJ.", "Subject to Penn Treebank style, our adopted POS tagger has 36 POS tag types.", "Considering on the specific scenarios in discriminative MRC, we add additional SP E tag for special tokens (i.e., [ CLS ] , [ SEP ] ), P AD tag for padding tokens and ERR tag for potential unrecognized tokens.", "Appendix A shows detailed description of POS tags.", "The input embedding in our model is the normalized sum of Subword Embedding and POS Embedding .", "Following the basic design in embedding layers of BERT-style models, we retain Token Embedding E t , Segmentation Embedding E s and Position Embedding E p in subword-level, constituting Subword Embedding .", "For POS Embedding EPOS , we implement another embedding layer with the same embedding size to Subword Embedding , guaranteeing all above indicator embeddings are in the same vector space.", "Formulaically, the input embedding E can be represented as: E = Norm ( E t + E s + E p + EPOS ) , where Norm () is a layer normalization function (Ba et al., 2016).", "POI-Net employs a lightweight Iterative CoAttention module to simulate human inner reconsidering process, with no additional parameter.", "POI-Net splits all N input token embeddings into passage domain ( P ) and question (or Q O pair) domain ( Q ) to start P Q 
interactive process.", "To generate the overall impression of the given passage or question like humans, POI-Net concentrates all embeddings in corresponding domain into one Concentrated Embedding by max pooling: CE 1 P = MaxP ooling ( EP 0 , ..., EPN ) RH , CE 1 Q = MaxP ooling ( EQ 0 , ..., EQN ) RH , where H is the hidden size, P N/QN is the token amount of P/Q domain.", "Then POI-Net calculates the similarity between each token in EP /E Q and CE 1 Q /CE 1 P , to generate attention score s for each token contributing to the P Q pair.", "In detail, we use cosine similarly for calculation: s 1 P 0 , ..., s 1 PN = Cosine ([ EP 0 , ..., EPN ] , CE 1 Q ) , s 1 Q 0 , ..., s 1 QN = Cosine ([ EQ 0 , ..., EQN ] , CE 1 P ) .", "We normalize these scores to [0 , 1] by min-max scaling, then execute dot product with corresponding input embeddings: E 1 Pi = s 1 Pi E Pi , E 1 Qi = s 1 Qi E Qi , where s Pi is the normalized attention score of i th passage token embedding, E 1 Pi is the attention-enhanced embedding of i -th passage token after preliminary interaction (the 1 -st turn interaction).", "To model human reconsideration ability between passage and question in turns, we add iterable modules with co-attention mechanism, as the Iterative Interaction Layer in Figure 1.", "Detailed processes in the t -th turn interaction are similar to preliminary interaction: CE tQ = MaxP ooling ( E t 1 Q 0 , ..., E t 1 QN ) RH , CE tP = MaxP ooling ( E t 1 P 0 , ..., E t 1 PN ) RH , 8684 s tP 0 , ..., s tPN = Cosine ([ EP 0 , ..., EPN ] , CE tQ ) , s tQ 0 , ..., s tQN = Cosine ([ EQ 0 , ..., EQN ] , CE tP ) , E tPi = s tPi E Pi , E tQi = s tQi E Qi .", "Note that, during all iteration turns, we calculate attention scores with the original input embedding E instead of attention-enhanced embedding E t 1 from the ( t -1)-th turn, due to:", "on base size model), comparing to adopted method;", "2) With the same embedding E , attention integration in 2.3 can be optimized into attention score integration, which is computationally efficient with no additional embedding storage 1 .", "Human recommends to integrate all critical information from multiple turns for a comprehensive conclusion, instead of discarding all findings from previous consideration.", "In line with above thought, POI-Net returns attention-enhanced embedding E t = s t E of each turn (we only store s t in an optimized method), and integrates them with specific strategies.", "We design four integration strategies according to the contribution proportion of each turn and adopt Forgetting Strategy ultimately.", "Average Strategy : The attention network treats normalized attention score s t of each turn equally, and produces the ultimate representation vector R with average value of s t : R = 1 TT (cid:88) t =1 s t E RN H , where T is the total amount of iteration turns.", "Weighted Strategy : The attention network treats s t with two normalized weighted coefficients tP , tQ , which measure the contribution of the t -th turn calculation: R = (cid:80) Tt =1 tP s tP (cid:80) Tt =1 tP EP + (cid:80) Tt =1 tQ s tQ (cid:80) Tt =1 tQ EQ , t P = Max ( s t 1 Q 0 , ..., s t 1 QN ) , tQ = Max ( s t 1 P 0 , ..., s t 1 PN ) , 1 Approximate 15.3% training time is saved on average for each iteration turn.", "where s 0 Pi = s 0 Qi = 1 .", "0 .", "The design motivation for tP , tQ is intuitive: when Concentrated Embedding CE tQ /CE tP (calculating attention score at the t -th turn) has higher confidence (behaving as higher maximum value in s t 1 Q /s t 1 P due to max pooling 
calculation), system should pay more attention to input embedding E tP /E tQ at the t -th turn 2 .", "Forgetting Strategy : Since human will partly forget knowledge from previous consideration and focus on findings at current turn, we execute normalization operation of attention scores from two most previous turns iteratively:", "During the iterative normalization, the ultimate proportion of attention scores from previous turns will be diluted gradually, which simulates the effect of forgetting strategy 3 .", "Intuition Strategy : In some cases, human can solve simple questions in intuition without excessive consideration, thus we introduce two attenuation coefficients tP , tQ for attention scores from the t -th turn, which decrease gradually as the turn of iteration increases:", "Setting tP / tQ as learnable parameters cannot bring further improvement since the contribution proportion of each turn varies with the specific circumstance of input samples.", "3 Method of activation functions in LSTM (Hochreiter and Schmidhuber, 1997) may filter out information completely in one single-turn calculation, which cannot bring consistent improvement in our experiments.", "The input sequence for multi-choice MRC is [ CLS ] P [ SEP ] Q + O i [ SEP ] , where + denotes concatenation, O i denotes the i -th answer options.", "In Output Layer , the representation vector R RN H is fed into a max pooling operation to generate general representation: R = MaxP ooling ( R ) RH .", "Then a linear softmax layer is employed to calculate probabilities of options, and standard Cross Entropy Loss is employed as the total loss.", "Option with the largest probability is determined as the predicted answer.", "The input sequence for extractive MRC can be represented as [ CLS ] P [ SEP ] Q [ SEP ] , and we use a linear softmax layer to calculate start and end token probabilities in Output Layer .", "The training object is the sum of Cross Entropy Losses for the start and end token probabilities: L = y s log ( s ) + y e log ( e ) , s, e = softmax ( Linear ( R )) RN , where s/e are the start/end probabilities for all tokens and y s /y e are the start/end targets.", "score ij = s i + e j , 0 i j N,", "then the span with the maximum score score has is the predicted answer.", "The score of null answer is: score no = s 0 + e 0 , where the 0 -th token is [ CLS ] .", "The final score is calculated as score has score no , and a threshold is set to determine whether the question is answerable, which is heuristically computed in linear time.", "POI-Net predicts the span with the maximum score if the final score is above the threshold, and null answer otherwise.", "The experiments are run on 8 NVIDIA Tesla P40 GPUs and the implementation of POI-Net is based on the Pytorch implementation of ALBERT", "(Paszke et al., 2019).", "We set the maximum iteration turns in Iterative Co-Attention as 3 .", "Table 2 shows the hyper-parameters of POI-Net achieving reported results.", "As a supplement, the warmup rate is 0.1 for all tasks.", "We evaluate POI-Net on two multi-choice MRC benchmarks: RACE (Lai et al., 2017), DREAM (Sun et al., 2019a), and two extractive MRC benchmarks: SQuAD 1.1 (Rajpurkar et al., 2016) and SQuAD 2.0 (Rajpurkar et al., 2018).", "The detailed introduction is shown as following: RACE is a large-scale multi-choice MRC task collected from English examinations which contains nearly 100K questions.", "The passages are in the form of articles and most questions need contextual reasoning, and the domains of passages are 
diversified.", "DREAM is a dialogue-based dataset for multi-choice MRC, containing more than 10K questions.", "The challenge of the dataset is that more than 80% of the questions are non-extractive and require reasoning from multi-turn dialogues.", "SQuAD 1.1 is a widely used large-scale extractive MRC benchmark with more than 107K passage-question pairs, which are produced from Wikipedia.", "Models are asked to extract precise word span from the Wikipedia passage as the answer of the given passage.", "SQuAD 2.0 retains the questions in SQuAD 1.1 with over 53K unanswerable questions, which are similar to answerable ones.", "For SQuAD 2.0, models must not only answer questions when possible, but also abstain from answering when the question is unanswerable with the paragraph.", "We take accuracy as evaluation criteria for multi-choice benchmarks, while exact match (EM) and", "4 Due to the test sets of SQuAD 1.1 and SQuAD 2.0 are not open for free evaluation with different random seeds, we report the results on development set instead.", "a softer metric F1 score for extractive benchmarks.", "The average results of three random seeds are shown in Table 3, where we only display several BERT-style models with comparable parameters.", "Appendix B reports the complete comparison results with other public works on each benchmark.", "The results show that, for multi-choice benchmarks, our model outperforms most baselines and comparison works, and passes the significance test (Zhang et al., 2021) with p value < 0 .", "01 in DREAM (2.0% average improvement) and RACE (1.7% average improvement).", "And for extractive benchmarks, though the performance of baseline ALBERT is strong, our model still boosts it essentially (1.3% average improvement on EM for SQuAD 1.1 and 2.3% for SQuAD 2.0).", "Furthermore, we report the parameter scale and train-ing/inference time costs in 4.4.", "In this section, we implement POI-Net on ALBERT base for further discussions, and such settings have the similar quantitative tendency to POI-Net on ALBERT xxlarge .", "To evaluate the contribution of each component in POI-Net , we perform ablation studies on RACE and SQuAD 1.1 development sets and report the average results of three random seeds in Table 4.", "The results indicate that, both POS Embedding and Iterative Co-Attention Mechanism provide considerable contributions to POI-Net , but in different roles for certain MRC subcategory.", "For multi-choice MRC like RACE, Iterative CoAttention Mechanism contributes much more than POS Embedding (3.86% v.s. 1.14%), since multi-choice MRC requires to highlight and integrate critical information in passages comprehensively.", "Therefore, potential omission of critical evidence may be fatal for answer prediction, which is guaranteed by Iterative Co-Attention Mechanism , while precise evidence span boundary and POS attributes are not as important as the former.", "On the contrary, simple POS Embedding even brings a little more improvement than the well-designed Iterative Co-Attention (0.99% v.s. 
0.85% on EM) for extractive MRC.", "In these tasks, model focuses on answer span extraction with precise boundaries, and requires to discard interference words which not exactly match questions, such as redundant verbs, prepositions and infinitives ( politically and socially unstable instead of to be politically and socially unstable ), or partial interception of proper nouns ( Seljuk Turks instead of Turks ).", "With the POS attribute of each word, POI-Net locates the boundaries of answer spans precisely 5 .", "Since extractive MRC does not require comprehensive information integration like multi-5 Note that, the improvement of POI-Net on EM score is consistently higher than F1 score, as corroboration.", "is less significant.", "Besides, we also implement POI-Net on other contextualized encoders like BERT, and achieve significant improvements as Table 4 shows.", "The consistent and significant improvements over various baselines verify the universal effectiveness of POI-Net .", "To study how POS Embedding enhances token representation, we make a series of statistics on SQuAD 1.1 development set about:", "1) POS type of boundary words from predicted spans, as Table 5 shows;", "2) error POS classification of POI-Net and its baseline ALBERT base , as Figure 3 shows.", "The statistical results show, with POS Embedding , the overall distribution of the POS types of answer boundary words predicted by POI-Net is more similar to golden answer, compared with its baseline; and the amount of error POS classification cases by POI-Net also reduces significantly.", "And there are also two further findings:", "1) The correction proportion of error POS classification (8.09%) is much higher than correction proportion of overall error predictions (1.82%) in POI-Net , which indicates the correction of POS classification benefits mostly from the perception of word POS attributes by POS Embedding , instead of the improvement on overall accuracy.", "2) Though answers in SQuAD 1.1 incline to distribute in several specific POS types (NN, CD, NNS and JJ), POS Embedding prompts model to consider words in each POS type more equally than the baseline, and the predicted proportions of words in rarer POS type (IN, VBN, RB, VBG and so on) increase.", "Robustness is one of the important indicators to measure model performance, when there is numerous rough data or resource in applied tasks.", "To measure the anti-interference of POS Embedding , we randomly modify part of POS tags from nltk POS tagger to error tags, and the results on SQuAD 1.1 development set are shown in Table 6.", "The results indicate that, POI-Net possesses satisfactory POS Embedding robustness, and the improvement brought by POS Embedding will not suffer a lot with a slight disturbance (5%).", "We argue that the robustness of POI-Net may benefit from the integration with other contextualized embeddings, such as Token Embedding E t which encodes the contextual meaning of current word or subword.", "Though more violent interference (20%) may further hurt token representations, existing mature POS taggers achieve 97% + accuracy, which can prevent the occurrence of above situations.", "To explore the most suitable integration strategy and maximum iteration turn in Iterative CoAttention Mechanism , we implement our proposed strategies with different maximum iteration turns,", "together with a baseline replacing Iterative CoAttention mechanism by a widely used Multihead Co-Attention mechanism (Devlin et al., 2019; Zhang et al., 2020a, 2021) for comparison 
in Figure 4.", "We take RACE as the evaluation benchmark due to the significant effect of the attention mechanism on multi-choice MRC.", "As the figure shows, the forgetting strategy leads to the best performance, with a slight improvement over the weighted strategy.", "Both strategies are in line with the logical evidence integration of the human reconsidering process, while the average and intuition strategies may work against common human logic.", "From the trends of the four strategies over multiple iterations, we conclude that 2 or 3 iteration turns for Iterative Co-Attention lead to an appropriate result, because:", "1) fewer iteration turns may lead to inadequate interaction between passage and question, so the model may focus on rough cognition instead of exhaustive critical information;", "2) excessive iteration turns may lead to over-integration of information, diminishing the contribution of truly critical evidence.", "Compared to the typical Multi-head Co-Attention mechanism, our proposed Iterative Co-Attention mechanism obtains higher performance with more iterations, indicating a stronger iterative reconsideration ability.", "Besides, Iterative Co-Attention beats Multi-head Co-Attention in both parameter size and training time cost.", "As the parameter comparison in Table 7 shows, POI-Net brings essentially no additional parameters except a linear embedding layer for POS Embedding .", "The Multi-head Co-Attention mechanism and models based on it (like DUMA in Table", "3) introduce many more parameters (Table 7, training parameters in POI-Net and baselines: ALBERT base (Lan et al., 2020) 12M; ALBERT base (rerun) 11.14M; Multi-head Co-Attention on ALBERT base 17.94M; POI-Net on ALBERT base 11.15M; ALBERT xxlarge (Lan et al., 2020) 235M; ALBERT xxlarge (rerun) 212.29M; Multi-head Co-Attention on ALBERT xxlarge 404.50M; POI-Net on ALBERT xxlarge 212.30M), with slightly", "lower performance.", "We also record time costs on RACE for one training epoch on ALBERT base : Iterative Co-Attention costs 54, 62, 72, 83, and 96 minutes from 0-turn to 4-turn iterations, while Multi-head Co-Attention costs 54, 65, 76, 89, and 109 minutes instead, an increase of 8.", "3% on average.", "We visualize the discriminative MRC examples from Table 1, as Figure 5 shows.", "For the extractive example, benefiting from POS Embedding , POI-Net predicts the precise answer span, based on the interrogative qualifier where and the POS attributes of the controversial boundary tokens exhibited, at, London, Exhibition, 1862 .", "For the multi-choice example, without the proposed Iterative Co-Attention Mechanism , the overall distribution of attention is more scattered.", "The baseline only notices special tokens like [ CLS ] at the 0-th turn, and even the interrogative qualifier how , due to its similar usage to what in the question.", "With the execution of Iterative Co-Attention , POI-Net pays more attention to discrete critical words like Green Scenes and events at the 1st turn, series and focusing at the 2nd turn, and greener lifestyle at the 3rd turn.", "After integrating all of the above critical evidence, POI-Net ultimately predicts the golden option.", "To cope with challenging MRC tasks, numerous powerful pre-trained language models (PLMs) have been proposed (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020).", "Though advanced PLMs demonstrate strong ability in contextual representation, the lack of explicit semantic and linguistic clues becomes the bottleneck of previous
works.", "Benefiting from the development of semantic role labeling (Li et al., 2018) and dependency syntactic parsing (Zhou and Zhao, 2019), some researchers focus on enhancing semantic representations.", "Zhang et al. (2020b) strengthen token representations by fusing semantic role labels, while Zhang et al. (2020c) and Bai et al. (2021) implement additional self-attention layers to encode syntactic dependencies.", "Furthermore, Mihaylov and Frank (2019) employ multiple discourse-aware semantic annotations for MRC on narrative texts.", "Instead of semantic information, we pay attention to more accessible part-of-speech (POS) information, which has been widely used in non-MRC fields such as open-domain QA (Chen et al., 2017), with much lower pre-processing computation cost but higher accuracy (Bohnet et al., 2018; Strubell et al., 2018; Zhou et al., 2020).", "However, previous applications of POS attributes mostly stay at the level of primitive and rough embedding methods (Huang et al., 2018), leading to much smaller improvements than the proposed POI-Net .", "In the discriminative MRC field, various attention mechanisms (Raffel and Ellis, 2015; Seo et al., 2017; Wang et al., 2017; Vaswani et al., 2017) play increasingly important roles.", "Initially, attention mechanisms were mainly adopted for extractive MRC (Yu et al., 2018; Cui et al., 2021), such as the multiple polishing of answer spans (Xiong et al., 2017) and the generation of multi-granularity representations (Zheng et al., 2020; Chen et al., 2020).", "Recently, researchers have noticed their special effect for multi-choice MRC.", "Zhang et al. (2020a) model domains bidirectionally with a dual co-matching network, Jin et al. (2020) use multi-step attention as the classifier, and Zhu et al. (2020) design multi-head co-attention for collaborative interactions.", "We thus propose a universal Iterative Co-Attention mechanism, which performs interaction between paired input domains iteratively, to enhance discriminative MRC.", "Unlike other works that introduce numerous parameters via complicated attention networks (Zhang et al., 2020a), our POI-Net is more effective and efficient, introducing almost no additional parameters.", "In this work, we propose the POS-Enhanced Iterative Co-Attention Network ( POI-Net ), a lightweight unified model for multiple subcategories of discriminative MRC.", "POI-Net utilizes POS Embedding to encode POS attributes for precise answer boundaries, and the Iterative Co-Attention Mechanism with an integration strategy is employed to highlight and integrate critical information on the decoding side, with almost no additional parameters.", "As the first effective unified model tailored to different types of discriminative MRC, evaluation results on four extractive and multi-choice MRC benchmarks consistently indicate the general effectiveness and applicability of our model." ]
[ "abstain", "abstain", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective" ]
[ "Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.", "Since the distribution of outlier utterances is arbitrary and unknown in the training stage, existing methods commonly rely on strong assumptions on data distribution such as mixture of Gaussians to make inference, resulting in either complex multi-step training procedures or hand-crafted rules such as confidence threshold selection for outlier detection.", "In this paper, we propose a simple yet effective method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario in training, which requires no assumption on data distribution and no additional postprocessing or threshold setting.", "Specifically, we construct a set of pseudo outliers in the training stage, by generating synthetic outliers using inliner features via self-supervision and sampling out-of-scope sentences from easily available open-domain datasets.", "The pseudo outliers are used to train a discriminative classifier that can be directly applied to and generalize well on the test task.", "We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.", "Our code has been released at https:// github.com/liam0949/DCLOOS .", "Conversational system is becoming an indispensable component in a variety of AI applications and acts as an interactive interface provided to users to improve user experience.", "Language understanding is essential for conversational systems to provide appropriate responses to users, and intent detection is usually the first step of language understanding.", "The primary goal is to identify diverse intentions Equal contribution.", "behind user utterances, which is often formalized as a classification task.", "However, intent classes defined during training are inevitably inadequate to cover all possible user intents at the test stage due to the diversity and randomness of user utterances.", "Hence, out-of-scope (or unknown) intent detection is essential, which aims to develop a model that can accurately identify known (seen in training) intent classes while detecting the out-of-scope classes that are not encountered during training.", "Due to the practical importance of out-of-scope intent detection, recent efforts have attempted to solve this problem by developing effective intent classification models.", "In general, previous works approach this problem by learning decision boundaries for known intents and then using some confidence measure to distinguish known and unknown intents.", "For examples, LMCL (Lin and Xu, 2019) learns the decision boundaries with a margin-based optimization objective, and SEG (Yan et al., 2020b) assumes the known intent classes follow the distribution of mixture of Gaussians.", "After learning the decision boundaries, an off-the-shell outlier detection algorithm such as LOF (Breunig et al., 2000) is commonly employed to derive confidence scores (Yan et al., 2020b; Shu et al., 2017; Lin and Xu, 2019; Hendrycks and Gimpel, 2017).", "If the confidence score of a test sample is lower than a predefined threshold, it is identified as an outlier.", "However, it may be problematic to learn decision boundaries solely based on the training examples of known intent classes.", "First, if there are sufficient training examples, the learned decision boundaries can be expected to generalize well on known intent classes, but not on the unknown.", "Therefore, extra steps 
are required in previous methods, such as using an additional outlier detection algorithm at the test stage or adjusting the confidence threshold by cross-validation.", "On the other hand, if there are not sufficient training examples, the learned boundaries may not generalize well on either known or unknown intents.", "As a result, these methods often underperform when not enough training data is given.", "Hence, it is important to provide learning signals of unknown intents at the training stage to overcome these limitations.", "In contrast to previous works, we adopt a different approach by explicitly modeling the distribution of unknown intents.", "Particularly, we construct a set of pseudo out-of-scope examples to aid the training process.", "We hypothesize that in the semantic feature space, real-world outliers can be well represented by two types: hard outliers that are geometrically close to the inliers, and easy outliers that are distant from the inliers.", "For the hard ones, we construct them in a self-supervised manner by forming convex combinations of the features of inliers from different classes.", "For the easy ones, the assumption is that they are largely unrelated to the known intent classes, so they can be used to simulate the randomness and diversity of user utterances.", "They can be easily constructed using public datasets.", "For example, in our experiments, we randomly collect sentences from datasets of other NLP tasks such as question answering and sentiment analysis as open-domain outliers.", "In effect, by constructing pseudo outliers for the unknown class during training, we form a consistent (K+1)-way classification task (K known classes + 1 unknown class) for both training and test.", "Our model can be trained with a cross-entropy loss and directly applied to test data for intent classification and outlier detection without requiring any further steps.", "As shown in Figure 1 (best viewed in color and enlarged), our method can learn better utterance representations, which make each known intent class more compact and push the outliers away from the inliers.", "Our main contributions are summarized as follows.", "We propose a novel out-of-scope intent detection approach by matching training and test tasks to bridge the gap between fitting to training data and generalizing to test data.", "We propose to efficiently construct two types of pseudo outliers by using a simple self-supervised method and leveraging publicly available auxiliary datasets.", "We conduct extensive experiments on four real-world dialogue datasets to demonstrate the effectiveness of our method and perform a detailed ablation study.", "Early studies on outlier detection often adopt unsupervised clustering methods to detect malformed data (Hodge and Austin, 2004; Chandola et al., 2009; Zimek et al., 2012).", "In recent years, a substantial body of work has been directed towards improving the generalization capacity of machine learning models on out-of-distribution (OOD) data (Ruff et al., 2021; Hendrycks et al., 2020a).", "Hendrycks and Gimpel (2017) find that simple statistics derived from the output softmax probabilities of deep neural networks can be helpful for detecting OOD samples.", "Following this work, Liang et al. (2018) propose to use temperature scaling and add small perturbations to input images to enlarge the gap between in-scope and OOD samples.", "Lee et al.
(2017) propose to add a Kullback-Leibler divergence term in the loss function to encourage assigning lower maximum scores to OOD data.", "Recently, there is a line of work that employs synthetic or real-world auxiliary datasets to provide learning signals for improving model robustness under various forms of distribution shift (Goodfellow et al., 2015; Orhan, 2019; Hendrycks et al., 2019; Lee et al., 2017).", "Particularly, Hendrycks et al. (2018) propose to leverage large-scale public datasets to represent outliers at training time and form a regularization term based on them.", "This idea is similar to our proposal of constructing open-domain outliers, but we use a simpler, end-to-end, (K+1)-way discriminative training procedure without any regularization term or threshold parameter.", "While Hendrycks et al. (2020b) find that pretrained transformer-based models like BERT are intrinsically more robust to OOD data, they suggest that there is still room for improvement.", "Therefore, we build our model on top of BERT to improve intent detection under significant distribution shift.", "Previous methods for out-of-scope (or out-of-distribution) intent detection are commonly threshold-based, where models output a decision score and then compare it with a threshold that is predefined or selected by cross-validation.", "There are mainly three branches of related work.", "The first group uses a confidence score which determines the likelihood of an utterance being out-of-scope.", "For example, Shu et al. (2017) build m binary sigmoid classifiers for the m known classes and select a threshold to reject OOD inputs whose probabilities fall below the threshold across all m classifiers.", "Similar to the OOD data generation method used in Lee et al. (2017), Ryu et al. (2018) employ a GAN (Goodfellow et al., 2014) to generate simulated OOD examples with the generator and learn to reject simulated OOD examples with the discriminator.", "The second group identifies out-of-scope sentences through reconstruction loss.", "For example, Ryu et al. (2017) build an autoencoder to encode and decode in-scope utterances and obtain the reconstruction loss by comparing input embeddings with decoded ones.", "Out-of-scope utterances result in higher reconstruction loss.", "The third group leverages off-the-shelf outlier detection algorithms such as local outlier factor (LOF) (Breunig et al., 2000), one-class SVM (Schölkopf et al., 2001), robust covariance estimators (Rousseeuw and Driessen, 1999), and isolation forest (Liu et al., 2008) to detect out-of-scope examples.", "Utterance embeddings belonging to a specific class will be mapped to the corresponding cluster (usually modeled by a Gaussian distribution), while out-of-scope samples will be pushed away from all in-scope clusters.", "Examples of this kind include SEG (Yan et al., 2020a) and LMCL (Lin and Xu, 2019).", "Very recently, Zhang et al. (2021) propose to learn adaptive decision boundaries after pre-training instead of using off-the-shelf outlier detection algorithms.", "In addition, some other work focuses on out-of-scope detection in few-shot scenarios.", "Tan et al. (2019) leverage independent source datasets as simulated OOD examples to form a hinge loss term.", "Zhang et al.
(2020) propose to pretrain BERT on a natural language understanding task with large-scale training data to transfer useful information for few-shot intent detection.", "Finally, for our proposal of constructing synthetic outliers, the most similar method is Mixup proposed by Zhang et al. (2018).", "However, their method is designed for data augmentation to enhance in-distribution performance and requires corresponding combinations in the label space (Thulasidasan et al., 2019).", "Problem Statement: In a dialogue system, given K predefined intent classes S_known = {C_i}_{i=1}^K, an unknown intent detection model aims at predicting the category of an utterance u, which may be one of the known intents or the out-of-scope intent C_oos.", "Essentially, it is a (K+1)-way classification problem at the test stage.", "At the training stage, a set of N labeled utterances D_l = {(x_i, c_i) | c_i ∈ S_known}_{i=1}^N is provided for training.", "Previous methods typically train a K-way classifier for the known intents.", "Overview of Our Approach: The mismatch between the training and test tasks, i.e., K-way classification vs. (K+1)-way classification, leads to the use of strong assumptions and additional complexity in previous methods.", "Inspired by recent practice in meta learning to simulate test conditions in training (Vinyals et al., 2016), we propose to match the training and test settings.", "In essence, as shown in Figure 2, we formalize a (K+1)-way classification task in the training stage by constructing out-of-scope samples via self-supervision and from open-domain data.", "Our method simply trains a (K+1)-way classifier without making any assumption on the data distribution.", "After training, the classifier can be readily applied to the test task without any adaptation or post-processing.", "In the following, we elaborate on the details of our proposed method, starting with representation learning (Figure 2: an illustration of our proposed method).", "We employ BERT (Devlin et al., 2019), a deep Transformer network, as the text encoder.", "Specifically, we take the d-dimensional output vector of the special classification token [CLS] as the representation of an utterance u, i.e., h = BERT(u) ∈ R^d, where d = 768 by default.", "The training set D_l is then mapped to D_l^tr = {(h_i, c_i) | h_i = BERT(u_i), (u_i, c_i) ∈ D_l}_{i=1}^N in the feature space.", "We construct two different types of pseudo outliers to be used in the training stage: synthetic outliers that are generated by self-supervision, and open-domain outliers that can be easily acquired.", "Synthetic Outliers by Self-Supervision: To improve the generalization ability of the unknown intent detection model, we propose to generate hard outliers in the feature space, which may have similar representations to the inliers of known intent classes.", "We hypothesize that those outliers may be geometrically close to the inliers in the feature space.", "Based on this assumption, we propose a self-supervised method to generate the hard outliers using the training set D_l^tr.", "The synthetic outlier is formed as a convex combination h_oos = λh + (1 - λ)h′, where h and h′ are the representations of two utterances randomly sampled from different intent classes in D_l^tr, i.e., c ≠ c′.", "For example, λ can be sampled from a uniform distribution U(0, 1).", "In this case, when λ is close to 0 or 1, it will generate harder outliers that only contain a small proportion of mix-up from different classes.", "In essence, hard outliers act like support vectors
in SVM (Cortes and Vapnik, 1995), and harder outliers could help to train a more discriminative classifier.", "The generated outliers h_oos are assigned to the class C_oos, the (K+1)-th class in the feature space, forming a training set D_co^tr = {(h_i^oos, c_i = C_oos)}_{i=1}^M.", "Notice that since the outliers are generated in the feature space, it is very efficient to construct a large outlier set D_co^tr.", "Open-Domain Outliers: In practical dialogue systems, user inputs can be arbitrary free-form sentences.", "To simulate real-world outliers and provide learning signals representing them in training, we propose to construct a set of open-domain outliers, which can be easily obtained.", "Specifically, the set of free-form outliers D_fo can be constructed by collecting sentences from various public datasets that are disjoint from the training and test tasks.", "There are many datasets available, including the question answering dataset SQuAD 2.0 (Rajpurkar et al., 2018), the sentiment analysis datasets Yelp (Meng et al., 2018) and IMDB (Maas et al., 2011), and dialogue datasets from different domains.", "In the feature space, D_fo is mapped to D_fo^tr = {(h_i^oos, c_i = C_oos) | h_i^oos = BERT(u_i), u_i ∈ D_fo}_{i=1}^H.", "Both synthetic outliers and open-domain outliers are easy to construct.", "As will be demonstrated in Section 4, both of them are useful, but synthetic outliers are much more effective than open-domain outliers in improving the generalization ability of the trained (K+1)-way intent classifier.", "After constructing the pseudo outliers, our training set in the feature space consists of a set of inliers D_l^tr and two sets of outliers D_co^tr and D_fo^tr, i.e., D^tr = D_l^tr ∪ D_co^tr ∪ D_fo^tr and |D^tr| = N + M + H.", "Therefore, in the training stage, we can train a (K+1)-way classifier with the intent label set S = S_known ∪ {C_oos}, which can be directly applied at the test stage to identify unknown intents and classify known ones.", "In particular, we use a multilayer perceptron network, φ(·), as the classifier in the feature space.", "The selection of the classifier is flexible, and the only requirement is that it is differentiable.", "Then, we train our model using a cross-entropy loss: L = -(1/|D^tr|) Σ_{(h_i, c_i) ∈ D^tr} log [ exp(φ(h_i)_{c_i}/τ) / Σ_{j ∈ S} exp(φ(h_i)_j/τ) ], where φ(h_i)_{c_i} refers to the output logit of φ(·) for the ground-truth class c_i, and τ ∈ R^+ is an adjustable scalar temperature parameter.", "In this section, we present the experimental results of our proposed method on the targeted task of unknown intent detection.", "Given a test set comprising known and unknown intent classes, the primary goal of an unknown intent detection model is to assign correct intent labels to utterances in the test set.", "Notice that the unknown intent label C_oos is also included as a special class for prediction.", "We evaluate our proposed method on four benchmark datasets as follows, three of which are newly released dialogue datasets designed for intent detection.", "The statistics of the datasets are summarized in Table 2.
CLINC150 (Larson et al., 2019) is a dataset specially designed for out-of-scope intent detection, which consists of 150 known intent classes from 10 domains.", "The dataset includes 22,500 in-scope queries and 1,200 out-of-scope queries.", "For the in-scope ones, we follow the original splitting, i.e., 15,000, 3,000 and 4,500 for training, validation, and testing, respectively.", "For the out-of-scope ones, we group all of the 1,200 queries into the test set.", "StackOverflow (Xu et al., 2015) consists of 20 classes with 1,000 examples in each class.", "We follow the original splitting, i.e., 12,000 for training, 2,000 for validation, and 6,000 for testing.", "Banking (Casanueva et al., 2020) is a fine-grained intent detection dataset in the banking domain.", "It consists of 9,003, 1,000, and 3,080 user queries in the training, validation, and test sets, respectively.", "M-CID (Arora et al., 2020) is a recently released dataset related to COVID-19.", "We use the English subset of this dataset, referred to as M-CID-EN, in our experiments, which covers 16 intent classes.", "The splitting of M-CID-EN is 1,258 for training, 148 for validation, and 339 for testing.", "We extensively compare our method with the following unknown intent detection methods.", "Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017) employs the confidence score derived from the maximum softmax probability to predict the class of a sample.", "The idea under the hood is that the lower the confidence score is, the more likely the sample belongs to an unknown intent class.", "DOC (Shu et al., 2017) constructs m one-vs-rest sigmoid classifiers for the m seen classes.", "It uses the maximum probability from these classifiers as the confidence score to conduct classification.", "SEG (Yan et al., 2020a) models the intent distribution as a margin-constrained Gaussian mixture distribution and uses an additional outlier detector, local outlier factor (LOF), for (Table 1 here reports overall accuracy and macro F1-score for unknown intent detection with different proportions of seen classes on CLINC150, StackOverflow, Banking and M-CID-EN; across the 25%, 50% and 75% settings, our method attains the best scores in every case, e.g., accuracy/macro F1 of 88.44/80.73 at 25%, 88.33/86.67 at 50% and 88.08/89.43 at 75% on CLINC150, versus at most 76.50/67.74, 82.47/82.86 and 86.26/89.01 for the MSP, DOC, SEG, LMCL and Softmax baselines)", "unknown intent detection.", "LMCL (Lin and Xu, 2019) learns discriminative embeddings with a large margin cosine loss.", "It also uses LOF as the outlier detection algorithm.", "Softmax (Yan et al., 2020a) uses a softmax loss to learn discriminative features based on the training dataset, which also requires an additional outlier detector such
as LOF for detecting the unknown intents.", "To compare with existing methods, we follow the setting in LMCL (Lin and Xu, 2019).", "Specifically, for each dataset, we randomly sample 75%, 50%, and 25% of the intent classes from the training set as the known classes to conduct training, and we set aside the rest as the unknown classes for test.", "Notice that for training and validation, we only use data within the chosen known classes and do not expose our model to any test-time outliers.", "Unless otherwise specified, in each training batch, we keep the ratio of inliers, open-domain outliers and self-supervised outliers roughly as 1:1:4.", "This setting is empirically chosen and affected by the memory limit of the NVIDIA 2080Ti GPU, which we use for conducting the experiments.", "The number of pseudo outliers can be adjusted according to different environments, and a larger number of self-supervised outliers typically takes more time to converge.", "We use PyTorch (Paszke et al., 2019) as the back-end to conduct the experiments.", "We use the pretrained BERT model ( bert-base-uncased ) provided by Wolf et al. (2019) as the encoder for utterances.", "We use the output vector of the special classification token [CLS] as the utterance embedding and fix its dimension as 768 by default throughout all of our experiments.", "To ensure a fair comparison, all baselines and our model use the same encoder.", "For model optimization, we use AdamW provided by Wolf et al. (2019) to fine-tune BERT and Adam proposed by Kingma and Ba (2015) to train the MLP classifier φ(·).", "We set the learning rate for BERT as 1e-5 as suggested by Devlin et al. (2019).", "For the MLP classifier, the learning rate is fixed as 1e-4.", "Notice that the fine-tuning of BERT (Table 3 here reports the macro F1-score of the known classes and the F1-score of the unknown class with different proportions of seen classes; our method achieves the best unknown-class F1 in all settings, e.g., 92.35, 90.30 and 86.28 on CLINC150 at 25%, 50% and 75%, versus at most 83.04, 84.19 and 83.12 for the baselines)", "is conducted simultaneously with the training of the classifier φ(·) with the same cross-entropy loss.", "The MLP classifier φ(·) has a two-layer architecture with [1024, 1024] as hidden units.", "The temperature parameter τ is selected by cross-validation and set as 0.", "1 in all experiments.", "Following LMCL (Lin and Xu, 2019), we use overall accuracy and macro F1-score as evaluation metrics.", "All results reported in this section are the average of 10 runs with different random seeds, and each run is stopped until reaching a plateau on the validation set.", "For
baselines, we follow their original training settings except for using the aforementioned BERT as the text encoder.", "We present our main results in Table 1 and Table 3. Specifically, Table 1 gives results in overall accuracy and macro F1-score for all classes including the outlier class, while Table 3 shows results in macro F1-score for the known classes and F1-score for the outlier class, respectively.", "It can be seen that, on all benchmarks and in almost every setting, our model significantly outperforms the baselines.", "As shown in Table 3, our method achieves favorable performance on both unknown and known intent classes simultaneously.", "It is worth mentioning that the large improvements of our method in scenarios with small labeled training sets (the 25% and 50% settings) indicate its great potential in real-life applications, since a practical dialogue system often needs to deal with a larger proportion of outliers than inliers due to diverse user demographics, unfamiliarity with the platform, and the limited intent classes recognized by the system (especially at the early development stage).", "More importantly, referring to Table 3, as the proportion of known intents increases, it can be seen that the performance gains of the baselines mainly lie in the known classes.", "In contrast, our method can strike a better balance between the known and unknown classes without relying on an additional outlier detector, margin tuning, or threshold selection, demonstrating its high effectiveness and generality.", "Take the Softmax baseline for example: in the 75% case of CLINC150, it achieves a slightly higher result than our model on the known classes but a substantially lower result on the unknown ones.", "We conduct an ablation study on the effectiveness of the two kinds of pseudo outliers and summarize the results in Table 4. The first row of the three settings (25%, 50%, and 75%) stands for training solely with the labeled examples of CLINC150", "without using any pseudo outliers.", "In general, self-supervised synthetic outliers and open-domain outliers both lead to positive effects on classification performance.", "For each setting, comparing the second row with the third, we can observe that the synthetic outliers produced by convex combinations lead to a much larger performance gain than that of pre-collected open-domain outliers.", "Finally, combining them for training leads to the best results, as shown in the fourth row of each setting.", "We further investigate the impact of the two kinds of pseudo outliers separately, as shown in Figure 3.
We first fix the number of open-domain outliers as zero and then increase the number of self-supervised outliers.", "The results are displayed in Figure 3", "(a),", "(b) and", "(c).", "In particular, as the number of self-supervised outliers grows, the performance first increases quickly and then grows slowly.", "On the other hand, we fix the number of self-supervised outliers as zero and then increase the number of open-domain outliers.", "The results are shown in Figure 3", "(d),", "(e) and", "(f), where it can be seen that dozens of open-domain outliers can already bring significant improvements, though the gain is much smaller compared to that of the self-supervised outliers.", "Finally, we investigate the impact of the number of self-supervised outliers on overall intent detection accuracy with both the number of inliers and the number of open-domain outliers fixed as 100 per training batch.", "As shown in Figure 4, we increase the number of self-supervised outliers from 0 to 5,000.", "Note that 400 is the default setting used in Table 1 and Table 3. We can see that comparable results can be obtained for a wide range of numbers.", "However, when the number grows to 5,000, the performance exhibits a significant drop.", "We hypothesize that as the number increases, the (Table 4 here reports an ablation study on the effectiveness of pseudo outliers, with check marks indicating whether D_co^tr and/or D_fo^tr are used and columns Acc / Macro-F1 / F1-Unknown; at 25%: neither 19.79/41.05, D_co^tr only 81.96/71.15/87.8, D_fo^tr only 37.55/45.14/36.91, both 88.44/80.73/92.35; at 50%: neither 38.78/60.35, D_co^tr only 83.12/82.62/85.03, D_fo^tr only 48.62/63.19/28.82, both 88.33/86.67/90.30; at 75%: neither 57.43/73.6, D_co^tr only 84.16/86.9/80.36, D_fo^tr only 69.61/79.42/48.29, both 88.08/89.43/86.28)", "generated synthetic outliers may be less accurate, because some convex combinations may fall within the scope of known classes.", "To summarize, self-supervised outliers play a much more important role than open-domain outliers for unknown intent classification.", "Self-supervised outliers not only provide better learning signals for the unknown intents, but also impose an important positive effect on the known ones.", "For the open-domain outliers, if used alone, they can only provide limited benefit.", "But in combination with the self-supervised ones, they can further enhance the performance.", "To demonstrate the flexibility of our method in selecting open-domain outliers as described in Section 3.2, we train our model on CLINC150 using open-domain outliers from different sources.", "The results are summarized in Table 5.", "Specifically, Open-bank and Open-stack stand for using (Figure 5: comparison of training time per epoch and test time with baselines)", "the training set of Banking and StackOverflow as the source of open-domain outliers, respectively.", "Open-big stands for the source of open-domain outliers used in other experiments, which consists of 0.", "5 million sentences randomly selected from SQuAD 2.0 (Rajpurkar et al., 2018), Yelp (Meng et al., 2018), and IMDB (Maas et al., 2011).", "It can be seen that the performance of our model is insensitive to the selection of open-domain outliers.", "We provide a quantitative comparison on the training and test efficiency for our method and the baselines, by calculating the average time (in seconds) for training per epoch and the total time for testing under the 75% setting.", "Here, we only compare with the strongest baselines.", "As shown in Figure 5, even with the pseudo outliers, the training time of our method is comparable to that of the
baselines.", "Importantly, at the test stage, our method demonstrates significant advantages in efficiency, needing much less time to predict intent classes for all samples in the test set.", "We have proposed a simple, effective, and efficient approach for out-of-scope intent detection that overcomes the limitations of previous methods by matching train-test conditions.", "Particularly, at the training stage, we construct self-supervised and open-domain outliers to improve model generalization and simulate real outliers in the test stage.", "Extensive experiments on four dialogue datasets show that our approach significantly outperforms state-of-the-art methods.", "In the future, we plan to investigate the theoretical underpinnings of our approach and apply it to more applications.", "We would like to thank the anonymous reviewers for their helpful comments.", "This research was supported by the grant HK ITF UIM/377." ]
[ "abstain", "abstain", "objective", "method", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "abstain", "method", "method", "method", "result", "objective", "objective", "objective", "objective", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "objective", "result", "result", "objective", "other", "other" ]
[ "Synthesizing data for semantic parsing has gained increasing attention recently.", "However, most methods require handcrafted (high-precision) rules in their generative process, hindering the exploration of diverse unseen data.", "In this work, we propose a generative model which features a (non-neural) PCFG that models the composition of programs (e.g., SQL), and a BART-based translation model that maps a program to an utterance.", "Due to the simplicity of PCFG and pre-trained BART, our generative model can be efficiently learned from existing data at hand.", "Moreover, explicitly modeling compositions using PCFG leads to a better exploration of unseen programs, thus generate more diverse data.", "We evaluate our method in both in-domain and out-of-domain settings of text-to-SQL parsing on the standard benchmarks of GEOQUERY and SPIDER , respectively.", "Our empirical results show that the synthesized data generated from our model can substantially help a semantic parser achieve better compositional and domain generalization.", "Recently, synthesizing data for semantic parsing has gained increasing attention (Yu et al., 2018a, 2020; Zhong et al., 2020).", "However, these models require handcrafted rules (or templates) to synthesize new programs or utterance-program pairs.", "This can be sub-optimal as fixed rules cannot capture the underlying distribution of programs which usually vary across different domains (Herzig and Berant, 2019).", "Meanwhile, designing such rules also requires human involvement with expert knowledge.", "To alleviate this, we propose to learn a generative model from the existing data at hand.", "Our key observation is that programs (e.g., SQL) Work done at Salesforce Research.", "are formal languages that are intrinsically compositional.", "That is, the underlying grammar of programs is usually known and can be used to model the space of all possible programs effectively.", "Typically, grammars are used to constrain the program space during decoding of neural parsers (Yin and Neubig, 2018; Krishnamurthy et al., 2017).", "In this work, we utilize grammars to generate (unseen) programs, which are then used to synthesize more parallel data for semantic parsing.", "Concretely, we use text-to-SQL as an example task, and propose a generative model to synthesize utterance-SQL pairs.", "As illustrated in Figure 1, we first employ a probabilistic context-free grammar (PCFG) to model the distribution of SQL queries.", "Then with the help of a SQL-to-text translation model, the corresponding utterances of SQL queries are generated subsequently.", "Our approach is in the same spirit as back-translation (Sennrich et al., 2016).", "The major difference is that the target language', in our case, is a formal language with known underlying grammar.", "Just like the training of a semantic parser, the training of the data synthesizer requires a set of utterance-SQL pairs.", "Hence, our generative model is unlikely to be useful if it is as data-hungry as a semantic parser.", "Our two-stage data synthesis approach, i.e. 
the PCFG and the translation model, is designed to be more sample-efficient compared to a neural semantic parser.", "To achieve better sample efficiency, we use the non-neural parameterization of PCFG (Manning and Schütze, 1999) and estimate it via simple counting.", "For the translation model, we use the pre-trained text generation model BART (Lewis et al., 2020).", "We sample synthetic data from the generative model to pre-train a semantic parser.", "The resulting parameters can presumably provide a strong compositional inductive bias in the form of initializations.", "We conduct experiments on two text-to-SQL parsing datasets, namely GEOQUERY (Zelle and Mooney, 1996) and SPIDER (Yu et al., 2018b).", "In the query split of GEOQUERY , where training and test sets do not share SQL patterns, synthesized data helps boost the performance of a base parser by a large margin of 12.6%, leading to better compositional generalization of a parser.", "In the cross-domain setting of SPIDER (we use the terms domain and database interchangeably), synthesized data also boosts the performance by 3.1% in terms of execution accuracy, resulting in better domain generalization of a parser.", "Our work can be summarized as follows: We propose to efficiently learn a generative model that can synthesize parallel data for semantic parsing.", "We empirically show that the synthesized data can help a neural parser achieve better compositional and domain generalization.", "Our code and data are available at https://github.com/berlino/tensor2struct-public .", "Data Augmentation: Data augmentation for semantic parsing has gained increasing attention in recent years.", "Dong et al. (2017) use back-translation (Sennrich et al., 2016) to obtain paraphrases of questions.", "Jia and Liang (2016) induce a high-precision SCFG from training data to generate new recombinant examples.", "Yu et al.
(2018a, 2020) follow the same spirit and use a handcrafted SCFG rule to generate new parallel data.", "However, the production rules of these approaches usually have low coverage of meaning representations.", "In this work, instead of using an SCFG that accounts for rigid alignments between utterances and programs, we use a two-stage approach that implicitly models the alignments by taking advantage of powerful conditional text generators such", "as BART.", "In this way, our approach can generate more diverse data.", "The most related work to ours is GAZP (Zhong et al., 2020), which synthesizes parallel data directly on test databases in the context of cross-database semantic parsing.", "Our work complements GAZP and shows that synthesizing data indirectly in training databases can also be beneficial for cross-database semantic parsing.", "Crucially, we learn the distribution of SQL programs instead of relying on handcrafted templates as in GAZP.", "The induced distribution helps a model explore unseen programs, leading to better compositional generalization of a parser.", "Generative Models: In the history of semantic parsing, grammar-based generative models (Wong and Mooney, 2006, 2007; Zettlemoyer and Collins, 2005; Lu et al., 2008) have played an important role.", "However, learning and inference of such models are usually expensive as they typically require grammar induction (from text to logical forms).", "Moreover, their grammars are designed specifically for linguistically faithful languages, e.g., logical forms, and are thus not suitable for programming languages such as SQL.", "In contrast, our generative model is more flexible and efficient to train due to the two-stage decomposition.", "In this section, we explain how our method can be applied to text-to-SQL parsing.", "Formally, the labeled data for text-to-SQL parsing is given as a set of triples (x, d, y), where each triple represents an utterance x, the corresponding SQL query y and a relational database d.", "A probabilistic semantic parser is trained to maximize p(y|x, d).", "The goal of this work is to learn a generative model q(x, y|d) given databases, such that it can synthesize more data (i.e., triples) for training a semantic parser p(y|x, d).", "Note that we use different notations q and p to represent the generative model and the discriminative parser, respectively, where p(y|x, d) is not a posterior distribution of q.", "Instead, p is a separate model trained with a different parameterization from q.", "This is primarily due to the intractability of posterior inference of q(y|x, d).", "Specifically, we use a two-stage process to model the generation of utterance-SQL pairs as follows: q(x, y|d) = q(y|d) q(x|y, d) (1) (Figure 2, a simplified ASDL grammar for SQL: sql = (select select, cond? where); select = (agg aggs); agg = (agg_type agg_id, column col_id); agg_type = NoneAggOp | Max | Min; cond = And(cond left, cond right) | Or(cond left, cond right) | Not(cond
(2020b).", "Specifically, we use ASDL (Wang et al., 1997) formalism to define ASTs.", "To illustrate, Figure 2 shows a simplified ASDL grammar for SQL.", "The ASDL grammar of SQL can be represented by a set of context-free grammar (CFG) rules, as elaborated in the Appendix.", "By assuming the strong independence of each production rule, we model the probability of generating a SQL as the product of the probability of each production rule q ( y ) = (cid:81) Ni = q ( T i ) .", "It is well known that estimating the probability of a production rule via maximum-likelihood training is equivalent to simple counting, which is defined as follows: q ( N ) = C ( N ) (cid:80) C ( N ) (2) where C is the function that counts the number of occurrences of a production rule.", "With generated SQL queries at hand, we then show how we map SQLs to utterances to obtain more paired data.", "We notice that SQL-to-utterance translation, which belongs to the general task of conditional text generation, shares the same output space with summarization and machine translation.", "Fortunately, pre-trained models (Devlin et al., 2019; Radford et al., 2019) using self-supervised methods have shown great success for conditional text generation tasks.", "Hence, we take advantage of a contemporary pre-trained model, namely BART (Lewis et al., 2020), which is an encoder-decoder model that uses the Transformer architecture(Vaswani et al., 2017).", "To obtain a SQL-to-utterance translation model, we fine-tune the pre-trained BART model with our parallel data, with SQL being the input sequence and utterance being the output sequence.", "Empirically, we found that the desired translation model can be effectively obtained using the SQL-utterance pairs at hand, although the original BART model is designed for text-to-text translation only.", "After obtaining a trained generative model q ( x, y | d ) , we can sample synthetic pairs of ( x, y ) for each database d .", "The synthesized data will then be used as a complement to the original training data for a semantic parser.", "Following Yu et al. (2020), we use the strategy of first pre-training a parser with the synthesized data, and then fine-tuning it with the original training data.", "In this manner, the resulting parameters encode the compositional inductive bias introduced by our generative model.", "Another way to view pre-training is that a parser p ( y | x, d ) is essentially trained to approximate the posterior distribution of q ( y | x, d ) via massive samples from q ( x, y | d ) .", "We show that our generative model can be used to synthesize data in two settings of semantic parsing.", "We also present an ablation study for our approach.", "In-Domain Setting We first evaluate our method in the conventional in-domain setting where training and test data are from the same database.", "Specifically, we synthesize new data for the GEOQUERY dataset (Zelle and Mooney, 1996) which contains 880 utterance-SQL pairs on the database of U.S. geography.", "We evaluate in both question and query split, following Finegan-Dollak et al. 
(2018).", "The traditional question split ensures that no utterance is repeated between the train and test sets.", "This only tests limited generalization, as many utterances correspond to the same SQL query; the query split is introduced to ensure that neither utterances nor SQL queries repeat.", "The query split tests the compositional generalization of a semantic parser, as only fragments of test SQL queries occur in the training set.", "Out-of-Domain Setting: We then evaluate our method in a challenging out-of-domain setting where the training and test databases do not overlap.", "That is, a parser is trained on some source (Table 1, execution accuracies on GEOQUERY, question split / query split: seq2tree (Dong and Lapata, 2016) 62 / 31; GECA (Andreas, 2020) 68 / 49; template-based (2018) 55.2; seq2seq (Iyer et al., 2017) 72.5; Base Parser 70.9 / 49.5; Base Parser + Syn Pre-Train 74.6 / 62.1; w.o. trained PCFG 72.4 / 54.8; w.o. pre-trained BART 71.5 / 53.9)", "databases but evaluated in unseen target databases .", "Concretely, we apply our method to the SPIDER (Yu et al., 2018b) dataset, where the training set contains utterance-SQL pairs from 146 source databases and the test set contains data from a disjoint set of target databases.", "In this out-of-domain setting, we synthesize data in the source databases in the hope that it can promote domain generalization to unseen target databases.", "Training: As mentioned in Section 3.4, we use pre-training to augment a semantic parser with synthesized data.", "Specifically, we use the following four-step training procedure: 1) train a two-stage generative model, namely q(x, y|d), 2) sample new data from it, 3) pre-train a semantic parser p(y|x, d) using the synthesized data, 4) fine-tune the parser with the target training data.", "In the in-domain setting, one PCFG and one translation model are trained.", "In the out-of-domain setting, a separate PCFG is trained on each source database, assuming that each database has a different distribution of SQL queries.", "In contrast, a single translation model is trained and shared across source databases.", "We use RAT-SQL (Wang et al., 2020b) as our base parser.", "The size of the synthesized data is always proportional to the size of the original data.", "We tune the ratio in {1, 3, 6, 12}, and find that 3 and 6 work best for GEOQUERY and SPIDER , respectively.", "We use the RAT-SQL implementation from Wang et al. (2020a), which supports value prediction and evaluation by execution.", "We train it with the default hyper-parameters.", "For the SQL-to-utterance translation model, we reuse all the default hyperparameters from BART (Lewis et al., 2020).", "Both models are trained using NVIDIA V100 GPUs.", "For GEOQUERY , we report execution accuracy on the test sets of the question and query splits; for SPI
First, we can see that compared with previous work, our base parser achieves the best performance, confirming that we are using a strong base parser to test our synthesized data.", "With the pre-training using synthesized data, the performance of the base parsers is boosted in both GEOQUERY and SPIDER .", "In GEOQUERY , the pretraining results in the margin of 12.6% in the query split.", "This is somewhat expected as our generative model, especially q ( y | d ) directly models the composition underlying SQL queries, which helps a parser generalize better to unseen queries.", "Moreover, our sampled SQL queries cover around 15% test SQL queries of the query split, partially explaining why it is so beneficial for the query split.", "In SPIDER , the pre-training boosts the performance by 3.1% in terms of execution accuracy.", "Although our model does not synthesize data directly for target databases (which are unseen), it still helps a parser achieve better domain generalization.", "This contradicts the observation by Zhong et al. (2020) that synthesizing data in source databases is useless, even harmful without careful consistency calibration.", "We attribute this to the pre-training strategy we use, as in our preliminary experiments we found that directly mixing the synthesized data with the original training data is indeed harmful.", "We try to answer two questions:", "a) whether it is necessary to learn a PCFG ;", "b) whether pre-trained translation model, namely BART, is required for success .", "To answer the first question, we use a randomized version of q ( y | d ) where the probability of production rules are uniformly distributed, instead of being estimated from data in Equation (2).", "As Sampled SQLs ( y ) Generated Utterances ( x ) SELECT length FROM river WHERE traverse = \"new york\" What is the length of the river whose traverse is in New York city?", "shown in Table 1 and 2, this variant ( w.o. trained PCFG) still improves the base parsers, but with a smaller margin.", "This shows that a trained PCFG model is better at synthesizing useful SQL queries.", "To answer the second question, we use a randomly initialized SQL-to-utterance translation model instead of BART.", "As shown in Table 1 and 2, this variant ( w.o. 
pre-trained BART) results in a drop in performance as well, indicating that pre-trained BART is crucial for synthesizing useful utterances.", "Table 3 shows examples of synthesized paired data for GEOQUERY .", "In the positive examples, the sampled SQLs can be viewed as recombinations of SQLs fragments observed in the training data.", "For example, SELECT Sum(length) and traverse = colorado are SQL fragments from separate training examples.", "Our PCFG combines them together to form a new SQL, and the SQL-to-utterance model successfully maps it to a reasonable translation.", "The negative examples consist of two kinds of errors.", "First, the PCFG generated semantically invalid SQLs which cannot be mapped to reasonable utterances.", "This error is due to the independence assumption made by the PCFG.", "For instance, when a column and its corresponding entity is separately sampled, there is no guarantee that they form a meaningful clause, as shown in population = mississippi .", "To address this, future work might consider more powerful generative models to model the dependencies within and across clauses in a SQL.", "Second, the SQL-to-utterances model failed to translate the sampled SQLs, as shown in the last example.", "In this work, we propose to efficiently learn a generative model that can synthesize parallel data for semantic parsing.", "The synthesized data is used to pre-train a semantic parser and provide a strong inductive bias of compositionality.", "Empirical results on GEOQUERY and SPIDER show that the pre-training can help a parser achieve better compositional and domain generalization.", "We would like to thank the anonymous reviewers for their valuable comments.", "We thank Naihao Deng for providing the preprocessed database for GEOQUERY ." ]
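To make the PCFG-based sampling discussed in the analysis concrete, here is a toy, self-contained sketch; the grammar, rules, and probabilities are invented for illustration and do not come from the paper. Sampling each nonterminal independently also reproduces the independence-assumption failure mode noted above (a condition that does not match its table), and the uniform option corresponds to the "w.o. trained PCFG" ablation.

```python
import random

# Toy PCFG over a SQL-like fragment grammar (rules and probabilities
# are invented for illustration only).
GRAMMAR = {
    "Q":    [(["SELECT", "COL", "FROM", "TAB", "WHERE", "COND"], 0.7),
             (["SELECT", "COL", "FROM", "TAB"], 0.3)],
    "COL":  [(["length"], 0.5), (["population"], 0.5)],
    "TAB":  [(["river"], 0.6), (["state"], 0.4)],
    "COND": [(["traverse", "=", "'colorado'"], 0.8),
             (["area", ">", "1000"], 0.2)],
}

def sample(symbol="Q", uniform=False):
    if symbol not in GRAMMAR:          # terminal token
        return [symbol]
    rules = GRAMMAR[symbol]
    if uniform:                        # "w.o. trained PCFG" ablation
        rhs = random.choice(rules)[0]
    else:                              # sample by estimated rule probability
        rhs = random.choices([r for r, _ in rules],
                             weights=[p for _, p in rules])[0]
    # each nonterminal is expanded independently, which is exactly the
    # independence assumption that can yield invalid column/value pairs
    return [tok for s in rhs for tok in sample(s, uniform)]

print(" ".join(sample()))              # e.g. SELECT length FROM river ...
```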
[ "abstain", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "abstain", "abstain", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other" ]
[ "The hypernymy detection task has been addressed under various frameworks.", "Previously, the design of unsupervised hypernymy scores has been extensively studied.", "In contrast, supervised classifiers, especially distributional models, leverage the global contexts of terms to make predictions, but are more likely to suffer from lexical memorization.", "In this work, we revisit supervised distributional models for hypernymy detection.", "Rather than taking embeddings of two terms as classification inputs, we introduce a representation learning framework named Bidirectional Residual Relation Embeddings (BiRRE).", "In this model, a term pair is represented by a BiRRE vector as features for hypernymy classification, which models the possibility of a term being mapped to another in the embedding space by hypernymy relations.", "A Latent Projection Model with Negative Regularization (LPMNR) is proposed to simulate how hypernyms and hyponyms are generated by neural language models, and to generate BiRRE vectors based on bidirectional residuals of projections.", "Experiments verify BiRRE outperforms strong baselines over various evaluation frameworks.", "As a type of linguistic resources, hypernymy relations refer to is-a relations between terms.", "Such relations are frequently exploited in a wide range of NLP tasks, including taxonomy induction (Mao et al., 2018), lexical entailment (Vulic et al., 2017) and Web query understanding (Wang et al., 2015).", "In the NLP community, the task of hypernymy detection has been studied under various frameworks, e.g., unsupervised hypernym discovery (Roller et al., 2018; Chen et al., 2018; Chang et al., 2018), supervised hypernymy classification (Shwartz et al., 2016; Nguyen et al., Corresponding author. 2017), graded lexical entailment (Vulic et al., 2017).", "To address unsupervised hypernym discovery, pattern-based and distributional approaches are two mainstream types of methods.", "Pattern-based approaches use Hearst patterns (Hearst, 1992) and their variants to extract hypernymy relations from texts (Kozareva and Hovy, 2010; Roller and Erk, 2016).", "Distributional methods employ hypernymy measures (or called scores) to predict hypernymy based on distributional vectors (Santus et al., 2014, 2017), alleviating the pattern sparsity issue.", "Le et al. 
(2019) combine Hearst patterns and hyperbolic embeddings for unsupervised hypernym detection.", "Compared to unsupervised tasks, the supervised hypernymy detection task is formulated more directly, classifying a term pair as hypernymy or non-hypernymy based on the two terms' representations (Yu et al., 2015; Anke et al., 2016; Nguyen et al., 2017).", "Although this task definition is more straightforward, the corresponding methods receive criticism because they may suffer from lexical memorization (Levy et al., 2015), referring to the phenomenon that they only learn whether a term is a prototypical hypernym, rather than the actual relations between two terms.", "To address the problem, several methods combine other signals as inputs for hypernymy classifiers, such as dependency paths (Shwartz et al., 2016) and the WordNet concept hierarchy (Nguyen et al., 2017).", "Nonetheless, it is worth studying whether supervised classifiers can learn hypernymy relations purely based on distributional representations.", "In this paper, we revisit supervised distributional models for hypernymy detection, and propose a representation learning framework named Bidirectional Residual Relation Embeddings (BiRRE).", "To handle lexical memorization (Levy et al., 2015), we learn a BiRRE vector for each term pair as features for the classifier, avoiding using the two terms' embeddings directly.", "The BiRRE vector models the possibility of a term being mapped to another in the embedding space by hypernymy relations, learned via existing neural language models and supervised signals of the training set.", "Specifically, we introduce the Latent Projection Model with Negative Regularization (LPMNR) to simulate how hypernyms and hyponyms are generated in the embedding space.", "The BiRRE vectors are generated based on bidirectional residuals of the projection results of LPMNR.", "Experiments over multiple public datasets and various evaluation frameworks prove that BiRRE outperforms strong baselines.", "The rest of this paper is organized as follows.", "Section 2 summarizes the related work.", "The BiRRE framework is elaborated in Section 3, with experiments shown in Section 4. Finally, we conclude our paper and discuss future work in Section 5.", "2 Related Work", "In this section, we overview related work on various tasks related to hypernymy detection.", "Due to space limitations, we focus on recent advances and refer readers to Wang et al. (2017a) for earlier work.", "Pattern-based approaches date back to Hearst (1992), utilizing handcrafted patterns in English for text matching.", "An example of Hearst patterns is [...]
such as [...].", "They are employed to build large-scale taxonomies (Wu et al., 2012; Faralli et al., 2019).", "Although Hearst patterns are fairly simple, recent studies show they are highly useful for designing hypernymy measures (Roller et al., 2018; Le et al., 2019).", "Other approaches aim at improving the coverage of generalized Hearst patterns by automatic pattern expansion (Kozareva and Hovy, 2010; Roller and Erk, 2016), or considering other context-rich representations (such as Heterogeneous Information Networks (Shi et al., 2019)).", "A potential drawback of pattern-based methods is that the recall of extraction results over specific domains is limited (Alfarone and Davis, 2015), as textual patterns are naturally sparse in the corpus.", "To overcome the sparsity issue, distributional hypernymy measures model the degree of hypernymy within a term pair.", "A majority of these hypernymy measures are based on the Distributional Inclusion Hypothesis (DIH) (Weeds et al., 2004), meaning that a hypernym covers a broader spectrum of contexts, compared to its hyponyms.", "The improvements and variants of DIH include (Santus et al., 2014; Chen et al., 2018; Chang et al., 2018) and many others.", "A comprehensive overview of distributional hypernymy measures can be found in Santus et al. (2017).", "Recently, Le et al. (2019) combine Hearst patterns and distributional vectors for hypernym detection.", "Additionally, the work of graded lexical entailment (Vulic et al., 2017) and cross-lingual graded lexical entailment (Vulic et al., 2019) aims at computing numerical scores, indicating the degree of hypernymy of a term pair.", "For supervised hypernymy classification , traditional approaches employ distributional vectors of two terms as features, such as the Concat model, the Diff model, and the SimDiff model (Turney and Mohammad, 2015).", "Recently, several approaches have been proposed to learn hypernymy embeddings, considering the semantic hierarchies of concepts (Yu et al., 2015; Luu et al., 2016; Nguyen et al., 2017; Chang et al., 2018; Nickel and Kiela, 2018; Ganea et al., 2018; Rei et al., 2018; Chen et al., 2018).", "For example, Yu et al. (2015) learn hypernym and hyponym embeddings for a term by a max-margin neural network.", "Nguyen et al. (2017) propose hierarchical embeddings for hypernymy classification, jointly trained over texts and the WordNet concept hierarchy.", "Rei et al. (2018) propose a directional similarity neural network based on word embeddings to predict the degree of hypernymy between two terms.", "A number of models also encode terms in the hyperbolic space, such as the hyperbolic Lorentz Model (Nickel and Kiela, 2018), Hyperbolic Entailment Cones (Ganea et al., 2018), and others (Le et al., 2019; Aly et al., 2019).", "The hyperbolic geometry is more capable of modeling the transitivity property of hypernymy.", "Additionally, patterns and distributional vectors can also be combined for supervised hypernymy prediction, as in Shwartz et al. (2016); Held and Habash (2019) and several systems submitted to SemEval 2018 Task 9 (Camacho-Collados et al., 2018).", "Another type of supervised models can be categorized as projection-based approaches , which model how to map embeddings of a term to those of its hypernyms.", "Fu et al. (2014) is most influential, followed by a number of variants.", "Biemann et al. (2017); Wang et al. (2017b, 2019b) improve projection learning by considering explicit negative samples.", "The usage of orthogonal matrices is exploited in Wang et al.
(2019a).", "One advantage is that they do not perform classification on two terms' embeddings directly, alleviating lexical memoriza-x i y i hyper( x i ) res hyper ( x i , y i ) M2: Hypernym Projection hypo (1) ( y i ) x i y i Embedding Lookup hypo (2) ( y i ) ... hypo (N) ( y i ) res hypo ( x i , y i ) M1: Hyponym Projection Term Pairs Hypernymy Relations D (+) Non-hypernymy Relations D (-) Training Regularization Training Training Regularization M3: Hypernymy Relation Classification Hypernymy & Nonhypernymy Relations D (+) D (-) Hidden Layers Classifer BiRRE Vector Pre-processing r i Figure 1: The BiRRE framework for supervised hypernymy detection. tion (Levy et al., 2015).", "Compared to previous work, BiRRE is supervised, but does not minimize the classification error firstly.", "It uses LPMNR to learn hypernym/hyponym generation process by projection learning.", "Hence, it takes advantages of both traditional classification and projection-based approaches.", "In this section, we first introduce the task description and the BiRRE framework.", "The detailed steps and justifications are elaborated subsequently.", "Given two sets of term pairs: the training sets of hypernymy D (+) = { ( x i , y i ) } and non-hypernymy relations D ( ) = { ( x i , y i ) } , the task is to learn a classifier f to distinguish hypernymy vs. non-hypernymy relations.", "Particularly, y i is a hypernym of x i if ( x i , y i ) D (+) .", "For non-hypernym relations, the relation types between two terms x i and y i in D ( ) can be reverse-hypernymy, synonymy, antonymy, or unrelated, depending on the respective task and dataset settings.", "The BiRRE framework is shown in Figure 1, consisting of pre-processing and three major modules.", "Pre-processing: The pre-processing step of the BiRRE framework requires minimal computation.", "For each term pair ( x i , y i ) D (+) D ( ) , we retrieve the corresponding embedding vectors from any neural language models (e.g., Word2Vec, GloVe), without fine-tuning.", "Denote normalized embeddings of x i and y i as x i and y i , respectively.", "M1: The hyponym projection module learns how to map embeddings of a hypernym to those of its hyponyms.", "Consider the example in Figure 2. There are usually one-to-many mappings (in semantics) from hypernyms to hyponyms.", "Hence, we map a hypernym to its N semantically diverse hyponyms by LPMNR.", "We denote the N hyponym embeddings w.r.t. y i as hypo (1) ( y i ) , , hypo ( N ) ( y i ) 1 .", "Based on the difference between the true hyponym embeddings x i and the N predicted hyponym embeddings , we compute the hyponym residual vector res hypo ( x i , y i ) to measure the goodness of mapping from y i to x i .", "As shown in Biemann et al. 
(2017), the explicit usage of negative samples (i.e., non-hypernymy relations) improves the performance of projection learning.", "In this module, we take D(+) as the training set and D(-) for regularization purposes.", "M2: The hypernym projection module learns how to map embeddings of a hyponym to those of its hypernyms.", "Based on Figure 2, such mappings tend to be simpler.", "Hence, we only learn one mapping model from a hyponym to the embeddings of its hypernym.", "We denote the hypernym embeddings as hyper( x_i ).", "1 Because the training process is completed in the embedding space, our model learns to associate low-density hypernym regions with multiple high-density hyponym regions. Here, M^(1) y_i , ..., M^(N) y_i may refer to the distributions of word embeddings of hyponyms, with no guarantee that they refer to actual word embeddings.", "This step is learned by a simplified version of LPMNR.", "Similarly, we denote the hypernym residual vector as res_hyper( x_i , y_i ), measuring the goodness of the mapping from x_i to y_i .", "In this module, we also take D(+) as the training set and D(-) for regularization.", "M3: Finally, the BiRRE vector (denoted as r_i ) w.r.t. ( x_i , y_i ) is computed by concatenating res_hypo( x_i , y_i ) and res_hyper( x_i , y_i ).", "A feed-forward neural network is trained over D(+) and D(-) for hypernymy relation classification.", "The parameters of M3 are learned by back propagation, with the parameters of M1 and M2 fixed in this step.", "Previously, several approaches (Fu et al., 2014; Yamane et al., 2016) assume there is a d x d projection matrix M such that $M\vec{x}_i \approx \vec{y}_i$, where d is the dimension of the word embeddings, for ( x_i , y_i ) ∈ D(+).", "According to Wang et al. (2019a), the usage of orthogonal matrices has better performance for hypernymy prediction, as the cosine similarity of M x_i and y_i can be maximized when M x_i and y_i are normalized.", "Let M = { M^(1), ..., M^(N) } be the parameter collection of our hyponym projection model (i.e., N d x d orthogonal projection matrices).", "For each hypernym y_i , these N projection matrices map y_i to the embeddings of N semantically diverse hyponyms M^(1) y_i , ..., M^(N) y_i .", "The major challenge is that the explicit semantics of the N projections are unknown, and may vary across different datasets.", "To derive a unified solution for all scenarios, we introduce a latent variable $\alpha_i^{(p)} \in (0, 1)$ to represent the weight of ( x_i , y_i ) ∈ D(+) w.r.t. the projection matrix M^(p) ( p ∈ { 1, ..., N }, with $\sum_{(x_i,y_i) \in D^{(+)}} \alpha_i^{(p)} = 1$ ).", "The objective of hyponym projection is as follows: $\min_{\mathcal{M}} \sum_{(x_i,y_i) \in D^{(+)}} \sum_{p=1}^{N} \alpha_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ s.t. $M^{(p)\top} M^{(p)} = I_d, \forall p \in \{1, ..., N\}$ (1), where $I_d$ is the d x d identity matrix.", "A potential drawback of Eq. (1) is that it only considers the hypernymy relations D(+).", "The relation classification objective is not optimized.", "As Biemann et al. (2017) suggest, negative samples can be of help for learning projection regularizers.", "The regularizers push the projected hyponym embeddings of a term further away from its non-hyponyms, making hypernymy and non-hypernymy relations more separable.", "Hence, we reformulate Eq. (1) as: $\min_{\mathcal{M}} \frac{1}{|D^{(+)}|} \sum_{(x_i,y_i) \in D^{(+)}} \sum_{p=1}^{N} \alpha_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2 + \frac{\lambda}{|D^{(-)}|} \sum_{(x_i,y_i) \in D^{(-)}} \sum_{p=1}^{N} \beta_i^{(p)} (M^{(p)} \vec{y}_i)^{\top} \vec{x}_i$ s.t. $M^{(p)\top} M^{(p)} = I_d, \forall p \in \{1, ..., N\}$ (2), where $\lambda > 0$ is the regularization balancing factor.", "The latent variable $\beta_i^{(p)} \in (0, 1)$ is the weight of the negative sample ( x_i , y_i ) ∈ D(-) w.r.t. M^(p).", "The constraint $\sum_{(x_i,y_i) \in D^{(-)}} \beta_i^{(p)} = 1$ also holds.", "To the best of our knowledge, there is no standard off-the-shelf solution to Eq. (2).", "We therefore slightly change the regularization term of Eq. (2).", "The objective function is changed as follows, which we refer to as the Latent Projection Model with Negative Regularization (LPMNR): $\min_{\mathcal{M}} \frac{1}{|D^{(+)}|} \sum_{(x_i,y_i) \in D^{(+)}} \sum_{p=1}^{N} \alpha_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2 - \frac{\lambda}{|D^{(-)}|} \sum_{(x_i,y_i) \in D^{(-)}} \sum_{p=1}^{N} \beta_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ (3).", "2 For simplicity, we omit the constraints of the latent variables in the objective functions in this paper.", "Optimizing Eq. (3) is non-trivial due to the existence of the unknown weights $\alpha_i^{(p)}$ and $\beta_i^{(p)}$.", "In this paper, we present a dual-iterative algorithm to solve the problem.", "All values of $\alpha_i^{(p)}$ and $\beta_i^{(p)}$ are randomly initialized (with $\alpha_i^{(p)}, \beta_i^{(p)} \in (0, 1)$, $\sum_{(x_i,y_i) \in D^{(+)}} \alpha_i^{(p)} = 1$ and $\sum_{(x_i,y_i) \in D^{(-)}} \beta_i^{(p)} = 1$ ).", "In each iteration, we update the values of $\alpha_i^{(p)}$, $\beta_i^{(p)}$ and M^(p).", "When all the values of $\alpha_i^{(p)}$ and $\beta_i^{(p)}$ are fixed, Eq. (3) can be regarded as a variant of the Multi-Wahba problem (Wang et al., 2019a).", "We extend their work and give an SVD-based closed-form solution to Eq. (3) in Algorithm 1.", "Algorithm 1: Closed-form Solution to Eq. (3).", "Proof: It is trivial to see that the optimal value of each matrix is independent of the others.", "Hence, we only need to optimize: $\min_{M^{(p)}} \frac{1}{|D^{(+)}|} \sum_{(x_i,y_i) \in D^{(+)}} \alpha_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2 - \frac{\lambda}{|D^{(-)}|} \sum_{(x_i,y_i) \in D^{(-)}} \beta_i^{(p)} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ s.t. $M^{(p)\top} M^{(p)} = I_d$.", "For simplicity, let $\kappa = \lambda |D^{(+)}| / |D^{(-)}|$, with the superscript ( p ) omitted: $J(M) = \sum_{(x_i,y_i) \in D^{(+)}} \alpha_i \| M \vec{y}_i - \vec{x}_i \|^2 - \kappa \sum_{(x_i,y_i) \in D^{(-)}} \beta_i \| M \vec{y}_i - \vec{x}_i \|^2$ s.t. $M^{\top} M = I_d$.", "Define the matrix $B = \sum_{(x_i,y_i) \in D^{(+)}} \alpha_i \vec{x}_i \vec{y}_i^{\top} - \kappa \sum_{(x_i,y_i) \in D^{(-)}} \beta_i \vec{x}_i \vec{y}_i^{\top}$.", "We re-write the objective function as $J(M) = c - 2\,\mathrm{tr}(M B^{\top})$, for a constant c independent of M.", "Hence, we have transformed the problem into the Multi-Wahba problem (Wang et al., 2019a).", "J(M) is minimized when the optimal value of M is: $M^* = U\,\mathrm{diag}(1, ..., 1, \det(U)\det(V))\,V^{\top}$ with $U \Sigma V^{\top} = \mathrm{SVD}(B)$.", "After the optimal values of M^(p) are computed, the values of all $\| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ are known.", "In this condition, we fix the values of M^(p) and update all $\alpha_i^{(p)}$ and $\beta_i^{(p)}$.", "We turn the problem of minimizing Eq. (3) into the following problems: $\min_{\alpha_i^{(p)}} \sum_{(x_i,y_i) \in D^{(+)}} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2 \,\alpha_i^{(p)}$ (4) and $\max_{\beta_i^{(p)}} \sum_{(x_i,y_i) \in D^{(-)}} \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2 \,\beta_i^{(p)}$ (5).", "We update $\alpha_i^{(p)}$ and $\beta_i^{(p)}$ by constrained gradient descent, where the updating formulas are: $\hat{\alpha}_i^{(p)} = \alpha_i^{(p)} - \eta \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ (6) and $\hat{\beta}_i^{(p)} = \beta_i^{(p)} + \eta \| M^{(p)} \vec{y}_i - \vec{x}_i \|^2$ (7), where $\eta > 0$ is the learning rate (a small decimal).", "$\hat{\alpha}_i^{(p)}$ and $\hat{\beta}_i^{(p)}$ are the updated values of $\alpha_i^{(p)}$ and $\beta_i^{(p)}$ for the new iteration, respectively.", "After the update of all weights, we normalize the weights to satisfy $\sum_{(x_i,y_i) \in D^{(+)}} \alpha_i^{(p)} = 1$ and $\sum_{(x_i,y_i) \in D^{(-)}} \beta_i^{(p)} = 1$.", "The iterative procedure continues until convergence, with the algorithm summarized in Algorithm 2.", "After training, given y_i , M1 outputs N hyponym embeddings: hypo^(1)( y_i ) = M^(1) y_i , ..., hypo^(N)( y_i ) = M^(N) y_i .", "We define the hyponym residual vector res_hypo( x_i , y_i ) as follows: $res_{hypo}(x_i, y_i) = \vec{x}_i - M^{(p^*)} \vec{y}_i$, where p* is the index of the selected projection matrix that best fits ( x_i , y_i ) ∈ D(+).", "We set p* empirically as $p^* = \mathrm{argmin}_p \| \vec{x}_i - M^{(p)} \vec{y}_i \|^2$.", "Based on the objective in Eq. (3), if ( x_i , y_i ) ∈ D(+), $\| res_{hypo}(x_i, y_i) \|^2$ tends to be small.", "Otherwise, $\| res_{hypo}(x_i, y_i) \|^2$ would be large.", "Hence, it is discriminative for hypernymy classification.", "The hypernym projection module can be regarded as a simplified version of the previous module.", "Denote Q as the d x d projection matrix.", "The objective of hypernym projection is formulated as follows: $\min_{Q} \frac{1}{|D^{(+)}|} \sum_{(x_i,y_i) \in D^{(+)}} \| Q \vec{x}_i - \vec{y}_i \|^2 - \frac{\lambda}{|D^{(-)}|} \sum_{(x_i,y_i) \in D^{(-)}} \| Q \vec{x}_i - \vec{y}_i \|^2$.", "It can be solved by Algorithm 1 with the weights reduced and N = 1.", "Similar to hyponym projection, we compute the hypernym residual vector res_hyper( x_i , y_i ) as follows: $res_{hyper}(x_i, y_i) = Q \vec{x}_i - \vec{y}_i$.", "3.5 Hypernymy Relation Classification (M3)", "For each pair ( x_i , y_i ) ∈ D(+) ∪ D(-), we generate the BiRRE vector r_i via the concatenation of the two residual vectors: $r_i = res_{hypo}(x_i, y_i) \oplus res_{hyper}(x_i, y_i)$ (8).", "A feed-forward neural network is trained for hypernymy vs. non-hypernymy classification over D(+) and D(-) using r_i as features.", "To this end, we summarize the high-level training process of BiRRE, as shown in Algorithm 3.",
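As a sanity check of Algorithm 1, the closed-form update is a few lines of NumPy. This is a sketch of the reconstructed formula, not the authors' released code; the arrays alpha and beta and the factor kappa correspond to the α, β, and κ of the equations above.

```python
import numpy as np

def solve_orthogonal(pos_pairs, neg_pairs, alpha, beta, kappa):
    """One closed-form M update (Algorithm 1, as reconstructed above):
    B = sum_i alpha_i x_i y_i^T - kappa * sum_j beta_j x_j y_j^T, then
    M* = U diag(1, ..., 1, det(U) det(V)) V^T from the SVD of B."""
    d = pos_pairs[0][0].shape[0]
    B = sum(a * np.outer(x, y) for (x, y), a in zip(pos_pairs, alpha))
    B = B - kappa * sum(b * np.outer(x, y)
                        for (x, y), b in zip(neg_pairs, beta))
    U, _, Vt = np.linalg.svd(B)
    s = np.ones(d)
    s[-1] = np.linalg.det(U) * np.linalg.det(Vt)  # det(V) = det(V^T)
    return U @ np.diag(s) @ Vt
```

With random unit vectors for x and y, the returned matrix satisfies M^T M = I up to numerical precision, which matches the orthogonality constraint of Eqs. (1)-(3).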
"There can be zero, one or multiple hidden layers in the neural network.", "The detailed study of network structures will be discussed in the experiments.", "Orthogonal projections have been applied to predict various types of word relations (Ethayarajh, 2019).", "However, the mechanisms behind orthogonal projections in the embedding space for predicting such relations cannot be fully explained by NLP researchers.", "In BiRRE, we use different numbers of matrices in M1 and M2 , in order to capture the mappings between hypernyms and hyponyms.", "Due to the complicated nature of linguistics, such projections are not 100% correct.", "Hence, we learn the residual vectors and train a classifier (in M3 ) to decide which dimensions learned by M1 and M2 are the best predictors for hypernymy relations.", "Therefore, the performance of BiRRE can be improved.", "In this section, we conduct extensive experiments to evaluate the BiRRE model over various benchmarks.", "We also compare it with state-of-the-art methods to show its effectiveness.", "The default word embeddings used by our model are pre-trained by the fastText model (Bojanowski et al., 2017) over the English Wikipedia corpus of version December 2019.", "We train the model by ourselves using their original code.", "The embedding size is set as d = 300, according to their paper.", "In the implementation, the parameters $\eta$ and N are set to $10^{-3}$ and max { 1, ⌊lg |D(+)|⌋ } (an empirical formula), respectively.", "We also tune the model parameters in subsequent experiments.", "The neural network in M3 is fully connected and trained via the Adam algorithm with a dropout rate of 0.1.", "We use the largest hypernymy relation dataset (to our knowledge) from Shwartz et al. (2016) to test the effectiveness of BiRRE.", "It is created from various resources: WordNet, DBPedia, Wikidata and YAGO, and divided into a random split and a lexical split.", "Especially, the lexical split forces the training, testing and validation sets to contain distinct vocabularies, disabling lexical memorization (Levy et al., 2015).", "Table 1: Performance of different approaches over the dataset of Shwartz et al. (2016).", "Method | Random Split (Precision / Recall / F1) | Lexical Split (Precision / Recall / F1)", "Roller and Erk (2016) | 0.926 / 0.850 / 0.886 | 0.700 / 0.964 / 0.811", "Shwartz et al. (2016) | 0.913 / 0.890 / 0.901 | 0.809 / 0.617 / 0.700", "Glavas and Ponzetto (2017) | 0.933 / 0.826 / 0.876 | 0.705 / 0.785 / 0.743", "Rei et al. (2018) | 0.928 / 0.887 / 0.907 | 0.826 / 0.860 / 0.842", "BiRRE | 0.945 / 0.932 / 0.938 | 0.880 / 0.918 / 0.898", "We follow the same evaluation steps of Shwartz et al. (2016); Rei et al. (2018) and report the results in Table 1. The network structure and parameters are tuned over the validation set.", "Based on the results, BiRRE consistently outperforms the state of the art by 3.1% and 5.6% in terms of F1.", "Additionally, the performance gap between the lexical and random splits has been narrowed down from 6.5% (Rei et al., 2018) to 4.0% (BiRRE).", "It shows that BiRRE alleviates lexical memorization, compared to other distributional models.", "We also conduct pairwise statistical tests between Rei et al.
(2018) and our outputs.", "It shows that BiRRE outperforms the approach significantly.", "We tune the value of $\lambda$ from 0.0 to 1.0 using the development set.", "The results over the lexical split of the dataset (Shwartz et al., 2016) are shown in Figure 3(a).", "A bigger $\lambda$ means a larger effect of negative regularization.", "As seen, the usage of negative regularization improves the prediction performance by a large margin.", "A suitable choice of $\lambda$ is generally around 0.4 to 0.6.", "As for the neural network structures, the number of hidden nodes does not have a large impact on the model performance.", "Hence, we only report the results when we use the same number of nodes in the hidden layers as the dimension of word embeddings d , shown in Figure 3(b).", "Our results are consistent with previous research, which shows that adding more hidden layers can decrease the prediction accuracy, leading to model overfitting.", "We evaluate BiRRE over two benchmark datasets: BLESS (Baroni and Lenci, 2011) and ENTAILMENT (Baroni et al., 2012), consisting of 14,547 and 2,770 labeled term pairs, respectively.", "For evaluation, we follow the same leave-one-out evaluation protocols as used in previous research (Yu et al., 2015; Luu et al., 2016; Nguyen et al., 2017).", "All the experimental results are reported in terms of averaged accuracy.", "Because the two datasets do not have separate validation sets, we take the dataset (Shwartz et al., 2016) to tune the parameters of BiRRE.", "To prevent data leakage, we exclude all the data of the validation set that also appear in the test set for parameter tuning.", "We compare BiRRE against several previous supervised models (Mikolov et al., 2013; Yu et al., 2015; Luu et al., 2016; Nguyen et al., 2017; Wang et al., 2019a).", "The averaged accuracy scores of all these methods are shown in Table 2. From the results, we can see that our model outperforms all previous baseline approaches, with averaged accuracies of 98% and 93%, respectively.", "We also conduct the paired t-test, which shows that BiRRE significantly outperforms classical models (Mikolov et al., 2013).", "3 We have also considered SemEval 2018 Task 9 (Camacho-Collados et al., 2018) for evaluation. However, this task focuses on the complete process of retrieving (or discovering) hypernyms for input terms from specific corpora. Hence, it is not suitable to evaluate BiRRE directly.", "Compared to the strongest competitor (Wang et al., 2019a), the accuracy of our model is also higher by 1%.", "We further study the effectiveness of individual residual vectors for hypernymy classification and conduct the following ablation study.", "Each time, we only use a unidirectional residual vector as features (i.e., res_hypo( x_i , y_i ) or res_hyper( x_i , y_i )).", "Additionally, we follow several previous papers (Yu et al., 2015; Luu et al., 2016; Nguyen et al., 2017), using the addition, offset and concatenation of embedding vectors as features (i.e., x_i + y_i , x_i - y_i and x_i ⊕ y_i ) to train the neural networks for hypernymy classification.", "These three models are treated as naive baselines.", "The experimental settings are the same as in Experiments 1 and 2. The experimental results over BLESS (Baroni and Lenci, 2011), ENTAILMENT (Baroni et al., 2012) and the lexical split of the dataset (Shwartz et al., 2016) are illustrated in Table 3.",
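For clarity, the feature sets compared in this ablation are easy to state in code. The sketch below uses hypothetical helpers (it is not the released implementation) and computes the BiRRE vector of Eq. (8) given trained projection matrices, alongside the three naive baselines.

```python
import numpy as np

def birre_vector(x, y, M_list, Q):
    """Eq. (8): pick the hyponym projection that best fits (x, y), then
    concatenate the two residuals as classifier features."""
    residuals = [x - M @ y for M in M_list]      # x - M^(p) y for each p
    p_star = int(np.argmin([np.linalg.norm(r) for r in residuals]))
    return np.concatenate([residuals[p_star],    # res_hypo(x, y)
                           Q @ x - y])           # res_hyper(x, y)

def baseline_features(x, y):
    """The three naive baselines of the ablation study."""
    return x + y, x - y, np.concatenate([x, y])  # addition, offset, concat
```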
"We have the following three observations.", "i) Traditional models using x_i + y_i , x_i - y_i and x_i ⊕ y_i as features do not yield satisfactory results.", "The most likely cause is that they suffer from the lexical memorization problem.", "ii) The hyponym residual vector res_hypo( x_i , y_i ) is slightly more effective than the hypernym residual vector res_hyper( x_i , y_i ).", "It means that the more complicated hyponym generation process is more precise and suitable for our task.", "iii) By combining res_hypo( x_i , y_i ) and res_hyper( x_i , y_i ), the proposed approach is more effective and outperforms previous methods.", "Yet another widely used evaluation framework is hypernym discovery, including three subtasks:", "i) ranked hypernym detection,", "ii) hypernymy direction classification, and", "iii) graded lexical entailment, as presented in Nguyen et al. (2017); Roller et al. (2018); Le et al. (2019) and many others.", "These subtasks require algorithms to output unsupervised scores (or measures), indicating the level of hypernymy within a term pair.", "Therefore, this framework is not directly applicable to evaluate BiRRE.", "We evaluate BiRRE on hypernym discovery by external supervision.", "For ranked hypernym detection, following Roller et al. (2018); Le et al. (2019), we consider five test sets: BLESS (Baroni and Lenci, 2011), EVAL (Santus et al., 2015), LEDS (Baroni et al., 2012), SHWARTZ (Shwartz et al., 2016) and WBLESS (Weeds et al., 2014).", "For each test set, we use the remaining four datasets (excluding all term pairs in the current test set) to train and tune the BiRRE model.", "For each term in the test set, we create a ranked list of candidate hypernyms by placing positive predictions over negative ones.", "Next, for candidate hypernyms with the same relation label, we rank them by the norms of the BiRRE vectors to produce the final ranked list.", "For the hypernymy direction classification subtask, we use three test sets: BLESS (Baroni and Lenci, 2011), WBLESS (Weeds et al., 2014) and BIBLESS (Kiela et al., 2015).", "Because this subtask is directly evaluated in terms of accuracy, we train the supervised BiRRE model using the external dataset (Shwartz et al., 2016) (also excluding term overlaps) and report the performance.", "Another subtask evaluated in Roller et al. (2018); Le et al. (2019) is graded lexical entailment (Vulic et al., 2017).", "Because BiRRE only produces discrete outputs, how BiRRE can be adapted for graded lexical entailment is left as future work.", "The experimental results are summarized in Table 4. For comparison, we take three recent models (Nguyen et al., 2017; Roller et al., 2018; Le et al., 2019) as strong baselines.", "Due to space limitations, for Roller et al. (2018), we only list the scores generated by spmi(x, y) due to its superiority.", "Table 4: Experimental results of ranked hypernym detection and hypernymy direction classification.", "Task: Ranked Hypernym Detection (Average Precision)", "Method | BLESS | EVAL | LEDS | SHWARTZ | WBLESS", "Nguyen et al. (2017) | 0.45 | 0.54 | - | - | 0.85", "Roller et al. (2018) | 0.76 | 0.48 | 0.84 | 0.44 | 0.96", "Le et al. (2019) | 0.81 | 0.50 | 0.89 | 0.50 | 0.98", "BiRRE | 0.87 | 0.56 | 0.88 | 0.56 | 0.98", "Task: Hypernymy Direction Classification (Accuracy)", "Method | BLESS | WBLESS | BIBLESS", "Nguyen et al. (2017) | 0.92 | 0.87 | 0.81", "Roller et al. (2018) | 0.96 | 0.87 | 0.85", "Le et al. (2019) | 0.94 | 0.90 | 0.87", "BiRRE | 0.98 | 0.95 | 0.92", "We can see that BiRRE consistently outperforms the baselines over most of the datasets.", "As for LEDS and WBLESS, the results of BiRRE and the state of the art (Le et al., 2019) are comparable.", "Hence, our supervised distributional model BiRRE can also address hypernym discovery, previously addressed by unsupervised hypernymy scores.", "Note that the models in Table 4 use different knowledge sources (either patterns or distributional vectors) for parameter learning.", "Strictly speaking, the gaps of scores in this set of tasks do not necessarily reflect which method is better in all situations.", "How to evaluate all types of methods related to hypernymy detection in a unified framework remains an open question.", "We also test our model using other types of word embeddings.", "We consider two other types of traditional word embeddings: Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), as well as BERT (Devlin et al., 2019) representations without contexts.", "Experiments are conducted over the same datasets as used in Experiment 3. The results are shown in Table 5, in terms of accuracy.", "As shown, the effect of fastText (Bojanowski et al., 2017) is slightly better than Word2Vec and GloVe.", "The representations of BERT do not yield satisfactory performance, probably due to the fact that the dimensionality of BERT is higher than other models, making the number of parameters in BiRRE larger.", "4 The dimensions of Word2Vec and GloVe are the same as fastText. The pre-trained BERT model we use is Google's base model, released at https://github.com/google-research/bert .", "In this paper, we present the BiRRE model for supervised hypernymy detection.", "It employs two projection-based hypernym and hyponym generation modules based on word embeddings to learn BiRRE vectors for hypernymy classification.", "Experimental results show that BiRRE outperforms state-of-the-art methods over various benchmark datasets.", "Future work includes", "i) improving projection learning to model complicated linguistic properties of hypernymy;", "ii) extending our model to address other tasks, such as graded lexical entailment (Vulic et al., 2017) and cross-lingual graded lexical entailment (Vulic et al., 2019); and", "iii) exploring how deep neural language models (such as BERT (Devlin et al., 2019), Transformer-XL (Dai et al., 2019), XLNet (Yang et al., 2019)) can improve the performance of hypernymy detection.", "We would like to thank anonymous reviewers for their valuable comments.", "This work is supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000904." ]
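Putting the pieces together, the dual-iterative training of LPMNR (Algorithm 2) can be sketched as follows, reusing the solve_orthogonal sketch given earlier; the initialization and the positivity clipping are assumptions added for illustration, not details from the paper.

```python
import numpy as np

def lpmnr_train(pos, neg, N, kappa, lr, iters, solve_orthogonal):
    """High-level sketch of the dual-iterative procedure: alternate the
    closed-form update of each M^(p) with gradient updates of the latent
    weights alpha and beta (reconstructed Eqs. (6)-(7))."""
    alpha = np.full((N, len(pos)), 1.0 / len(pos))
    beta = np.full((N, len(neg)), 1.0 / len(neg))
    Ms = [np.eye(pos[0][0].shape[0]) for _ in range(N)]
    for _ in range(iters):
        for p in range(N):
            # fix weights, solve M^(p) in closed form (SVD / Multi-Wahba)
            Ms[p] = solve_orthogonal(pos, neg, alpha[p], beta[p], kappa)
            # fix M^(p), move the weights along the residual magnitudes
            err_pos = np.array([np.linalg.norm(x - Ms[p] @ y) ** 2
                                for x, y in pos])
            err_neg = np.array([np.linalg.norm(x - Ms[p] @ y) ** 2
                                for x, y in neg])
            alpha[p] = np.clip(alpha[p] - lr * err_pos, 1e-8, None)
            beta[p] = np.clip(beta[p] + lr * err_neg, 1e-8, None)
            alpha[p] /= alpha[p].sum()   # renormalize onto the simplex
            beta[p] /= beta[p].sum()
    return Ms
```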
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Non-Autoregressive machine Translation (NAT) models have demonstrated significant inference speedup but suffer from inferior translation accuracy.", "The common practice to tackle the problem is transferring the Autoregressive machine Translation (AT) knowledge to NAT models, e.g., with knowledge distillation.", "In this work, we hypothesize and empirically verify that AT and NAT encoders capture different linguistic properties of source sentences.", "Therefore, we propose to adopt multi-task learning to transfer the AT knowledge to NAT models through encoder sharing.", "Specifically, we take the AT model as an auxiliary task to enhance NAT model performance.", "Experimental results on WMT14 English German and WMT16 English Romanian datasets show that the proposed MULTI-TASKNAT achieves significant improvements over the baseline NAT models.", "Furthermore, the performance on large-scale WMT19 and WMT20 English German datasets confirm the consistency of our proposed method.", "In addition, experimental results demonstrate that our MULTI-TASKNAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.", "1 1 Introduction Neural machine translation (NMT), as the state-of-the-art machine translation paradigm, has recently been approached with two different sequence decoding strategies.", "The first type autoregressive The first two authors contributed equally to this work.", "translation (AT) models generate output tokens one by one following the left to right direction (Vaswani et al., 2017; Bahdanau et al., 2015), but it is often criticized for its slow inference speed (Gu et al., 2018).", "The second type non-autoregressive translation (NAT) models adopt a parallel decoding algorithm to produce output tokens simultaneously (Gu et al., 2019; Ghazvininejad et al., 2019; Ma et al., 2020), but the translation quality of it is often inferior to auto-regressive models (Gu et al., 2018).", "Many researchers have investigated the collaboration between AT and NAT models.", "For instance, ENCODER-NAD-AD (Zhou et al., 2020) leverages NAT models to improve the performance of AT.", "Specifically, their method inserts a NAT decoder between the conventional AT encoder and decoder to generate coarse target sequences for the final autoregressive decoding.", "A line of research (Wang et al., 2019b; Guo et al., 2020; Ding et al., 2020) holds the opinion that the lack of contextual dependency on target sentences potentially leads to the deteriorated performance of NAT models.", "To boost the NAT translation performance, many re-cent works resort to the knowledge transfer from a well-trained AT model.", "Typical knowledge transfer methods include sequence-level knowledge distillation with translation outputs generated by strong AT models (Gu et al., 2019; Ghazvininejad et al., 2019), word-level knowledge distillation with AT decoder representations (Wei et al., 2019; Li et al., 2019), and fine-tuning on AT model by curriculum learning (Guo et al., 2020), etc.", "In this work, we first verify our our hypothesis that AT and NAT encoders although they belong to the same sequence-to-sequence learning task capture different linguistic properties of source sentences.", "We conduct our verification by evaluating the encoder on a set of probing tasks (Conneau Task AT NAT Surface SeLen 91.7 93.4 WC 76.0 79.1 Syntactic TrDep 45.8 46.0 ToCo 78.3 79.7 BShif 74.8 73.4 Semantic Tense 89.2 89.2 SubN 86.2 87.5 ObjN 85.2 85.3 SoMo 54.0 53.0 CoIn 64.9 62.8 Table 1: Performance on the probing 
tasks of evaluating linguistic properties embedded in the learned representations of AT and NAT models. et al., 2018; Raganato and Tiedemann, 2018) for AT and NAT models.", "Further, by leveraging the linguistic differences, we then adopt a multi-task learning framework with a shared encoder (i.e., MULTITASKNAT) to transfer the AT model knowledge into the NAT model.", "Specifically, we employ an additional AT task as the auxiliary task of which the encoder parameters are shared with the NAT task while parameters of the decoder are exclusive.", "Since many works (Cipolla et al., 2018; Liu et al., 2019) suggest that the weights for each task are critical to the multi-task learning, in this work, the multi-task weight assigned to the AT task is dynamically annealed from 1 to 0 .", "We name this scheme importance annealing .", "We empirically show the benefit of importance annealing in both directions of the original WMT14 English German dataset.", "Further with knowledge distillation, our proposed MULTI-TASKNAT achieves significant improvements on WMT14 English German and WMT16 English Romanian datasets.", "This con-firms the effectiveness of our proposed model on machine translation tasks.", "We propose a multi-task learning framework to boost NAT translation quality by transferring the AT knowledge to the NAT model.", "Our analyses reveal that the encoder sharing is necessary for capturing more linguistic and semantic information.", "Experiments on standard benchmark datasets demonstrate the effectiveness of the proposed MULTI-TASKNAT.", "To verify our hypothesis that AT and NAT encoders capture different linguistic properties of source sentences and can thereby complement each other, we probe the linguistic knowledge (Conneau et al., 2018) that embedded in the AT and NAT encoders on a set of tasks to investigate to what extent an encoder captures the linguistic properties.", "We present the detail for each probing tasks in Appendix B. Moreover, in Appendix C, we also provide a qualitative investigation to capture the difference between high-dimensional representations of AT and NAT encoders from another perspective.", "The AT and NAT models referred to in the following experiments are TRANSFORMER and MASK-PREDICT .", "We train the models on the WMT14 English German dataset, and the details of the experiments are introduced in the Appendix.", "Probing Tasks Probing tasks (Conneau et al., 2018) can quantitatively measure the linguistic knowledge embedded in the model representation.", "We follow Wang et al. 
(2019a) to set model configurations.", "The experimental results are depicted in Table 1.", "Table 1 shows that the AT and NAT encoders capture different linguistic properties of source sentences.", "We observe that, on average, the NAT model captures more surface features but fewer semantic features than the AT model.", "For example, on the sentence length prediction (SeLen) task, NAT models significantly outperform AT models, since sentence length prediction is a key component in NAT models.", "However, for the sentence modification (SoMo) and coordinate clause inversion (CoIn) tasks, the AT model outperforms the NAT model by a large margin.", "The linguistic probing results reveal that AT and NAT models capture different linguistic properties, which thereby leaves room for the encoder sharing structure.", "In this section, we introduce our shared-encoder structure between AT and NAT models under the multi-task learning framework.", "Multi-Task NAT Given the AT and NAT models under the standard encoder-decoder structure, we employ the hard parameter sharing method (Ruder, 2017) to share their encoder parameters.", "Therefore, as shown in Figure 1, the proposed model MULTI-TASKNAT consists of three components: shared encoder, AT decoder, and NAT decoder.", "Figure 1: The architecture of our proposed model.
(2019); 3 Our implementation.", "Y can be either the target sentence in the raw training data (4.1) or the generated target sentence with knowledge distillation (4.2).", "During the model inference, we only use the NAT decoder to generate the target tokens simultaneously while ignoring the AT decoder.", "Therefore, the inference overhead is the same as the NAT model before sharing.", "We conducted experiments on two widely used WMT14 English German and WMT16 English Romanian benchmark datasets, which consist of 4.5M and 610K sentence pairs, respectively.", "We applied BPE (Sennrich et al., 2016) with 32K merge operations for both language pairs.", "The experimental results are evaluated in case-sensitive BLEU score (Papineni et al., 2002).", "We use TRANSFORMER (Vaswani et al., 2017) as our baseline autoregressive translation model and the MASK-PREDICT (Ghazvininejad et al., 2019) as our baseline non-autoregressive model.", "We integrate the TRANSFORMER decoder into the MASK-PREDICT to implement the proposed MULTI-TASKNAT model.", "For t , we use the annealing scheme described in Section", "3. Since the major NAT architecture of our method is exactly the MASK-PREDICT model, any established decoding latency results (Kasai et al., 2021) for MASK-PREDICT can also be applied to ours.", "All of the parameters are randomly initialized for a fair comparison with the MASK-PREDICT .", "More training details are introduced in Appendix A. 4.1 Ablation Study Table 2 shows that the performance of our MULTITASKNAT model and baseline models on WMT14 Model WMT14 WMT16 En De De En En Ro Ro En Baseline Models Transformer (Ghazvininejad et al., 2019) 27.74 31.09 34.28 33.99 Hint-based NAT (Li et al., 2019) 25.20 29.52 NAT-REG (Wang et al., 2019b) 24.61 28.90 FCL-NAT (Guo et al., 2020) 25.75 29.50 Levenshtein Transformer (Gu et al., 2019) 27.27 33.26 Mask-Predict (Ghazvininejad et al., 2019) 27.03 30.53 33.08 33.31 Mask-Predict w/ Raw Data Prior (Ding et al., 2021) 27.8 33.7 Our Experiments Mask-Predict 27.18 30.86 33.03 32.71 MULTI-TASKNAT (w/ IA) 27.98 31.27 33.80 33.60 Table 3: Evaluation of translation performance on WMT14 En De and WMT16 En Ro test sets.", "En De datasets without using the knowledge distillation.", "The vanilla MULTI-TASKNAT model with the the fixed as 0 .", "5 outperforms the baseline MASK-PREDICT model by 0.96 and 0.57 BLEU score in En De and De En direction respectively and even surpasses the strong baseline TRANSFORMER-LEV by 0.46 BLEU points in En De translation.", "With the importance annealing , the MULTI-TASKNAT model achieves slight but consistent improvements over the vanilla model (+ Importance Annealing in Table 2).", "The improvements demonstrate the effectiveness of our proposed model using multi-task learning.", "We further evaluate the proposed MULTI-TASKNAT model with the standard practice of knowledge distillation.", "Table 3 depicts the performances of our model as well as strong baseline models.", "Our proposed MULTI-TASKNAT model achieves a significant improvement of 0.80 and 0.41 BLEU point over the strong baseline MASK-PREDICT model on En De and De En translation.", "On En Ro translation, our model outperforms the baseline model by 0.77 and 0.89 BLEU scores respectively.", "We use the compare-mt (Neubig et al., 2019) 2 to determine the significance.", "Details for significance tests are described in Appendix A.4.", "We conduct probing tasks to empirically recon-firm our hypothesis in Section 2 and better understand our MULTI-TASKNAT in terms of linguistic 
properties.", "The results are presented in Table", "4. In most of the cases, our MULTI-TASKNAT could learn better surface, syntactic, and semantic information than the TRANSFORMER and MASK-PREDICT baseline models, indicating that our multi-task learning framework can indeed take the advantages of two separate tasks and capture better linguistic properties.", "Notably, on the sen-Model WMT19 WMT20 En De De En En De De En MASK-PREDICT 34.79 37.04 25.24 36.36 MULTI-TASKNAT 35.38 37.62 25.72 36.58 Table 5: Evaluation of translation performance on WMT19 En De and WMT20 En De test sets with knowledge distillation.", "tence length (Selen) prediction task and tree depth (TrDep) task, the MULTI-TASKNAT shows significantly better performance.", "On other tasks, our model demonstrates better or on-par performance compared to the NAT model.", "Regarding the coordination inversion (CoIn) task, though the MULTITASKNAT shows certainly lower performance than the TRANSFORMER , it still outperforms the MASK-PREDICT by 0 .", "5 .", "We conduct the larger-scale experiments on the WMT English German.", "We adopt newstest2019 and newstest2020 as the test sets.", "The parallel data consists of about 36 .", "8 M sentence pairs.", "We average the last 5 checkpoints as the final model.", "The results are listed in Table", "5. The improvements suggest that our model are consistently effective on various scale of data.", "In this paper, we have presented a novel multitask learning approach for NAT model with a hard parameter sharing mechanism.", "Experimental results confirm the significant effect of the proposed MULTI-TASKNAT model, which shows the complementary effects of multi-task learning to the knowledge distillation method.", "Based on our MULTI-TASKNAT, there are many promising directions for future research.", "For example, 1) decoder interaction: knowledge distillation in an online fashion between AT and NAT decoders; 2) share-all framework: shared-encoder and shared-decoder with two decoding strategies, and the model can dynamically choose the optimal decoding strategy during model inference.", "3) data manipulation strategies: such as data rejuvenation (Jiao et al., 2020), lexical frequency discrepancy (Ding et al., 2021).", "The authors sincerely thank Liang Ding for the advice of experiments settings, and the anonymous reviewers for their insightful suggestions on various aspects of this work." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "method", "method", "method", "abstain", "abstain", "result", "objective", "objective", "objective", "result", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity: questions with constraints and questions with multiple hops of relations.", "In this paper, we handle both types of complexity at the same time.", "Motivated by the observation that early incorporation of constraints into query graphs can more effectively prune the search space, we propose a modified staged query graph generation method with more flexible ways to generate query graphs.", "Our experiments clearly show that our method achieves the state of the art on three benchmark KBQA datasets.", "Knowledge base question answering (KBQA) aims at answering factoid questions from a knowledge base (KB).", "It has attracted much attention in recent years (Bordes et al., 2014; Xu et al., 2016; Yu et al., 2017; Liang et al., 2017; Hu et al., 2018; Petrochuk and Zettlemoyer, 2018).", "Early work on KBQA focused on simple questions containing a single relation (Yih et al., 2014; Bordes et al., 2015; Dong et al., 2015; Hao et al., 2017).", "However, real questions are often more complex and recently some studies looked into complex KBQA.", "Two different types of complexity have been studied: (1) Single-relation questions with constraints .", "For example, in the question Who was the first president of the U.S.? there is a single relation president of between the answer entity and the entity U.S., but we also have the constraint first that needs to be satisfied.", "For this type of complex questions, a staged query graph generation method has been proposed, which first identifies a single-hop relation path and then adds constraints to it to form a query graph (Yih et al., 2015; Bao et al., 2016; Luo et al., 2018).", "(2) Questions with multiple hops of relations.", "For example, for the question Who is the wife of the founder of Facebook? the answer is related to Facebook through two hops of relations, namely, wife of and founder of.", "To answer this type of multihop questions, we need to consider longer relation paths in order to reach the correct answers.", "The main challenge here is how to restrict the search space, i.e., to reduce the number of multi-hop relation paths to be considered, because the search space grows exponentially with the length of relation paths.", "One idea is to use beam search.", "For example, Chen et al. (2019) and Lan et al. 
(2019b) proposed to consider only the best matching relation instead of all relations when extending a relation path.", "However, little work has been done to deal with both types of complexity together.", "In this paper, we handle both constraints and multi-hop relations together for complex KBQA.", "We propose to modify the staged query graph generation method by allowing longer relation paths.", "However, instead of adding constraints only after relation paths have been constructed, we propose to incorporate constraints and extend relation paths at the same time .", "This allows us to more effectively reduce the search space.", "On the ComplexWebQues-tions dataset, which has a high percentage of complex questions with both types of complexity, our method substantially outperforms existing methods with an improvement of 3.3 percentage points in Prec@1 and 3.9 percentage points in F1.", "On two other benchmark KBQA datasets, our method also achieves the state of the art 1 .", "A KB can be represented as a set of triplets K = { ( h, r, t ) } where h and t are entities from E (the", "entity set) and r is a relation from R (the relation set).", "Given a question Q , KBQA tries to find an entity a E that answers the question.", "Our method is largely inspired by an existing staged query graph generation method (Yih et al., 2015; Bao et al., 2016; Luo et al., 2018), which we briefly introduce here first.", "A query graph has four types of nodes: A grounded entity (shaded rectangle) is an existing entity in the KB.", "An existential variable (unshaded rectangle) is an ungrounded entity.", "A lambda variable (circle) is also an ungrounded entity but it represents the answer.", "Finally, an aggregation function (diamond) is a function such as argmin and count that operates on a set of entities.", "The edges of a query graph are relations from R .", "A query graph should have exactly one lambda variable to denote the answer, at least one grounded entity, and zero or more existential variables and aggregation functions.", "Figure 1 shows an example query graph for the question Who is the first wife of TV producer that was nomiated for The Jeff Probst Show ?", "We summarize the staged query graph generation method as follows.", "More details can be found in (Yih et al., 2015; Bao et al., 2016).", "1) Starting from a grounded entity found in the question (referred to as a topic entity ), identify a core relation path 2 linking the topic entity to a lambda variable.", "Existing work considers core relation paths containing a single relation (Yih et al., 2015; Bao et al., 2016; Luo et al., 2018).", "3 2) From a core relation path identified in Step 1, attach one or more constraints found in the question.", "A constraint consists of either a grounded entity or 2 This path is called the core inferential chain by Yih et al. (2015) and basic query graph by Bao et al. 
(2016).", "3 They also consider paths with two relations connected by a so-called CVT node, which is a special dummy entity used in Freebase for n -ary relations.", "For simplicity, we treat these also as single-relation paths.", "an aggregation function together with a relation.", "See examples in Figure 1.", "3) With all the candidate query graphs generated from Step 1 and Step 2 4 , rank them by measuring their similarities with the question.", "This is typically done through a neural network model such as a CNN (Yih et al., 2015; Bao et al., 2016).", "4) Execute the top-ranked query graph against the KB to obtain the answer entities.", "The major challenge we face when directly applying the existing method outlined above to constrained multi-hop KBQA is that questions containing multiple hops of relations (such as the example in Figure 1) cannot be handled, because existing work considers only core relation paths with a single hop (or two hops with a CVT node).", "If we make a naive modification by allowing core relation paths to be longer, the search space suddenly becomes much larger.", "For example, on the Com-plexWebQuestions dataset, if we allow core relation paths up to 3 hops, on average we will have around 10 , 000 core relation paths per question, which is computationally very expensive.", "Recent work on multi-hop KBQA tackles this problem by beam search, i.e., keeping only the topK t -hop relation paths before generating the ( t + 1) -hop relation paths (Chen et al., 2019; Lan et al., 2019b).", "However, this approach ignores constraints when generating the relation paths.", "We observe that constraints found in a question can often help reduce the search space and guide the generation of the core relation paths towards the right direction.", "Take the question in Figure 1 for example.", "Given a partial core relation path ( The Jeff Probst Show , nominated for, y 1 , nominee, y 2 ), if we were to extend this path at y 2 with one more relation, we would need to consider all relations in the KB linked to bindings of y 2 , which include all entities nominated for The Jeff Probst Show .", "But if we attach the constraint (is a, TV producer) to y 2 first, then we would need to consider only those relations linked to TV producers nominated for The Jeff Probst Show .", "4 In (Yih et al., 2015), a priority queue is used to keep only the top-ranked query graphs.", "This more flexible way of generating query graphs, coupled with a beam search mechanism and a semantic matching model to guide pruning, explores a much smaller search space while still maintaining a high chance of finding the correct query graph.", "Formally, our method uses beam search to generate candidate query graphs iteratively.", "We assume that the t -th iteration produces a set of K query graphs, denoted as G t .", "At the ( t + 1) -th iteration, for each g G t , we apply one of the { extend , connect , aggregate } actions (explained below) to grow g by one more edge and one more node.", "We do this for all g G t and all actions that are applicable to each g .", "Let G (cid:48) t +1 denote the set of all resulting query graphs.", "We then use a scoring function (explained in Section 2.4) to rank all the query graphs in G (cid:48) t +1 and place the topK of them in G t +1 .", "We continue the iterations until there is no g G t +1 that is scored higher than any g G t .", "We allow the following actions to grow a query graph.", "Figure 2 shows examples of these actions.", "(1) An extend action extends the core 
relation path by one more relation in R .", "If the current query graph contains only a topic entity e , an extend action finds a relation r linked to e in the KB and grows the path by r 5 .", "It also makes the other end of r the lambda variable x .", "If the current query graph has a lambda variable x , an extend action changes x into an existential variable y , finds all bindings of y in the KB by executing the current query graph against the KB, finds a relation r linked to one of these entities, and finally attaches r to y .", "The other end of r becomes the new lambda variable x .", "(2) Besides the topic entity at the start of the current core relation path, there are oftentimes other grounded entities found in the question.", "A connect action links such a grounded entity e to either the 5 We also allow r to be two relations connected through a CVT node.", "lambda variable x or an existential variable connected to x that is a CVT node.", "6 To decide which relation r to use to link e and x , again we can find all bindings of x by executing the current query graph and then find a relation that exists between one of these entities and e .", "(3) Following Luo et al. (2018), we can detect an aggregation function from the question using a set of predefined keywords.", "An aggregate action attaches the detected aggregation function as a new node to either the lambda variable x or an existential variable connected to x that is a CVT node.", "The novelty of our method is that the extend action can be applied after the connect and aggregate actions, which previous methods do not allow.", "At the end of the t -th iteration, we rank the candidate query graphs in G (cid:48) t by deriving a 7-dimensional feature vector v g for each graph g G (cid:48) t and feeding these vectors into a fully-connected layer followed by softmax to derive p ( g | Q ) .", "The first dimension of v g comes from a BERT-based semantic matching model.", "Specifically, we convert g into a sequence of tokens by following the sequence of actions that has been taken to construct g and adding the textual descriptions of the entities and relations involved at each step sequentially to the sequence.", "Existential variables and lambda variables are ignored.", "For example, the query graph shown in Figure", "2(a) of the paper is converted to the following sequence: (the, jeff, probst, show, nominated, for, nominee).", "7 6 Here we only consider the existential variable connected to the lambda variable as we should have already considered the other existential variables in past iterations.", "7 This example is for illustration purpose.", "In the actual data, the relation descriptions are different from what we show in Figure 1.", "Therefore the actual token sequence is different for this example.", "We also convert the question into a sequence of tokens.", "For example, the question Who is the wife of the founder of Facebook? 
becomes (who, is, the, wife, of, the, founder, of, facebook).", "We then concatenate the query graph sequence and the question sequence into a single sequence, The other 6 dimensions of v g are as follows: The first one is the accumulated entity linking scores of all grounded entities in the query graph.", "The second one is the number of grounded entities appearing in the query graph.", "The third to the fifth ones are the numbers of entity types, temporal expressions and superlatives in the query graph, respectively.", "The last feature is the number of answer entities of the query graph.", "To train our model, we make use of paired questions and their correct answers without any ground truth query graphs.", "Following the framework of Das et al. (2018), we use REINFORCE algorithm to learn a policy function p ( g | Q ) in an end-to-end manner, where is the set of parameters we want to learn, including the BERT parameters to be updated and the parameters of the fully-connected layer for the 7-dimensional vector v g .", "We use F1 score of the predicted answers with respect to the ground truth answers as reward.", "Our method requires entities to be identified from the questions and linked to their corresponding entries in the KB.", "For named entity linking, we use an existing linking tool 8 for the ComplexWebQues-tions dataset and the already extracted topic entities released together with the dataset for the other two datasets.", "For entity type linking, we make use of the training questions and their answers to learn a linking model.", "For temporal expressions and superlative linking, we simply use regular expressions and a superlative word list.", "The superlative words are manually mapped to two aggregation functions: argmax and argmin .", "We initialize the BERT module in the ranker with the BERT base model 9 .", "Other parameters are initialized randomly.", "For the hyper-parameters in BERT model, we set the dropout ratio as 0 .", "1 , the hidden size as 768 .", "The number of layers and the with the special token [CLS] separating them, as how BERT is used typically to handle two sequences.", "We then use the standard BERT model (Devlin et al., 2019) to process the entire sequence and derive a score at the top layer.", "Note that we fine-tune the pre-trained BERT parameters during learning.", "8 The tool can be found at https://developers.", "google.com/knowledge-graph .", "9 The pre-trained BERT base model could be found at https://github.com/huggingface/ pytorch-transformers .", "number of multi-attention heads are set as 6 and 12 , respectively.", "we use the latest dump of Freebase 10 as our KB for all the datasets.", "For beam search, we set the beam size K to be 3.", "We use three datasets to evaluate our method: ComplexWebQuestons (CWQ) (Talmor and Berant, 2018), WebQuestionsSP (WQSP) (Yih et al., 2015) and ComplexQuestions (CQ) (Bao et al., 2016).", "We treat CWQ as the major evaluation dataset because CWQ has a significantly higher percentage of complex questions with multiple hops of relations and constraints, as shown in Table 1a.", "11 For example, more than 30% of the questions in CWQ has 2-hop relations with constraints, compared with just 0.5% in WQSP.", "Note that we cannot collect similar statistics for the CQ dataset because it does not provide the ground truth query graphs, but we observe that major questions in CQ have 1-hop relations.", "We compare our method with the following existing work.", "First, we compare with existing staged query graph generation 
methods (Yih et al., 2015; Bao et al., 2016; Luo et al., 2018), which cannot handle multi-hop questions.", "Next, we compare with (Lan et al., 2019a), which handles constraints and considers multi-hop relation paths, but uses neither beam search nor constraints to reduce the search space.", "We also compare with (Chen et al., 2019), which uses beam search with a beam size of 1 to handle multi-hop questions but does not handle constraints.", "Finally, we compare with (Bhutani et al., 2019) and (Ansari et al., 2019).", "Bhutani et al. (2019) decomposed complex questions into simple questions and achieved the SOTA in terms of Prec@1 on CWQ 12 .", "Ansari et al. (2019) generated query programs from questions token by token and achieved the SOTA on WQSP.", "We show the overall comparison in Table 1b.", "We can see that on the CWQ dataset, our method clearly achieves the best performance in terms of 10 The KB can be downloaded from https: //developers.google.com/freebase/ .", "11 Note that we treat 2 -hop relation paths with CVT nodes as 1 -hop paths.", "12 We note that on the leaderboard of CWQ the best Prec@1 was achieved by Sun et al. (2019).", "However, their method uses annotated topic entities and is thus not comparable here.", "both Prec@1 and F1.", "The amount of improvement is also substantial, with 3.3 percentage points in Prec@1 and 3.9 percentage points in F1.", "This validates our hypothesis that our method works particularly well for complex questions with both constraints and multi-hop relations.", "For the other two datasets, WQSP and CQ, our method also achieves the SOTA, outperforming previous methods, demonstrating the robustness of our method.", "We also conduct an ablation study to better understand our model.", "To verify that the effectiveness of our method is not mainly due to the use of BERT, we replace BERT with LSTM.", "We can see in Table 1c that the LSTM-based version of our method can still outperform the previous state of the art.", "This shows that the effectiveness of our model is not simply because of the use of BERT.", "We also test three versions of our method, each with one action removed, in order to understand if all three actions are necessary.", "The results are also shown in Table 1c.", "We can see that the aggregate action is the least important action whereas the extend action is the most important one.", "However, we need to combine all three actions together to achieve the best performance.", "Ranking Error : There are 65% of errors coming from mis-prediction of query graphs.", "We look at these error cases closely.", "We find that some relations are hard to be detected even with human judgment.", "For example, our model mis-predicts the relation in the question Who was VP for Nixon? as profession while the correct relation is vice president.", "To understand VP is the abbreviation of vice president needs external knowledge, if this mapping has not been observed in the training data.", "Topic Linking Error : We observe that there are 27% of errors occurring due to the entity or expression linking error.", "E.g., What guitar does Corey Taylor play? 
has the constraint type gui-tar, but it is not detected in the linking procedure.", "Generation Limitation : The limitation of query graph generation strategies leads to 6% of errors.", "For the question What jobs did John Adams have before he was president, we are unlikely to find a matched query graph with our strategies.", "In this paper we proposed a modified staged query graph generation method to deal with complex questions with both multi-hop relations and constraints.", "By incorporating constraints into query graphs early, coupled with the help of beam search, we are able to restrict the search space.", "Experiments showed our method substantially outperformed existing methods on the ComplexWebQues-tions dataset and also outperformed the previous state of the art on two other KBQA datasets.", "This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative.", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore." ]
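The staged generation procedure described in the sentences above (grow each beam graph by one extend/connect/aggregate action, rescore, keep the top-K, stop when no candidate improves) can be summarized in a short sketch. The Python fragment below is a minimal illustration of that control loop only, not the authors' implementation: QueryGraph, the three action generators and the scoring function are hypothetical stand-ins for the KB-backed components described in the text.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class QueryGraph:
        # minimal stand-in: a query graph is represented by its action history
        actions: tuple = ()

    # hypothetical action generators; a real system would consult the KB here
    def extend(g):    return [QueryGraph(g.actions + ("extend",))]
    def connect(g):   return [QueryGraph(g.actions + ("connect",))]
    def aggregate(g): return [QueryGraph(g.actions + ("aggregate",))]

    def generate(question, score, K=3, max_iters=10):
        """Beam search over query graphs: grow, rank, prune to the top-K."""
        beam = [QueryGraph()]                       # G_0: topic entity only
        best = max(score(g, question) for g in beam)
        for _ in range(max_iters):
            candidates = []                         # G'_{t+1}
            for g in beam:
                candidates += extend(g) + connect(g) + aggregate(g)
            candidates.sort(key=lambda g: score(g, question), reverse=True)
            beam = candidates[:K]                   # G_{t+1}
            if score(beam[0], question) <= best:    # no improvement: stop
                break
            best = score(beam[0], question)
        return beam

A dummy scorer such as score = lambda g, q: -len(g.actions) makes the sketch runnable end to end; the stopping test mirrors the criterion that iteration ends once no new graph outscores the previous beam.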
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "objective", "abstain", "result", "other", "other" ]
[ "We present an approach to minimally supervised relation extraction that combines the benefits of learned representations and structured learning, and accurately predicts sentence-level relation mentions given only proposition-level supervision from a KB.", "By explicitly reasoning about missing data during learning, our approach enables large-scale training of 1D convolutional neural networks while mitigating the issue of label noise inherent in distant supervision.", "Our approach achieves state-of-the-art results on minimally supervised sentential relation extraction, outperforming a number of baselines, including a competitive approach that uses the attention layer of a purely neural model.", "1 1 Introduction Recent years have seen significant progress on tasks such as object detection, automatic speech recognition and machine translation.", "These performance advances are largely driven by the application of neural network methods on large, high-quality datasets.", "In contrast, traditional datasets for relation extraction are based on expensive and time-consuming human annotation (Doddington et al., 2004) and are therefore relatively small.", "Distant supervision (Mintz et al., 2009), a technique which uses existing knowledge bases such as Freebase or Wikipedia as a source of weak supervision, enables learning from large quantities of unlabeled text and is a promising approach for scaling up.", "Recent work has shown promising results from large-scale training of neural networks for relation extraction (Toutanova et al., 2015; Zeng et al., 2015).", "There are, however, significant challenges due to the inherent noise in distant supervision.", "For 1 Our code and data are publicly available on Github: https://github.com/bflashcp3f/PCNN-NMAR example, Riedel et al. 
(2010) showed that, when learning using distant supervision from a knowledge base, the portion of mis-labeled examples can vary from 13% to 31%.", "To address this issue, another line of work has explored structured learning methods that introduce latent variables.", "An example is MultiR (Hoffmann et al., 2011), which is based on a joint model of relations between entities in a knowledge base and those mentioned in text.", "This structured learning approach has a number of advantages; for example, by integrating inference into the learning procedure it has the potential to overcome the challenge of missing facts by ignoring the knowledge base when mention-level classifiers have high confidence (Ritter et al., 2013; Xu et al., 2013).", "Prior work on structured learning from minimal supervision has leveraged sparse feature representations, however, and has therefore not benefited from learned representations, which have recently achieved state-of-the-art results on a broad range of NLP tasks.", "In this paper, we present an approach that combines the benefits of structured and neural methods for minimally supervised relation extraction.", "Our proposed model learns sentence representations that are computed by a 1D convolutional neural network (Collobert et al., 2011) and are used to define potentials over latent relation mention variables.", "These mention-level variables are related to observed facts in a KB using a set of deterministic factors, followed by pairwise potentials that encourage agreement between extracted propositions and observed facts, but also enable inference to override these soft constraints during learning, allowing for the possibility of missing information.", "Because marginal inference is intractable in this model, a MAP-based approach to learning is applied (Taskar et al., 2004).", "Our approach is related to recent work structured learning with end-to-end learned representations, including Structured Prediction Energy Networks (SPENs) (Belanger and McCallum, 2016); the key differences are the application to minimally supervised relation extraction and the inclusion of latent variables with deterministic factors, which we demonstrate enables effective learning in the presence of missing data in distant supervision.", "Our proposed method achieves state-of-the-art results on minimally supervised sentential relation extraction, outperforming a number of baselines including one that leverages the attention layer of a purely neural model (Lin et al., 2016).", "In this section we present our model, which combines continuous representations with structured learning.", "We first review the problem setting and introduce notation, next we present our approach to extracting feature representations which is based on the piecewise convolutional neural network (PCNN) model of Zeng et.", "al. (2015) and includes positional embeddings (Collobert et al., 2011).", "Finally we describe how this can be combined with structured latent variable models that reason about overlapping relations and missing data during learning.", "Given a set of sentences, s = s 1 , s 2 . . . , s n that mention a pair of knowledge base entities e 1 and e 2 (dyad), our goal is to predict which relation, r , is mentioned between e 1 and e 2 in the context of each sentence, represented by a set of hidden variables, z = z 1 , z 2 , . . . 
z n .", "Relations are selected from a fixed set drawn from a knowledge base, in addition to NA (no relation).", "Minimally supervised learning is more difficult than supervised relation extraction, because we do not have direct access to relation labels on the training sentences.", "Instead, during learning, we are only provided with information about what relations hold between e 1 and e 2 according to the KB.", "The problem is further complicated by the fact that most KBs are highly incomplete (this is the reason we want to extend them by extracting information from text in the first place), which effectively leads to false-negatives during learning.", "Furthermore, there are many overlapping relations between dyads, so it is easy for a model trained using minimal supervision from a KB to confuse these relationships.", "All of these issues are addressed to some degree by the structured learning approach that we present in Section 2.3.", "First, however we present our approach to feature representation based on convolutional neural networks.", "In the following section we review the Piecewise CNN (PCNN) architecture, first proposed by Zeng et.", "al. (2015), which is used as the basis for our feature representation.", "Input Representation: A sentence, s i consisting of l words is represented by two types of embeddings: word embeddings, E i , and position embeddings, P i relative to the entity pair.", "Following Lin et.", "al. (2016), word embeddings were initialized by running Word2Vec on the New York Times corpus and later fine-tuned; position embeddings encode the position of the word relative to KB entities, e 1 and e 2 , mentioned in the sentence.", "The form of input sentence representation is w 1 , w 2 , , w l , where w i R d .", "The dimension of embedding at each word position is equal to the word embedding dimension plus two times the position embedding size (one position is encoded for each entity).", "Convolution: Given an input sentence representation, we perform 1D convolution within a window of length l to extract local features.", "Assume we have d f convolutional filters ( F = { f 1 , f 2 , , f d f } , f i R l d ) .", "The output of the i -th convolutional filter within the j -th window is: c ij = f i w j l +1: j + b (1 j m + l 1) Where b is a bias term.", "We use zero padding when the window slides out of the sentence boundaries.", "Piecewise Max Pooling: The output of the convolutional layer c i is separated into three parts ( c i 1 , c i 2 , c i 3 ) using the positions of the two entities in the sentence.", "Max pooling over time is then applied to each of these parts, followed by an el-ementwise tanh.", "The final sentence vector is defined as follows: [ x ] ik = tanh(max j ( c ikj )) (1 i d f , 1 k 3) 2.3 Structured Minimally Supervised Learning Our proposed model is based on the PCNN representations described above, in addition to a latent variable model that reasons about missing data and ambiguous relations during learning and is illustrated in Figure 1.", "The embedding for sentence i , is used to define a factor over the i th input sentence and latent relation mention variable z i : PCNN ( s i , z i ) = e x i zi where x i is the representation for sentence s i , as encoded by the piecewise CNN.", "Another set of factors, OR , link the sentence-level mention variables, z i , to aggregate-level variables t j , representing whether relation j is mentioned between e 1 and e 2 in text.", "This is modeled using a deterministic OR: OR ( z , t j ) = 1 t j i : j = z i 
where 1 x is an indicator function that takes the value 1 when x is true.", "The choice of deterministic OR can be interpreted intuitively as follows: if a proposition is true according to t j , then it must be extracted from at least one sentence in the training corpus, on the other hand, if it is false, no sentences in the corpus can mention it.", "Finally, we incorporate a set of factors that penalize disagreement between observed relations in the KB, d j , and latent variables t j , which represent whether relation j was extracted from the text.", "The penalties for disagreement with the KB are hyperparameters that are adjusted on held-out development data and incorporate entity frequency information from the KB, to model the intuition that more popular entities are less likely to have missing facts: A ( t j , d j ) = e T , if t j = 0 and d j = 1 e D , if t j = 1 and d j = 0 1 , otherwise Putting everything together, the (unnormalized) joint distribution over t , d and z conditioned on sentences s mentioning a dyad is defined as follows: P ( d , t , z | s ) | s | (cid:89) i =1 PCNN ( s i , z i ) (cid:16) | r | (cid:89) j =1 OR ( z , t j ) A ( t j , d j ) (cid:17) = exp( S ( s , z , t , d )) (1) Here, is a tunable hyperparameter to adjust impact of the disagreement penalty, and S ( ) is the model score for a joint configuration of variables, which corresponds to the log of the unnormalized probability.", "A standard conditional random field (CRF) formulation would optimize model parameters, so as to maximize marginal probability of the observed KB relations, d conditioned on observed sentences, s : P ( d | s ) = (cid:88) z , t P ( d , t , z | s ) Computing gradients with respect to P ( d | s ) (and marginalizing out z and t ) is computationally intractable, so instead we propose an approach that uses maximum-a-posteriori (MAP) parameter learning (Taskar et al., 2004) and is inspired by the latent structured SVM (Yu and Joachims, 2009).", "Given a large text corpus in which a set of sentences, s mention a specific pair of entities ( e 1 , e 2 ) and a set of relations d hold between e 1 and e 2 , our goal is to minimize the structured hinge loss: LSH ( ) = max 0 , max z e , t e , d e [ S ( s , z e , t e , d e ) + l Ham ( d e , d )] max z g , t g (cid:2) S ( s , z g , t g , d ) (cid:3) (2) Where l Ham ( d e , d ) is the Hamming distance between the bit vector corresponding to the set of observed relations holding between ( e 1 , e 2 ) in the KB and those predicted by the model.", "Minimizing LSH ( ) can be understood intuitively as adjusting the parameters so that configurations consistent with observed relations in the KB, d , achieve a higher model score than those with a large hamming distance from the observed configuration.", "z e corresponds to the most confusing configuration of the sentence-level relation mention variables (i.e. 
one that has a large score and also a large Hamming loss) and z g corresponds to the best configuration that is consistent with the observed relations in the KB.", "This objective can be minimized using stochastic subgradient descent.", "Fixing z g and z e to their maximum values in Equation 2, subgradients with respect to the parameters can be computed as follows: LSH ( ) = 0 if LSH ( ) 0 , S ( s , z e , t e , d e ) S ( s , z g , t g , d ) otherwise (3) = 0 if LSH ( ) 0 , (cid:80) i log PCNN ( s i ,z e,i ) (cid:80) i log PCNN ( s i ,z g,i ) otherwise (4) Because the second factor of the product in Equation 1 does not depend on , it is straightforward to compute subgradients of the scoring function, S ( ) , with fixed values of z g and z e using backpropagation (Equation 4).", "Inference: The two inference problems, corresponding to maximizing over hidden variables in Equation 2 can be solved using a variety of solutions; we experimented with A search over left-to-right assignments of the hidden variables.", "An admissible heuristic is used to upper-bound the maximum score of each partial hypothesis by maximizing over the unassigned PCNN factors, ignoring inconsistencies.", "This approach is guaranteed to find an optimal solution, but can be slow and memory intensive for problems with many variables.", "In preliminary experiments on development data, we found that local-search (Eisner and Tromble, 2006) using both relation type and mention search operators (Liang et al., 2010; Ritter et al., 2013) usually finds an optimal solution and also scales up to large training datasets; we use local search with 30 random restarts to compute argmax assignments for the hidden variables, z g and z e , in all our experiments.", "Bag-Size Adaptive Learning Rate: Since the search space of the MAP inference problem increases exponentially as the number of hidden variables goes up, it becomes more difficult to find the exact argmax solution using local search, leading to increased noise in the computed gradients.", "To mitigate the search-error problem in large bags of sentences, we dynamically modify the learning rate based on the number of sentences in each bag as follows: i = , if | s i | < 1 1 | s i | , if 1 | s i | 2 ( 1 | s i | ) 2 , otherwise where i is the learning rate for i th training entity pair and 1 / 2 are two tunable bag-size thresholds.", "In Table 3 and Table 4, we see that this strategy significantly improves performance, especially when training on the larger NYTFB -280 K dataset.", "We also experimented with this method for PCNN+ATT, but found that its performance did not improve.", "In Section 2, we presented an approach that combines the benefits of PCNN representations and structured learning with latent variables for minimally supervised relation extraction.", "In this section we present the details of our evaluation methodology and experimental results.", "Datasets: We evaluate our models on the NYT-Freebase dataset (Riedel et al., 2010) which was created by aligning relational facts from Freebase with the New York Times corpus, and has been used in a broad range of prior work on minimally supervised relation extraction.", "Several versions of this dataset have been used in prior work; to facilitate the reproduction of prior results, we experiment with two versions of the dataset used by Riedel et.", "al. (2010) (henceforth NYTFB -68 K ) and Lin et.", "al. 
(2016) (NYTFB -280 K ).", "Statistics of these datasets are presented in Table 8.", "A more detailed discussion about the differences between Dataset NYTFB -68 KNYTFB -280 K (Riedel et. al. 2010) (Lin et. al. 2016) Entity pairs 67,946 280,275 Sentences 120,290 523,312 Table 1: Number of entity pairs and sentences in the training portion of Riedel's HELDOUT dataset (NYTFB -68 K ) and Lin's dataset (NYTFB -280 K ).", "datasets used in prior work is also presented in Appendix B. Hyperparameters: Following Lin et.", "al. (2016), we utilize word embeddings pre-trained on the NYT corpus using the word2vec tool, other parameters are initialized using the method described by Glorot and Bengio (2010).", "The Hoffmann et.", "al. sentential evaluation dataset is split into a development and test set and grid search on the development set was used to determine optimal values for the learning rate among { 0 .", "001 , 0 .", "01 } , KB disagreement penalty scalar among { 100 , 200 , , 2000 } and 1 / 2 bag size threshold for the adaptive learning rate among { 10 , 15 , , 40 } .", "Other hyperparameters with fixed values are presented in Table 2.", "Neural Baselines: To demonstrate the effectiveness of the our approach, we compare against col-less universal schema (Verga et al., 2016) in addition to the PCNN+ATT model of Lin et.", "al. (2016).", "After training the Lin et.", "al. model to predict observed facts in the KB, we use its attention layer to make mention-level predictions as follows: p ( r j | x i ) = exp ( r j x i ) (cid:80) n r k =1 exp ( r k x i ) Where r j indicates the vector representation of the j th relation.", "Structured Baselines: In addition to initializing convolutional filters used in the PCNN ( ) factors randomly and performing structured learning of representations as in Equation 4, we also experimented with variants of MultiR and DNMAR, which are based on the structured perceptron (Collins, 2002), using fixed sentence representations: both traditional sparse feature representations, in addition to pre-trained continuous representations generated using our best-performing reimplementation of PCNN+ATT.", "For the structured perceptron baselines, we also experimented with variants based on MIRA (Crammer and Singer, 2003), which we found to provide consistent improvements.", "More details are provided in Appendix A. 3.1 Sentential Evaluation In this work, we are primarily interested in mention-level relation extraction.", "For our first set of experiments (Tables 3 and 4), we use the manually annotated dataset created by (Hoffmann et al., 2011).", "Note that sentences in the Hoffman et.", "al. dataset were selected from the output of systems used in their evaluation, so it is possible there are high confidence predictions made by our systems that are not present.", "Therefore, we further validate our findings, by performing a manual inspection of the highest confidence predictions in Table", "5. 
NYTFB -68 K Results: As illustrated in Table 3, simply applying structured models (MultiR and DNMAR) with pre-trained sentence representations performs competitively.", "MIRA provides consistent improvements for both sparse and dense representations.", "PCNN+ATT outperforms most latent-variable models on the sentential evaluation, we found this result to be surprising as the model was designed for extracting proposition-level facts.", "Col-less universal schema does not perform very well in this evaluation; this is likely due to the fact that it was developed for the KBP slot filling evaluation (Ji et al., 2010), and only uses the part of a sentence between two entities as an input representation, which can remove important context.", "Our proposed model, which jointly learns sentence representations using a structured latent-variable model that allows for the possiblity of missing data, achieves the best overall performance; its improvements over all baselines were found to be statistically significant according to a paired bootstrap test (Efron and Tibshirani, 1994; Berg-Kirkpatrick et al., 2012).", "2 NYTFB -280 K Results: When training on the larger dataset provided by Lin et.", "al. (2016), linguistic features are not available, so only neural representations are included in our evaluation.", "As illustrated in Table 4, PCNNNMAR also achieves the best performance when training on the larger 2 p-value is less than 0.05.", "The AUC of most models decreases on the Hoffmann et.", "al. sentential dataset when training on NYTFB -280 K .", "This is not surprising, because the Hoffmann et.", "al. dataset is built by sampling sentences from positive predictions of models trained on NYTFB 68 K ; changing the training data causes a difference in the ranking of high-confidence predictions for each model, leading to the observed decline in performance against the Hoffmann et.", "al. dataset.", "To further validate our findings, we also manually inspect the models' top predictions as described below.", "Manual Evaluation: Because the Hoffmann et.", "al. sentential dataset does not contain the highest confidence predictions, we also manually inspected each model's top 500 predictions for the most frequent 4 relations, and report precision @ N to further validate our results.", "As shown in Table 5, for NYTFB -68 K , PCNN+ATT performs comparably on /location/contains 3 and /person/company , whereas our model has a considerable advantage on the other two relations.", "For NYTFB -280 K , our model performs consistently better on all four relations compared with PCNN+ATT.", "When training on the larger NYTFB 280 K dataset, we observe trend of increasing mention-level P@N for PCNNNMAR , however the performance of PCNN+ATT appears to decrease.", "We investigate this phenomenon further below.", "Performance at Extracting New Facts: To explain PCNN+ATT's drop in mention-level perfor-3 /location/contains is the most frequent relation in the Hoffmann et.", "mance after training on the larger NYTFB -280 K dataset, our hypothesis is that the larger KB-supervised dataset not only contains more true positive training examples but also more false negative examples.", "This biases models toward predicting facts about popular entities, which are likely to exist in Freebase.", "To provide evidence in support of this hypothesis, we divide the manually annotated dataset from Hoffmann et.", "al. 
into two categories: mentions of facts found in Freebase, and those that are not; this distribution is presented in the Table", "6. In Table 7, we present a breakdown of model performance on these two subsets.", "For PCNN+ATT, although the AUC of in-Freebase mentions on the test set increases after training on the larger NYTFB -280 K , its Out-Of-Freebase AUC on both dev and test sets drops significantly, which clearly illustrates the problem of increasing false negatives during training.", "In contrast, our model, which explicitly allows for the possibility of missing data in the KB during learning, has relatively stable performance on the two types of mentions, as the amount of weakly-supervised training data is increased.", "In Section 3.1, we evaluated the results of minimally supervised approaches to relation extraction by comparing extracted mentions against human judgments.", "An alternative approach, which has been used in prior work, is to evaluate a model's performance by comparing predictions against held out facts from a KB.", "Taken in isolation, this approach to evaluation can be misleading, because it penalizes models that extract many Model Name DEV TEST Fixed Sentence Representations MultiR continuous 72.4 66.7 MultiR continuous MIRA 74.6 73.4 DNMAR continuous 73.1 68.0 DNMAR continuous MIRA 75.6 68.7 Jointly Learned Representations PCNNNMAR 78.1 75.4 PCNNNMAR (bag size adaptive learning rate) 82.9 83.1 Baselines col-less universal schema (Verga et al., 2016) 60.3 57.5 PCNN+ATT (Lin et al. (2016) code) 67.9 72.1 PCNN+ATT (our reimplementation with parameter tuning) 78.2 74.8 Table 4: AUC of sentential evaluation precision / recall curves for all models trained on NYTFB -280 K .", "new facts that do not already appear in the knowledge base.", "This is undesirable, because the whole point of an information extraction system is to ex-Model Dataset InFB OutFB DEV PCNN+ATT NYTFB -68 K 78.2 89.6 NYTFB -280 K 77.1 77.0 Change -1.1 -12.6 PCNNNMAR (AdapLR) NYTFB -68 K 81.3 90.4 NYTFB -280 K 77.7 90.6 Change -3.6 +0.2 TEST PCNN+ATT NYTFB -68 K 78.7 75.9 NYTFB -280 K 81.9 56.8 Change +3.2 -19.1 PCNNNMAR (AdapLR) NYTFB -68 K 85.9 85.4 NYTFB -280 K 83.1 81.5 Change -2.8 -3.9 Table 7: Top: Comparison of AUCs of In-Freebase and Out-Of-Freebase mentions on sentential DEV set for PCNN+ATT and PCNNNMAR (AdapLR) with two datasets.", "tract new facts that are not already contained in a KB.", "Furthermore, sentential extraction has the benefit of providing clear provenance for extracted facts, which is crucial in many applications.", "Having mentioned these limitations of the held-out evaluation metrics, however, we now present results using this approach to facilitate comparison to prior work.", "Figure 2 presents precision-recall curves against held out facts from Freebase comparing PCNNN-MAR to several baselines and Figure 3 presents results on the larger NYTFB -280 K dataset.", "All models perform better according to the held out evaluation metric when training on the larger dataset, which is consistent with our hypothesis, presented at the end of Section 3.1.", "Our structured model with learned representations, PCNNNMAR (AdapLR), has lower precision when recall is high.", "This also fits with our hypothesis, as systems that explicitly model missing data will extract many correct facts that do not appear in the KB, resulting in an under-estimate of precision according to this metric.", "Knowledge Base Population: There is a long line of prior work on learning to extract relational 
information from text using minimal supervision.", "Early work on semantic bootstrapping (Hearst, 1992; Brin, 1998; Agichtein and Gravano, 2000; Carlson et al., 2010; Gupta and Manning, 2014; Qu et al., 2018), applied an iterative procedure to extract lexical patterns and relation instances.", "These systems tend to suffer from the problem of semantic drift, which motivated work on distant supervision (Craven et al., 1999; Snyder and Barzilay, 2007; Wu and Weld, 2007; Mintz et al., 2009), that explicitly minimizes standard loss functions, against observed facts in a knowledge base.", "The TAC KBP Knowledge Base Population task was a prominent shared evaluation of relation extraction systems (Ji et al., 2010; Surdeanu, 2013; Surdeanu et al., 2010, 2012).", "Recent work has explored a variety of new neural network architectures for relation extraction (Wang et al., 2016; Zhang et al., 2017; Yu et al., 2015), experimenting with alternative sentence representations in our framework is an interesting direction for future work.", "Recent work has also shown improved performance by incorporating supervised training data on the sentence level (Angeli et al., 2014; Beltagy et al., 2018), in contrast our approach does not make use of any sentence-level labels during learning and therefore relies on less human supervision.", "Finally, prior work has explored a variety of methods to address the issue of noise introduced during distant supervision (Wu et al., 2017; Yaghoobzadeh et al., 2017; Qin et al., 2018).", "Another line of work has explored open-domain and unsupervised methods for IE (Yao et al., 2011; Ritter et al., 2012; Stanovsky et al., 2015; Huang et al., 2016; Weber et al., 2017).", "Universal schemas (Riedel et al., 2013) combine aspects of minimally supervised and unsupervised approaches to knowledge-base completion by applying matrix factorization techniques to multi-relational data (Nickel et al., 2011; Bordes et al., 2013; Chang et al., 2014).", "Rows of the matrix typically model pairs of entities, and columns represent relations or syntactic patterns (i.e., syntactic dependency paths observed between the entities).", "Representations: Prior work has investigated the combination of structured learning with learned representations for a number of NLP tasks, including parsing (Weiss et al., 2015; Durrett and Klein, 2015; Andor et al., 2016), named entity recognition (Cherry and Guo, 2015; Ma and Hovy, 2016; Lample et al., 2016) and stance detection (Li et al., 2018).", "We are not aware of any previous work that has explored this direction on the task of minimally supervised relation extraction; we believe structured learning is particularly crucial when learning from minimal supervision to help address the issues of missing data and overlapping relations.", "3065 5 Conclusions In this paper we presented a hybrid approach to minimally supervised relation extraction that combines the benefits of structured learning and learned representations.", "In International Conference on Machine Learning , pages 983992.", "Iz Beltagy, Kyle Lo, and Waleed Ammar.", "2018.", "Improving distant supervision with maxpooled attention and sentence-level supervision.", "arXiv preprint arXiv:1810.12956 .", "Sergey Brin.", "1998.", "Extracting patterns and relations from the world wide web.", "In International Workshop on The World Wide Web and Databases , pages 172183.", "Springer.", "Mark Craven, Johan Kumlien, et al. 
1999.", "Constructing biological knowledge bases by extracting information from text sources.", "In ISMB , volume 1999, pages 7786.", "Extensive experiments show that by performing inference during the learning procedure to address the issue of noise in distant supervision, our proposed model achieves state-of-the-art performance on minimally supervised mention-level relation extraction.", "Funding was provided by the National Science Foundation under Grant No.", "IIS-1464128, the Defense Advanced Research Projects Agency (DARPA) via the U.S. Army Research Office (ARO) and under Contract Number W911NF-17-C-0095 and the Office of the Director of National Intelligence (ODNI) and Intelligence Advanced Research Projects Activity (IARPA) via the Air Force Research Laboratory (AFRL) contract number FA8750-16-C0114, in addition to an Amazon Research Award and an NVIDIA GPU grant.", "The content of the information in this document does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.", "The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation here on." ]
[ "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology.", "Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation.", "The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.", "A major problem in word-level approaches to MT is a lack of morphological generalization.", "Both inflectional variants of the same lemma and derivations of a shared word stem are treated as unrelated.", "For morphologically complex languages with a large vocabulary, this is problematic, especially in low-resource or domain-adaption scenarios.", "A simple and widely used approach to reduce a large vocabulary in NMT is Byte Pair Encoding (BPE) (Sennrich et al., 2016), which iteratively merges the top-frequent bigrams from initially character-level split words until a set vocabulary size is reached.", "This strategy is effective, but linguistically uninformed and often leads to sub-optimal segmentation.", "Also, by only segmenting words into substrings, BPE cannot handle non-concatenative operations, for example: umlautung: Baum Sg B aume Pl ( tree/trees ), transitional elements that frequently occur in German compounds: Grenz | kontroll | politik Grenze, Kontrolle ( border control policy ) derivation: abundant abundance In this paper, we apply word segmentation on both the source and target sides that goes beyond merely splitting into exact substrings.", "This overcomes the issues caused by fusional morphology by accomo-dating modeling word formation across languages.", "Productive word formation can lead to a high number of infrequent word forms even though the morphemes in these words are frequent.", "A linguistically motivated segmentation method to handle processes such as compounding and derivation allows for better coverage and generalization, both on the word level and on the morpheme level, and also enables the generation of new words.", "Sound morphological processing on the source and target side aims at learning productive word formation processes during translation, such as the English-German translation pair ungovernability Unregierbarkeit : un | PREF govern | V able | SUFF-ADJ ity | SUFF-NOUN un | PREF regieren | V bar | SUFF-ADJ keit | SUFF-NOUN Morphological information should not only handle isomorphic translation equivalents as above, but also help to uncover relations between source and target side for structurally different translations.", "There is a growing interest in the integration of linguistic information in NMT.", "For example, Eriguchi et al. (2016) and Bastings et al. (2017) demonstrate the positive impact of source-side syntactic information; Nadejde et al. (2017) report improved translation quality when using syntactic information in form of CCG tags on source and target side.", "To address data sparsity, compound modeling has already been proved useful for phrase-based MT, e.g., Koehn and Knight (2003) who model source-side compounds, and Cap et al. (2014) who generate compounds on the target side.", "For NMT, Huck et al. (2017) apply compound and suffix segmentation using a stemmer.", "Ataman et al. 
(2017) reduce complex source-side vocabulary by means of an unsupervised morphology learning algorithm.", "Ataman and Federico (2018) forego a traditional morphological analysis of the source language, and instead compose word representations from character trigrams.", "However, these three papers only apply segmentation on the string level and cannot properly handle fusional morphology.", "Addressing morphology in NMT, Banerjee and Bhattacharyya (2018) combine BPE with a morphological analyzer to guide the segmentation of surface forms into substrings.", "Their approach does not result in morphemes, for example googling googl | ing , which does not match with google , while in our work we match such morphemes.", "Tamchyna et al. (2017) present an NMT system to generate inflected forms on the target side, with a focus on overcoming data-sparsity caused by inflection.", "Their work contains a simple experiment on compound splitting with promising initial results that encouraged us to systematically explore word formation, including compounding, in NMT.", "To model word formation, we investigate", "(i) source-side tags for shallow syntactic information;", "(ii) target-side segmentation relying on a rich morphological analysis; and", "(iii) source-side segmentation strategies also relying on a tool for morphological analysis.", "We show that combining these strategies improves translation quality.", "Our contribution is a segmentation strategy that includes modeling non-concatenative processes, by implementing an English morphological analyzer suitable for this task, and by exploiting an existing tool for German, in order to obtain a consistent morphological sub-word representation.", "Our strategy to model word formation operates on lemma level as this allows for a better generalization than using surface forms.", "To model target-side inflection, we follow the simple lemma-tag generation approach by Tamchyna et al. (2017), but we improved the lemma representation to better support modeling word formation, and also implement a novel source-side morphological representation.", "Lemma-tag generation (existing strategy): In a pre-processing step, all inflected forms of the target-side training data are replaced by pairs of the lemma and its rich morphological tag.", "In a postprocessing step, the system's output is re-inflected by generating inflected forms using the morphological tool SMOR (Schmid et al., 2004).", "Table 1 depicts the process of inflecting tag and lemma pairs (columns 1, 2) into surface forms (column 3).", "New selection of lemma analyses: SMOR is a finite-state based tool covering inflection and derivation; it outputs all possible analysis paths, i.e. analyses at different levels of granularity.", "While not much attention is paid to the lemma selection in Tamchyna et al. 
(2017), a carefully selected lemma-internal representation is crucial for modeling word formation, as it provides the basis for segmentation across morphemes.", "To obtain optimal analyses, we follow Koehn and Knight (2003), and use the frequencies of observed non-complex words (we ignore bound morphemes).", "We select the analysis with the highest geometric mean of the com-ponents' frequencies, which gives a preference to words occurring more frequently in the data.", "The modified selection strategy favors more complex analyses; table 2 shows some examples.", "SUFF/N/-Table 3: English morphological analysis: the rightmost string on SUFF segments denotes a string operation, such as the removal of e in conspir e + acy .", "We implemented a simple morphological analyzer that is generally based on Koehn and Knight (2003), in that a word is segmented into strings that are already observed in the training data.", "Our method additionally relies on tag information (similar to the compound splitter of Weller-Di Marco (2017)), and on a hand-crafted set of prefixes and suffixes in combination with rules such as i y to handle non-concatenative transitions as in beautiful beauty | N ful | SUFF-ADJ .", "The segmentation is based on statistics derived from tagged and lemmatized data.", "This has several advantages:", "(i) the lemma and tag information restricts the possible operations (e.g., -ion as suffix is only applicable to nouns);", "(ii) there is no need to handle inflection;", "(iii) the tag provides a flat morpho-syntactic structure of the segmented word.", "The analysis first identifies a potential prefix by finding a combination with a prefix in the training data, for example deactivation | N de | PREF activation | N .", "The tag restriction at this step is important to maintain the word class of the original word, and to avoid analyses such as decent | ADJ de | PREF cent | N .", "The remaining part undergoes splitting into either word+suffix (e.g., activation | N activate | V ion | SUFF-N ) or a combination of two words (e.g., evildoer | N evil | N doer | N ) until no further segmentation can be found.", "In case of several possibilities, the analysis whose components lead to the highest geometric mean is selected.", "Table 3 illustrates how the morphological segmentation makes the word parts accessible such that they match with other occurrences of the word.", "The splitter in its present form is rather aggressive and tends to oversplit.", "While it is often assumed that this is not harmful in MT (e.g., Koehn EN Morph-Markup-Split enthusiasm < N > tic < SUFF ADJ > ally < SUFF ADV > explode < V > ion < SUFF N > inevitable < ADJ > ly < SUFF ADV > EN Morph-noMarkup-Split enthusiasm tic < SUFF ADJ > ally < SUFF ADV > explode ion < SUFF N > inevitable ly < SUFF ADV > EN Morph-noMarkup-noSplit enthusiasmtic < SUFF ADJ > ally < SUFF ADV > explodeion < SUFF N > inevitablely < SUFF ADV > Table 4: Representation variants for the words enthusiastically , explosion and inevitably .", "The morphological analyses provide a straightforward basis for the segmentation experiments.", "German: The lemma-tag approach ( oldLemTag ) is contrasted to the system variant with new lemma selection ( newLemTag ).", "For the segmentation experiments ( newLemTagSplit ), we apply compound splitting, such as Gold<NN>Preis<NN> Gold<NN> Preis<NN> ( gold price ).", "Also, nominalization, e.g., regieren<V> ung<NN><SUFF> ( govern ment ), is segmented, but different adjective suffixes (such as -lich ) are kept attached.", 
"Generally, we found that variation of the splitting granularity of adjective suffixes does not have a large impact.", "English: We first look at a representation where lemma-tag pairs replace surface forms ( LemTag ).", "To evaluate the effect of morphological information, we compare the three settings in table 4 that also rely on the lemma-tag representation: the tags convey inflectional information, but the lemma is replaced by its morphological analysis.", "In Morph-Markup-Split , words are split following the analysis, with tags indicating word-internal structure.", "Morph-noMarkup-Split is the same, but without word-internal tags.", "The annotation of pre-fixes/suffixes ( -ion < SUFF-N > ) is always kept.", "In addition to explicit splitting, we consider a variant where lemmas are replaced by the unsplit morphological analysis ( Morph-noMarkup-noSplit ), and all segmentation is done by BPE, which can now access actual words ( enthusiasm instead of *enthusias ) that already occur in the training data.", "This representation is conceptionally similar to the German lemma-tag representation.", "We compare four training settings: small (248,730 sentences: news-commentary), large2M (1,956,444 sentences: Europarl + news-commentary), large4M (4,116,215 sentences: Europarl + news-comment-ary + CommonCrawl) and medium (1M sentences) where the medium corpus consists of the news-commentary corpus and the first 750k sentences of Europarl.", "We use WMT'15 as dev-set (2169 sentences) and WMT'16 as test-set (2999 sentences).", "The lemma-tag approach doubles the sentence length by inserting tags.", "To avoid overly long sentences, the training data was first filtered to sentences of length 50, and after that, sentences more than 60 words long after BPE splitting were removed (e.g., sentences containing mostly foreign language words split nearly at character level).", "Data pre-processing The baseline was trained on plain surface forms (tokenized and true-cased).", "For the German lemma-tag system, we used Bit-Par (Schmid, 2004) to obtain morphological features in the sentence context, and SMOR (Schmid et al., 2004) for morphological analysis.", "For English, we used TreeTagger (Schmid, 1994).", "The English morphological analyzer for the small, medium and large2M systems was trained on the large2M data, the analyzer for the large4M system was trained on the full 4M lines.", "All systems (baseline and lemma-tag variants) underwent BPE segmentation (joint BPE of source/target side) with 30k merging operations.", "Training For the experiments, we used a Transformer NMT model (Sockeye toolkit: Hieber et al. 
(2017)).", "Table 6 shows the training parameters.", "Table 5 shows different representation variants on the source and target side, as outlined in section 5.", "Generally, the lemma-tag systems are better than a standard NMT system; there is not much difference between the old (Tamchyna et al., 2017) and the new version (lines 2 and 3).", "Source-side lemma-tag pairs improve the small and medium settings when paired with non-split German data; split German data works better for the Large2M system.", "Both variants perform similarly for the Large4M system (lines 4 and 5).", "English word-internal markup improves the Large2M system, both with split and unsplit German data (lines 6 and 9), and leads to the best result when combined with split German data in the Large4M setting (line 9).", "The variants in lines 7 and 8 (split/unsplit morphological analysis) produce similar results when translating to non-split German data.", "Interestingly, with explicit splitting on the German side (lines 10 and 11), the non-split English data performs considerably better for the small/medium/large2M settings, leading to the best results overall for these data settings.", "There seems to be a tendency that explicit splitting on both sides harms the smaller settings, possibly because translating at morpheme level requires more training data.", "Similarly, the English word-internal markup might introduce a complexity that only the larger systems can handle.", "On the other hand, using the non-split morphological analysis is less intrusive, but potentially useful at the BPE segmentation step by providing better access to sub-words.", "However, the best variants use explicit segmentation on the target side this makes the question to split or not to split difficult to answer.", "Maybe always splitting at a certain level is not the right approach, but rather a more context-sensitive segmentation strategy would be desirable.", "In low-resource scenarios, such as translating data of a particular domain, the problems caused by inflectional variants and forms created through derivation are typically aggravated.", "Applying a system trained on general language, but with a component to handle inflection and word formation, to an out-of-domain test set constitutes an interesting use case.", "We use a test set 1 (Haddow et al., 2017) from the medical domain (1931 sentences), containing health information aimed at the general public and summaries of scientific studies.", "Table 7 shows the results for the different system variants.", "For all data settings, the lemma-tag variants are better than the surface form baselines.", "There are no clear tendencies for a best-performing strategy across all settings, but English morphological analysis seems to contribute less, whereas English lemma-tag information (lines 4, 5) leads to overall good results.", "Table 8 shows two examples to illustrate the effect of morphological analysis.", "In", "(a), the baseline translates the noun foolishness as adjective, whereas the morphologically enriched system chooses a valid translation.", "Looking at the representation of foolishness after BPE segmentation, the baseline's fool@@ is@@ hn@@ ess is not particularly meaningful, whereas the representation of system 6 is [ NN ] fool < N > ish < SUFF ADJ > ness < SUFF N > , which provides a better basis for translation.", "In", "(b), from the medical test set, the baseline fails to translate coagulation .", "Below, the BPE representations of coagulation (f=19) and coagulate 1 HimL-testset-2015 
from www.himl.eu/test-sets In For all his foolishness Ed Miliband knew who his enemies were.", "Surface (System 1) Tag morph.", "annotated (System 10) co@@ ag@@ ulation [NN] co@@ ag@@ ulate ion < SUFF N > co@@ ag@@ ulate [VB] co@@ ag@@ ulate Even with BPE segmentation, the representation in System 10 is more general than in the surface system, and in particular allows matching with e.g., coagulate .", "Similarly, Gerinnungstest ( coagulation test ) is represented as ger@@ innen < V > ung < NN >< SUFF > Test < NN > , allowing to combine statistics of the verb gerinnen and the noun Gerinnung .", "Thus, better generalization, paired with tag information, enables the morphology-informed systems to make better use of the training data.", "We showed that morphologically sound segmentation that considers non-concatenative processes in order to obtain a consistent representation of sub-words improves translation.", "The findings of our experiments provide important insights for translating morphologically rich languages, and are particularly important for low-resource settings.", "This research was partially funded by LMU Mu-nich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative.", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 640550).", "This work was supported by the Dutch Organization for Scientific Research (NWO) VICI Grant nr. 277-89-002." ]
[ "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "objective", "other", "other", "other", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "other", "other", "other" ]
[ "Empathetic dialogue assembles emotion understanding, feeling projection, and appropriate response generation.", "Existing work for empathetic dialogue generation concentrates on the two-party conversation scenario.", "Multiparty dialogues, however, are pervasive in reality.", "Furthermore, emotion and sensibility are typically confused; a refined empathy analysis is needed for comprehending fragile and nuanced human feelings.", "We address these issues by proposing a novel task called MultiParty Empathetic Dialogue Generation in this study.", "Additionally, a Static-Dynamic model for Multi-Party Empathetic Dialogue Generation, SDMPED, is introduced as a baseline by exploring the static sensibility and dynamic emotion for the multi-party empathetic dialogue learning, the aspects that help SDMPED achieve the state-of-the-art performance.", "Empathetic conversation studies have been coming to the forefront in recent years owing to the increasing interest in dialogue systems.", "Empathetic dialogues not only provide dialogue partners with highly relevant contents but also project their feelings and convey a special emotion, that is, empathy.", "As revealed by previous studies (Fraser et al., 2018; Zhou et al., 2020), empathy can enhance conversation quality and transmit appropriate emotional responses to partners.", "Accordingly, most, if not all, existing work focuses on taking an emotional perspective in dialogue studies (Levinson et al., 2000; Kim et al., 2004; Bertero et al., 2016; Fraser et al., 2018; Rashkin et al., 2019).", "Although the empathetic conversation has received extensive attention, its exploration is still limited to the scenario with only two parties.", "In fact, multi-party chatting scenes are common in seminar discussions, conferences, and group chats.", "Multi-party conversations also rely on aid from empathy analysis.", "For instance, people with a similar experience can smoothly communicate with each other and easily feel understood, encouraged, and supported.", "These observations encourage us to present a novel natural language processing task called Multi-Party Empathetic Dialogue Generation.", "Generating multi-party empathetic dialogues faces two challenges.", "One challenge is the way to model multi-party dialogues.", "First, existing two-party dialogue models follow a seq2seq structure, whereas most multi-party dialogues are nonsequential.", "As shown in Figure 1, in response to Speaker 1 , the third and fourth utterances both express empathy for his/her stress and struggle.", "Sec-298 ond, in addition to the target participant, other participants also have implicit influence and interaction, and should be considered of generating utterances at each step.", "For instance, as an example of how to successfully resolve the situation, Speaker 4 inspires Speaker 1 as well as relieves Speaker 3 of his/her worry.", "Another challenge is the way to model the fragile and nuanced feelings of dialogue participants.", "We first clarify the relations of sensibility, emotion, and empathy in this study.", "Previous empathy studies recognized the emotion of one party and generated dialogues coupled with the same emotion (Rashkin et al., 2019; Shin et al., 2020).", "However, empathy is also determined by sensibility, which is a perspective-taking ability to experience other part-ners' emotions and make an appropriate response with his/her own view.", "According to the response I went through this in Figure 1, we can find that Speaker 4 has a similar experience to Speaker 1 , 
while Speaker 2 can only provide superficial comfort to Speaker 1 due to his/her weak sensibility.", "We observe that sensibility arises from personality and experience, and remains static throughout a conversation.", "On the other hand, emotion may dynamically change.", "For example, Speakers 2, 3 , and 4 possess different sensibilities to Speaker 1 , and these personal background-related attributes are persistent in the conversation.", "By contrast, the emotion of Speaker 1 gets reversed after receiving positive replies, as does the main tone of this dialogue.", "To comprehensively cope with the aforementioned challenges in this study, we present a Static-Dynamic model for Multi-Party Empathetic Dialogue Generation called SDMPED.", "SDMPED models multi-party dialogues by constructing a dynamic graph network with temporal information and explores participants' dynamic emotions and static sensibilities by fusing speaker information.", "The contributions of our work are as follows: We propose a new task called Multi-Party Empathetic Dialogue Generation, which attempts to resolve the emotional changes and empathy generation of multiple participants in a conversation.", "We propose an effective baseline model, SDMPED, for this new task, which combines dynamic emotions and static sensibilities from multiple parties.", "We demonstrate that our approach leads to performance exceeding the state of the art when trained and evaluated on multi-party empathetic data.", "Considering empathy in modeled conversations was proposed as early as 20 years ago (Levinson et al., 2000).", "However, this idea has not been widely studied in the NLP field due to the limitations of the available data.", "Recently, Rashkin et al. (2019) re-introduced the concept of empathetic dialogue and constructed the first empathetic dialogue dataset, EMPATHETICDIALOGUES (ED), which contains 32 emotions in 25K dialogues.", "Another dataset, PEC (Zhong et al., 2020), provides assurance that most of the data are in line with the characteristics of empathy, yet it lacks emotion-related annotations.", "Another limitation is that the data in PEC come from only two forums on Reddit (i.e., happy and offmychest).", "The data in the BlendedSkillTalk dataset (Smith et al., 2020) are collected from the ED, ConvAI2 (Dinan et al., 2020), and Persona-Chat (Zhang et al., 2018) datasets.", "However, only a small portion of these data are characterized by empathy.", "Notably, none of the aforementioned datasets have multiple (>2) persons participating in the same conversation, nor do they include empathy degree labels.", "Shin et al. (2020) formulated a reinforcement learning problem to maximize the user's emotional perception of the generated responses.", "Li et al.
(2020b) utilized the coarse-grained dialogue-level and the fine-grained token-level emotions, which helped better capture the nuances of user emotions.", "In Caire (Lin et al., 2020), the empathy generation tasks are reinforced with an auxiliary objective for emotion classification by using a transfer learning model.", "Nevertheless, current empathetic dialogue models are conducted in the context of two participants; they do not explore the implicit interactions among multiple speaking persons and do not consider the differences in their sensibilities.", "There have been quite a few studies on multi-party conversations before (Strauss and Minker, 2010), but they all focused on speech rather than conversational text.", "A recent multi-party study (Meng et al., 2018) focuses on the Addressee and Response Selection (ARS) task and ignores the influence of emotions, which is a significant departure from our empathetic dialogue task.", "In recent years, researchers have gradually shifted from studying simple emotions in two-party dialogues (Busso et al., 2008; Li et al., 2017) to conducting more complex emotion analysis of multiple participants.", "STAC (Asher et al., 2016) and ARS (Ouchi and Tsuboi, 2016) are multi-party dialogue datasets without emotion labels.", "MELD (Poria et al., 2019) and MESID (Firdaus et al., 2020) create multi-modal multi-party emotional dialogue datasets from the TV series Friends .", "However, these two datasets contain emotion-related data derived from short and colloquial chats from TV series, and consequently, their dialogue quality cannot be guaranteed.", "Additionally, these datasets can only be utilized for simple upstream tasks, such as emotion recognition.", "Most of the dialogues in current datasets are daily conversations on trivial topics, while those modeling empathetic dialogues are lacking.", "Majumder et al. (2019) proposed a conversational emotion recognition model based on RNNs to dynamically model the states of multiple speakers.", "Later, Ghosal et al. (2019) and Li et al. (2020a) also studied context and speaker sensitivity based on the approach of Majumder et al. (2019).", "A common problem of these models is that they only focus on the accuracy of emotion recognition while ignoring the dynamic changes of emotions.", "In this section, we introduce a static-dynamic model called SDMPED, as shown in Figure 2.
We begin by describing the construction of the Temporal Dynamic Graph Network (TDGCN), including speaker sensibility nodes, emotion-related utterance nodes, and the various types of edges between them.", "Thereafter, we use TDGCN to obtain dynamic emotions and static speaker sensibilities by integrating nodes and edges.", "Finally, we use prompt tuning to generate the final dialogue responses based on emotion and sensibility information.", "We regard an empathetic post and its meaningful replies as a dialogue and ensure that each dialogue has more than three participating speakers.", "A post contains replies from multiple people, along with associated emotion and empathy degree labels.", "The empathy degree label of each utterance will be used in conjunction with the emotional content in our model to learn the sensibility of each person.", "We propose a concept called the dialogue emotional turn, which is different from the traditional dialogue turn.", "Specifically, a dialogue is assumed to have multiple sentences in one emotional turn but with the same emotional tone.", "When a person utters a second sentence, the emotion may already differ from the previous one.", "Other people's subsequent utterances and emotions will be centered around this sentence.", "Therefore, we divide the dialogues to study the emotion variations over time, according to the principle that the same speaker can make at most one utterance during each emotional turn (a sketch of this division follows below).", "Then, we introduce key symbols and concepts used in our study.", "A dialogue with $T$ emotional turns and $N$ utterances between $M$ ($M > 2$) speakers can be expressed as $U = \{u_i^k \mid 1 \le i \le N \text{ and } 1 \le k \le M\}$, where $u_i^k$ represents the $i$-th sentence, uttered by the $k$-th speaker.", "To better study emotion variations, we specify that a speaker can utter at most one sentence in each emotional turn.", "Thus, $U$ can be divided into $U = \{U_t \mid 1 \le t \le T\}$, where each part $U_t$ has $n_t$ nodes.", "Further, the sensibilities of the speakers can be expressed as $S = \{s_1, s_2, \ldots, s_M\}$.", "Our model aims to generate an empathetic response of length $L$.", "SDMPED captures the sensibility information and emotional variations of multiple parties owing to a novel graph network.", "First, we train a multi-scale TextCNN (Zhang and Wallace, 2015) according to the empathy degrees of our dataset, and we extract the $d$-dimensional utterance-level features containing sensibility information.", "In each turn, we use the emotion of the first speaker as the main emotional tone, and extract the emotional content features based on those emotion labels in the same way.", "Using these sensibility-related features as nodes and speaker-utterance relationships as an adjacency matrix, we construct a two-step static graph network to determine the static sensibility information $H^S = \{(H_x)^S \mid 1 \le x \le M\}$ of the speakers.", "Thereafter, we represent the dialogue as a directed graph $G = (V, E, R, W)$ to obtain additional emotional information.", "The graph is constructed as follows: Nodes $V$: The node set $V = \{v_i^k \mid 1 \le i \le N \text{ and } 1 \le k \le M\}$ incorporates the emotion-related utterances.", "Among them, each node $v_i^k$ (abbreviated as $v_i$) is initialized with the extracted feature $u_i$ spoken by the speaker $s_k$.", "Adjacency Matrix $E$: $E$ represents the adjacency matrix between emotion-related utterances.", "$e_{ij} \in E$ represents the edge from the utterance node $v_i$ to $v_j$.", "Relations $R$: Following previous work (Ghosal et al., 2019; Yang et al., 2021), edge relations are determined by the relative occurrence positions of $u_i$ and $u_j$ in the conversation (with three types of relations, namely Before , Current , and After ) and by both speakers of the constituting utterance nodes, as shown in Figure 3.",
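The emotional-turn division admits a simple greedy reading: open a new turn as soon as a speaker would appear twice. This is our interpretation of the stated constraint, not the paper's released code, and the input format is assumed:

```python
def split_emotional_turns(utterances):
    """Greedy division of a dialogue into emotional turns: a new turn starts
    whenever a speaker would utter a second sentence within the current turn.
    `utterances` is a list of (speaker_id, text) pairs."""
    turns, current, seen = [], [], set()
    for speaker, text in utterances:
        if speaker in seen:          # speaker already spoke in this turn
            turns.append(current)    # close the turn and open a new one
            current, seen = [], set()
        current.append((speaker, text))
        seen.add(speaker)
    if current:
        turns.append(current)
    return turns

# The Figure 1 example: 4 speakers, 7 utterances.
dialogue = [(1, "u1"), (2, "u2"), (3, "u3"), (4, "u4"), (1, "u5"), (3, "u6"), (4, "u7")]
print(len(split_emotional_turns(dialogue)))  # 2 turns: u1-u4 and u5-u7
```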
"Edge Weights $W$: Based on our assumptions, the edge weights rely on similarity-based attention; the weights $\alpha_{ij} \in W$ are calculated as follows: $\alpha_{ij} = \mathrm{softmax}(u_i^T W [u_{i-p}, \ldots, u_{i+f}])$, for $j = i-p, \ldots, i+f$.", "In the static graph network, the relationship between an utterance and its speaker can likewise be represented as $c \cdot Freq$.", "The speaking frequency $Freq$ of a speaker denotes the number of utterances by that speaker in the whole conversation.", "$c$ is a speaking coefficient used to avoid over-fitting.", "Time Division: Before feeding it into TDGCN, we need to divide $E$ into $T$ steps: $E = \{E_t \mid 1 \le t \le T\}$.", "At time step $t$, the divided matrix $E_t$ includes only the edges corresponding to the utterances in emotional turn $t$.", "As shown in Figure 1, four speakers participate in the dialogue with 7 utterances.", "This dialogue has two emotional turns: $u_1$ to $u_4$ and $u_5$ to $u_7$.", "The nodes and edges are constructed in Figure 3.", "We take node $u_3$ as an example.", "The edge $e_{13}$ represents that $u_1$, spoken by $s_1$, appears before $u_3$, spoken by $s_3$, and the influence between them; the self-loop $e_{33}$ represents the influence of the current node $u_3$ on itself.", "Two-Step Graph Update: The graph update mechanism is implemented in two steps in order to better track conversation information and dynamic emotions.", "The update mechanism is calculated as follows: $h_i^{(1)} = \sigma\big(\sum_{r \in R} \sum_{j \in N_i^r} \frac{\alpha_{ij}}{c_{i,r}} W_r^{(1)} u_j + \alpha_{ii} W_0^{(1)} u_i\big)$, $h_i^{(2)} = \sigma\big(\sum_{j \in N_i^r} W^{(2)} h_j^{(1)} + W_0^{(2)} h_i^{(1)}\big)$, (1) where $\alpha_{ij}$ and $\alpha_{ii}$ are the edge weights and $N_i^r$ denotes the neighboring indices of node $v_i$ under relation $r \in R$.", "$c_{i,r}$ can be set in advance, such as $c_{i,r} = |N_i^r|$.", "$\sigma$ is the ReLU activation function, while $W_r^{(1)}$, $W_0^{(1)}$, $W^{(2)}$, and $W_0^{(2)}$ are learnable parameters.", "Utilizing the two-step graph update mechanism, we can effectively normalize the local neighborhood through neighborhood connections and enable self-dependent feature transformation through self-connections, thereby extracting further information (Ghosal et al., 2019).", "(Figure 3: Transformation of dynamic emotions from $t_1$ to $t_2$, as well as the various types of edges between different nodes, e.g., node $u_3$.)", "We call these two steps RGCONV and GCONV, respectively, in Figure 2.", "3.3 TDGCN: Previous dynamic graphs were mostly used in spatio-temporal traffic networks with separated spatial and time features (Guo et al., 2019; Zhao et al., 2020).", "However, given that the utterance nodes are time-related and change frequently, we implement the dynamic graph by updating a weight matrix through a GRU and updating the hidden layer through the two-step graph: $M_t^{(l)} = \mathrm{GRU}(H_{t-1}^{(l)}, M_{t-1}^{(l)})$, $H_t^{(l)} = \mathrm{GCONV}(\mathrm{RGCONV}(E_t, H_{t-1}^{(l)}, M_t^{(l)}))$, (2) where $t \in [1, T]$ and $l \in [1, L]$ ($L$ generally equals 2) denote the time and layer index, respectively.", "$M_t^{(l)}$ represents the weight matrix updated by the GRU.", "$H_t^{(0)}$ is equal to the node features $V$.
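A minimal PyTorch sketch of one TDGCN step following Eq. (1)-(2) is given below. The tensor shapes, the way the GRU state enters the convolutions, and per-relation normalized adjacency matrices are our assumptions; this is an illustration of the mechanism, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TDGCNLayer(nn.Module):
    """One TDGCN layer: a GRU cell evolves a per-node state over emotional turns,
    then a relational graph convolution (RGCONV) and a plain graph convolution
    (GCONV) update the node features, as in Eq. (1)-(2)."""

    def __init__(self, dim, num_relations):
        super().__init__()
        self.gru = nn.GRUCell(dim, dim)  # M_t = GRU(H_{t-1}, M_{t-1})
        self.w_rel = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(num_relations)])
        self.w_self1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)
        self.w_self2 = nn.Linear(dim, dim, bias=False)

    def forward(self, H, M, adj_per_rel):
        # adj_per_rel: one (num_nodes, num_nodes) normalized adjacency per
        # relation, containing only edges of the current emotional turn.
        M = self.gru(H, M)
        h1 = self.w_self1(M)                       # self-connection term
        for A, w in zip(adj_per_rel, self.w_rel):  # RGCONV: sum over relations
            h1 = h1 + A @ w(M)
        h1 = F.relu(h1)
        A_all = sum(adj_per_rel)                   # GCONV over all edges
        return F.relu(A_all @ self.w2(h1) + self.w_self2(h1)), M

# Toy usage: 4 utterance nodes, 3 relations (Before / Current / After).
layer = TDGCNLayer(dim=16, num_relations=3)
H0, M0 = torch.randn(4, 16), torch.zeros(4, 16)
adj = [torch.eye(4) for _ in range(3)]
H1, M1 = layer(H0, M0, adj)
```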
The hidden state $H_t^{(l)}$ of the $l$-th layer at time step $t$ can be divided into $n_t$ parts: $H_t^{(l)} = \{(h_x)_t^{(l)}\}$, where $x$ represents the speaker index.", "By concatenating a person's sensibility with the corresponding emotion-related content $(h_x)_t^{(l)}$, we obtain the dynamic emotion embedding: $(e_x)_t^{(l)} = \langle (H_x)^S; (h_x)_t^{(l)} \rangle$.", "An emotion classifier over the fused representation then yields the emotion loss: $P_e = \mathrm{softmax}(W_l e_{t+1})$, $\mathcal{L}_{emo} = -\log(P_e[e])$. (4)", "3.4 Decoder and Loss: We adopt prompt tuning (Lester et al., 2021) to generate responses, which is a lightweight alternative to fine-tuning for the generation task and keeps the language model parameters unchanged while optimizing the prompt.", "Prompt tuning achieves comparable performance in the full-data setting by learning only a small proportion of parameters.", "The representation $e_{t+1}$ is first transformed by a linear transformation into a prompt.", "We then obtain the input of the empathy decoder $Z = [X; \mathrm{prompt}; Y]$, where $X$ and $Y$ represent the context and the target response, respectively.", "We use the standard maximum likelihood estimate to optimize the response prediction, and we obtain another loss function through the decoder: $\mathcal{L}_{res} = -\log p(Y \mid R_{generate})$. (5)", "Finally, all the parameters are jointly trained end-to-end to optimize the listener selection and response generation by minimizing the sum of the two losses: $\mathcal{L} = \mathcal{L}_{emo} + \mathcal{L}_{res}$. (6)", "4 Experiments. 4.1 Dataset. Data Pre-Processing: The MPED data is obtained from an online peer-to-peer support platform, where users can express their emotions by chatting with others who have similar experiences.", "Generally, we permit the words of each utterance to range between 3 and 100, excluding emojis, which are stored separately (emotional utterances have been incorporated in MPED but not in our proposed baseline, since we focus on unimodal text in this study).", "We discard artificially repeated characters, correct spelling errors, and standardize network language.", "Developing a dialogue model requires more ethical considerations.", "Therefore, we focus our analysis on help-seeking or emotional comfort-seeking conversations.", "As a result, conversations with sensitive contents are filtered out.", "In the end, we further ensure that no private information is included.", "It is quite beneficial that emotional category labels are available, which saves a lot of manual work.", "We have confirmed their accuracy and constructed the MPED dataset with various kinds of emotions.", "We further classify these emotions for simplicity into 10 types, that is, happy , sad , calm , angry , excited , exhausted , supportive , bored , nervous , and thankful .", "MPED includes single-turn and multi-turn dialogue data, called MPED-S and MPED-M.", "We randomly split them into 80% training set, 10% validation set, and 10% testing set.", "Empathetic Pre-Processing: Given that empathy is a complex feeling, gathering empathetic data is challenging.", "We first remove the conversations that do not contain empathetic posts, such as games, and so forth.", "Then, we design a three-point scale (0 to 2) and evaluate empathy using three criteria: Emotional Reactions (expressing warmth and compassion), Interpretation (articulating understanding of feelings and experiences), and Exploration (exploring feelings and experiences not stated in the post).", "Considering that manually screening dialogues is infeasible on large-size data, we filter out simple replies and label single-turn dialogues.", "In the end, three degrees of empathy are included in MPED, that is, weak , moderate , and strong .",
"The hyper-parameters in our approach are set as follows.", "The input embeddings are 300-dimensional pre-trained 840B GloVe vectors.", "The speaking coefficient c is 5.", "The learning rate is 0.003 and the batch size is 16.", "The dropout rate is 0.6, while the loss weight is 5e-4.", "Automatic Evaluation Criteria: We calculate the AVG BLEU (average of BLEU-1, -2, -3, -4) (Papineni et al., 2002) and ROUGE-L (Lin, 2004) scores as evaluations of model response generation; these have often been used to compare the system-generated response against the human-gold response in generation tasks.", "Human Evaluation Criteria: We randomly collect 100 dialogue samples and their corresponding generations from each model.", "Then, we assign human annotators to rate each response between 1 and 5 on three distinct attributes: Empathy : assesses whether the speaker of the response understands the feelings of others and fully manifests it; Relevance : evaluates whether the generated response is relevant to the dialogue context and consistent with the expressed information or background knowledge; Fluency : measures whether the response is smooth and grammatically correct.", "MReCoSa: A context-sensitive model with multi-head self-attention (Zhang et al., 2019).", "Multi-Trans: This multi-task model learns emotion classification and dialogue generation at the same time (Rashkin et al., 2018).", "MoEL: This model (Lin et al., 2019) combines the response representations from multiple emotion-specific decoders.", "EmpDG: This method (Li et al., 2020b) exploits coarse-grained and fine-grained emotions by an adversarial learning framework.", "Caire: This method (Lin et al., 2020) fine-tunes a large-scale pre-trained language model with multiple objectives: response language modeling, response prediction, and dialogue emotion detection.", "Random Prompt: We built a network with random values for the prompt according to Lester et al. (2021).",
(2021).", "We describe the variants of our model below: Graph-Based: This simple model uses a graph-based model to build the empathetic dialogue graph of multi-party.", "Two-Step Graph: This model adopts a graph network with two-step graph update.", "SDMPED without Sensibility (SDMPED w/o S): This model ignores the sensibilities of speakers but maintains a TDGCN structure.", "SDMPED: Our final model combines dynamic emotions with static sensibilities to produce empathy responses.", "Automatic Evaluation Results According to the experimental results shown in Table 1, our model SDMPED achieves the highest scores under most metrics compared with other baselines.", "The noticeable improvement indicates the effectiveness of SDMPED on empathetic expressions of multi-party.", "Since multi-party dialogues are not time-sequential 303 Model MPED-M MPED-S Metrics ROUGE-L AVG BLEU Emp.", "and multi-turn dialogues need to consider the im-pact of each turn, SDMPED performs better than the models MoEL, EmpDG, and Caire that are designed solely for two-party dialogue.", "Compared with the Random prompt model, our model has been greatly improved, which demonstrates that our emotional prompt design plays an important role.", "Given that persons have different sensibilities, adding the characteristics of different people to explore their conversations helps improve the performance.", "Thus, SDMPED obtains a performance improvement on the basis of SDMPED without Sensibility.", "Human Evaluation Results Table 1 shows that SDMPED has achieved good performance in Empathy , Relevance , and Fluency .", "Our model is effective in capturing different emotional changes between multiple speakers and generating appropriate responses.", "MoEL and EmpDG are more inclined towards the characteristics of two-party dialogues, and thus cannot fully adapt to the new situation of multi-party.", "Random prompt and Caire are basically as good as our model in Fluency , however their Empathy and Relevance are inferior.", "These two models are pre-trained transfer learning models, and the generated responses are fluent and grammatical while being simple and general.", "We perform an ablation study to better understand the contributions of the main parts of our model.", "As shown in Table 2, the performance becomes notice-0 5 10 15 20 25 30 10 11 12 13 14 15 16 17 18 3 4 5 6 7 8 8+ N u m b er o f W o r d s M e t r i c s Number of Speakers Avg.", "ably worse, especially in the multi-turn dialogue data, after we remove the sensibility component.", "The degree of empathy for empathetic dialogues depends on the emotional tone at that time and the speakers' own abilities of perspective-taking, so studying sensibilities can help better investigate the responses generated by different people.", "According to the comparison of SDMPED without Sensibility and Two-Step Graph, emotions of people change at every moment, and updating the graph structure at each emotional turn is particularly necessary.", "After removing the two-step graph update mechanism, we find that the results of Graph-Based have further declined, which indicates that the two-step graph convolution process can better extract empathetic and dialogue features.", "We investigate the effects of different numbers of speakers and tokens.", "When 37 speakers are available, as shown in Figure 4, the model maintains fairly stable results, indicating that it can handle multiple-party empathetic dialogues effectively.", "However, the results decline as the speaker number continues to 
increase.", "The reason for the drop is 304 Speaker Sensibility Utterance Context Speaker 1 I am alone and have no friends now .", "that our conversations are typically concentrated between 3 to 5 people, and those with more than 7 people contain little content per speaker.", "In Figure 5, we compare our model with two prompt embedding methods and different numbers of emotion classification categories.", "The comparison between the orange and blue curves shows that dividing emotions into 10 categories gives better results than the 6 and 60 categories (6 and 60 categories are similar to the number of categories in MELD and ED datasets).", "Clearly, dividing emotions into 10 categories and placing a prompt matrix with 2 tokens before the response can yield promising performance.", "We apply different speakers' sensibilities to the empathy decoder in the same multi-turn conversation context and obtain results based on MPED in Table", "3. When presented with Speaker 1 's loneliness and depression, the following four speakers are willing to provide support, but they come up with different responses due to their different sensibilities.", "Speaker 2 is relatively unable to appreciate the emotions of Speaker 1 and jokes that he/she can find a virtual friend to hug; Speaker 3 expresses warmth and Speaker 4 and Speaker 5 comfort Speaker 1 and express their understanding.", "They also look forward to the future by suggesting that Speaker 1 can do something that helps distract himself/herself.", "We have introduced a novel task called MultiParty Empathetic Dialogue Generation.We have proposed a model called SDMPED suitable for the characteristics of the task.", "Our experiments have demonstrated that SDMPED is superior to other approaches on MPED.", "Future work can explore related issues such as integrating empathy into the dialogues, combining emojis and responses, guiding the active development of conversation.", "Data Collection.", "We collected publicly available data and removed all personal information (phone, email, postcode, location, and any other privacy information).", "Any potentially sensitive dialogues were completely removed from our data.", "No treatment recommendations or diagnostic claims were given in this study.", "This research is approved and monitored by the University's Institutional Review Board and performed in accordance with the principle of GDPR (General Data Protection Regulation 2 ) as follows: data processing shall be lawful if it is necessary for the performance of a task carried out in the public interest.", "Additionally, this study is explored not for any commercial use while merely for scientific 2 https://gdpr-info.eu/.", "Annotator Compensation.", "We resorted to the Amazon Mechanical Turk crowdsourcing platform to evaluate three artificial indicators (i.e., Empathy, Relevance, and Fluency).", "The crowdworkers were assessed with 20 random sentences, which averagely took 5-6 minutes to accomplish, and compensated with $0.8 per HIT (Human Intelligence Task).", "The compensation was determined based on the US minimum wage of $7.12 per hour.", "Potential Misuse.", "Our model is less likely to contribute to depression of users or generate non-empathic expressions (e.g., discrimination, criticism, and antagonism), since the model is based on the assumption that everyone has varying degrees of sensibility and empathy.", "Additionally, this model removes any sensitive information of users, and it is basically impossible to infer their personalities, preferences, 
interests, or other private information from the generated dialogues.", "Acknowledgements : This work was supported in part by the National Natural Science Foundation of China (No. 62106091) and Shandong Provincial Natural Science Foundation (No. ZR2021MF054)." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "result", "objective", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Hierarchical text classification is a challenging subtask of multi-label classification due to its complex label hierarchy.", "Existing methods encode text and label hierarchy separately and mix their representations for classification, where the hierarchy remains unchanged for all input text.", "Instead of modeling them separately, in this work, we propose Hierarchy-guided Contrastive Learning (HGCLR) to directly embed the hierarchy into a text encoder.", "During training, HGCLR constructs positive samples for input text under the guidance of the label hierarchy.", "By pulling together the input text and its positive sample, the text encoder can learn to generate the hierarchy-aware text representation independently.", "Therefore, after training, the HGCLR enhanced text encoder can dispense with the redundant hierarchy.", "Extensive experiments on three benchmark datasets verify the effectiveness of HGCLR.", "Hierarchical Text Classification (HTC) aims to categorize text into a set of labels that are organized in a structured hierarchy (Silla and Freitas, 2011).", "The taxonomic hierarchy is commonly modeled as a tree or a directed acyclic graph, in which each node is a label to be classified.", "As a subtask of multi-label classification, the key challenge of HTC is how to model the large-scale, imbalanced, and structured label hierarchy (Mao et al., 2019).", "The existing methods of HTC have variously introduced hierarchical information.", "Among recent researches, the state-of-the-art models encode text and label hierarchy separately and aggregate two representations before being classified by a mixed feature (Zhou et al., 2020; Deng et al., 2021).", "As denoted in the left part of Figure 1, their main goal is to sufficiently interact between text and structure to * Corresponding author.", "achieve a mixed representation (Chen et al., 2021), which is highly useful for classification (Chen et al., 2020a).", "However, since the label hierarchy remains unchanged for all text inputs, the graph encoder provides exactly the same representation regardless of the input.", "Therefore, the text representation interacts with constant hierarchy representation and thus the interaction seems redundant and less effective.", "Alternatively, we attempt to inject the constant hierarchy representation into the text encoder.", "So that after being fully trained, a hierarchy-aware text representation can be acquired without the constant label feature.", "As in the right part of Figure 1, instead of modeling text and labels separately, migrating label hierarchy into text encoding may benefit HTC by a proper representation learning method.", "To this end, we adopt contrastive learning for the hierarchy-aware representation.", "Contrastive learning, which aims to concentrate positive samples and push apart negative samples, has been considered as effective in constructing meaningful representations (Kim et al., 2021).", "Previous work on contrastive learning illustrates that it is critical 7109 to building challenging samples (Alzantot et al., 2018; Wang et al., 2021b; Tan et al., 2020; Wu et al., 2020).", "For multi-label classification, we attempt to construct high-quality positive examples.", "Existing methods for positive example generation includes data augmentation (Meng et al., 2021; Wu et al., 2020), dropout (Gao et al., 2021), and adversarial attack (Wang et al., 2021b; Pan et al., 2021).", "These techniques are either unsupervised or task-unspecific: the generation of positive samples has no 
relation to the HTC task and is thus insufficient for acquiring hierarchy-aware representations.", "As mentioned, we argue that both the ground-truth labels and the taxonomic hierarchy should be considered for the HTC task.", "To construct positive samples that are both label-guided and hierarchy-involved, our approach is motivated by a preliminary observation.", "Notice that when we classify text into a certain category, most words or tokens are not important.", "For instance, when a paragraph of news reporting on a recent sports match is classified as basketball, a few keywords like NBA or backboard have large impacts while the game result has less influence.", "So, given a sequence and its labels, a shortened sequence that keeps only a few keywords should maintain the labels.", "In fact, this idea is similar to adversarial attack, which aims to find the important tokens that affect classification most (Zhang et al., 2020).", "The difference is that adversarial attack tries to modify important tokens to fool the model, whereas our approach modifies unimportant tokens to keep the classification result unchanged.", "Under such observation, we construct positive samples as pairs of input sequences and their shortened counterparts, and propose Hierarchy-Guided Contrastive Learning (HGCLR) for HTC.", "In order to locate keywords under given labels, we directly calculate the attention weight of each token embedding on each label, and tokens with weight above a threshold are considered important to the according label.", "We use a graph encoder to encode the label hierarchy and output label features.", "Unlike previous studies with GCN or GAT, we modify a Graphormer (Ying et al., 2021) as our graph encoder.", "Graphormer encodes graphs by Transformer blocks and outperforms other graph encoders on several graph-related tasks.", "It models the graph from multiple dimensions and can be customized easily for the HTC task.", "We propose Hierarchy-Guided Contrastive Learning (HGCLR) to obtain hierarchy-aware text representation for HTC.", "To our knowledge, this is the first work that adopts contrastive learning for HTC.", "For contrastive learning, we construct positive samples by a novel approach guided by the label hierarchy.", "The model employs a modified Graphormer, which is a new state-of-the-art graph encoder.", "Experiments demonstrate that the proposed model achieves improvements on three datasets.", "Existing work for HTC can be categorized into local and global approaches based on their ways of treating the label hierarchy (Zhou et al., 2020).", "Local approaches build classifiers for each node or level, while global ones build only one classifier for the entire graph.", "Banerjee et al. (2019) builds one classifier per label and transfers parameters of the parent model to child models.", "Wehrmann et al. (2018) proposes a hybrid model combining local and global optimizations.", "Shimura et al.
(2018) applies CNN to utilize the data in the upper levels to contribute to categorization in the lower levels.", "The early global approaches neglect the hierarchical structure of labels and view the problem as flat multi-label classification (Johnson and Zhang, 2015).", "Later on, some work tries to coalesce the label structure by recursive regularization (Gopal and Yang, 2013), reinforcement learning (Mao et al., 2019), capsule networks (Peng et al., 2019), and meta-learning (Wu et al., 2019).", "Although such methods can capture hierarchical information, recent researches demonstrate that encoding the holistic label structure directly by a structure encoder can further improve performance.", "Zhou et al. (2020) designs a structure encoder that integrates the label prior hierarchy knowledge to learn label representations.", "Chen et al. (2020a) embeds word and label hierarchies jointly in the hyperbolic space.", "Zhang et al. (2021) extracts text features according to different hierarchy levels.", "Deng et al. (2021) introduces information maximization to constrain label representation learning.", "Zhao et al. (2021) designs a self-adaption fusion strategy to extract features from text and label.", "Chen et al. (2021) views the problem as semantic matching and tries BERT as the text encoder.", "Wang et al. (2021a) proposes a cognitive structure learning model for HTC.", "Similar to other work, they model text and labels separately.", "Contrastive learning was originally proposed in Computer Vision (CV) as a weakly-supervised representation learning method.", "Works such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020b) have bridged the gap between self-supervised learning and supervised learning on multiple CV datasets.", "A key component of applying contrastive learning in NLP is how to build positive pairs (Pan et al., 2021).", "Data augmentation techniques such as back-translation (Fang et al., 2020), word or span permutation (Wu et al., 2020), and random masking (Meng et al., 2021) can generate pairs of data with similar meanings.", "Gao et al. (2021) uses different dropout masks on the same data to generate positive pairs.", "Kim et al. (2021) utilizes BERT representation by a fixed copy of BERT.", "These methods do not rely on downstream tasks, while some researchers leverage supervised information for better performance on text classification.", "Wang et al. (2021b) constructs both positive and negative pairs especially for sentiment classification by word replacement.", "Pan et al.
(2021) proposes to regularize Transformer-based encoders for text classification tasks by FGSM (Goodfellow et al., 2014), an adversarial attack method based on gradients.", "Though the methods above are designed for classification, the construction of positive samples hardly relies on the categories, neglecting the connection and diversity between different labels.", "For HTC, the taxonomic hierarchy models the relation between labels, which we believe can help positive sample generation.", "Given an input text $x = \{x_1, x_2, \ldots, x_n\}$, Hierarchical Text Classification (HTC) aims to predict a subset $y$ of the label set $Y$, where $n$ is the length of the input sequence and $k$ is the size of the set $Y$.", "The candidate labels $y_i \in Y$ are predefined and organized as a Directed Acyclic Graph (DAG) $G = (Y, E)$, where the node set $Y$ contains the labels and the edge set $E$ denotes their hierarchy.", "For simplicity, we do not distinguish a label from its node in the hierarchy, so that $y_i$ is both a label and a node.", "Since a non-root label of HTC has one and only one father, the taxonomic hierarchy can be converted to a tree-like hierarchy.", "The subset $y$ corresponds to one or more paths in $G$: for any non-root label $y_j \in y$, the father node (label) of $y_j$ is in the subset $y$.", "In this section, we will describe the proposed HGCLR in detail.", "Figure 2 shows the overall architecture of the model.", "Our approach needs a strong text encoder for hierarchy injection, so we choose BERT (Devlin et al., 2019) as the text encoder.", "Given an input token sequence $x = \{[\mathrm{CLS}], x_1, x_2, \ldots, x_{n-2}, [\mathrm{SEP}]\}$ (1), where [CLS] and [SEP] are two special tokens indicating the beginning and the end of the sequence, the input is fed into BERT.", "For convenience, we denote the length of the sequence as $n$.", "The text encoder outputs a hidden representation for each token: $H = \mathrm{BERT}(x)$ (2), where $H \in \mathbb{R}^{n \times d_h}$ and $d_h$ is the hidden size.", "We use the hidden state of the first token ([CLS]) to represent the whole sequence: $h_x = h_{[\mathrm{CLS}]}$.", "We model the label hierarchy with a customized Graphormer (Ying et al., 2021).", "Graphormer models graphs on the basis of the Transformer layer (Vaswani et al., 2017) with spatial encoding and edge encoding, so it can leverage the most powerful sequential modeling network in the graph domain.", "We organize the original feature for node $y_i$ as the sum of the label embedding and its name embedding: $f_i = \mathrm{label\_emb}(y_i) + \mathrm{name\_emb}(y_i)$.", "Label embedding is a learnable embedding that takes a label as input and outputs a vector of size $d_h$.", "Name embedding takes advantage of the name of the label, which we believe contains fruitful information as a summary of the entire class.", "We use the average of the BERT token embeddings of the label as its name embedding, which also has a size of $d_h$.",
...", "the label as its name embedding, which also has a size of d h .", "Unlike previous work which only adopts names on initialization, we share embedding weights across text and labels to make label features more instructive.", "With all node features stack as a matrix F R k d h , a standard self-attention layer can then be used for feature migration.", "To leverage the structural information, spatial encoding and edge encoding modify the Query-Key product matrix AG in the self-attention layer: A Gij = ( f i WGQ )( f j WGK ) T d h + c ij + b ( y i ,y j ) (4) where c ij = 1 D (cid:80) Dn =1 w e n and D = ( y i , y j ) .", "The first term in Equation 4 is the standard scale-dot attention, and query and key are projected by WGQ R d h d h and WGK R d h d h .", "c ij is the edge encoding and ( y i , y j ) denotes the distance between two nodes y i and y j .", "Since the graph is a tree in our problem, for node y i and y j , one and only one path ( e 1 , e 2 , ..., e D ) can be found between them in the underlying graph G (cid:48) so that c ij denotes the edge information between two nodes and w e i R 1 is a learnable weight for each edge.", "b ( y i ,y j ) is the spatial encoding, which measures the connectivity between two nodes.", "It is a learnable scalar indexed by ( y i , y j ) .", "The graph-involved attention weight matrix AG is then followed by Softmax, multiplying with value matrix and residual connection & layer normalization to calculate the self-attention, L = LayerNorm(softmax(A G )V + F) (5) We use L as the label feature for the next step.", "The Graphormer we use is a variant of the self-attention layer, for more details on the full structure of Graphormer, please refer to the original paper.", "As mentioned, the goal for the positive sample generation is to keep a fraction of tokens while retaining the labels.", "Given a token sequence as Equation 1, the token embedding of BERT is defined as: { e 1 , e 2 , ..., e n } = BERT _ emb( x ) (6) The scale-dot attention weight between token embedding and label feature is first calculated to determine the importance of a token on a label, q i = e i WQ , k j = l j WK , A ij = q i k T j d h (7) The query and key are token embeddings and label features respectively, and WQ R d h d h and WK R d h d h are two weight matrices.", "Thus, for a certain x i , its probability of belonging to label y j can be normalized by a Softmax function.", "Next, given a label y j , we can sample key tokens from that distribution and form a positive sample x .", "To make the sampling differentiable, we replace the Softmax function with Gumbel-Softmax (Jang et al., 2016) to simulate the sampling operation: P ij = gumbel _ softmax( A i 1 , A i 2 , ..., A ik ) j (8) 7112 Notice that a token can impact more than one label, so we do not discretize the probability as one-hot vectors in this step.", "Instead, we keep tokens for positive examples if their probabilities of being sampled exceed a certain threshold , which can also control the fraction of tokens to be retrained.", "For multi-label classification, we simply add the probabilities of all ground-truth labels and obtain the probability of a token x i regarding its ground-truth label set y as: P i = (cid:88) j y P ij (9) Finally, the positive sample x is constructed as: x = { x i if P i > else 0 } (10) where 0 is a special token that has an embedding of all zeros so that key tokens can keep their positions.", "The select operation is not differentiable, so we implement it differently to make sure the whole 
"Details are illustrated in Appendix A.", "The positive sample is fed into the same BERT as the original one: $\hat{H} = \mathrm{BERT}(\hat{x})$ (11), and we get a sequence representation $\hat{h}_x$ from the first token before classification.", "We assume the positive sample should retain the labels, so we use the classification loss of the positive sample as a guidance for the graph encoder and the positive sample generation.", "Intuitively, given a pair of a token sequence and its positive counterpart, their encoded sentence-level representations should be as similar to each other as possible.", "Meanwhile, examples not from the same pair should be farther away in the representation space.", "Concretely, with a batch of $N$ hidden states of positive pairs $(h_i, \hat{h}_i)$, we add a non-linear layer on top of them: $c_i = W_2 \mathrm{ReLU}(W_1 h_i)$, $\hat{c}_i = W_2 \mathrm{ReLU}(W_1 \hat{h}_i)$ (12), where $W_1 \in \mathbb{R}^{d_h \times d_h}$ and $W_2 \in \mathbb{R}^{d_h \times d_h}$.", "For each example, there are $2(N-1)$ negative pairs, i.e., all the remaining examples in the batch are negative examples.", "Thus, for a batch of $2N$ examples $Z = \{z \in \{c_i\} \cup \{\hat{c}_i\}\}$, we compute the NT-Xent loss (Chen et al., 2020b) for $z_m$ as $\mathcal{L}^{con}_m = -\log \frac{\exp(\mathrm{sim}(z_m, \mu(z_m))/\tau)}{\sum_{i=1, i \neq m}^{2N} \exp(\mathrm{sim}(z_m, z_i)/\tau)}$ (13), where $\mathrm{sim}$ is the cosine similarity function $\mathrm{sim}(u, v) = u \cdot v / (\lVert u \rVert \lVert v \rVert)$ and $\mu$ is a matching function: $\mu(z_m) = \hat{c}_i$ if $z_m = c_i$, and $\mu(z_m) = c_i$ if $z_m = \hat{c}_i$ (14).", "$\tau$ is a temperature hyperparameter.", "Following previous work (Zhou et al., 2020), we flatten the hierarchy for multi-label classification.", "The hidden feature is fed into a linear layer, and a sigmoid function is used for calculating the probability.", "The probability of text $i$ on label $j$ is $p_{ij} = \mathrm{sigmoid}(W_c h_i + b_c)_j$ (16), where $W_c \in \mathbb{R}^{k \times d_h}$ and $b_c \in \mathbb{R}^k$ are the weight and bias.", "The classification loss is a binary cross-entropy over all labels, where $y_{ij}$ is the ground truth.", "The classification loss $\hat{\mathcal{L}}^C$ of the constructed positive examples can be calculated similarly via Equation 16 and Equation 18, with $\hat{h}_i$ substituting for $h_i$.", "The final loss function is the combination of the classification loss on the original data, the classification loss on the constructed positive samples, and the contrastive learning loss: $\mathcal{L} = \mathcal{L}^C + \hat{\mathcal{L}}^C + \lambda \mathcal{L}^{con}$ (19), where $\lambda$ is a hyperparameter controlling the weight of the contrastive loss.", "During testing, we only use the text encoder for classification, and the model degenerates to a BERT encoder with a classification head.", "Datasets and Evaluation Metrics: We experiment on the Web-of-Science (WOS) (Kowsari et al., 2017), NYTimes (NYT) (Sandhaus, 2008), and RCV1-V2 (Lewis et al., 2004) datasets for comparison and analysis.", "WOS contains abstracts of published papers from Web of Science, while NYT and RCV1-V2 are both news categorization corpora.", "We follow the data processing of previous work (Zhou et al., 2020).", "WOS is for single-path HTC, while NYT and RCV1-V2 include multi-path taxonomic labels.", "The statistical details are illustrated in Table 1.
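The NT-Xent loss of Eq. (13)-(14) can be written compactly in PyTorch. This is a generic SimCLR-style re-implementation under our naming, not the authors' code:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(c, c_hat, tau=1.0):
    """NT-Xent over a batch of N positive pairs.
    c, c_hat: (N, d) projected representations of originals and positives."""
    n = c.size(0)
    z = F.normalize(torch.cat([c, c_hat], dim=0), dim=-1)  # (2N, d), unit norm
    sim = z @ z.T / tau                                    # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool)              # exclude i == m terms
    sim = sim.masked_fill(mask, float("-inf"))
    # The matching function mu pairs index m with m+N (and m+N with m), Eq. (14);
    # cross-entropy over rows then reproduces -log(exp(pos)/sum(exp(all))).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(4, 16), torch.randn(4, 16), tau=1.0)
```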
"Similar to previous work, we measure the experimental results with Macro-F1 and Micro-F1.", "Implementation Details For the text encoder, we use bert-base-uncased from Transformers (Wolf et al., 2020) as the base architecture.", "Notice that we write the attention layers in Eq. 4 and Eq. 7 as single-head attention, but they can be extended to multi-head attention as in the original Transformer block.", "For Graphormer, we set the number of attention heads to 8 and the feature size $d_h$ to 768.", "The batch size is set to 12.", "The optimizer is Adam with a learning rate of 3e-5.", "We implement our model in PyTorch and train it end-to-end.", "We train the model on the training set, evaluate it on the development set after every epoch, and stop training if the Macro-F1 does not increase for 6 epochs.", "The threshold $\gamma$ is set to 0.02 on WOS and 0.005 on NYT and RCV1-V2.", "The loss weight $\lambda$ is set to 0.1 on WOS and RCV1-V2 and 0.3 on NYT.", "$\gamma$ and $\lambda$ are selected by grid search on the development set.", "The temperature $\tau$ of the contrastive module is fixed to 1, since we achieved promising results with this default setting in preliminary experiments.", "Baselines We select a few recent works as baselines.", "HiAGM (Zhou et al., 2020), HTCInfoMax (Deng et al., 2021), and HiMatch (Chen et al., 2021) are a branch of work that proposes fusion strategies for mixed text-hierarchy representations.", "HiAGM applies soft attention over text features and label features to obtain the mixed feature.", "HTCInfoMax improves HiAGM by regularizing the label representation with a prior distribution.", "HiMatch matches text representations with label representations in a joint embedding space and uses the joint representation for classification.", "HiMatch was the state of the art before our work.", "All approaches except HiMatch adopt TextRCNN (Lai et al., 2015) as the text encoder, so we re-implement them with BERT for a fair comparison.", "Main results are shown in Table 2.",
"
Model                              |  WOS           |  NYT           |  RCV1-V2
                                   | Micro   Macro  | Micro   Macro  | Micro   Macro
Hierarchy-Aware Models
TextRCNN (Zhou et al., 2020)       | 83.55   76.99  | 70.83   56.18  | 81.57   59.25
HiAGM (Zhou et al., 2020)          | 85.82   80.28  | 74.97   60.83  | 83.96   63.35
HTCInfoMax (Deng et al., 2021)     | 85.58   80.05  |   -       -    | 83.51   62.71
HiMatch (Chen et al., 2021)        | 86.20   80.53  |   -       -    | 84.73   64.11
Pretrained Language Models
BERT (our implementation)          | 85.63   79.07  | 78.24   65.62  | 85.65   67.02
BERT (Chen et al., 2021)           | 86.26   80.58  |   -       -    | 86.26   67.35
BERT+HiAGM (our implementation)    | 86.04   80.19  | 78.64   66.76  | 85.58   67.93
BERT+HTCInfoMax (our impl.)        | 86.30   79.97  | 78.75   67.31  | 85.53   67.09
BERT+HiMatch (Chen et al., 2021)   | 86.70   81.06  |   -       -    | 86.33   68.66
HGCLR                              | 87.11   81.20  | 78.86   67.96  | 86.49   68.31
Table 2: Experimental results of our proposed model on several datasets.", "Instead of modeling text and labels separately, our model can make more use of the strong text encoder by migrating hierarchy information directly into the BERT encoder.", "On WOS, the proposed HGCLR achieves 1.5% and 2.1% improvements on Micro-F1 and Macro-F1 respectively compared to BERT, and is better than HiMatch even though the BERT implementation HiMatch builds on (Chen et al., 2021) has far better performance than ours.", "BERT was trained on news corpora, so the base model already has decent performance on NYT and RCV1-V2, outperforming the non-pretrained models by a large margin.", "On NYT, our approach obtains a 2.3% boost on Macro-F1 compared to BERT and a slight increase on Micro-F1, outperforming previous methods on both measurements.", "On RCV1-V2, all baselines hardly improve Micro-F1 and only influence Macro-F1 compared to BERT.", "HTCInfoMax experiences a decrease because its constraint on the text representation may conflict with BERT on this dataset.", "HiMatch performs extremely well on RCV1-V2 in terms of Macro-F1, while our approach achieves state-of-the-art Micro-F1.", "Besides potential implementation differences in the BERT encoder, the RCV1-V2 dataset provides no label names, which invalidates our name embedding for label representation.", "Baselines like HiAGM and HiMatch only initialize label embeddings with label names, so this flaw has less impact on them.", "We discuss name embedding further in the next section.", "The main differences between our work and previous ones are the graph encoder and contrastive learning.", "To illustrate the effectiveness of these two parts, we test our model with them replaced or removed.", "We report the results on the development set of WOS for illustration.", "We first replace Graphormer with GCN and GAT (r.p. GCN and r.p. GAT); the results are in Table 3.", "We find that Graphormer outperforms both graph encoders on this task.", "GAT also involves the attention mechanism, but a node can only attend to its neighbors.", "Graphormer adopts global attention, where each node can attend to all others in the graph, which proves empirically more effective on this task.", "When the graph encoder is removed entirely (r.m. graph encoder), the results drop significantly, showing the necessity of incorporating a graph encoder for the HTC task.",
"The model without the contrastive loss is similar to a pure data augmentation approach, where the positive examples serve as augmented data.", "As shown in the last row of Table 3, on the development set, both the positive pair generation strategy and the contrastive learning framework contribute to the model.", "Our data generation strategy is effective even without contrastive learning, improving the BERT encoder by around 1% on both measurements.", "Contrastive learning can further boost performance by regularizing the text representation.", "We further analyze the effect of incorporating the label hierarchy, the Graphormer, and the positive sample generation strategy in detail.", "Our approach attempts to incorporate the hierarchy into the text representation, which is fed into a linear layer for probabilities as in Equation 16.", "The weight matrix $W_C$ can be viewed as label representations, and we plot their t-SNE projections under the default configuration.", "Since a label and its father should be classified simultaneously, the representation of a label and its father should be similar.", "Thus, if the hierarchy is injected into the text representation, labels with the same father should have representations more similar to each other than to those with a different father.", "
Variants of Graphormer     Micro-F1  Macro-F1
Base architecture           87.46     81.52
 - w/o name embedding       86.40     80.40
 - w/o spatial encoding     86.88     80.42
 - w/o edge encoding        87.25     80.54
Table 4: Performance with variants of Graphormer on the development set of WOS.", "As illustrated in Figure 3, the label representations of BERT are scattered while the label representations of our approach are clustered, which demonstrates that our text encoder can learn a hierarchy-aware representation.", "As for the components of the Graphormer, we validate the utility of name embedding, spatial encoding, and edge encoding.", "As shown in Table 4, all three components contribute to embedding the graph.", "Edge encoding is the least useful among the three components.", "Edge encoding is supposed to model the edge features provided by the graph, but the hierarchy of HTC has no such information, so the effect of edge encoding is not fully realized in this task.", "Name embedding contributes the most among the components.", "Previous work only initializes embedding weights with label names, but we treat them as part of the input features.", "As a result, removing the name embedding leads to the largest drop, which may also explain the poor performance on RCV1-V2.", "5.3.3 Effect of Positive Example Generation", "To further illustrate the effect of our data generation approach, we compare it with a few generation strategies.", "
Figure 4: Two fragments of the generated positive examples.
(a) A high degree of uncertainty associated with the emission inventory for China tends to degrade the performance of chemical transport models in predicting PM2.5 concentrations especially on a daily basis. In this study a novel machine learning algorithm, Geographically Weighted Gradient Boosting Machine (GW-GBM), was developed by improving GBM through building spatial smoothing kernels to weigh the loss function... Tags: CS, Machine Learning
(b) Posterior reversible encephalopathy syndrome (PRES) is a reversible clinical and neuroradiological syndrome which may appear at any age and characterized by headache, altered consciousness, seizures, and cortical blindness... Tags: Medical, Headache", "Dropout (Gao et al., 2021) uses no positive sample generation technique but contrasts on the randomness of the Dropout function using two identical models.",
"Random masking (Meng et al., 2021) is similar to our approach except that the retained tokens are randomly selected.", "Adversarial attack (Pan et al., 2021) generates positive examples by attacking the gradients.", "As shown in Table 5, using a duplicate of the model to form positive examples works to some extent but performs worst.", "Instead of dropping information at the neuron level, random masking drops entire tokens and boosts Macro-F1 by over 1%, indicating the necessity of building hard-enough contrastive examples.", "The adversarial attack can build hard-enough samples by gradient ascent and perturbation in the embedding space.", "But the perturbation is not regularized by the hierarchy or the labels, so it is less effective, since there is no guarantee that the adversarial examples preserve the label.", "Our approach guides the example construction with both the hierarchy and the labels, which fits HTC best and achieves the best performance.", "In Figure 4, we select two cases to further illustrate the effect of labels on positive sample generation.", "In the first case, the word machine strongly indicates that this passage belongs to Machine Learning, so it is kept for the positive example.", "In the second case, syndrome is related to Medical, and PRES occurs several times in documents labeled Headache.", "Because of the randomness of sampling, our approach cannot construct an example with all keywords.", "For instance, learning in case one and headache in case two are omitted in this trial, which makes the contrastive examples harder.", "In this paper, we present Hierarchy-guided Contrastive Learning (HGCLR) for hierarchical text classification.", "We adopt contrastive learning for migrating taxonomic hierarchy information into the BERT encoder.", "To this end, we construct positive examples for contrastive learning under the guidance of a graph encoder, which learns label features from the taxonomic hierarchy.", "We modify Graphormer, a state-of-the-art graph encoder, for better graph understanding.", "Compared to previous approaches, our approach empirically achieves consistent improvements on two distinct datasets and comparable results on another one.", "All of the components we designed are shown to be effective.", "We thank all the anonymous reviewers for their constructive feedback.", "The work is supported by the National Natural Science Foundation of China under Grant No. 62036001 and the PKU-Baidu Fund (No. 2020BD021)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "method", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "method", "other", "other" ]
[ "In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.", "In this paper, we explore training algorithms that instead optimize reward functions that explicitly consider different aspects of the style-transferred outputs.", "In particular, we leverage semantic similarity metrics originally used for fine-tuning neural machine translation models to explicitly assess the preservation of content between system outputs and input texts.", "We also investigate the potential weaknesses of the existing automatic metrics and propose efficient strategies of using these metrics for training.", "The experimental results show that our model provides significant gains in both automatic and human evaluation over strong baselines, indicating the effectiveness of our proposed methods and training strategies.", "1 1 Introduction Text style transfer aims to convert an input text into another generated text with a different style but the same basic semantics as the input.", "One major challenge in this setting is that many style transfer tasks lack parallel corpora, since the absence of human references makes it impossible to train the text style transfer models using maximum likelihood estimation (MLE), which aims to maximize the predicted likelihood of the references.", "As a result, some of the earliest work (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018) on unsupervised text style transfer proposed training algorithms that are still based on MLE by formulating the style transfer models as auto-encoders optimized with reconstruction loss.", "Specifically, during training the model is tasked to generate a style-agnostic encoding and reconstruct the input text based on this encoding with style-specific embeddings or decoders.", "During inference, the model aims to transfer the source 1 Code and data are available at: https://github.", "text style using the target style information.", "While these methods have seen empirical success, they face the inherent difficulty of coming up with a style-agnostic but content-preserving encoding this is a non-trivial task and failure at this first step will diminish style transfer accuracy and content preservation of the final output.", "Another line of work (Xu et al., 2018; Pang and Gimpel, 2019; Luo et al., 2019) proposes training algorithms based on rewards related to the automatic evaluation metrics, which can assess the model performance more directly during training.", "This approach is conceptually similar to training algorithms that optimize models using rewards related to the corresponding evaluation metrics for other NLP tasks, such as machine translation (Shen et al., 2016; Wieting et al., 2019a) or text summarization (Paulus et al., 2018; Li et al., 2019).", "As for unsupervised style transfer, the widely used automatic metrics mainly attend to three desiderata: (1) style transfer accuracy the generated sentence must be in the target style, commonly measured by the accuracy of a style classifier applied to the transferred text, (2) fluency the generated text must be grammatically correct and natural, commonly measured by the perplexity of a language model and (3) content preservation the semantics need to be preserved between the source and target, commonly measured by the BLEU score between the system outputs and source texts.", "Since these automatic metrics only require the system outputs and source texts, they can be used as rewards for training.", "Moreover, the two 
"In particular, the style transfer accuracy reward is used by most of the recent work.", "In this work, we take a closer look at these reward-based training algorithms and aim to identify and address the bottlenecks of these methods.", "Specifically, we focus on two problems: (1) the difficulty of designing an efficient reward for content preservation, and (2) the lack of robustness of the existing automatic evaluation metrics.", "Content preservation is more difficult to measure than style transfer accuracy and fluency because it needs to consider the overlap in semantics between the source text and the system outputs.", "While using the BLEU score between the source text and the system output would be a direct solution (Xu et al., 2018), this approach has an inherent limitation in that $n$-gram-based metrics such as BLEU are sensitive to lexical differences and will penalize modifications that are necessary for transferring the text style.", "In fact, previous work has proposed various proxy rewards for content preservation.", "One of the most popular methods is the cycle-consistency loss (Luo et al., 2019; Dai et al., 2019; Pang and Gimpel, 2019), which introduces a round-trip generation process, where the model generates an output in the target style, and the ability of a reconstruction model to re-generate the original text is used as a proxy for content preservation.", "While this method is more tolerant of lexical differences, the correlation between the reconstruction loss and content preservation can be weak.", "Therefore, we aim to design a reward for content preservation which can directly assess the semantic similarity between the system outputs and input texts.", "Specifically, we note that models of semantic similarity are widely studied (Wieting et al., 2016; Sharma et al., 2017; Pagliardini et al., 2018; Zhang* et al., 2020), and we can leverage these methods to directly calculate the similarity between the system outputs and input texts.", "This renders our method applicable even to unsupervised settings where no human references are available.", "Another key challenge for reward-based training algorithms is that the existing automatic evaluation metrics are not well correlated with human evaluation (Li et al., 2018).", "This poses general risks to the work in this field with respect to model training and evaluation, since these metrics are widely used.", "An important observation we made from our experiments is that style transfer models can exploit the weaknesses of the automatic metrics.", "They do this by making minimal changes to the input texts that are enough to trick the classifier used for style transfer accuracy, while achieving high content preservation and fluency scores due to the high lexical similarity with the input texts.", "Upon identifying this risk, we revisit and propose several strategies that serve as auxiliary regularization of the style transfer models, effectively mitigating the problem discussed above.", "We empirically show that our proposed reward functions can provide significant gains in both automatic and human evaluation over strong baselines from the literature.", "In addition, the problems we identify with existing automatic evaluation metrics suggest that the automatic metrics need to be used with caution, for both model training and evaluation, in order to truthfully reflect human evaluation.",
"where x ( i ) denotes the text and s ( i ) denotes the corresponding style label.", "The objective of the task is to generate (via a generator g ) the output with the target style conditioned on s while preserving most of the semantics of the source x .", "In other words, x = g ( x, s ) should have style s and the semantics of x .", "We define the style as a binary attribute such that s { 0 , 1 } , however, it can be easily extended to a multi-class setting.", "For our generator, we fine-tune a large-scale language model GPT-2 (Radford et al., 2019).", "GPT-2 is pre-trained on large corpora and can be fine-tuned to generate fluent and coherent outputs for a variety of language generation tasks (Wolf et al., 2019).", "Since GPT-2 is a unidirectional language model, we reformulate the conditional generation task as a sequence completion task.", "Namely, as input to the generator, we concatenate the original sentence with a special token which indicates the target style.", "The sequence following the style token is our output.", "We use four reward functions to control the quality of the system outputs.", "The quality of the outputs is assessed in three ways: style transfer accuracy, content preservation, and fluency.", "We attend to each of these factors with their respective rewards.", "Rewards for Style Transfer Accuracy We use a style classifier to provide the supervision signal to the generator with respect to the style transfer accuracy.", "The min-max game between the generator g and the classifier f cls is: min g max fcls E x s [log(1 f cls ( g ( x s , 1 s ) , 1 s ))] + E x s [log f cls ( x s , s ) + log(1 f cls ( x s , 1 s ))] .", "The style transfer accuracy reward for the generator is the log-likelihood of the output being labeled as the target style:", "Following prior work, we use the CNN-based classifier (Kim, 2014) f cls , which takes both the sentence and the style label as input and its objective is to predict the likelihood of the sentence being coherent to the given style.", "Rewards for Content Preservation To ensure that the system outputs still preserve the basic semantics of the source sentences, we use the pre-trained SIM model introduced in Wieting et al. (2019b,a) to measure the semantic similarity between the source sentences and system outputs.", "The SIM score for a sentence pair is the cosine similarity of its sentence representations.", "These representations are constructed by averaging sub-word embeddings.", "Compared to the cycle-consistency loss (Luo et al., 2019; Dai et al., 2019; Pang and Gimpel, 2019), our method is more direct since it doesn't require a second-pass generation.", "It also has advantages over n -gram based metrics like BLEU (Papineni et al., 2002) since it is more robust to lexical changes and can provide smoother rewards.", "In Wieting et al. 
"In Wieting et al. (2019a), SIM is augmented with a length penalty to help control the length of the generated text.", "We use their entire model, SIMILE, as the content preservation reward: $r_{sim}(\hat{x}_{\bar{s}}) = \mathrm{LP}(x_s, \hat{x}_{\bar{s}})^{\beta}\,\mathrm{SIM}(x_s, \hat{x}_{\bar{s}})$ (3), where $\mathrm{LP}(r, h) = e^{1 - \max(|r|,|h|)/\min(|r|,|h|)}$ (4), and $\beta$ is an exponent term controlling the weight of the length penalty, which is set to 0.25.", "We also use the cycle-consistency loss $L_{cyc}$ to bootstrap the training: $L_{cyc}(g) = \mathbb{E}_{x_s}[-\log p_g(x_s \mid g(x_s, 1-s), s)]$ (5).", "Here, $p_g$ is the likelihood assigned by the generator $g$.", "This introduces two generation passes, i.e., $\hat{x}_{\bar{s}} = g(x_s, 1-s)$ and $\hat{x}_s = g(\hat{x}_{\bar{s}}, s)$, while the SIM reward only requires one generation pass, as illustrated in Fig. 1.", "Rewards for Fluency The style transfer accuracy reward and the content preservation rewards do not have a significant effect on the fluency of the outputs.", "Therefore, we again use the pre-trained GPT-2 model, but as a reward model this time.", "To encourage the outputs to be as fluent as the source sentences, we define the fluency reward as the difference in perplexity between the system outputs and the source sentences: $r_{lang}(\hat{x}_{\bar{s}}) = \mathrm{ppl}(x_s) - \mathrm{ppl}(\hat{x}_{\bar{s}})$ (6).", "Here, $\mathrm{ppl}$ denotes the length-normalized perplexity assigned by the language model fine-tuned on the training set.", "As will be further discussed in Section 3.3, we found that using the rewards mentioned above can still result in unnatural outputs.", "Therefore, we additionally use an LSTM-based (Hochreiter and Schmidhuber, 1997) discriminator $f_{adv}$, i.e., an adversarial discriminator, to provide a naturalness reward; its job is to discriminate the system outputs from the real sentences.", "It constructs a min-max game with the generator: $\min_g \max_{f_{adv}} \mathbb{E}_{x_s}[\log(1 - f_{adv}(g(x_s, 1-s)))] + \mathbb{E}_{x_s}[\log f_{adv}(x_s)]$ (7).", "The naturalness reward is the log-likelihood of the outputs being classified as real sentences: $r_{adv}(\hat{x}_{\bar{s}}) = \log f_{adv}(\hat{x}_{\bar{s}})$ (8).", "2.4 Learning", "For each of the rewards above, the corresponding loss term is: $L(g) = -\frac{1}{N}\sum_{i=1}^{N} r(\hat{x}^{(i)}_{\bar{s}})$ (9).", "Here, $N$ is the number of samples in the dataset.", "To train the model, we use the weighted average of the losses defined in the previous section: $L(g) = \lambda_{cls} L_{cls}(g) + \lambda_{adv} L_{adv}(g) + \lambda_{sim} L_{sim}(g) + \lambda_{lang} L_{lang}(g) + \lambda_{rec} L_{rec}(g)$ (10), where $\lambda$ denotes the weight of the corresponding term.", "The setting of $\lambda$ is chosen to make the training stable and to balance the style transfer accuracy and content preservation performance on the development set.", "$L_{rec}$ is the reconstruction loss, i.e., $L_{rec}(g) = \mathbb{E}_{x_s}[-\log p_g(x_s \mid x_s, s)]$ (11).", "We follow a two-stage training procedure.", "We first use the cycle-consistency loss $L_{cyc}$ to bootstrap the training and then fine-tune the model with the rewards we introduced above to improve the output quality.", "In the bootstrap stage, the objective function is $L_{boot}(g) = \lambda_{cyc} L_{cyc}(g) + \lambda_{cls} L_{cls}(g) + \lambda_{rec} L_{rec}(g)$ (12).", "We select the checkpoint with the highest mean of the style transfer accuracy and BLEU on the development set as the starting point for the second training stage.", "In the second stage, the generator is optimized with Eq. 10.", "The classifier $f_{cls}$ for $L_{cls}$ is pre-trained, and the language model for $L_{lang}$ is fine-tuned on the training set.", "During training, the discriminator $f_{adv}$ for $L_{adv}$ is trained against the generator.", "$f_{cls}$ is kept fixed when training on some datasets, while it is trained against the generator on others.",
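For illustration, the SIMILE reward of Eqs. 3-4 reduces to a few lines; this sketch assumes the SIM similarity has already been computed by the pre-trained model, and the function name is ours.

```python
import math

def simile_reward(sim_score, src_len, out_len, beta=0.25):
    """SIMILE content-preservation reward (Eqs. 3-4, sketch).

    sim_score: cosine similarity between the source and output sentence
               embeddings produced by the pre-trained SIM model
    src_len, out_len: token lengths of the source and the system output
    """
    # Eq. 4: the penalty is 1 for equal lengths and decays exponentially
    # as the lengths diverge.
    lp = math.exp(1.0 - max(src_len, out_len) / min(src_len, out_len))
    # Eq. 3: down-weight SIM by the exponentiated length penalty.
    return (lp ** beta) * sim_score
```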
others.", "We select the checkpoint that has the style transfer accuracy and BLEU score similar to that from the first stage and the lowest perplexity on the development set.", "Lastly, since gradients can not be propagated through the discrete samples, we use two approaches to circumvent this problem.", "For the content preservation reward (Eq. 3) and fluency reward (Eq. 6), we use the REINFORCE (Williams, 1992) algorithm to optimize the model, g E x s p g ( x s ) [ r ( x s )] = E x s p g ( x s ) [ g log p g ( x s ) r ( x s )] (13) We approximate the expectation by greedy decoding and the log-likelihood is normalized by sequence length, i.e., 1 L (cid:80) Li =1 log p g ( w i ) , where w i denotes the i -th token of x s and L is sequence length.", "For the style transfer accuracy reward (Eq. 2) and naturalness reward (Eq. 8), we use a different approach to generate a continuous approximation of the discrete tokens, which allows gradients to be back-propagated to the generator.", "Namely, taking the style classifier f cls as an example, we use the distribution p i of each token produced by the generator as the input of the classifier.", "This distribution is then multiplied by the classi-fier's word embedding matrix W embed to obtain a weighted average of word embeddings: w i = p i W embed (14) Then, the classifier takes the sequence of w i as its input.", "We chose this method because it provides a token-level supervision signal to the generator, while the REINFORCE algorithm provides sentence-level signals.", "We evaluate our approach on three datasets for sentiment transfer with positive and negative reviews: Yelp review dataset, Amazon review dataset provided by Li et al. (2018), 2 and the IMDb movie review dataset provided by Dai et al. (2019).", "3 We also evaluate our methods on a formality style transfer dataset, Grammarly's Yahoo Answers Formality Corpus (GYAFC), 4 introduced in Rao and Tetreault (2018).", "Although it is a parallel corpus, we treat it as an unaligned corpus in our experiments.", "In order to compare to previous work, 2 https://github.com/lijuncen/ Sentiment-and-Style-Transfer 3 https://github.com/fastnlp/ nlp-dataset 4 https://github.com/raosudha89/ GYAFC-corpus Dataset Style Train Dev Test Yelp Positive 266K 2000 500 Negative 177K 2000 500 Amazon Positive 277K 985 500 Negative 279K 1015 500 IMDb Positive 178K 2000 1000 Negative 187K 2000 1000 GYAFC Formal 52K 2247 500 Informal 52K 2788 500 Table 1: Number of samples in the Train, Dev, and Test splits for each dataset in our experiments.", "we chose the Family & Relationships category for our experiments.", "Datasets statistics are shown in Table", "1. 3.2 Experimental Details Following previous work, we measure the style transfer accuracy using a FastText 5 (Joulin et al., 2017) style classifier trained on the respective training set of each dataset.", "To measure content preservation, we use SIM and BLEU as metrics where self-SIM and self-BLEU are computed between the source sentences and system outputs, while ref-SIM and ref-BLEU are computed between the system outputs and human references when available.", "To measure the fluency we use a pre-trained GPT-2 model to compute the perplexity.", "6 Our generator, GPT-2, has 1.5 billion parameters, and we train on a GTX 1080 Ti GPU for about 12 hours.", "The weights of the loss terms in Eq.", "10 and Eq.", "12 are detailed in Table", "2. 
"Dataset statistics are shown in Table 1.", "3.2 Experimental Details", "Following previous work, we measure the style transfer accuracy using a fastText5 (Joulin et al., 2017) style classifier trained on the respective training set of each dataset.", "To measure content preservation, we use SIM and BLEU as metrics, where self-SIM and self-BLEU are computed between the source sentences and system outputs, while ref-SIM and ref-BLEU are computed between the system outputs and human references when available.", "To measure fluency, we use a pre-trained GPT-2 model to compute the perplexity.6", "5 https://fasttext.cc/", "6 Note that we didn't fine-tune it on the training set.", "Our generator, GPT-2, has 1.5 billion parameters, and we train on a GTX 1080 Ti GPU for about 12 hours.", "The weights of the loss terms in Eq. 10 and Eq. 12 are detailed in Table 2.", "
Dataset  Model            Acc   PPL  BLEU
Yelp     DIRR-CYCLE       91.7  392  18.7
         DIRR-YELP-ADV    95.2  353  20.7
Amazon   DIRR             62.2  205  30.1
         DIRR-AMAZON-ADV  83.2  228  29.0
Table 3: Adversarial results.", "While during our experiments we found that there are other possible configurations which give higher scores with respect to the automatic evaluation metrics, as will be discussed in Section 3.3, we also found that better performance in automatic evaluation doesn't always entail better performance in human evaluation.", "Therefore, we also manually checked the quality of the transferred texts on the development set when we chose the values of the hyperparameters.", "We compare our model with several state-of-the-art methods: DeleteAndRetrieve (D&R) (Li et al., 2018), B-GST (Sudhakar et al., 2019), Cycle-Multi (Dai et al., 2019), Deep-Latent (He et al., 2020), Tag&Gen (Madaan et al., 2020), and DualRL (Luo et al., 2019).", "We also compare our final model, DIRR (Direct Reward), with the model trained only with the first stage (DIRR-CYCLE), as described in Section 2.4.", "Yelp and Amazon are arguably the most frequently used datasets for the sentiment transfer task.", "In our experiments, we found that the automatic evaluation metrics can be tricked on these datasets.", "Table 3 shows the performance of the models which generate adversarial examples.", "Upon identifying these risks, we propose several design options that can effectively mitigate these problems.", "Yelp Dataset For the Yelp dataset, when trained without the adversarial discriminator $f_{adv}$ and the fluency reward, our model (DIRR-YELP-ADV) is able to discover a trivial solution which receives high automatic evaluation scores: injecting a word that carries strong sentiment at the beginning of the output, and making minimal changes (if any) to the source sentences, as illustrated in Table 8.", "This obviously does not meet the objective of content-preserving sentiment transfer and is easily detectable by humans.", "In fact, after we manually removed the first word from each of the output sentences, the transfer accuracy dropped from 95.2 to 58.4.", "To address this problem, we introduced an auxiliary discriminator $f_{adv}$, as discussed above, to penalize the trivial outputs, since they can be easily captured by the discriminator.", "On the other hand, the output perplexity is not sensitive enough to this local feature, so using the fluency reward alone is not sufficient.", "Our final model has much more stable performance when the first word of its output sentences is removed, experiencing only a small drop in style transfer accuracy, from 94.2 to 88.2.", "Amazon Dataset For the Amazon dataset, we found that the style classifier $f_{cls}$ needs to be updated during training to prevent the model from exploiting the data imbalance problem of the dataset.", "Namely, in the Amazon dataset some categories of products appear mostly in negative or positive reviews.", "Table 4: Word frequencies of \"game\" and \"phone\" in positive and negative reviews.", "In Table 4, we show the word frequencies of \"game\" and \"phone\" in both negative and positive reviews.", "In the original dataset, \"game\" mostly appears in negative reviews while \"phone\" mostly appears in positive reviews.", "Therefore, without any prior knowledge, it is very likely that these words will be used as informative features by the sentiment classifier, which makes its predictions unreliable.7", "When our second-stage model is trained with the fixed style classifier, it (DIRR-AMAZON-ADV) learns to exploit this dataset bias by changing the nouns in the original sentences to \"game\" or \"phone\", which achieves better transfer accuracy.",
"We list some examples in Table 5.", "DIRR-AMAZON-ADV generated 291 occurrences of \"game\" in 500 positive reviews, which obviously changes the semantics of the source sentences.", "In order to show that this phenomenon is independent of the classifier architecture, we additionally fine-tuned a BERT-based (Devlin et al., 2019) classifier, which yielded 51.3, 57.6, and 70.4 accuracy on the human references, DIRR, and DIRR-AMAZON-ADV respectively, showing the same pattern as the fastText classifier.", "7 Notice that the style classifier only achieves 43 accuracy on the human references.", "We notice that some two-stage models (Li et al., 2018; Sudhakar et al., 2019; Madaan et al., 2020) and other methods (Yang et al., 2018; Luo et al., 2019) also use a fixed classifier or use words with unbalanced frequencies in different styles as important features, which means that their methods may face the same risk.", "While Li et al. (2018) have pointed out this data imbalance problem of the Amazon dataset, we further demonstrate that a strong generator can even use this discrepancy to trick the automatic metrics.", "We are able to mitigate this problem by updating the style classifier during training, and as shown in Table 4, DIRR is more robust to the data imbalance problem compared to other methods.", "The automatic evaluation results are shown in Table 6.", "We report the performance of the previous methods based on the outputs they provided for a fair comparison and omit those whose results are not available.", "We have the following observations about the results.", "First, compared to our base model (DIRR-CYCLE), the model trained with our proposed rewards has higher fluency while maintaining the same level of content preservation.", "This indicates that the SIM score is as effective as the cycle-consistency loss for content preservation, and that our fluency reward can effectively improve the output fluency.", "Secondly, there exists a trade-off among style transfer accuracy, content preservation, and language fluency.", "While our model does not outperform the previous methods on every single metric, it achieves strong, balanced performance across all three aspects.", "
Model        Acc   PPL  r-BLEU  s-BLEU
Yelp
D&R          89.0  362  10.1    29.1
B-GST        86.0  269  14.5    35.1
Cycle-Multi  87.6  439  19.8    55.2
Deep-Latent  86.0  346  15.2    40.7
Tag&Gen      88.7  355  12.4    35.5
DIRR-CYCLE   91.7  392  18.7    51.2
DIRR         94.2  292  20.7    52.6
Copy         4.1   204  22.5    100.0
Human        70.7  236  99.3    22.5
Amazon
D&R          50.0  233  24.1    54.1
B-GST        60.3  197  20.3    44.6
Tag&Gen      79.9  312  27.6    62.3
DIRR-CYCLE   68.4  374  29.0    60.6
DIRR         62.2  205  30.1    61.3
Copy         21.1  218  40.0    100.0
Human        43.0  209  100.0   40.0
IMDb
Cycle-Multi  77.1  290  N/A     70.4
DIRR-CYCLE   80.5  253  N/A     64.3
DIRR         83.2  210  N/A     64.2
Copy         5.3   147  N/A     100.0
GYAFC
D&R          51.2  226  14.4    27.1
DualRL       62.0  404  33.0    50.8
DIRR-CYCLE   76.2  162  44.1    66.5
DIRR         71.8  145  46.3    59.9
Copy         15.8  147  41.5    98.5
Human        84.5  137  97.8    21.5
Table 6: Automatic evaluation.", "We conducted human evaluation on the Yelp, Amazon, and GYAFC datasets, evaluating style transfer accuracy, content preservation, and fluency separately.", "The first two aspects are rated on a 1-3 scale, while fluency is rated on a 0-1 scale.",
"We randomly select 100 candidates and compare the outputs of the different systems.", "We use Amazon Mechanical Turk8 for human evaluation.", "Each candidate is rated by three annotators, and we report the average scores here.", "8 https://www.mturk.com/", "We did not evaluate the style transfer accuracy for the GYAFC dataset since it is difficult for human annotators to accurately capture the difference between formal and informal sentences.", "The results of our human evaluations are shown in Table 7.", "Table 7: Human evaluation results.", "We additionally report the sample-wise mean score of the metrics, where the fluency scores are scaled up to be consistent with the other scores.", "Our model achieves better overall performance when considering all three evaluation metrics on each dataset.", "Interestingly, we found that the automatic metrics for both style transfer accuracy and content preservation do not accurately reflect performance as measured by human evaluation.", "For example, on the Amazon dataset, although Tag&Gen (Madaan et al., 2020) achieves significantly higher style transfer accuracy based on the automatic metric, our model achieves better performance based on the human evaluation.", "This phenomenon underlines the importance of our findings discussed in Section 3.3: strong neural models can potentially exploit the weaknesses of the automatic metrics.", "We next show an ablation study, demonstrating the effectiveness of the content preservation and fluency rewards in DIRR, and how SIM can be used to replace the cycle-consistency loss.", "We also compare using BLEU versus using SIM as a content-preservation reward, finding that using BLEU results in reduced performance, unstable training, and artifacts in the outputs, which makes the results less natural than those of the model trained with the SIM score.", "To verify that SIM can replace the cycle-consistency loss for content preservation, we fine-tuned DIRR-CYCLE with the SIM reward to produce a new model, DIRR w/o FLU.", "The difference between DIRR and DIRR w/o FLU is that the former is additionally trained with our fluency rewards.", "The results are shown in Table 9 and exhibit two main trends.", "First, we see that DIRR w/o FLU has better fluency and content preservation performance than DIRR-CYCLE, which shows that the cycle-consistency loss can be replaced by the SIM score for content preservation.", "Second, DIRR has better fluency than DIRR w/o FLU, showing the effectiveness of our fluency rewards.", "We next investigate the effectiveness of using SIM as a reward instead of BLEU.", "To do this, we train a model, DIRR-BLEU, which uses BLEU as the content reward, and report the results in Table 9.", "The results show that using BLEU leads to larger content preservation as measured by BLEU, but similar performance when measured by SIM.", "However, performance on style transfer accuracy and fluency decreases.", "We hypothesize that this is because using SIM as a reward gives the model more freedom, allowing the model to have more balanced performance since there is less pressure to copy $n$-grams.", "We also observe more adversarial examples in the outputs of DIRR-BLEU.", "As discussed in Section 3.3, these adversarial examples are generated by injecting a word carrying strong sentiment at the beginning of the output.", "The model trained with BLEU is more likely to generate these outputs, as it will try to avoid breaking up the $n$-grams in the source sentences, allowing for a higher BLEU reward.", "Examples of this behavior are shown in Table 8.", "Notice that the DIRR-BLEU samples start with the word \"great\", which is often enough to fool the classifier, but are unnatural.",
"A main line of work (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018; Xu et al., 2018; John et al., 2019) for text style transfer aims to model the conditional distribution of the data with the encoder-decoder architecture.", "Due to the lack of parallel corpora, inductive biases are designed to make the generation conditioned on both the source sentences and specific styles, such that the model can rewrite the source texts with the target style while still preserving the content of the source texts.", "Efforts have also been made to design training objectives to improve performance.", "For example, back-translation (Zhang et al., 2018; Prabhumoye et al., 2018), denoising auto-encoding (Lample et al., 2019), and the cycle-consistency loss (Luo et al., 2019; Dai et al., 2019; Pang and Gimpel, 2019) have been shown to be effective for improving model performance.", "Li et al. (2018) proposes a retrieval-based pipeline which contains three stages, namely delete, retrieve, and generate.", "Sudhakar et al. (2019) extends this pipeline by using GPT (Radford et al., 2018) as the generator.", "Compared to these methods, we propose a more direct and effective approach to encourage semantics-preserving transfer by directly measuring the semantic similarity of the source texts and system outputs.", "Recently, other works have been proposed for unsupervised text style transfer (Jin et al., 2019; Lai et al., 2019; Wu et al., 2019; Li et al., 2020).", "He et al. (2020) proposes a probabilistic view which models the non-parallel data from two domains as a partially observed parallel corpus.", "Madaan et al. (2020) proposes a tag-and-generate pipeline, which first identifies style attribute markers in the source texts, then replaces them with a special token, and finally generates the outputs based on the tagged sentences.", "Zhou et al. (2020) focuses on exploring the word-level style relevance assigned by a pre-trained style classifier.",
"They propose a reward for content preservation which is based on a weighted combination of the word embeddings of the source texts and system outputs.", "Compared to this reward, our proposed content reward is specifically designed for semantic similarity and pre-trained on large corpora, which makes it more robust across different datasets.", "In this paper, we propose a direct approach to improving content preservation for text style transfer by leveraging a semantic similarity metric as the content reward.", "Using a large pre-trained language model (GPT-2) with our proposed rewards that target the different aspects of output quality, our approach achieves strong performance in both automatic and human evaluation.", "Recently, several semantic similarity metrics (Zhao et al., 2019; Sellam et al., 2020; Gao et al., 2020) based on pre-trained language models have shown promising results.", "Introducing these metrics into our proposed method as the content preservation reward may bring further improvements.", "Moreover, we identify several problems in the commonly used automatic evaluation metrics and datasets, and propose several practical strategies to mitigate these problems, which makes these metrics more effective rewards for model training.", "Considering the weaknesses of the automatic metrics presented in this work, we believe that more rigorous discussion and investigation of the criteria for \"successful transfer\" is essential for this field of work.", "Since existing works mostly rely on model-based metrics to determine the success of style transfer models, methods such as adversarial training could be introduced to make the model-based metrics more robust and faithful indicators of successful style transfer, which would be beneficial for both model training and evaluation." ]
[ "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "objective", "objective", "abstain", "objective", "objective", "method", "abstain" ]
[ "Abstract In our everyday chit-chat, there is a conversation initiator, who proactively casts an initial utterance to start chatting.", "However, most existing conversation systems cannot play this role.", "Previous studies on conversation systems assume that the user always initiates conversation, and have placed emphasis on how to respond to the given user's utterance.", "As a result, existing conversation systems become passive.", "Namely they continue waiting until being spoken to by the users.", "In this paper, we consider the system as a conversation initiator and propose a novel task of generating the initial utterance in open-domain non-task-oriented conversation.", "Here, in order not to make users bored, it is necessary to generate diverse utterances to initiate conversation without relying on boilerplate utterances like greetings.", "To this end, we propose to generate initial utterance by summarizing and chatting about news articles, which provide fresh and various contents everyday.", "To address the lack of the training data for this task, we constructed a novel large-scale dataset through crowd-sourcing.", "We also analyzed the dataset in detail to examine how humans initiate conversations (the dataset will be released to facilitate future research activ-ities).", "We present several approaches to conversation initiation including information retrieval based and generation based models.", "Experimental results showed that the proposed models trained on our dataset performed reasonably well and outperformed baselines that utilize automatically collected training data in both automatic and manual evaluation.", "Conversation 1 systems are becoming increasingly important as a means to facilitate human-computer (cid:3)", "Japan Corporation.", "1 Conversation in this paper refers to open-domain non-task-oriented conversations and chit-chat.", "communication.", "However, most of the studies on conversation systems have been based on the as-sumption that a human always initiates conversation.", "As a result, the systems are designed to be passive ( Yan, 2018), meaning that they keep waiting until they are spoken to by the human and will never speak to the human proactively.", "For example, popular encoder-decoder models ( Sutskever et al., 2014; Vinyals and Le, 2015) are designed to respond to input utterances provided by humans, and it is difficult for them to proactively initiate the conversation.", "Although some systems are able to initiate conversations, they basically adopt template-based generation methods and thus lack diversity.", "This paper investigates generating the very first utterance in a conversation.", "We feel strongly that conversation systems should not always be passive; sometimes, they have to proactively initiate the conversation to enable more natural conversation.", "In addition, it is crucial to be able to initiate conversation in various ways in actual applications, since systems that initiate a conversation by always saying Let's talk about something or Hello are inherently boring.", "We propose a task setting in which the system initiates a conversation by talking about a news topic.", "In this task, the system is provided with 3989 a news post to talk about and uses it to generate the initial utterance of the conversation (Fig. 
1).", "This task is referred to as conversation initiation in this paper.", "We have two primary reasons for using news posts.", "First, sharing and exchanging opinions about the latest news with friends is common in our daily conversations (Purcell et al., 2010) ( e.g, asking something like What do you think about today's news on Trump? ).", "Second, and more importantly, this task setting allows us to proactively generate diverse utterances to initiate conversations by simply using the latest news posts, which include a wide variety of content published daily.", "We created a large-scale dataset for training and evaluating conversation initiation models through a crowd-sourcing service.", "The crowd-sourcing workers were presented with news posts collected from Twitter and asked to create utterances to initiate a conversation about the post.", "The resulting dataset will be released to facilitate future studies at the time of publication.", "We developed several neural models, including retrieval-based and generation-based ones, to empirically compare their performances.", "We also compared the proposed models against baselines that utilize automatically constructed training dataset to investigate the effectiveness of our dataset.", "Both automatic and manual evaluation were used to assess not only the quality but also the diversity of the generated initial utterances.", "The results indicate that the proposed models successfully generated initial utterances for the given news posts, and significantly outperformed the baseline models.", "Our contributions are the following: (cid:15) We investigate the task of conversation initiation, which has been largely overlooked in previous studies.", "(cid:15)", "We construct and release a large-scale dataset for conversation initiation.", "(cid:15)", "We develop several neural models and empirically compare their effectiveness on our dataset.", "There are many existing studies on non-task-oriented conversation systems.", "Research started with rule-based methods (Weizenbaum, 1966; Wallace, 2009) and gradually shifted to statistical approaches (Ritter et al., 2011; Vinyals and Le, 2015), and many follow-up studies have since been undertaken to improve the quality of the generated responses (Hasegawa et al., 2013; Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016b; Serban et al., 2017).", "However, the task of conversation initiation has been largely absent in these studies.", "There have also been efforts to develop systems that can chat with users about specific documents such as Wikipedia articles (Zhou et al., 2018) or reviews (Moghe et al., 2018).", "However, these studies did not investigate how to initiate such conversations, and as a result, their models assume that the initial utterance is always given by users.", "Also, their datasets are designed to be used to train models of multi-turn conversations about the given documents, rather than models of conversation initiation.", "For example, Moghe et al. 
"In contrast, we focus on the conversation initiation task, which those studies have largely overlooked, and develop a large-scale dataset that includes 109,460 utterances for this task (see Section 3).", "Therefore, our work can be considered complementary to the previous studies.", "In an approach that uses images rather than documents, Mostafazadeh et al. (2016) proposed a method of generating questions about an image to initiate conversation.", "Although, like us, they explored initiating conversation, they focused only on generating questions.", "In contrast, we investigate generating types of initial utterances other than questions.", "Also, they investigated a task setting in which users can see the images along with the conversation, while we do not present the news posts to users.", "This difference makes our generation task a bit more complicated (see Section 3).", "Some studies have attempted to make conversation systems more proactive rather than passively waiting for utterances from a user.", "Li et al. (2016c) proposed a system that detects a stalemate in the conversation and then proactively casts a specific response for breaking the stalemate.", "They use the history of the user's utterances to select response candidates.", "Yan et al. (2017) and Yan and Zhao (2018) proposed methods of proactively suggesting the user's next utterance.", "Although these methods have been successfully used in proactive conversation systems, conversation initiation has not been investigated.", "A well-known problem of encoder-decoder-based conversational models is that they tend to generate generic responses such as \"I don't know\" (Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2016).", "Such responses understandably bore users, so much research has focused on generating more diverse responses (Li et al., 2016a; Xu et al., 2018; Baheti et al., 2018).", "We explore the problem of generating diverse initial utterances from a different perspective than other studies.", "In our problem setting, it is not obvious how to go beyond simple template-based systems, which cannot generate diverse utterances.", "We address this problem by generating initial utterances based on news posts, which feature various content and are updated every day.", "This study is complementary to previous attempts at diversification.", "Our method exploits existing neural conversation models, which tend to generate generic responses, as a component.", "The previous diversification methods can be used to improve the initial utterances in our method.", "Question Answering (QA) tasks have long been studied in the research community (Rajpurkar et al., 2016, 2018).", "In recent years, conversational variants of this task such as visual QA (Antol et al., 2015; Das et al., 2017) and conversational QA (Reddy et al., 2018) have been proposed.", "All of these tasks differ from our conversation initiation task since they focus on how to respond to questions.", "Yoshino and Kawahara (2014) proposed an information navigation system that presents users with the contents of news articles through conversation.", "Although this setting is similar to ours, their system always opens the conversation by just presenting the news headline.", "Our study investigates initiating conversation in a more chatty way, and should contribute to making such systems more conversational and attractive.",
"Qin et al. (2018) proposed the task of generating comments about given news articles.", "Although this task is similar to ours, it is not designed to converse with users.", "Our task focuses on conversation and tries to generate initial utterances using news articles (posts).", "In this section, we explain how we constructed the dataset for the task of conversation initiation.", "We then analyze the constructed dataset to provide insights into its effectiveness.", "We first collected 104,960 Japanese news posts from the Twitter account @YahooNewsTopics, which delivers the latest news in the world every day.", "The data were collected between December 31, 2013 and October 31, 2017.", "Some example posts collected from this account are listed in the third column of Table 1.", "We investigate the task setting in which the system opens a conversation about a given news post.", "Here, we presume the post is not presented to the user during the conversation.", "Although letting users see the news posts would be possible, such a setting is not investigated here because our focus is a situation where users converse with the system only by voice.", "Such situations have grown more popular in recent years with the rise of voice-controlled conversation systems such as intelligent assistants (e.g., Siri, Alexa, and Cortana) (Jiang et al., 2015; Sano et al., 2016; Akasaki and Kaji, 2017) and smart speakers (e.g., Amazon Echo and Google Home).", "Therefore, in our task setting, since the user does not always know about the news, it is preferable to first introduce a news summary so as to share the background knowledge before starting the conversation (see Fig. 1).", "In this sense, our task can be understood as a combination of summarization and chit-chat.", "Interestingly, the summarization subtask goes beyond the ordinary one in that we not only compress the content but also generate the text in a chit-chat-like style.", "To construct the dataset, we had crowd workers create the initial utterance of a conversation on the basis of a given news post.", "We instructed workers to not only chat about the news post but also to provide a brief summary of it.", "The workers were asked to use colloquial expressions, because users feel it is strange to be spoken to in literary language.", "We obtained a total of 104,960 pairs of news posts and initial utterances.4", "4 Some news posts (typically emergency news such as earthquakes) were posted more than once; as a consequence, the dataset includes 102,844 unique news posts.", "In the experiments, we took care that the training and test datasets do not include the same news posts.", "Note that we created only the initial utterances (same as Mostafazadeh et al. (2016)) because our focus is how to initiate conversation.5", "5 Of course, it is necessary to continue the subsequent conversation in an actual application, but we leave this setting as future work.", "Here we discuss our investigation of the 104,960 initial utterances.", "Some examples of the utterances are listed in Table 1.", "Most initial utterances first summarize the contents of the news post and then begin to chat about it, as we instructed.", "For subsequent analysis and model design, we divided each initial utterance into sentences and then designated the one with the smallest edit distance from the input news post as the summary part and the rest as the chit-chat part; a sketch of this split is given below.", "The rationale behind this heuristic is that the summary part shares more words with the original news post than the chit-chat part does and consists of just one sentence in most cases.",
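A minimal sketch of this summary/chit-chat split heuristic is shown below; it is our own illustration, assuming Japanese sentence-final punctuation as the delimiter and plain Levenshtein distance.

```python
import re

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def split_initial_utterance(utterance, news_post, delims="。！？"):
    """The sentence closest (in edit distance) to the news post is taken as
    the summary part; all remaining sentences form the chit-chat part."""
    sents = [s for s in re.split(f"(?<=[{delims}])", utterance) if s.strip()]
    summary = min(sents, key=lambda s: edit_distance(s, news_post))
    chitchat = [s for s in sents if s is not summary]
    return summary, chitchat
```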
"Most initial utterances first summarize the contents of the news post and then begin to chat about it, as we instructed.", "For subsequent analysis and model design, we divided each initial utterance into sentences and then designated the one with the smallest edit distance from the input news post as the summary part and the rest as the chit-chat part.", "The rationale behind this heuristic is that the summary part shares more words with the original news post than the chit-chat part and consists of just one sentence in most cases.", "The statistics of the dataset are shown in Table 2.", "For the summary part, as seen in Tables 1 and 2, original news posts are compressed by 32.29% on average and are converted into a colloquial style.", "This indicates that the recruited crowd workers properly extracted the important contents from the input news posts and used them for the summary part.", "Compared with the summary part, the number of words and vocabulary size for the chit-chat part are relatively small (Table 2).", "This is a natural phenomenon since the summary part uses more content words for summarization than the chit-chat part.", "Figure 2: Overview of initial utterance generation by our proposed approaches.", "To clarify how workers created these chit-chats, we randomly sampled 10,000 utterances and manually classified them according to their dialogue acts, as shown in Table 1.", "We found that the majority (92% = (7929 + 273 + 1082) / 10000) are classified into three dialogue acts (IMPRESSION, URGING, and QUESTION).", "The remaining 8% are miscellaneous utterances that do not belong to any of the three dialogue acts.", "Most of the labeled initial utterances are the impressions and opinions of crowd workers about news posts (see the IMPRESSION act).", "Some of them are boilerplates (e.g., Congrats) while others show tremendous diversity (e.g., It makes me want to drink a cold beer on a hot day).", "Interestingly, some workers make an urging (e.g., Let's evacuate quickly) or ask a question (e.g., Have you ever seen a handball game?).", "These acts that attempt to solicit the user's response are important elements for conversation initiation.", "As described in Section 3, most of the initial utterances in the dataset can be divided into a summary part and a chit-chat part.", "Because it is possible to generate these two parts by two separate models or by a single joint model, we investigate both approaches and compare their performance in experiments.", "An overview of our proposed approaches is given in Fig. 2.", "The separate approach utilizes two different models to generate the summary part and the chit-chat part, respectively.", "The summary part is generated by the pointer-generator model, which allows both copying words by pointing to the input sentence and generating words from a fixed vocabulary (See et al., 2017).", "This model is suitable for generating the summary part because it can appropriately select the contents of the input sentence while compressing them to a proper length.", "To generate the chit-chat part, both generation-based and information retrieval (IR)-based methods are investigated.", "We use a common encoder-decoder model (Vinyals and Le, 2015) as the generation-based method (see Separate (Gen) in Fig. 2).",
"Since this model tends to generate generic sentences that lack diversity (Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2016), we also adopt the MMI-antiLM method proposed by Li et al. (2016a) to promote diversity.", "This method uses the following score function, instead of the commonly used log-likelihood, when decoding: $\log P(T \mid S) - \lambda \log U(T)$ (1), where $T$ is an initial utterance and $S$ is a news post.", "$P(T \mid S)$ is the conditional likelihood of $T$ given $S$, and $U$ is a language model.", "In decoding, output candidates are generated using beam search and are then reranked by Eq. 1.", "This model penalizes generic sentences through the $\lambda \log U(T)$ term.", "As the IR-based method, we utilize the embedding of an input news post to retrieve the closest news posts in the training data using cosine distance, and then extract the corresponding chit-chat part (Ritter et al., 2011) (see Separate (IR) in Fig. 2).", "We adopt Smooth Inverse Frequency (SIF)-based embedding (Arora et al., 2017) for inducing news post embeddings.", "This method first calculates a weighted average of the word embeddings in a news post $s$ as: $v_s = \frac{1}{|s|} \sum_{w \in s} \frac{a}{a + P(w)} v_w$ (2), where $a$ is a hyperparameter and $P(w)$ is the unigram probability calculated from the training data.", "Then, it reduces the influence of the first principal component by using the first singular vector $u$ of the sentence embedding matrix: $v_s \leftarrow v_s - u u^{\top} v_s$ (3). This method has demonstrated competitive performance across various tasks (Arora et al., 2017).", "We concatenate the summary part and the chit-chat part of the training data and train a single pointer-generator model, as mentioned in Section 4.1 (see Joint in Fig. 2).", "Unlike the separate approach, this method can be considered multi-task learning of summary and chit-chat part generation.", "Thus, we expect it to generate the initial utterance more precisely by considering the coherence between the summary and the chit-chat parts.", "We examine the effectiveness of this approach through experiments in the following section.",
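As a concrete illustration of Eqs. 2 and 3, here is a minimal NumPy sketch of the SIF embedding computation; the function name, the dict-based inputs, and the default value of a are illustrative assumptions rather than the authors' actual code.

import numpy as np

def sif_embeddings(sentences, word_vecs, unigram_prob, a=1e-3):
    # sentences: list of token lists; word_vecs: dict token -> vector;
    # unigram_prob: dict token -> P(w) estimated on the training data.
    dim = len(next(iter(word_vecs.values())))
    V = np.zeros((len(sentences), dim))
    for i, tokens in enumerate(sentences):
        known = [w for w in tokens if w in word_vecs]
        for w in known:
            # Eq. 2: down-weight frequent words by a / (a + P(w)).
            V[i] += (a / (a + unigram_prob.get(w, 0.0))) * word_vecs[w]
        if known:
            V[i] /= len(known)
    # Eq. 3: remove the projection onto the first singular vector of the
    # sentence embedding matrix (its first principal direction).
    u = np.linalg.svd(V, full_matrices=False)[2][0]
    return V - np.outer(V @ u, u)

The resulting vectors would then be compared by cosine distance to retrieve the training news post closest to the input, as in the Separate (IR) method.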
"We empirically evaluate the performance of the proposed methods on the constructed dataset.", "In addition to the proposed methods, we implemented baselines that do not use labor-intensive labeled data, since carefully preparing the dataset is one of our contributions.", "These baselines generate the summary and chit-chat parts separately in the following way and concatenate them as the output.", "We gathered tweets (news posts) of major news accounts from Twitter and their corresponding replies (regarded as chit-chats).", "Those tweet-reply pairs can be used as pseudo training data for generating the chit-chat part.", "Since we cannot automatically acquire training data for generating the summary part, we output the first sentence of the input news post as the summary part.", "Overall, the following proposed and baseline methods were implemented for comparison: Baseline: Generate the summary part and the chit-chat part by separate models using the pseudo training data collected from Twitter.", "There are three variants of this method for generating the chit-chat part.", "Baseline (IR) and Baseline (Gen) use the IR-based method and the generation-based method, respectively.", "Baseline (Gen+MMI) uses MMI-antiLM (Li et al., 2016a) for decoding.", "Separate: Generate the summary part and the chit-chat part separately using the approach described in Section 4.1 and the dataset described in Section 3.1.", "There are also three variants of this method, the same as for the baselines (Separate (IR), Separate (Gen), and Separate (Gen+MMI), respectively).", "Joint: Generate the summary part and the chit-chat part jointly using the approach described in Section 4.2.", "Table 4 (results of summary part generation): Model / R-1 / R-2 / R-L / D-1 / D-2 / D-S; Baseline: 70.2 / 59.1 / 67.5 / 17.7 / 60.1 / 99.8; Separate: 66.5 / 50.6 / 63.8 / 15.7 / 52.4 / 99.8; Joint: 68.8 / 54.1 / 66.3 / 15.2 / 51.8 / 99.8.", "We divided the 104,960 items of data (news post and initial utterance pairs) into 90,000, 10,000, and 4,960 for training, development, and test data, respectively.", "Input news posts that appear in the training data were removed from the test data.", "Consequently, 4,776 pairs were used as the final test data.", "To train the baseline models, we collected 277,813 tweets and their corresponding replies from six major Japanese news accounts on Twitter (@YahooNewsTopics, @livedoornews, @asahi, @mainichi, @mainichi_jp, and @nhk_news).", "We then divided those pairs into 260,000 and 17,813 for baseline training data and development data.", "We performed tokenization using the Japanese morphological analyzer MeCab (http://taku910.github.io/mecab/) with the IPAdic dictionary (https://ja.osdn.net/projects/ipadic/), and then removed usernames, URLs, and hashtags.", "We used OpenNMT-py (Klein et al., 2017; https://github.com/OpenNMT/OpenNMT-py) to build the models described in Section 4.", "Their hyperparameter settings are given in Table 3.", "We used GloVe (Pennington et al., 2014; https://nlp.stanford.edu/projects/glove/) to learn 300-dimensional word embeddings.", "We trained the word embeddings on a Japanese Wikipedia dump released on February 22, 2018.", "These embeddings were used for acquiring news post embeddings, as described in Section 4.1.", "As discussed in Section 3.2, since the initial utterance can be divided into separate parts that have different properties, we evaluated each part separately to examine the generated initial utterances.", "We automatically divided the generated sentences and reference sentences into summary parts and chit-chat parts, as explained in Section 3.2.", "We used ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004) for evaluating the summary part (denoted as R-1, R-2, and R-L, respectively) and BLEU (Papineni et al., 2002) for evaluating the chit-chat part.", "We use different metrics for each part because ROUGE is often used for summarization tasks while BLEU is used for conversational tasks.", "Since these automatic metrics are insufficient for evaluation (Novikova et al., 2017), we also perform a manual evaluation in Section 5.4.", "To evaluate diversity, we calculate the proportion of distinct unigrams, bigrams, and sentences (D-1, D-2, and D-S, respectively) in the generated initial utterances (Li et al., 2016a).",
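The diversity metrics just described are simple to state precisely; the following sketch, with illustrative function names, shows one way to compute D-1, D-2, and D-S over tokenized outputs.

def distinct_n(utterances, n):
    # D-1 / D-2: proportion of distinct n-grams among all generated n-grams.
    ngrams = [tuple(toks[i:i + n]) for toks in utterances
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def distinct_s(utterances):
    # D-S: proportion of distinct sentences among all generated sentences.
    return len({tuple(toks) for toks in utterances}) / max(len(utterances), 1)

# Example: three generated utterances, already tokenized.
outs = [["that", "sounds", "fun"], ["that", "sounds", "fun"], ["wow"]]
print(distinct_n(outs, 1), distinct_n(outs, 2), distinct_s(outs))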
"Table 4 lists the results for the summary part.", "The baseline method that outputs the first sentence of an input news post achieved higher ROUGE scores than the proposed methods.", "This does not necessarily mean that the proposed methods are poor, because even the state-of-the-art summarization system exceeds such a baseline by only a small margin (See et al., 2017).", "Also, our task has a requirement to convert sentences into colloquial expressions, and the ROUGE metric cannot capture such a subtle difference.", "We perform a deeper investigation into the quality of the generated initial utterances in the next section.", "Regarding diversity, almost all of the generated initial utterances are distinct, as shown in Table 4.", "Table 5 shows the results for the chit-chat part.", "The proposed methods outperformed the baselines in terms of BLEU score.", "Although the baselines use twice as much training data as the proposed methods, their scores were quite low.", "This demonstrates the quality of our dataset.", "The score of Separate (IR) was relatively low among the proposed methods, presumably because the chit-chat parts retrieved from the training data do not always match the content of the input news post.", "We also see that all the BLEU scores of the models are much lower than the ROUGE scores in Table 4.", "In general, although both summarization and chat generation tasks often use automatic evaluation metrics to evaluate generated sentences, the scores tend to be much lower in the chat generation task.", "This is because the answer sentences (utterances) of the chat generation task have more diverse candidates than those of other generation tasks such as machine translation and summarization (Li et al., 2016a,b; Baheti et al., 2018).", "We also examine the diversity of the chit-chat part in Table 5.", "Although the diversity of the IR-based methods was high, their BLEU scores deteriorated considerably.", "Among the generation-based methods, although Separate (Gen+MMI) achieved the highest BLEU score, it lacked diversity.", "In contrast, Joint achieved a reasonable BLEU score while maintaining diversity to some extent.", "Although the diversity of utterances can be quantified automatically, ROUGE and BLEU scores do not always follow human intuition (Novikova et al., 2017; Lowe et al., 2017).", "Therefore, we evaluate the generated initial utterances manually.", "We picked the three proposed models with good performance in the automatic evaluation, along with one baseline, for this manual evaluation.", "300 posts were sampled as the input news posts, and the outputs of the four methods were manually evaluated from two perspectives: 1) Naturalness: Does the utterance naturally initiate conversation?", "and 2) Coherency: Is the content of the utterance coherent with the given news post?", "We recruited crowd workers to score each utterance on a 4-point scale (Agree, Slightly Agree, Slightly Disagree, Disagree).", "Table 6 shows the results of the manual evaluation for Naturalness and Coherency of the generated initial utterances.", "The proposed methods excluding Separate (IR) outperformed Baseline (Gen+MMI) in both perspectives and achieved reasonable scores compared to the human upper bound.", "The scores of Separate (IR) are quite low because the retrieval result does not match the input news post in many cases.", "This reveals that although those sentences have high diversity, their quality as initial utterances is poor.", "Although Baseline (Gen+MMI) achieved high ROUGE scores in Table 4, its style is not colloquial.", "Thus, workers found it odd and lowered their scores.", "In conclusion, it is better to use the generation-based methods for conversation initiation.", "We also evaluated Dullness: Is the given utterance dull or boring?", "We used 15 manually created boilerplate utterances (e.g., Hello., How are you?, Let's talk with me.) rather than Baseline (Gen+MMI) to confirm the effectiveness of utilizing news contents as the initial utterances.",
"Table 7 shows the results of the manual evaluation for Dullness of the generated initial utterances.", "We see that, compared to our proposed methods, the score of the boilerplate baseline is quite high.", "This indicates that using boilerplate utterances for conversation initiation often bores users and possibly leads to early abandonment of the conversation.", "To determine the statistical significance of our results, we performed Wilcoxon signed-rank tests with Bonferroni correction (Wilcoxon, 1945).", "In Table 6, for all combinations except Baseline (Gen+MMI) vs. Separate (IR) and Separate (Gen+MMI) vs. Joint, there were significant differences (p-value < 0.005 (corrected)) in both perspectives.", "Similarly, in Table 7, there were statistically significant differences for all combinations except Separate (IR) vs. Separate (Gen+MMI) and Separate (Gen+MMI) vs. Joint.",
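For readers who want to reproduce this style of significance testing, here is a small sketch using SciPy; the function name, the dict-of-ratings input, and the choice to Bonferroni-correct by multiplying p-values are assumptions for illustration (the tests are paired because all methods are rated on the same sampled posts).

from scipy.stats import wilcoxon

def pairwise_significance(ratings, alpha=0.005):
    # ratings: dict mapping method name -> list of per-item scores,
    # aligned across methods so each comparison is paired.
    methods = sorted(ratings)
    pairs = [(a, b) for i, a in enumerate(methods) for b in methods[i + 1:]]
    significant = {}
    for a, b in pairs:
        _, p = wilcoxon(ratings[a], ratings[b])
        # Bonferroni correction: scale each p-value by the number of tests.
        significant[(a, b)] = min(p * len(pairs), 1.0) < alpha
    return significant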
"Finally, we investigated the initial utterances generated by Separate (Gen+MMI) and Joint.", "Examples are shown in Table 8 (news posts and the corresponding initial utterances; e.g., the news post A parade for the Rio Olympics and Paralympic medalists will be held in October.).", "We found that Separate (Gen+MMI) tended to generate generic utterances (e.g., That's amazing, Get it together) as the chit-chat part that fit any context, even though it uses a diversity-promoting function when decoding.", "In contrast, Joint could generate more diverse chit-chat parts by utilizing content words such as parade and poaching.", "One possible reason for this phenomenon is that the generated summary part acts like an additional condition on $P(T \mid S)$ when decoding the chit-chat part.", "This does not happen with Separate (Gen+MMI), which simply concatenates the outputs of separate models.", "Interestingly, we found that there are some utterances asking a question (third example of Joint in Table 8) or making an urging (fourth example of Separate (Gen+MMI) in Table 8).", "Controlling the model's utterances with such dialogue acts (Wen et al., 2015; Zhao et al., 2017) could make conversation initiation more diverse and attractive.", "We leave this as future work.", "We should note that, although this is a problem common to all generation models, there is a possibility of transmitting false news content (as in the second example of Separate (Gen+MMI) in Table 8) or ethically inappropriate content to users.", "Therefore, when adopting our method in an actual conversation application, we have to pay close attention to this problem.", "In this paper, we proposed the new task of conversation initiation.", "To generate diverse initial utterances that can improve user engagement, we utilized news articles, which provide fresh and varied information every day, and constructed a large-scale dataset using crowd workers.", "To perform conversation initiation, we designed separate and joint approaches, including both IR-based and generation-based methods.", "Empirical experiments showed that the proposed methods outperformed the baselines in both automatic and manual evaluation, and can generate diverse initial utterances that template-based methods cannot produce.", "These results demonstrate the quality of our constructed dataset, which will be released for future studies.", "As a natural next step, we plan to develop a more sophisticated conversation model, which can not only generate initial utterances but also continue the conversation for the given news contents (Yoshino and Kawahara, 2014).", "In that case, depending on the user's interest, the model needs to determine whether to engage in usual chat or talk about the news contents.", "We also plan to improve the proposed method so that it can generate even better initial utterances.", "Since our task has two elements, summarization and chit-chat, the focus of our future work will be a more sophisticated multitask model that considers the relation between them.", "We thank Manabu Sassano for fruitful discussions and comments.", "We also thank the anonymous reviewers." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "other", "objective", "other", "other", "objective", "other", "other", "method", "method", "method", "objective", "abstain", "abstain", "objective", "method", "objective", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "objective", "abstain", "abstain", "objective", "method", "other", "other" ]
[ "Compositionalitythe ability to combine familiar units like words into novel phrases and sentenceshas been the focus of intense interest in artificial intelligence in recent years.", "To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ).", "This dataset maximizes the similarity between the test and train distributions over primitive units, like words, while maximizing the compound divergence the dissimilarity between test and train distributions over larger structures, like phrases.", "Dependency parsing, however, lacks a compositional generalization benchmark.", "In this work, we introduce a gold-standard set of dependency parses for CFQ, and use this to analyze the behavior of a state-of-the art dependency parser (Qi et al., 2020) on the CFQ dataset.", "We find that increasing compound divergence degrades dependency parsing performance, although not as dramatically as semantic parsing performance.", "Additionally, we find the performance of the dependency parser does not uniformly degrade relative to compound divergence, and the parser performs differently on different splits with the same compound divergence.", "We explore a number of hypotheses for what causes the non-uniform degradation in dependency parsing performance, and identify a number of syntactic structures that drive the dependency parser's lower performance on the most challenging splits.", "People understand novel combinations of familiar words in part due to the principle of composition-ality : We expect the meaning of a phrase to be a predictable composition of the meanings of its parts.", "Unlike humans, many neural models fail to Majority of work completed during internship at Element AI, now ServiceNow Research.", "1 Figure 1: An example question from the CFQ dataset, with the associated SPARQL query and dependency parse.", "generalize compositionally; a growing interest in this area has led to novel architectures and datasets designed to test compositional generalization (see 7).", "One recently-introduced semantic parsing dataset, Compositional Freebase Queries (CFQ), consists of English questions with corresponding database queries written in SPARQL.", "Figure 1 shows an example question and SPARQL query.", "To test compositional generalization, CFQ includes test and train sets with a highly similar distribution of primitive units (like words) and increasingly divergent distribution of larger compound units (like phrases).", "The most challenging of these splits, with the highest compound divergence, are dubbed maximum compound divergence (MCD) splits.", "Although CFQ has proven to be a valuable resource, the difficulty of the splits appears to be influenced by factors other than compositional generalization.", "First, some evidence suggests that the complexity of the SPARQL output is in part responsible for CFQ performance (Furrer et al., 2020; Herzig et al., 2021).", "Furthermore, splits of the same compound divergence are not equally difficult.", "One 6482 possible explanation is a difference in the syntactic constructions of different splits; however, this has not yet been explored in CFQ.", "To address these issues, we created a dependency-parsing version of CFQ.", "Using our dataset, we evaluated a state-of-the-art dependency parser for compositional generalization, and used the dependency annotations to identify syntactic structures predictive of parsing failure on each MCD split.", "We found that the dependency parser is more 
robust to increased compound divergence than the semantic parser, but performance still decreased with higher compound divergence.", "We also found the dependency parser, like semantic parsers, varied widely in performance on different splits of the same compound divergence.", "Finally, we found that a small number (seven or fewer) of syntactic constructions seem to drive the difficulty of the MCD splits.", "Our dataset is publicly available on GitHub.", "In this section, we discuss three problems with CFQ, and our motivation for studying compositional generalization in dependency parsing.", "First, CFQ is hard: seq2seq models trained from scratch score at most 12% on the MCD2 and MCD3 sets (Google Research, 2020).", "Because of its difficulty, CFQ may lack the sensitivity to capture small but significant progress in neural modelling of compositionality.", "Second, recent work shows that CFQ's difficulty is in part due to the output representation being raw SPARQL: models perform better when outputs are replaced with compressed versions of SPARQL that are more aligned with the natural-language-like questions (Furrer et al., 2020; Herzig et al., 2021).", "In interpreting performance on CFQ, we might be conflating challenges of compositional generalization with challenges related to the output representation.", "Third, different splits of the same compound divergence vary widely in difficulty: seven of the nine semantic parsers currently listed on the leaderboard perform at least twice as well on MCD1 as on MCD2, despite the splits having the same compound divergence (Google Research, 2020).", "Performance on CFQ is thus heavily influenced by some factor about the splits other than compound divergence.", "Related benchmarks, like COGS (Kim and Linzen, 2020) and CLOSURE (Bahdanau et al., 2020), test a clearly defined set of generalizations (for example, training a noun in subject position and testing in object position).", "CFQ splits, by contrast, optimize a gross metric over the distribution of all syntactic compounds in the dataset.", "This complicates in-depth analyses of CFQ results: for a particular split, it is unclear what syntactic constructions are tested in out-of-distribution contexts.", "Meanwhile, for a particular test sentence, it is unclear which of its syntactic structures caused the model to fail.", "To address the issues with the CFQ semantic parsing benchmark, we studied compositional generalization in syntactic parsing.", "While syntactic parsing is simpler than mapping to a complete meaning representation, a language-to-SPARQL semantic parser must understand the question's syntax.", "For example, to generate the triple", "?x0 ns:film.editor.film M0 in the SPARQL query shown in Figure 1, a semantic parser must first identify that actor is the subject of edit.", "We chose dependency trees as the target syntactic formalism due to the maturity of the universal dependencies annotation standard, the popularity of dependency trees among NLP practitioners, and the availability of popular high-performance software such as Stanza (Qi et al., 2020).", "Importantly, dependency parsing does not require autoregressive models; instead, graph-based dependency parsers independently predict edge labels.", "This different way of employing deep learning for parsing has the additional advantage of allowing us to separate the challenge of compositional generalization from challenges related to autoregressive models' teacher-forcing training.", "Finally, having
gold dependency annotations for CFQ questions enables detailed analysis of the relation between model errors and the syntactic discrepancies featured by the MCD splits.", "CFQ is designed to test compositional generalization by combining familiar units in novel ways.", "To ensure the primitive units are familiar to the learner, CFQ test and train sets are sampled in a way that ensures a low divergence in the frequency distribution of atoms.", "Here, atoms refer to individual predicates or entities (like produced or Christopher Nolan), and the rules used to generate questions.", "To ensure the compounds in test are novel, train and test sets were sampled in a way that ensures higher divergence between the frequency distributions of compounds, weighted to prevent double-counting of any nested compounds which co-occur frequently.", "Keysers et al. (2020) released dataset splits with compound divergence on a scale between 0 (a random split) and .7 (Maximum Compound Divergence, or MCD, splits).", "To train a dependency parser and analyze syntactic structures in the CFQ dataset, we created a corpus of gold dependency parses.", "Because the questions in CFQ are synthetically generated, we were able to write a full-coverage context-free grammar for the CFQ language (see Appendix C).", "Using this grammar, and the chart parser available in Python's Natural Language Toolkit, we generated a constituency parse for each question.", "Finally, we designed an algorithm to map the constituency parses to dependency parses.", "To map from constituency to dependency parses, we wrote a dependency-mapping rule for each production rule in the CFG (Collins, 2003).", "Each dependency rule describes the dependency relation between the elements in the constituent; for example, if the production rule is VP -> V NP, the dependency-mapping rule connects the head of the right-hand node (the head of the NP) as a dependent of the left-hand node (the V), with the arc label OBJ.", "We follow version two of the Universal Dependencies annotation standards (Nivre et al., 2020; www.universaldependencies.org), but simplify the categorization of nominal subjects for active and passive verbs into one category (NSUBJ), and do not include part-of-speech tags in the dataset.", "Our algorithm then recursively walks the constituency tree from bottom to top, mapping non-head children of each node to their syntactic heads and passing the head of each constituent up the tree.", "A number of sentences in the CFQ dataset exhibit dependency structures which cannot be directly read off the constituency parse in this manner: such right-node-raising constructions involve a word without a syntactic head in the immediate constituent.", "For example, in Was Tonny written by and executive produced by Mark Marabella?, the first instance of by is a dependent of Mark Marabella, but its immediate constituent is directed by.", "To handle right-node-raising cases, our dependency-mapping algorithm identifies prepositions with no head in the immediate constituent, and passes them up the tree until they can be attached to their appropriate syntactic head.", "Finally, we performed a form of anonymization on the questions, replacing entities with single-word proper names.", "This reflects the anonymization strategy used in Keysers et al. (2020), and prevents the dependency parser from failing because of named entities with particularly complex internal syntax (for example, Did a Swedish film producer edit Giliap and Who Saw Him Die?).",
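To make the mapping described above concrete, here is a minimal sketch of the recursive head-percolation idea; the tree representation and rule format are illustrative assumptions, since the paper does not specify its implementation.

def to_dependencies(tree, rules):
    # tree: (label, children), where a preterminal's children is a single
    # token, e.g. ('V', ['edit']); rules maps (parent_label, child_labels)
    # -> (head_child_index, arc_labels), with arc_labels[i] labeling the
    # arc from the head to child i (ignored for the head itself).
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return children[0], []          # a token is its own head
    child_labels = tuple(c[0] for c in children)
    head_idx, arc_labels = rules[(label, child_labels)]
    heads, arcs = [], []
    for child in children:              # recurse bottom-up
        h, a = to_dependencies(child, rules)
        heads.append(h)
        arcs.extend(a)
    for i, h in enumerate(heads):       # attach non-head children to the head
        if i != head_idx:
            arcs.append((heads[head_idx], h, arc_labels[i]))
    return heads[head_idx], arcs

# Example: VP -> V NP attaches the NP head as OBJ of the verb.
rules = {('VP', ('V', 'NP')): (0, [None, 'OBJ']),
         ('NP', ('DT', 'N')): (1, ['DET', None])}
tree = ('VP', [('V', ['edit']), ('NP', [('DT', ['a']), ('N', ['film'])])])
print(to_dependencies(tree, rules))
# -> ('edit', [('film', 'a', 'DET'), ('edit', 'film', 'OBJ')])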
"The experiments in this paper are based on the original CFQ splits.", "However, these validation sets are constructed from the same distribution as the test sets; some information about the test distribution is therefore available during training.", "To ensure that the model only had access to the training distribution during the training phase, we followed the suggestion of Keysers et al. (2020) and discarded the MCD validation sets, randomly sampling 20% of the training data to use instead (see Section 5.1 of that paper for more details).", "The resulting splits have 11,968 test sentences and 76,595 train sentences.", "To evaluate the effect of compound divergence on dependency parsing, we used Stanza (Qi et al., 2020), a state-of-the-art dependency parser, on the gold-label dependency parses described in Section 3.", "We trained Stanza five times on each of 22 splits from the CFQ release: one random split (which has a compound divergence of 0), 18 splits with increasing compound divergence (ranging from .1 to .6), and three MCD splits (divergence of .7).", "To evaluate performance on each test set, we used the CoNLL18 shared task dependency parsing challenge evaluation script (CoNLL Shared Task, 2018), which gives a Labeled Attachment Score (LAS) and Content-word Labeled Attachment Score (CLAS), reflecting how many of the total dependency arcs in the test set were correctly labeled, and how many of the arcs connecting content words were correctly labeled, respectively.", "We also measure the proportion of test questions for which every content-word arc was correctly labeled, which we call Whole Sentence Content-word Labeled Attachment Score (WSCLAS).", "This all-or-nothing evaluation scheme for each sentence more closely resembles the exact-match accuracy of semantic parser evaluation.", "(The code to calculate WSCLAS is also available at https://github.com/emilygoodwin/CFQ-dependencies.)",
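As a sketch of what this metric computes, assuming each parse is represented as a set of (dependent, head, label) arcs (this is not the CoNLL evaluation script itself, and the names are illustrative):

def wsclas(gold_parses, pred_parses, is_content):
    # A sentence counts only if every content-word arc is exactly right:
    # same head attachment and same label for every content-word dependent.
    correct = 0
    for gold, pred in zip(gold_parses, pred_parses):
        gold_content = {arc for arc in gold if is_content(arc[0])}
        pred_content = {arc for arc in pred if is_content(arc[0])}
        correct += gold_content == pred_content
    return 100.0 * correct / len(gold_parses)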
divergence is a factor in performance, idiosyncrasies in the individual splits also have large effects on performance.", "Finally, we note that while Stanza was more robust to compound divergence than the semantic parser, it also ranked the splits differently in difficulty.", "Table 1 reproduces mean accuracies from Keysers et al. (2020)'s strongest-performing semantic parser, a universal transformer (Dehghani et al., 2019).", "The universal transformer's exact-match is lower than Stanza's WSCLAS on every MCD split.", "Additionally, while Stanza performed worst on MCD3, the universal transformer and most other semantic parsers in the CFQ leaderboard performed worst on MCD2 (Google Research, 2020).", "In the next sections, we explore what causes the variation in performance on different MCD splits.", "The compound divergence metric treats all compounds of any number of words identically; therefore, the differences between the MCD splits may be driven by differing distributions of compounds of different complexities.", "In this section, we show 6485 that this is not the case.", "We first describe how we characterize syntactic constructions using the dependency annotations.", "1 Figure 3: A dependency parse and two of its subtrees", "We explored differences in the distributions of syntactic constructions by looking at a restricted set of the subtrees of each dependency parse, which we will now describe.", "With respect to any target node in the corpus, we consider a syntactic construction to be any subtree that consists of that target node together with a constituent-contiguous subset of the target node's immediate children.", "Here, constituent-contiguous means the subsets of child nodes which are heads of phrases that are adjacent to one another or to the target node in the string.", "We include only the immediate children in the subtree (excluding their descendants).", "We also replace words with their category label in CFQ: in addition to traditional parts of speech like verb and adjective , the category labels include nominal categories role (which occurs in possessive constructions like mother in Alice's mother), entity for proper nouns, and noun for common nouns.", "For the analyses in this and the following section, we extract every syntactic construction for every dependency parse in our corpus, and compare their complexity .", "We define complexity to be the number of arcs in the subtree, discounting the dummy ROOT arc.", "Two of the subtrees for sentence Did M1 's female actor edit and produce M0 ? are shown in Figure 3 (these subtrees have a complexity of two).", "Table 2 shows the number of unique constructions in each test and train set.", "One possible source of the differences between MCD splits may be that they differ in their distribu-Total", "tions of subtrees at differing complexities.", "In this section, we present two analyses showing that this is not the case.", "In our first analysis, we analyzed the distance between test and train distribution for each split.", "To do this we calculated the Jensen-Shannon (JS) distance between the test and train histograms of syntactic constructions at differing complexities.", "6 2 4 6 8 10 12 Complexity of Construction (Number of Arcs in Subtree) 0.0 0.2 0.4 0.6 0.8 D i s t a n c e Jensen-Shannon Distance of Syntactic Constructions in Test and Train Rand MCD1 MCD2 MCD3 Figure 4: Divergence between test and train of the MCD and random splits.", "The JS distances for constructions of each complexity are plotted in Figure", "4. 
"(Figure 4: Jensen-Shannon distance of syntactic constructions in test and train, by complexity of construction (number of arcs in subtree), for the Rand, MCD1, MCD2, and MCD3 splits.)", "The JS distances for constructions of each complexity are plotted in Figure 4.", "As can be seen in the figure, the distances between test and train are similar for all MCD splits at all subtree complexities.", "Even the MCD1 distances pattern with the other MCD splits, despite the parser performance on MCD1 being more similar to the random split.", "(The JS distance between distributions p and q is $\sqrt{(D(p \| m) + D(q \| m)) / 2}$, where m is the pointwise mean of p and q, and D is the Kullback-Leibler divergence.)", "Thus, differences between the test and train distributions at different complexities cannot explain the MCD splits' differential performance.", "In our second analysis, we examined whether the MCD splits differ in the proportion of untrained subtrees at different complexities.", "The proportions are plotted in Figure 5.", "The MCD splits pattern together, with far more untrained constructions at each complexity than the random split.", "We thus conclude it is unlikely that gross distributional properties of the MCD splits explain the differences in parser performance.", "In the next section, we show that parser mistakes for all splits seem to be driven by a very small number of hard-to-parse subtrees.", "Thus, performance differences between splits likely depend on idiosyncratic interactions between the specific data splits and models.", "To identify syntactic constructions that are predictive of dependency parsing error, we fit a logistic model predicting Stanza's performance on each test question from the question's syntactic constructions.", "Because we trained five randomly initialized versions of Stanza, the model was fit with five instances of each question.", "To encourage sparse subtree feature weights, we used L1 regularization.", "We used 90% of the test set to train the logistic model, and the remaining 10% to test it and select a regularization coefficient of .01.", "We then extracted the subtrees with a coefficient less than or equal to -1.", "Finally, to quantify the effect these trees have on test performance, we removed all the sentences containing the trees for each split, and calculated Stanza's accuracy on the remaining test sentences.",
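A minimal sketch of this analysis with scikit-learn; the feature encoding, the use of the liblinear solver, and the mapping of the paper's regularization coefficient onto sklearn's inverse-strength C are assumptions for illustration.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def error_predictive_subtrees(subtree_features, parsed_correctly, threshold=-1.0):
    # subtree_features: one dict per (question, run) pair, {subtree_id: 1};
    # parsed_correctly: 1 if every content-word arc was right, else 0.
    vec = DictVectorizer()
    X = vec.fit_transform(subtree_features)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    clf.fit(X, parsed_correctly)
    # Subtrees whose presence pushes strongly toward failure get large
    # negative weights; return those at or below the threshold.
    return [name for name, w in zip(vec.get_feature_names_out(), clf.coef_[0])
            if w <= threshold]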
"Table 3 shows the number of subtrees found to be predictive of parsing error, together with the accuracy when those trees are removed from test.", "Removing five subtrees from MCD2's test set improves the accuracy to 92.46% (an increase of 21.05%), and removing seven trees from MCD3's test set improves the accuracy to 93.09% (an increase of 36.33%).", "We thus conclude that the performance degradation of Stanza on higher compound divergence splits is driven by a relatively small number of syntactic constructions.", "Table 4 shows the subtrees most predictive of a dependency parsing error, with their test and train frequencies.", "To quantify the effect of each subtree on the test accuracy, we also report $\Delta = \mathrm{WSCLAS}(T') - \mathrm{WSCLAS}(T)$, where T is the original test set and T' is all test sentences which do not include the construction.", "A positive $\Delta$ means that removing the subtree from the test set improved performance, while a negative $\Delta$ indicates that removing the subtree from the test set degraded performance.", "Subtrees that are predictive of error for a particular split are often missing from train, together with others that share a similar syntactic structure.", "For instance, there is a set of trees that form questions with common nouns as subject and predicate, and a copula verb was appearing to the left of the subject (e.g., Was an art director of Palm County a person?).", "The fourth, fifth, and sixth subtrees in Table 4 are subtrees which form these questions; all three are missing from train for both MCD2 and MCD3, and all are predictive of parser error for these splits.", "(Note that CFQ has two part-of-speech categories which are common nouns: role, like the word mother in the phrase Henry's mother, and a category labeled noun, like person in the phrase a person.)", "In contrast, the MCD1 training set includes one of the subtrees (the fourth in Table 4) and leaves the other two untrained; none are predictive of parser error (with $\Delta$ of 0.0, -0.06, and 0.02, the performance on these trees is close to average for MCD1).", "The model performs better on the untrained trees in MCD1, perhaps because of the similar trees in train; with no evidence of this kind of structure in MCD2 and MCD3, the model struggles.", "(Not shown in Table 4 is the tree with a left-edge copula and simple nouns in both predicate and subject position. This structure was also absent from train in the MCD2 and MCD3 splits, but present in MCD1 train. It was not found to be strongly predictive of errors by the logistic model, likely because it was infrequent in test, occurring 188 times in MCD2 and 208 in MCD3.)", "Another group of subtrees with similar syntactic structure comprises the second and last subtrees in Table 4.", "These coordinate three and four entities in an of-type prepositional phrase, which occurs in phrases like the mother of Alice, Bob, Carl and Dave.", "Both trees are absent from MCD1 train, and both have a large effect on performance for MCD1 ($\Delta$ of 1.29 and 1.33).", "In MCD2, only the tree with four coordinated entities is absent from train, and it is not difficult for the model ($\Delta$ of -0.15, indicating that removing it from test reduces performance); the model is likely able to parse four coordinated entities based on the training examples with three coordinated entities.", "A growing body of work uses CFQ to investigate better models for compositional generalization in semantic parsing (Herzig and Berant, 2021; Guo et al., 2020; Furrer et al., 2020).", "Tsarkov et al.
(2020) also recently released an expanded version of CFQ called *-CFQ, which remains challenging for transformers even when they are trained on much more data.", "Our methodology can easily be applied to *-CFQ at the cost of a straightforward extension of the grammar.", "Other datasets focused on compositional generalization include SCAN (Lake and Baroni, 2018), a dataset of English commands and navigation sequences; gSCAN (Ruis et al., 2020), a successor to SCAN with grounded navigation sequences; and COGS (Kim and Linzen, 2020), where English sentences are paired with semantic representations based on lambda calculus and the UDepLambda framework (Reddy et al., 2017).", "In contrast to CFQ, these datasets challenge models by targeting specific, linguistically motivated generalizations.", "For example, COGS includes tests of novel verb argument structures (like training on a verb in active voice and testing in passive voice), and novel grammatical roles for primitives (like training with a noun in object position and testing in subject position); similarly, SCAN includes splits which test novel combinations of specific predicates (training a predicate jump or turn left in isolation, and testing it composed with additional predicates from train).", "Finally, the CLOSURE benchmark for visual question answering tests systematic generalization of familiar words by constructing novel referring expressions; for example, a cube that is the same size as the brown cube (Bahdanau et al., 2020).", "In this paper, we presented a dependency parsing version of the Compositional Freebase Queries (CFQ) dataset.", "We showed that a state-of-the-art dependency parser's performance degrades with increased compound divergence, but varies on different splits of the same compound divergence.", "Finally, we showed that the majority of the parser failures on each split can be characterized by a small (seven or fewer) number of specific syntactic structures.", "To our knowledge, this is the first explicit test of compositional generalization in dependency parsing.", "We hope that the gold-standard dependency parses that we have developed will be a useful resource in future work on compositional generalization.", "Existing work on syntactic (and in particular dependency) parsing can provide researchers in compositional generalization with ideas and inspiration, which can then be empirically validated using our corpus.", "Finally, our work represents a step forward in understanding the syntactic structures which drive lower performance on MCD test sets.", "Predicting parser performance from the syntactic constructions contained in the question provides a new method for understanding the syntactic structures that can cause parser failure; in future work, similar methods can also be used to better understand failures of semantic parsers on the CFQ dataset.", "This article contributes to compositional generalization research, a foundational concern for neural natural language processing models.", "Breakthroughs in this research might eventually lead to smaller and more efficient models, as well as better performance on
low-resource languages.", "The ethical and societal consequences of these improvements will depend on downstream applications.", "The original CFQ dataset was artificially generated, so there was no process of data collection and therefore no ethics review process.", "The dataset was annotated by the author, so there is no ethics review of the annotation process or demographic information of this population to report.", "We thank Christopher Manning, the Montreal Computational and Quantitative Linguistics lab at McGill University, and the Human and Machine Interaction Through Language group at ServiceNow Research for helpful feedback.", "We are grateful to ServiceNow Research for providing extensive compute and other support.", "We also gratefully acknowledge the support of the Mitacs Accelerate internship, the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Québec, Société et Culture, and the Canada CIFAR AI Chairs Program." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "result", "result", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer.", "We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones.", "This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness.", "We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision.", "Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms .", "These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation .", "This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013), program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018).", "Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013).", "This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016)), and not assuming full supervision lets us be agnostic about the logical form language.", "The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.", "However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia ) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.", "For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.", "This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.", "We introduce two innovations to improve learning from denotations.", "Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.", "This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.", "Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some 
connection to the words in the utterance.", "We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIKITABLEQUESTIONS (WTQ) (Pasupat and Liang, 2015), an open-domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017), a closed-domain task with binary denotations, and thus far less supervision.", "We show that:", "1) interleaving online search and MML over retrieved logical forms (Section 4) is a more effective training algorithm than either of those objectives alone;", "2) coverage guidance during search (Section 3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker;", "3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.", "We formally define semantic parsing in a weakly supervised setup as follows.", "Given a dataset where the $i$th instance is the triple $\{x_i, w_i, d_i\}$, representing a sentence $x_i$, the world $w_i$ associated with the sentence, and the corresponding denotation $d_i$, our goal is to find $y_i$, the translation of $x_i$ in an appropriate logical form language (see Section 5.3), such that $\llbracket y_i \rrbracket^{w_i} = d_i$; i.e., the execution of $y_i$ in world $w_i$ produces the correct denotation $d_i$.", "A semantic parser defines a distribution over logical forms given an input utterance: $p(Y \mid x_i; \theta)$.", "In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and reward-based methods.", "Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.", "The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: $\max_{\theta} \prod_{\{x_i, d_i\} \in D} \sum_{y_i \in Y \mid \llbracket y_i \rrbracket^{w_i} = d_i} p(y_i \mid x_i; \theta)$ (1).", "This objective function is called maximum marginal likelihood (MML).", "The inner summation is in general intractable to perform during training, so it is only approximated.", "Most prior work (Berant et al., 2013; Goldman et al., 2018, inter alia) approximates the intractable marginalization by summing over logical forms obtained via beam search during training.", "This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.", "Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018).", "One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017).", "The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.", "In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
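A minimal PyTorch sketch of the static MML objective in Eq. 1, written as a loss over offline-retrieved consistent logical forms (the function name and input layout are illustrative assumptions):

import torch

def static_mml_loss(candidate_logprobs):
    # candidate_logprobs: one 1-D tensor per training instance, holding
    # log p(y | x_i; theta) for each retrieved y whose execution yields d_i.
    # Maximizing the product of marginals in Eq. 1 is equivalent to
    # minimizing the negative sum of per-instance log-sum-exp terms.
    return -sum(torch.logsumexp(lp, dim=0) for lp in candidate_logprobs)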
"The main benefit of dynamic MML is that it adapts its training signal over time.", "As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.", "The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.", "When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.", "There exists prior work that uses such methods, both in a reinforcement learning setting (Liang et al., 2017, 2018), and otherwise (Iyyer et al., 2017; Guu et al., 2017).", "In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) training scheme (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006), which we describe in Section 3.", "Weakly supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.", "Traditionally, these lexical cues were provided in the parser's lexicon.", "Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.", "In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.", "Coverage is a measure of relevance of the candidate logical form $y_i$ to the input $x_i$, in terms of how well the productions in $y_i$ map to parts of $x_i$.", "We use a small manually specified lexicon as a mapping from the source language to target-language productions, and define the coverage of $y_i$ as the number of productions triggered by the input utterance, according to the lexicon, that are included in $y_i$.", "We use this measure of coverage to augment our loss function, and train using an MBR-based algorithm as follows.", "We use beam search to train a model to minimize the expected value of a cost function $C$: $\min_{\theta} \sum_{i=1}^{N} \mathbb{E}_{\tilde{p}(y_i \mid x_i; \theta)} C(x_i, y_i, w_i, d_i)$ (2), where $\tilde{p}$ is a re-normalization of the probabilities assigned to all logical forms on the beam.", "The cost combines a coverage term and a consistency term: $C(x_i, y_i, w_i, d_i) = \lambda S(y_i, x_i) + (1 - \lambda) T(y_i, w_i, d_i)$ (3), where the function $S$ measures the number of items that $y_i$ is missing from the actions (or grammar production rules) triggered by the input utterance $x_i$ given the lexicon, and the function $T$ measures the consistency of the evaluation of $y_i$ in $w_i$, meaning that it is 0 if $\llbracket y_i \rrbracket^{w_i} = d_i$, or a value $e$ otherwise.", "We set $e$ as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.", "$\lambda$ is a hyperparameter that gives the relative weight of the coverage cost.", "(Note that without this re-normalization, and with a -1/0 cost function based on denotation accuracy, MBR will maximize the likelihood of correct logical forms on the beam, which is equivalent to dynamic MML.)", "In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.", "As discussed in Section 2.2, most prior work on weakly supervised training of semantic parsers uses dynamic MML.", "This is particularly problematic in domains like NLVR, where the supervision signal is binary: it is very hard for dynamic MML to bootstrap its way to finding good logical forms.", "To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverage-augmented MBR algorithm described in Section 3.", "In order to use static MML, we need an initial set of candidate logical forms.", "We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in Section 3.", "A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.", "We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.", "This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.", "We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.", "This set of logical forms can have a greater length than those in the initial set, because this search uses model scores rather than exhaustively exploring all possible paths, and thus will likely cover more of the training data.", "In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.", "Algorithm 1 concretely describes this process.", "Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e., $T = 0$) for each of the input utterances.", "We start off with a seed dataset $D^0$ for which consistent logical forms are available.", "We will now describe the two datasets we use in this work to evaluate our methods: Cornell NLVR and WIKITABLEQUESTIONS.",
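A compact PyTorch sketch of the coverage-augmented MBR objective (Eqs. 2 and 3); treating the combined cost as lambda * S + (1 - lambda) * T follows the reconstruction above and should be checked against the original paper, and all names here are illustrative assumptions.

import torch

def mbr_loss(beam_logprobs, costs):
    # beam_logprobs: log p(y | x; theta) for the logical forms on one beam;
    # costs: C(x, y, w, d) for the same logical forms.
    p_tilde = torch.softmax(beam_logprobs, dim=0)  # re-normalize over the beam
    return (p_tilde * costs).sum()                 # expected cost (Eq. 2)

def cost(num_missing_productions, consistent, max_coverage_cost, lam=0.5):
    # S counts lexicon-triggered productions missing from the logical form;
    # T is 0 for a consistent logical form and e = max coverage cost otherwise.
    t = 0.0 if consistent else max_coverage_cost
    return lam * num_missing_productions + (1.0 - lam) * t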
which has a consistent supervision signal from the start of training, with the coverage-augmented MBR algorithm described in §3.", "In order to use static MML, we need an initial set of candidate logical forms.", "We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.", "A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.", "We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.", "This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.", "We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.", "This set of logical forms can have a greater length than those in the initial set, because this search uses model scores rather than exhaustively exploring all possible paths, and thus will likely cover more of the training data.", "In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.", "Algorithm 1 concretely describes this process.", "Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e., T = 0) for each of the input utterances.", "We start off with a seed dataset D⁰ for which consistent logical forms are available.", "Algorithm 1. Input: Dataset D = {X, W, D}; and seed set D⁰ = {X⁰, Y⁰} such that X⁰ ⊂ X and C(x_i⁰, y_i⁰, W_i, D_i) = 0. Output: Model parameters θ_MBR. Initialize dataset D_MML = D⁰; while Acc(D_dev) is increasing do: θ_MML = MML(D_MML); initialize θ_MBR = θ_MML; update θ_MBR = MBR(D; θ_MBR); update D_MML = Decode(D; θ_MBR); end.", "We will now describe the two datasets we use in this work to evaluate our methods: Cornell NLVR", "and WIKITABLEQUESTIONS.", "Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.", "Figure 1 shows two example sentence-image pairs from the dataset (with the same sentence).", "The dataset also comes with structured representations of images, indicating the color, shape, size, and x- and y-coordinates of each of the objects in the image.", "While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.", "The structured representations associated with the two images shown are two of the worlds (w_i¹ and w_i²) in which x_i could be evaluated.", "The corresponding labels are the denotations d_i¹ and d_i² that a translation y_i of the sentence x_i is expected to produce, when executed in the two worlds respectively.", "That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if $\forall\, w_i^j, d_i^j : \llbracket y_i \rrbracket^{w_i^j} = d_i^j$.", "WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.", "An example 
can be seen in Figure 2.", "Unlike NLVR, the answers are not binary.", "They can instead be cells in the table or the result of numerical or set-theoretic operations performed on them.", "For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996).", "Our language contains six basic types: box (referring to one of the three gray areas in Figure 1), object (referring to the circles, triangles and squares in Figure 1), shape, color, number and boolean.", "The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.", "The functions in our language include those for filtering objects and boxes, and making assertions, a higher-order function for handling negations, and a function for querying objects in boxes.", "This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).", "Figure 1 shows an example of a complete logical form in our language.", "The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.", "They mainly map words and phrases to constants and unary functions in the target language.", "The complete lexicons are shown in the Appendix.", "Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.", "We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.", "Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.", "We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.", "In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.", "Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al. 
(2017), as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.", "The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammar-constrained decoder also with LSTM cells.", "Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.", "These production rules sequentially build up an abstract syntax tree, which determines the logical form.", "The model also has an entity linking component for producing table entities in the logical forms; this component is only applicable to WIKITABLEQUESTIONS, and we remove it when running experiments on NLVR.", "The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.", "In addition, we slightly modify the constrained decoding architecture from Krishnamurthy et al. (2017) to bias the predicted actions towards those that would decrease the value of S(y_i, x_i).", "This is done using a coverage vector v_i^S for each training instance that keeps track of the production rules triggered by x_i, and gets updated whenever one of those desired productions is produced by the decoder.", "That is, v_i^S is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.", "This is similar to the idea of checklists used by Kiddon et al. (2016).", "The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.", "We add a weighted sum of all the actions that are yet to be produced: $s_i^a = e_a \cdot (p_i + \gamma\, v_i^S E)$ (4), where s_i^a is the score of action a at time step i, e_a is the embedding of that action, p_i is the predicted action representation, E is the matrix of embeddings of all the actions, and $\gamma$ is a learned parameter for regularizing the bias towards yet-to-be-produced triggered actions.", "NLVR: We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.", "In NLVR, most sentences occur in multiple worlds (with an average of 3.9 worlds per sentence).", "We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.", "We initialized all the parameters, including the word and action embeddings, using Glorot uniform initialization (Glorot and Bengio, 2010).", "We found that using pretrained word representations did not help.", "We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.", "All the hyper-parameters are tuned on the validation set.", "WIKITABLEQUESTIONS: This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.", "We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.", "We replicated the model presented in Krishnamurthy et al. 
(2017), and only changed the training algorithm and the language used.", "We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.", "Specifics of iterative search: For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10.²", "During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top k.", "At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.", "While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses a beam-search-based approximation can go deeper.", "Implementation: We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.", "The code and models are publicly available at https://github.com/allenai/iterative-search-semparse.", "WIKITABLEQUESTIONS: Table 1 compares the performance of a single model trained using Iterative Search with that of previously published single models.", "We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.", "² It was prohibitively expensive to search beyond a depth of 10.", "We show both best and average (over 5 folds) single model performance from Liang et al. (2018) (Memory Augmented Policy Optimization).", "The best model was chosen based on performance on the development set.", "Our single model performances are computed in the same way.", "Note that Liang et al. 
(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.", "In Table 2, we compare the performance of our iterative search algorithm with three baselines:", "1) Static MML, as described in §2.2.1, trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2;", "2) Iterative MML, also an iterative technique, but unlike iterative search we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and", "3) MAPO (Liang et al., 2018), the current best published system on WTQ.", "All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.", "NLVR: In Table 3, we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.", "The first two rows correspond to models that are not semantic parsers.", "This shows that semantic parsing is a promising direction for this task.", "The closest work to ours is the weakly supervised parser built by Goldman et al. (2018).", "They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.", "But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.", "They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.", "As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with Abs. Sup. 
being more comparable to our work.", "Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-the-art result on this dataset.", "To evaluate the contribution of coverage-guided search, we compare the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.", "We also compare the performance of the parser in the two settings when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.", "Table 4 shows the results of this comparison.", "We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.", "Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.", "The cost weight ($\lambda$ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that $\lambda = 0.4$ worked best.", "It can be seen that both with and without initialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.", "When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and gets a test accuracy not much higher than the majority baseline of 56.2%.", "We found that coverage guidance was not as useful for WTQ.", "The average value of the best-performing $\lambda$ was around 0.2, and higher values neither helped nor hurt performance.", "To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6, showing results on NLVR and WTQ, respectively.", "Additionally, we also show the number of decoding steps used at each iteration, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.", "It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.", "The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.", "Complexity of Logical Forms: We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.", "As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.", "Table 7 shows examples from the validation set that indicate this trend.", "Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005).", "The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.", "More recent research has focused on training semantic 
parsers with weak supervision (Liang et al., 2011; Berant et al., 2013), or trying to automatically infer logical forms from denotations (Pasupat and Liang, 2016).", "However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016).", "The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.", "We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.", "Other work that evaluates on these datasets includes Goldman et al. (2018), Tan and Bansal (2018), Neelakantan et al. (2017), Krishnamurthy et al. (2017), Haug et al. (2018), and Liang et al. (2018).", "These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.", "There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017).", "Recent attempts at dealing with the problem of spuriousness include Misra et al. (2018) and Guu et al. (2017).", "Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017).", "There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.", "The abstract examples of Goldman et al. (2018) are the most recent related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011).", "Table 7: Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations. Iteration 0: 'There is a tower with four blocks' -> (box_exists (member_count_equals all_boxes 4)). Iteration 1: 'At least one black triangle is not touching the edge' -> (object_exists (black (triangle ((negate_filter touch_wall) all_objects)))). Iteration 2: 'There is a yellow block as the top of a tower with exactly three blocks.' -> (object_exists (yellow (top (object_in_box (member_count_equals all_boxes 3))))). Iteration 3: 'The tower with three blocks has a yellow block over a black block' -> (object_count_greater_equals (yellow (above (black (object_in_box (member_count_equals all_boxes 3))))) 1).", "8 Conclusion: We have presented a new technique for training semantic parsers with weak supervision.", "Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.", "To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.", "For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.", "Together these two contributions greatly improve semantic parsing performance, leading to new state-of-the-art results on NLVR and WIKITABLEQUESTIONS.", "As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.", "One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing.", "We would like to thank 
Jonathan Berant and Noah Smith for comments on earlier drafts and Chen Liang for helping us with implementation details of MAPO.", "Computations on beaker.org were supported in part by credits from Google Cloud." ]
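The excerpt above casts weakly supervised training as maximum marginal likelihood (Equation 1), approximated by summing over a candidate set of consistent logical forms. A minimal sketch of what the static-MML objective looks like for a single instance, assuming the parser exposes per-logical-form log-probabilities; the function name and tensor layout are illustrative, not from the paper:

```python
import torch

def static_mml_loss(candidate_logprobs: torch.Tensor) -> torch.Tensor:
    """Negative log marginal likelihood for one training instance (Eq. 1).

    candidate_logprobs: shape (K,), log p(y | x; theta) for K retrieved
    logical forms known to execute to the correct denotation. The marginal
    over this approximate set replaces the intractable inner summation,
    computed stably in log space.
    """
    return -torch.logsumexp(candidate_logprobs, dim=0)
```

Summing this quantity over all instances with non-empty candidate sets and minimizing by gradient descent corresponds to the static MML training described above.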
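Equations 2 and 3 combine an expected-cost (MBR) objective over the beam with the coverage-augmented cost. A sketch under the same illustrative assumptions, with the beam statistics passed in as tensors; `max_coverage` stands for the instance-specific value e described in the excerpt:

```python
import torch

def mbr_loss(beam_logprobs, coverage_cost, consistent, max_coverage, lam=0.4):
    """Expected cost over a beam (Eq. 2) with coverage-augmented cost (Eq. 3).

    beam_logprobs: (B,) model log-probabilities of the beam candidates.
    coverage_cost: (B,) float, S(y, x): triggered productions missing from y.
    consistent:    (B,) bool, True where the candidate executes to the
                   correct denotation in all worlds (so T = 0).
    max_coverage:  scalar e, the maximum possible coverage cost for this
                   instance, used as the inconsistency penalty.
    """
    t_cost = torch.where(consistent,
                         torch.zeros_like(coverage_cost),
                         torch.full_like(coverage_cost, float(max_coverage)))
    cost = lam * coverage_cost + t_cost        # C(x, y, w, d), Eq. 3
    p_tilde = torch.softmax(beam_logprobs, 0)  # re-normalize over the beam
    return (p_tilde * cost).sum()
```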
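Algorithm 1's outer loop can be written as a short driver. The sketch below takes the four sub-routines (MML training, MBR training, consistent decoding, dev-set accuracy) as callables, since their internals are the training procedures described above; all names are placeholders, not the authors' API:

```python
def iterative_search(full_data, seed_data, dev_data,
                     train_mml, train_mbr, decode_consistent, accuracy):
    """Skeleton of Algorithm 1: alternate static MML and coverage-guided MBR.

    decode_consistent should run beam search and keep only logical forms
    with T = 0; accuracy evaluates parameters on the development set.
    """
    d_mml = seed_data
    best_acc, best_params = float("-inf"), None
    while True:
        mml_params = train_mml(d_mml)                       # theta_MML
        mbr_params = train_mbr(full_data, init=mml_params)  # theta_MBR
        acc = accuracy(mbr_params, dev_data)
        if acc <= best_acc:          # stop once Acc(D_dev) stops increasing
            break
        best_acc, best_params = acc, mbr_params
        d_mml = decode_consistent(full_data, mbr_params)    # new D_MML
    return best_params
```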
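Finally, the coverage-vector bias on decoder action scores (Equation 4) is a one-line change to the scoring function. A sketch assuming dense tensors and a scalar learned gamma:

```python
import torch

def score_actions(p_i, action_emb, v_s, gamma):
    """Checklist-style biased action scores (Eq. 4).

    p_i:        (d,) predicted action representation at time step i.
    action_emb: (A, d) matrix E of action embeddings (rows e_a).
    v_s:        (A,) 0/1 coverage vector v_i^S marking triggered actions
                that the decoder has not yet produced.
    gamma:      learned scalar regularizing the bias.
    Returns (A,) scores s_i^a = e_a . (p_i + gamma * v_i^S E).
    """
    biased = p_i + gamma * (v_s @ action_emb)  # (d,) biased query
    return action_emb @ biased                 # (A,) one score per action
```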
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "objective", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other" ]
[ "Emotion Recognition in Conversations (ERC) has gained increasing attention for developing empathetic machines.", "Recently, many approaches have been devoted to perceiving conversational context by deep learning models.", "However, these approaches are insufficient in understanding the context due to lacking the ability to extract and integrate emotional clues.", "In this work, we propose novel Contextual Reasoning Networks (DialogueCRN) to fully understand the conversational context from a cognitive perspective.", "Inspired by the Cognitive Theory of Emotion, we design multiturn reasoning modules to extract and integrate emotional clues.", "The reasoning module iteratively performs an intuitive retrieving process and a conscious reasoning process, which imitates human unique cognitive thinking.", "Extensive experiments on three public benchmark datasets demonstrate the effectiveness and superiority of the proposed model.", "Emotion recognition in conversation (ERC) aims to detect emotions expressed by the speakers in each utterance of the conversation.", "The task is an important topic for developing empathetic machines (Zhou et al., 2020) in a variety of areas including social opinion mining (Kumar et al., 2015), intelligent assistant (Konig et al., 2016), health care (Pujol et al., 2019), and so on.", "A conversation often contains contextual clues (Poria et al., 2019) that trigger the current utterance's emotion, such as the cause or situation.", "Recent context-based works (Poria et al., 2017; Hazarika et al., 2018b; Majumder et al., 2019) on ERC have been devoted to perceiving situation-level or speaker-level context by deep learning models.", "However, these methods are insufficient in understanding the context that usually contains rich emotional clues.", "We argue they mainly suffer from the following challenges.", "1) The extraction of emotional clues .", "Most approaches (Hazarika et al., 2018a,b; Jiao et al., 2020b) generally retrieve the relevant context from a static memory, which limits the ability to capture richer emotional clues.", "2) The integration of emotional clues .", "Many works (Majumder et al., 2019; Ghosal et al., 2019; Lu et al., 2020) usually use the attention mechanism to integrate encoded emotional clues, ignoring their intrinsic semantic order.", "It would lose logical relationships between clues, making it difficult to capture key factors that trigger emotions.", "The Cognitive Theory of Emotion (Schachter and Singer, 1962; Scherer et al., 2001) suggests that cognitive factors are potently determined for the formation of emotional states.", "These cognitive factors can be captured by iteratively performing the intuitive retrieving process and conscious reasoning process in our brains (Evans, 1984, 2003, 2008; Sloman, 1996).", "Motivated by them, this paper attempts to model both critical processes to reason emotional clues and sufficiently understand the conversational context.", "By following the mechanism of working memory (Baddeley, 1992) in the cognitive phase, we can iteratively perform both cognitive processes to guide the extraction and integration of emotional clues, which imitates human unique cognitive thinking.", "In this work, we propose novel Contextual Reasoning Networks (DialogueCRN) to recognize the utterance's emotion by sufficiently understanding the conversational context.", "The model introduces a cognitive phase to extract and integrate emotional clues from the context retrieved by the perceive phase.", "Firstly, in the perceptive 
phase, we leverage Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) networks to capture situation-level and speaker-level context.", "Based on the above context, global memories can be obtained to store different contextual information.", "Secondly, in the cognitive phase, we design multi-turn reasoning modules to iteratively extract and integrate the emotional clues.", "The reasoning module performs two processes, i.e., an intuitive retrieving process and a conscious reasoning process.", "The former utilizes the attention mechanism to match relevant contextual clues by retrieving static global memories, which imitates the intuitive retrieving process.", "The latter adopts LSTM networks to learn intrinsic logical order and integrate contextual clues by retaining and updating dynamic working memory, which imitates the conscious reasoning process.", "It is slower but has human-unique rationality (Baddeley, 1992).", "Finally, according to the above contextual clues at situation-level and speaker-level, an emotion classifier is used to predict the emotion label of the utterance.", "To evaluate the performance of the proposed model, we conduct extensive experiments on three public benchmark datasets, i.e., the IEMOCAP, SEMAINE and MELD datasets.", "Results consistently demonstrate that our proposed model significantly outperforms comparison methods.", "Moreover, understanding emotional clues from a cognitive perspective can boost the performance of emotion recognition.", "The main contributions of this work are summarized as follows: We propose novel Contextual Reasoning Networks (DialogueCRN) to fully understand the conversational context from a cognitive perspective.", "To the best of our knowledge, this is the first attempt to explore cognitive factors for emotion recognition in conversations.", "We design multi-turn reasoning modules to extract and integrate emotional clues by iteratively performing the intuitive retrieving process and conscious reasoning process, which imitates humans' unique cognitive thinking.", "We conduct extensive experiments on three public benchmark datasets.", "The results consistently demonstrate the effectiveness and superiority of the proposed model.¹", "Formally, let U = [u_1, u_2, ..., u_N] be a conversation, where N is the number of utterances.", "And there are M speakers/parties p_1, p_2, ..., p_M (M ≥ 2).", "¹ The source code is available at https://github.", "Each utterance u_i is spoken by the speaker p_{φ(u_i)}, where φ maps the index of the utterance into that of the corresponding speaker.", "Moreover, for each λ ∈ [1, M], we define U_λ to represent the set of utterances spoken by the speaker p_λ, i.e.
, U_λ = {u_i | u_i ∈ U and u_i spoken by p_λ, i ∈ [1, N]}.", "The task of emotion recognition in conversations (ERC) aims to predict the emotion label y_i for each utterance u_i from the pre-defined emotions Y.", "Convolutional neural networks (CNNs) (Kim, 2014) are capable of capturing n-gram information from an utterance.", "Following previous works (Hazarika et al., 2018b; Majumder et al., 2019; Ghosal et al., 2019), we leverage a CNN layer with max-pooling to extract context-free textual features from the transcript of each utterance.", "Concretely, the input is the 300-dimensional pre-trained 840B GloVe vectors (Pennington et al., 2014).", "We employ three filters of size 3, 4 and 5 with 50 feature maps each.", "These feature maps are further processed by max-pooling and ReLU activation (Nair and Hinton, 2010).", "Then, these activation features are concatenated and finally projected onto a dense layer with dimension d_u = 100, whose output forms the representation of an utterance.", "We denote {u_i}_{i=1}^N, u_i ∈ R^{d_u}, as the representation for N utterances.", "Then, we propose Contextual Reasoning Networks (DialogueCRN) for emotion recognition in conversations.", "DialogueCRN is comprised of three integral components, i.e., the perception phase (Section 2.3.1), the cognition phase (Section 2.3.2), and an emotion classifier (Section 2.3.3).", "The overall architecture is illustrated in Figure 1.", "2.3.1 Perception Phase: In the perceptive phase, based on the input textual features, we first generate the representation of conversational context at situation-level and speaker-level.", "Then, global memories are obtained to store different contextual information.", "Context Representation.", "(Figure 1: The architecture of the proposed model DialogueCRN, showing the input utterances, the perception phase producing situation-level and speaker-level context, the cognition phase built from reasoning modules over both contexts, and the emotion classifier.)", "Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) introduces the gating mechanism into recurrent neural networks to 
be obtained to storage different contextual information via a linear layer.", "That is, global memory representation of situation-level globalmemory 1 workingmemory Attention ReasoningModule LSTM 1 1 Figure 2: The detailed structure of reasoning module.", "context G s = [ g s 1 , g s 2 , ..., g sN ] and that of speaker-level context G v = [ g v 1 , g v 2 , ..., g vN ] can be computed as: g si = W sg c si + b sg , (3) g vi = W vg c vi + b vg , (4) where W sg , W vg R 2 d u 2 d u , b sg , b vg R 2 d u are learnable parameters.", "Inspired by the Cognitive Theory of Emotion (Schachter and Singer, 1962; Scherer et al., 2001), cognitive factors are potently determined for the formation of emotional states.", "Therefore, in the cognitive phase, we design multi-turn reasoning modules to iteratively extract and integrate the emotional clues.", "The architecture of a reasoning module is depicted in Figure", "2. The reasoning module performs two processes, the intuitive retrieving process, and the conscious reasoning process.", "In the t -th turn, for the reasoning process , we adopt the LSTM network to learn intrinsic logical order and integrate contextual clues in the working memory, which is slower but with human-unique rationality (Baddeley, 1992).", "i i i i", "where q ( t 1) i R 2 d u is the output vector.", "q ( t ) i R 4 d u is initialized by the context representation c i of the current utterance, i.e. , q (0) i = W q c i + b q , where W q R 4 d u 2 d u and b q R 4 d u are learnable parameters.", "h ( t ) i R 2 d u refers to the working memory, which can not only storage and update the previous memory h ( t 1) i , but also guide the extraction of clues in the next turn.", "During sequential flowing of the working memory, we can learn implicit logical order among clues, which resembles the conscious thinking process of humans.", "h ( t ) i is initialized with zero.", "t is the index that indicates how many processing steps are being carried to compute the final state.", "For the retrieving process , we utilize an attention mechanism to match relevant contextual clues from the global memory.", "The detailed calculations are as follows: e ( t 1) ij = f ( g j , q ( t 1) i ) , (6) ( t 1) ij = exp( e ( t 1) ij ) (cid:80) Nj =1 exp( e ( t 1) ij ) , (7) r ( t 1) i = N (cid:88) j =1 ( t 1) ij g j , (8) where f is a function that computes a single scalar from g j and q ( t 1) i ( e.g. , a dot product).", "Then, we concatenate the output of reasoning process q ( t 1) i with the resulting attention readout r ( t 1) i to form the next-turn query q ( t ) i .", "That is, q ( t ) i = [ q ( t 1) i ; r ( t 1) i ] .", "The query q ( t ) i will be updated under the guidance of working memory h ( t ) i , and more contextual clues can be retrieved from the global memory.", "To sum up, given context representation c i of the utterance u i , global memory representation G , and the number of turns T , the whole cognitive phase (Eq.5-9) can be denoted as, q i = Cognition ( c i , G ; T ) .", "In this work, we design two individual cognition phases to explore contextual clues at situation-level and speaker-level, respectively.", "The outputs are defined as: q si = Cognition s ( c si , G s ; T s ) , (10) q vi = Cognition v ( c vi , G v ; T v ) , (11) where T s and T v are the number of turns in situation-level and speaker-level cognitive phases, respectively.", "Based on the above output vectors, the final representation o can be defined as a concatenation of both vectors, i.e. 
, o_i = [q_i^s; q_i^v].", "Finally, according to the above contextual clues, an emotion classifier is used to predict the emotion label of the utterance.", "We train with a categorical cross-entropy objective over all utterances, L = -Σ_{l=1}^{L} Σ_{i=1}^{c(l)} Σ_k y_{i,k}^l log ŷ_{i,k}^l, where L is the total number of conversations/samples in the training set.", "c(i) is the number of utterances in the sample i.", "y_{i,k}^l and ŷ_{i,k}^l denote the one-hot vector and probability vector for emotion class k of utterance i of sample l, respectively.", "We evaluate our proposed model on the following benchmark datasets: IEMOCAP (Busso et al., 2008), SEMAINE (McKeown et al., 2012), and MELD (Poria et al., 2019).", "The statistics are reported in Table 1.", "The above datasets are multimodal datasets with textual, visual, and acoustic features.", "In this paper, we focus on emotion recognition in textual conversations.", "Multimodal emotion recognition in conversations is left as future work.", "² https://sail.usc.edu/iemocap/", "IEMOCAP²: The dataset contains dyadic conversations among ten unique speakers, where only the first eight speakers from session one to four belong to the training set.", "The utterances are annotated with one of six emotion labels, namely happy, sad, neutral, angry, excited, and frustrated.", "Following previous works (Hazarika et al., 2018a; Ghosal et al., 2019; Jiao et al., 2020b), the validation set is extracted from the randomly shuffled training set with a ratio of 80:20, since no pre-defined train/val split is provided in the IEMOCAP dataset.", "SEMAINE³: The dataset (McKeown et al., 2012) is a video database of human-agent interactions.", "It is available at AVEC 2012's fully continuous sub-challenge (Schuller et al., 2012), which requires predictions of four continuous affective attributes: Arousal, Expectancy, Power, and Valence.", "The gold annotations are available for every 0.2 seconds in each video (Nicolle et al., 2012).", "Following Hazarika et al. (2018a) and Ghosal et al. (2019), the attributes are averaged over the span of an utterance to obtain utterance-level annotations.", "We utilize the standard training and testing splits provided in the sub-challenge.", "MELD⁴: The Multimodal EmotionLines Dataset (MELD) (Poria et al., 2019), an extension of EmotionLines (Hsu et al., 2018), is collected from the TV series Friends and contains more than 1400 multiparty conversations and 13000 utterances.", "Each utterance is annotated with one of seven emotion labels (i.e.
, happy/joy, anger, fear, disgust, sadness, surprise, and neutral).", "We use the pre-defined train/val split provided in the MELD dataset.", "We compare the proposed model against the following baseline methods.", "TextCNN (Kim, 2014) is a convolutional neural network trained on context-independent utterances.", "Memnet (Sukhbaatar et al., 2015) is an end-to-end memory network that updates memories in a multi-hop fashion.", "bc-LSTM+Att (Poria et al., 2017) adopts a bidirectional LSTM network to capture the contextual content from the surrounding utterances.", "Additionally, an attention mechanism is adopted to re-weight features and provide a more informative output.", "³ https://semaine-db.eu", "⁴ https://github.com/SenticNet/MELD", "CMN (Hazarika et al., 2018b) encodes conversational context from dialogue history with two distinct GRUs for the two speakers.", "ICON (Hazarika et al., 2018a) extends CMN by connecting the outputs of the individual speaker GRUs using another GRU for inter-speaker modeling.", "DialogueRNN (Majumder et al., 2019) is a recurrent network that consists of two GRUs to track speaker states and context during the conversation.", "DialogueGCN (Ghosal et al., 2019) is a graph-based model where nodes represent utterances and edges represent the dependency between the speakers of the utterances.", "Following previous works (Hazarika et al., 2018a; Jiao et al., 2020b), for the IEMOCAP and MELD datasets, we choose the accuracy score (Acc.) to measure the overall performance.", "We also report the Weighted-average F1 score (Weighted-F1) and Macro-averaged F1 score (Macro-F1) to evaluate the model performance on majority and minority classes, respectively.", "For the SEMAINE dataset, we report the Mean Absolute Error (MAE) for each attribute.", "The lower the MAE, the better the detection performance.", "We use the validation set to tune hyperparameters.", "In the perceptive phase, we employ two-layer bidirectional LSTMs on the IEMOCAP and SEMAINE datasets and a single-layer bi-directional LSTM on the MELD dataset.", "In the cognitive phase, a single-layer LSTM is used on all datasets.", "The batch size is set to 32.", "We adopt Adam (Kingma and Ba, 2015) as the optimizer with an initial learning rate of {0.0001, 0.001, 0.001} and L2 weight decay of {0.0002, 0.0005, 0.0005} for the IEMOCAP, SEMAINE, and MELD datasets, respectively.", "The dropout rate is set to 0.2.", "We train all models for a maximum of 100 epochs and stop training if the validation loss does not decrease for 20 consecutive epochs.", "For the results of DialogueGCN and DialogueRNN, we implement them according to the public code⁵ provided by Majumder et al. (2019) and Ghosal et al. 
(2019) under the same environment.", "Tables 2, 3 and 4 show the comparison results for emotion recognition in textual conversations.", "DialogueCRN consistently achieves better performance than the comparison methods on all datasets, and the improvements are statistically significant under the paired t-test (p < 0.05).", "IEMOCAP and SEMAINE.", "Both the IEMOCAP and SEMAINE datasets have long conversations, with an average length of no less than 50 utterances.", "This implies that the two datasets contain richer contextual information.", "TextCNN, which ignores conversational context, obtains the worst performance.", "Memnet and bc-LSTM+Att perceive the situation-level context of the current utterance.", "CMN perceives the speaker-level context.", "Thereby, Memnet, bc-LSTM+Att and CMN slightly outperform TextCNN.", "ICON, DialogueRNN, and DialogueGCN consider both situation-level and speaker-level context to model the perceptive phase of context.", "They achieve better performance than the above methods.", "Compared with the baseline methods, DialogueCRN can extract and integrate rich emotional clues by exploring cognitive factors.", "Accordingly, our model obtains stronger performance.", "That is, as shown in Tables 2 and 3, for the IEMOCAP dataset, DialogueCRN gains 3.2%, 4.0%, and 4.7% relative improvements over the previous best baselines in terms of Acc., Weighted-F1, and Macro-F1, respectively.", "For the SEMAINE dataset, DialogueCRN improves MAE on the Arousal attribute by a large margin of 11.1%.", "MELD.", "From Table 1, the number of speakers in each conversation in the MELD dataset is large (up to 9), and the average length of conversations is 10.", "The shorter conversation length of the MELD dataset indicates it contains less contextual information.", "From the results in Table 4, interestingly, TextCNN, which ignores conversational context, achieves better results than most baselines.", "This indicates that it is difficult to learn useful features by perceiving a limited and partially missing context.", "Besides, DialogueGCN leverages a graph structure to perceive the interaction of multiple speakers, which is sufficient to perceive the speaker-level context.", "Thereby, the performance is slightly improved.", "Compared with the baselines, DialogueCRN is able to perform sequential thinking about the context and understand emotional clues from a cognitive perspective.", "Therefore, it achieves the best recognition results, e.g.
, a 2.9% improvement on Weighted-F1.", "To better understand the contribution of different modules in DialogueCRN to the performance, we conduct several ablation studies on both the IEMOCAP and SEMAINE datasets.", "Different modules that model the situation-level and speaker-level context in both perceptive and cognitive phases are removed separately.", "The results are shown in Table 5.", "When the cognition and perception modules are removed successively, the performance greatly declines.", "This indicates the importance of both the perception and cognition phases for ERC.", "Effect of Cognitive Phase.", "When only the cognition phase is removed, as shown in the third block of Table 5, the performance on the IEMOCAP dataset decreases by 4.3%, 4.3% and 6.5% in terms of Acc., Weighted-F1, and Macro-F1, respectively.", "On the SEMAINE dataset, the MAE scores of the Valence, Arousal, and Expectancy attributes increase by 2.3%, 12.5% and 2.9%, respectively.", "These results indicate the efficacy of the cognitive phase, which can reason based on the perceived contextual information consciously and sequentially.", "Besides, if the cognitive phase is removed for either speaker-level or situation-level context, as shown in the second block, the results decrease on both datasets.", "This reflects that both situational factors and speaker factors are critical in the cognitive phase.", "Effect of Perceptive Phase.", "As shown in the last row, when the perception module is removed, the performance drops sharply.", "The inferior results reveal the necessity of the perceptive phase to unconsciously match relevant context based on the current utterance.", "Effect of Different Context.", "When either situation-level or speaker-level context is removed in both cognitive and perceptive phases, the performance declines to a certain degree.", "This phenomenon shows that both situation-level and speaker-level context play an effective role in the perceptive and cognitive phases.", "Besides, the margin of the performance drop differs between the two datasets.", "This suggests that speaker-level context plays a greater role in the perception phase, while more complex situation-level context works well in the cognitive phase.", "The explanation is that intuitive matching perception is limited in learning informative features from context, whereas conscious cognitive reasoning can enable better understanding.", "We investigate how our model performs w.r.t. the number of turns in the cognitive phase.", "From Figure 3, the best {T_s, T_v} is {2, 2} and {1, 3} on the IEMOCAP and SEMAINE datasets, which obtain 66.20% Weighted-F1 and 0.1522 MAE on the Arousal attribute, respectively.", "Note that the SEMAINE dataset needs more turns for the speaker-level cognitive phase.", "This implies speaker-level contextual clues may be more vital for arousal emotion.", "Besides, if we solely consider either situation-level or speaker-level context in the cognitive phase, results on the two datasets are significantly improved within a certain number of turns.", "This indicates the effectiveness of using multi-turn reasoning modules to understand contextual clues.", "(Figure 4: A conversation sampled from the IEMOCAP dataset; an example utterance reads: 'All you're going to do is just give me fifty dollars and say go have fun on your vacation without any of your stuff?')", "Figure 4 shows a conversation sampled from the IEMOCAP dataset.", "The goal is to predict the emotion label of utterance 8.", "Methods such as DialogueRNN and 
DialogueGCN lack the ability to consciously understand emotional clues, e.g., the cause of the emotion (a failed expectation).", "They easily misidentify the emotion as angry or neutral.", "Our model DialogueCRN can understand the conversational context from a cognitive perspective.", "In the cognitive phase, the following two processes are performed iteratively: the intuitive retrieving process of 8-7-2-1 (blue arrows) and the conscious reasoning process of a-b-c (red arrows), to extract and integrate emotional clues.", "We can infer that utterance 8 implies that the additional compensation expected by the female speaker was not obtained.", "The failed compensation makes the emotion more negative, and it is thus correctly identified as depression.", "Emotion recognition (ER) has been drawing increasing attention in natural language processing (NLP) and artificial intelligence (AI).", "Existing works generally regard the ER task as a classification task based on context-free blocks of data, such as individual reviews or documents.", "They can be roughly divided into two categories, i.e., feature-engineering-based (Devillers and Vidrascu, 2006) and deep-learning-based methods (Tang et al., 2016; Wei et al., 2020).", "Recently, the task of Emotion Recognition in Conversations (ERC) has received attention from researchers.", "Different from traditional emotion recognition, both situation-level and speaker-level context play a significant role in identifying the emotion of an utterance in conversations (Li et al., 2020).", "Neglecting them would lead to quite limited performance (Bertero et al., 2016).", "Existing works generally capture contextual characteristics for the ERC task by deep learning methods, which can be divided into sequence-based and graph-based methods.", "Sequence-based Methods.", "Many works capture contextual information in utterance sequences.", "Poria et al. (2017) employed an LSTM (Hochreiter and Schmidhuber, 1997) to capture conversational context features.", "Hazarika et al. (2018a,b) used end-to-end memory networks (Sukhbaatar et al., 2015) to capture contextual features that distinguish different speakers.", "Zhong et al. (2019) and Li et al. (2020) utilized the transformer (Vaswani et al., 2017) to capture richer contextual features based on the attention mechanism.", "Majumder et al. (2019) introduced a speaker state and a global state for each conversation based on GRUs (Cho et al., 2014).", "Moreover, Jiao et al. (2020a) introduced a conversation completion task to learn from unsupervised conversation data.", "Jiao et al. (2020b) proposed a hierarchical memory network for real-time emotion recognition without future context.", "Wang et al. (2020) modeled ERC as sequence tagging to learn emotional consistency.", "Lu et al. 
(2020) proposed an iterative emotion interaction network to explicitly model the emotion interaction.", "Graph-based Methods.", "Some works (Zhang et al., 2019; Ghosal et al., 2019; Ishiwatari et al., 2020; Lian et al., 2020) model the conversational context by designing a specific graphical structure.", "They utilize graph neural networks (Kipf and Welling, 2017; Velickovic et al., 2017) to capture multiple dependencies in the conversation, and have achieved appreciable performance.", "Inspired by the Cognitive Theory of Emotion (Schachter and Singer, 1962; Scherer et al., 2001), this paper makes the first attempt to explore cognitive factors for emotion recognition in conversations.", "To sufficiently understand the conversational context, we propose a novel DialogueCRN to extract and then integrate rich emotional clues in a cognitive manner.", "This paper has investigated cognitive factors for the task of emotion recognition in conversations (ERC).", "We propose novel contextual reasoning networks (DialogueCRN) to sufficiently understand both situation-level and speaker-level context.", "DialogueCRN introduces the cognitive phase to extract and integrate emotional clues from context retrieved by the perceptive phase.", "In the cognitive phase, we design multi-turn reasoning modules to iteratively perform the intuitive retrieving process and conscious reasoning process, which imitates humans' unique cognitive thinking.", "Finally, emotional clues that trigger the current emotion are successfully obtained and used for better classification.", "Experiments on three benchmark datasets have proved the effectiveness and superiority of the proposed model.", "The case study shows that considering cognitive factors helps to better understand emotional clues and boosts the performance of ERC." ]
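The DialogueCRN excerpt above specifies the multi-turn reasoning module through Equations 5-9. The sketch below is one plausible PyTorch rendering, assuming dot-product attention computed against the LSTM output q̃ (the excerpt only says f returns a scalar, e.g., a dot product) and a single LSTMCell whose hidden state serves as the working memory; it is an illustrative reading, not the authors' code:

```python
import torch
import torch.nn as nn

class ReasoningModule(nn.Module):
    """Sketch of one multi-turn reasoning module (Eqs. 5-9)."""

    def __init__(self, d_u: int, turns: int):
        super().__init__()
        self.turns = turns
        # query q has size 4*d_u; working memory h has size 2*d_u
        self.cell = nn.LSTMCell(4 * d_u, 2 * d_u)

    def forward(self, q: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # q: (B, 4*d_u) initial query q^(0) = W_q c + b_q
        # g: (B, N, 2*d_u) global memory over the N utterances
        h = q.new_zeros(q.size(0), g.size(-1))  # working memory h^(0) = 0
        c = torch.zeros_like(h)
        for _ in range(self.turns):
            # conscious reasoning process (Eq. 5): integrate clues, update memory
            h, c = self.cell(q, (h, c))
            q_tilde = h
            # intuitive retrieving process (Eqs. 6-8): attend over global memory
            scores = torch.einsum("bnd,bd->bn", g, q_tilde)
            alpha = torch.softmax(scores, dim=-1)
            r = torch.einsum("bn,bnd->bd", alpha, g)
            q = torch.cat([q_tilde, r], dim=-1)  # next-turn query (Eq. 9)
        return q
```

Two such modules would be instantiated, one per context type, matching the situation-level and speaker-level cognition phases of Equations 10-11.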
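The perception phase (Equations 1-4 of the excerpt) amounts to two bi-directional LSTMs plus linear global-memory projections. A simplified sketch for a single conversation, where the speaker grouping U_λ is passed as per-speaker index tensors; again an assumption-laden illustration rather than the reference implementation:

```python
import torch
import torch.nn as nn

class PerceptionPhase(nn.Module):
    """Sketch of the perception phase: situation- and speaker-level
    bi-LSTMs (Eqs. 1-2) plus global-memory projections (Eqs. 3-4)."""

    def __init__(self, d_u: int):
        super().__init__()
        self.sit_lstm = nn.LSTM(d_u, d_u, bidirectional=True, batch_first=True)
        self.spk_lstm = nn.LSTM(d_u, d_u, bidirectional=True, batch_first=True)
        self.w_gs = nn.Linear(2 * d_u, 2 * d_u)  # Eq. 3
        self.w_gv = nn.Linear(2 * d_u, 2 * d_u)  # Eq. 4

    def forward(self, u, speaker_idx):
        # u: (1, N, d_u) utterance features of one conversation.
        # speaker_idx: list of LongTensors, one per speaker, holding the
        # positions of that speaker's utterances (the U_lambda sets).
        c_s, _ = self.sit_lstm(u)        # situation-level context, Eq. 1
        c_v = torch.zeros_like(c_s)
        for idx in speaker_idx:          # speaker-level context, Eq. 2
            out, _ = self.spk_lstm(u[:, idx, :])
            c_v[:, idx, :] = out
        g_s, g_v = self.w_gs(c_s), self.w_gv(c_v)  # global memories
        return c_s, c_v, g_s, g_v
```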
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "method", "method", "objective", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "We address the task of unsupervised Semantic Textual Similarity (STS) by ensembling diverse pre-trained sentence encoders into sentence meta-embeddings .", "We apply, extend and evaluate different meta-embedding methods from the word embedding literature at the sentence level, including dimensionality reduction (Yin and Sch utze, 2016), generalized Canonical Correlation Analysis (Rastogi et al., 2015) and cross-view auto-encoders (Bolle-gala and Bao, 2018).", "Our sentence meta-embeddings set a new unsupervised State of The Art (SoTA) on the STS Benchmark and on the STS12STS16 datasets, with gains of between 3.7% and 6.4% Pearson's r over single-source systems.", "Word meta-embeddings have been shown to exceed single-source word embeddings on word-level semantic benchmarks (Yin and Schutze, 2016; Bollegala and Bao, 2018).", "Presumably, this is because they combine the complementary strengths of their components.", "There has been recent interest in pre-trained uni-versal sentence encoders, i.e., functions that encode diverse semantic features of sentences into fixed-size vectors (Conneau et al., 2017).", "Since these sentence encoders differ in terms of their architecture and training data, we hypothesize that their strengths are also complementary and that they can benefit from meta-embeddings.", "To test this hypothesis, we adapt different meta-embedding methods from the word embedding literature.", "These include dimensionality reduction (Yin and Schutze, 2016), cross-view autoencoders (Bollegala and Bao, 2018) and Generalized Canonical Correlation Analysis (GCCA) (Rastogi et al., 2015).", "The latter method was also used by Poerner and Schutze (2019) for domain-specific Duplicate Question Detection.", "Our sentence encoder ensemble includes three models from the recent literature: Sentence-BERT (Reimers and Gurevych, 2019), the Universal Sentence Encoder (Cer et al., 2017) and averaged ParaNMT vectors (Wieting and Gimpel, 2018).", "Our meta-embeddings outperform every one of their constituent single-source embeddings on STS1216 (Agirre et al., 2016) and on the STS Benchmark (Cer et al., 2017).", "Crucially, since our meta-embeddings are agnostic to the contents of their ensemble, future improvements may be possible by adding new encoders.", "Word embeddings are functions that map word types to vectors.", "They are typically trained on unlabeled corpora and capture word semantics (e.g., Mikolov et al. (2013); Pennington et al. (2014)).", "Word meta-embeddings combine ensembles of word embeddings by various operations: Yin and Schutze (2016) use concatenation, SVD and linear projection, Coates and Bollegala (2018) show that averaging word embeddings has properties similar to concatenation.", "Rastogi et al. (2015) apply generalized canonical correlation analysis (GCCA) to an ensemble of word vectors.", "Bollegala and Bao (2018) learn word meta-embeddings using autoencoder architectures.", "Neill and Bollegala (2018) evaluate different loss functions for autoencoder word meta-embeddings, while Bollegala et al. 
(2018) explore locally linear mappings.", "Sentence embeddings are methods that produce one vector per sentence.", "They can be grouped into two categories:", "(a) Word embedding average sentence encoders take a (potentially weighted) average of pre-trained word embeddings.", "Despite their inability to understand word order, they are surprisingly effective on sentence similarity tasks (Arora et al., 2017; Wieting and Gimpel, 2018; Ethayarajh, 2018).", "(b) Complex contextualized sentence encoders, such as Long Short Term Memory Networks (LSTM) (Hochreiter and Schmidhuber, 1997) or Transformers (Vaswani et al., 2017).", "Contextualized encoders can be pre-trained as unsupervised language models (Peters et al., 2018; Devlin et al., 2019), but they are usually improved on supervised transfer tasks such as Natural Language Inference (Bowman et al., 2015).", "Sentence meta-embeddings have been explored less frequently than their word-level counterparts.", "Kiela et al. (2018) create meta-embeddings by training an LSTM sentence encoder on top of a set of dynamically combined word embeddings.", "Since this approach requires labeled data, it is not applicable to unsupervised STS.", "Tang and de Sa (2019) train a Recurrent Neural Network (RNN) and a word embedding average encoder jointly on a large corpus to predict similar representations for neighboring sentences.", "Their approach trains both encoders from scratch, i.e., it cannot be used to combine existing encoders.", "Poerner and Schutze (2019) propose a GCCA-based multi-view sentence encoder that combines domain-specific and generic sentence embeddings for unsupervised Duplicate Question Detection.", "In this paper, we extend their approach by exploring a wider range of meta-embedding methods and an ensemble that is more suited to STS.", "Semantic Textual Similarity (STS) is the task of rating the similarity of two natural language sentences on a real-valued scale.", "Related applications are semantic search, duplicate detection and sentence clustering.", "Supervised SoTA systems for STS typically apply cross-sentence attention (Devlin et al., 2019; Raffel et al., 2019).", "This means that they do not scale well to many real-world tasks.", "Supervised siamese models (Reimers and Gurevych, 2019), on the other hand, while not competitive with cross-sentence attention, can cache sentence embeddings independently of one another.", "For instance, to calculate the pairwise similarities of N sentences, a cross-sentence attention system must calculate $O(N^2)$ slow sentence pair embeddings, while the siamese model calculates $O(N)$ slow sentence embeddings and $O(N^2)$ fast vector similarities.", "Below, we assume access to an ensemble of pre-trained sentence encoders, denoted $F_1, \ldots, F_J$.",
FJ .", "Every F j maps from the (infinite) set of possible sentences S to a fixed-size d j -dimensional vector.", "Word meta-embeddings are usually learned from a finite vocabulary of word types (Yin and Schutze, 2016).", "Sentence embeddings lack such a vocabu-lary, as they can encode any member of S .", "Therefore, we train on a sample S S , i.e., on a corpus of unlabeled sentences.", "We create naive sentence meta-embeddings by concatenating (Yin and Schutze, 2016) or averaging 1 (Coates and Bollegala, 2018) sentence embeddings.", "1 If embeddings have different dimensionalities, we pad the shorter ones with zeros.", "F j ( s ) = F j ( s ) ||F j ( s ) || 2", "Yin and Schutze (2016) use Singular Value Decomposition (SVD) to compactify concatenated word embeddings.", "The method is straightforward to extend to sentence meta-embeddings.", "Let X conc R | S | (cid:80) j d j with x conc n = F conc ( s n ) E s S [ F conc ( s )] Let USVT X conc be the d -truncated SVD.", "F svd ( s (cid:48) ) = VT ( F conc ( s (cid:48) ) E s S [ F conc ( s )])", "Given random vectors x 1 , x 2 , Canonical Correlation Analysis (CCA) finds linear projections such that T 1 x 1 and T 2 x 2 are maximally correlated.", "Generalized CCA (GCCA) extends CCA to three or more random vectors.", "Bach and Jordan (2002) show that a variant of GCCA reduces to a generalized eigenvalue problem on block matrices: 1 , 1 0 0 0 ... 0 0 0 J,J 1 . . . J = 0 ... 1 ,J ... 0 ... J, 1 ... 0 1 . . . J where j,j (cid:48) = E s S [( F j ( s ) j )( F j (cid:48) ( s ) j (cid:48) ) T ] j = E s S [ F j ( s )] For stability, we add d j (cid:80) d j n =1 diag( j,j ) n to diag( j,j ) , where is a hyperparameter.", "We stack the eigenvectors of the topd eigenvalues into matrices j R d d j and define the GCCA meta-embedding of sentence s (cid:48) as: F gcca ( s (cid:48) ) = J (cid:88) j =1 j ( F j ( s (cid:48) ) j ) F gcca corresponds to MV-DASE in Poerner and Schutze (2019).", "Autoencoder meta-embeddings are trained by gradient descent to minimize some cross-embedding reconstruction loss.", "For example, Bollegala and Bao (2018) train feed-forward networks (FFN) to encode two sets of word embeddings into a shared space, and then reconstruct them such that mean squared error with the original embeddings is minimized.", "Neill and Bollegala (2018) evaluate different reconstruction loss functions: Mean Squared Error (MSE), Mean Absolute Error (MAE), KL-Divergence (KLD) or squared cosine distance (1-COS) 2 .", "We extend their approach to sentence encoders as follows: Every sentence encoder F j has a trainable encoder E j : R d j R d and a trainable decoder D j : R d R d j , where d is a hyperparameter.", "Our training objective is to reconstruct every embedding x j (cid:48) from every E j ( x j ) .", "This results in J 2 loss terms, which are jointly optimized: L ( x 1 . . . 
x_J) = \sum_j \sum_{j'} l(x_{j'}, D_{j'}(E_j(x_j)))$, where $l$ is one of the reconstruction loss functions listed above.", "The autoencoder meta-embedding of a new sentence $s'$ is: $F^{\mathrm{ae}}(s') = \sum_j E_j(F_j(s'))$. 4 Experiments 4.1 Data We train on all sentences of length < 60 from the first file ( news.en-00001-of-00100 ) of the tokenized, lowercased Billion Word Corpus (BWC) (Chelba et al., 2014) (~302K sentences).", "We evaluate on STS12–STS16 (Agirre et al., 2016) and the unsupervised STS Benchmark test set (Cer et al., 2017).",
"[Table 2: Results on STS12–16 and STS Benchmark test set (Pearson's r × 100 / Spearman's ρ × 100 per dataset: STS12; STS13; STS14; STS15; STS16; STS-B). single:ParaNMT (d = 600): 67.5/66.3; 62.7/62.8; 77.3/74.9; 80.3/80.8; 78.3/79.1; 79.8/78.9 | single:USE (d = 512): 62.6/63.8; 57.3/57.8; 69.5/66.0; 74.8/77.1; 73.7/76.4; 76.2/74.6 | single:SBERT (d = 1024): 66.9/66.8; 63.2/64.8; 74.2/74.3; 77.3/78.3; 72.8/75.7; 76.2/79.2 | single:ParaNMT up-projection (d = 1024): 67.3/66.2; 62.1/62.4; 77.1/74.7; 79.7/80.2; 77.9/78.7; 79.5/78.6 | single:USE up-projection (d = 1024): 62.4/63.7; 57.0/57.5; 69.4/65.9; 74.7/77.1; 73.6/76.3; 76.0/74.5 | meta:conc (d = 2136): 72.7/71.3; 68.4/68.6; 81.0/79.0; 84.1/85.5; 82.0/83.8; 82.8/83.4 | meta:avg (d = 1024): 72.5/71.2; 68.1/68.3; 80.8/78.8; 83.7/85.1; 81.9/83.6; 82.5/83.2 | meta:svd (d = 1024): 71.9/70.8; 68.3/68.3; 80.6/78.6; 83.8/85.1; 81.6/83.6; 83.4/83.8 | meta:gcca (hyperparams on dev set, d = 1024): 72.8/71.6; 69.6/69.4; 81.7/79.5; 84.2/85.5; 81.3/83.3; 83.9/84.4 | meta:ae (hyperparams on dev set, d = 1024): 71.5/70.6; 68.5/68.4; 80.1/78.5; 82.5/83.1; 80.4/81.9; 82.1/83.3 | Ethayarajh (2018) (unsupervised): 68.3/-; 66.1/-; 78.4/-; 79.0/-; -/-; 79.5/- | Wieting and Gimpel (2018) (unsupervised): 68.0/-; 62.8/-; 77.5/-; 80.3/-; 78.3/-; 79.9/- | Tang and de Sa (2019) (unsupervised meta): 64.0/-; 61.7/-; 73.7/-; 77.2/-; 76.7/-; -/- | Hassan et al. (2019) (unsupervised meta): 67.7/-; 64.6/-; 75.6/-; 80.3/-; 79.3/-; 77.7/- | Poerner and Schutze (2019) (unsupervised meta): -/-; -/-; -/-; -/-; -/-; 80.4/- | Reimers and Gurevych (2019) (sup. siamese SoTA): -/-; -/-; -/-; -/-; -/-; -/86.2 | Raffel et al. (2019) (supervised SoTA): -/-; -/-; -/-; -/-; -/-; 93.1/92.8]", "These datasets consist of triples $(s_1, s_2, y)$, where $s_1, s_2$ are sentences and $y$ is their ground truth semantic similarity.", "The task is to predict similarity scores $\hat{y}$ that correlate well with $y$.", "We predict $\hat{y} = \cos(F(s_1), F(s_2))$.", "Previous work on STS differs with respect to", "(a) the correlation metric and", "(b) how to aggregate the sub-testsets of STS12–16.", "To maximize comparability, we report both Pearson's r and Spearman's ρ.", "On STS12–16, we aggregate by a non-weighted average, which diverges from the original shared tasks (Agirre et al., 2016) but ensures comparability with more recent baselines (Wieting and Gimpel, 2018; Ethayarajh, 2018).", "Results for individual STS12–16 sub-testsets can be found in the Appendix.", "We select our ensemble according to the following criteria: Every encoder should have near-SoTA performance on the unsupervised STS benchmark, and the encoders should not be too similar with regards to their training regime.", "For instance, we do not use Ethayarajh (2018), which is a near-SoTA unsupervised method that uses the same word vectors as ParaNMT (see below).", "(Footnote 2: We use SentEval for evaluation (Conneau and Kiela, 2018).", "Since original SentEval does not support the unsupervised STS Benchmark, we use a non-standard repository ( https://github.com/sidak/SentEval ).", "We manually add the missing STS13-SMT subtask.)", "We choose the Universal Sentence Encoder (USE) (Cer et al., 2018), which is a Transformer trained on skip-thought, conversation response prediction and Natural Language Inference (NLI), Sentence-BERT (SBERT) (Reimers and Gurevych, 2019), which is a pre-trained BERT transformer finetuned on NLI, and ParaNMT (Wieting and Gimpel, 2018), which averages word and 3-gram vectors trained on backtranslated similar sentence pairs.", "To our knowledge, ParaNMT is the current single-source SoTA on the unsupervised STS Benchmark.", "We set d = 1024 in all experiments, which corresponds to the size of the biggest single-source embedding (SBERT).", "The value of $\epsilon$ (GCCA), as well as the autoencoder depth and loss function, are tuned on the STS Benchmark development set (see Table 1). (Footnote 3: https://tfhub.dev/google/universal-sentence-encoder/2. Footnote 4: https://github.com/UKPLab/sentence-transformers ; we use the large-nli-mean-tokens model, which was not finetuned on STS. Footnote 5: https://github.com/jwieting/para-nmt-50m.) [Table 3: Ablation study: Pearson's r × 100 / Spearman's ρ × 100 on STS Benchmark development set when one encoder is left out (full ensemble; without ParaNMT; without USE; without SBERT). meta:svd: 85.0/85.4; 79.6/81.3; 79.7/81.4; 83.7/83.5 | meta:gcca: 85.5/86.1; 84.9/84.8; 83.8/83.8; 85.4/85.4 | meta:ae: 85.1/85.5; 76.5/80.3; 82.5/83.5; 28.7/41.0]",
Table 1).", "We train the autoencoder for a fixed number of 500 epochs with a batch size of 10,000.", "We use the Adam optimizer (Kingma and Ba, 2014) with 1 = 0 .", "9 , 2 = 0 .", "999 and learning rate 0 .", "001 .", "Our main baselines are our single-source embeddings.", "Wieting and Kiela (2019) warn that high-dimensional sentence representations can have an advantage over low-dimensional ones, i.e., our meta-embeddings might be better than lower-dimensional single-source embeddings due to size alone.", "To exclude this possibility, we also up-project smaller embeddings by a random d d j matrix sampled from: U ( 1 (cid:112) d j , 1 (cid:112) d j ) Since the up-projected sentence embeddings perform slightly worse than their originals (see Table 2, rows 45), we are confident that performance gains by our meta-embeddings are due to content rather than size.", "Table 2 shows that even the worst of our meta-embeddings consistently outperform their single-source components.", "This underlines the overall usefulness of ensembling sentence encoders, irrespective of the method used.", "GCCA outperforms the other meta-embeddings on five out of six datasets.", "We set a new unsupervised SoTA on the unsupervised STS Benchmark test set, reducing the gap with the supervised siamese SoTA of Reimers and Gurevych (2019) from 7% to 2% Spearman's .", "Interestingly, the naive meta-embedding methods (concatenation and averaging) are competitive with SVD and the autoencoder, despite not needing any unsupervised training.", "In the case of concatenation, this comes at the cost of increased dimensionality, which may be problematic for downstream applications.", "The naive averaging method by Coates and Bollegala (2018) however does not have this problem, while performing only marginally worse than concatenation.", "Table 3 shows that all single-source embeddings contribute positively to the meta-embeddings, which supports their hypothesized complementarity.", "This result also suggests that further improvements may be possible by extending the ensemble.", "All of our meta-embeddings are fast to train, either because they have closed-form solutions (GCCA and SVD) or because they are lightweight feed-forward nets (autoencoder).", "The underlying sentence encoders are more complex and slow, but since we do not update them, we can apply them to the unlabeled training data once and then reuse the results as needed.", "As noted in Section 2.4, cross-sentence attention systems do not scale well to many real-world STS-type tasks, as they do not allow individual sentence embeddings to be cached.", "Like Reimers and Gurevych (2019), our meta-embeddings do not have this problem.", "This should make them more suitable for tasks like sentence clustering or real-time semantic search.", "Inspired by the success of word meta-embeddings, we have shown how to apply different meta-embedding techniques to ensembles of sentence encoders.", "All sentence meta-embeddings consistently outperform their individual single-source components on the STS Benchmark and the STS1216 datasets, with a new unsupervised SoTA set by our GCCA meta-embeddings.", "Because sentence meta-embeddings are agnostic to the size and specifics of their ensemble, it should be possible to add new encoders to the ensemble, potentially improving performance further.", "Acknowledgments.", "This work was supported by Siemens AG and by the European Research Council (# 740516)." ]
[ "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain" ]
[ "We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure.", "We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set.", "Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses.", "The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.", "Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet (Russakovsky et al., 2015) in computer vision, and SNLI (Bowman et al., 2015), SQuAD (Rajpurkar et al., 2016), and others in natural language processing (NLP).", "Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval (Conneau and Kiela, 2018) and GLUE (Wang et al., 2018), which track model performance on multiple tasks and provide a unified platform for analysis.", "With the rapid pace of advancement in AI, however, NLU benchmarks struggle to keep up with model improvement.", "Whereas it took around 15 years to achieve near-human performance on MNIST (LeCun et al., 1998; Ciresan et al., 2012; Wan et al., 2013) and approximately 7 years to surpass humans on ImageNet (Deng et al., 2009; Russakovsky et al., 2015; He et al., 2016), the GLUE benchmark did not last as long as we would have hoped after the advent of BERT (Devlin et al., 2018), and rapidly had to be extended into Super-GLUE (Wang et al., 2019).", "This raises an important question: Can we collect a large benchmark dataset that can last longer?", "The speed with which benchmarks become obsolete raises another important question: are current NLU models genuinely as good as their high performance on benchmarks suggests?", "A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; Glockner et al., 2018; Geva et al., 2019; McCoy et al., 2019), instead of learning meaning in the flexible and generalizable way that humans do.", "Given this, human annotatorsbe they seasoned NLP researchers or non-experts might easily be able to construct examples that expose model brittleness.", "We propose an iterative, adversarial human-and-model-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues.", "In the first stage, human annotators devise examples that our current best models cannot determine the correct label for.", "These resulting hard exampleswhich should expose additional model weaknessescan be added to the training set and used to train a stronger model.", "We then subject the strengthened model to the same procedure and collect weaknesses over several rounds.", "After each round, we train a new model and set aside a new test set.", "The process can be iteratively repeated in a never-ending learning (Mitchell et al., 2018) setting, with the model getting stronger and the test set getting harder in each new round.", "Thus, not only is the resultant dataset harder than existing benchmarks, but this process also yields a moving post dynamic target for NLU systems, rather than a static benchmark that will eventually 
saturate.", "Our approach draws inspiration from recent ef-Context Target Label Hypothesis Writer Compare Prediction Verifier Disagree Train Dev Test Agree Step 1: Write examples Step 2: Get model feedback Step 3: Verify examples and make splits Step 4: Retrain model for next round Training Phase Collection Phase F e e d b a c k Model correct Model wrong Figure 1: Adversarial NLI data collection via human-and-model-in-the-loop enabled training (HAMLET).", "forts that gamify collaborative training of machine learning agents over multiple rounds (Yang et al., 2017) and pit builders against breakers to learn better models (Ettinger et al., 2017).", "Recently, Dinan et al. (2019) showed that such an approach can be used to make dialogue safety classifiers more robust.", "Here, we focus on natural language inference (NLI), arguably the most canonical task in NLU.", "We collected three rounds of data, and call our new dataset Adversarial NLI (ANLI).", "Our contributions are as follows: 1) We introduce a novel human-and-model-in-the-loop dataset, consisting of three rounds that progressively in-crease in difficulty and complexity, that includes annotator-provided explanations.", "2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks.", "3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests.", "The ANLI dataset is available at github.com/facebookresearch/anli/.", "A demo is available at adversarialnli.com.", "The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail.", "This constitutes a new target for the field to work towards, and can elucidate model capabilities and limitations.", "As noted, however, static benchmarks do not last very long these days.", "If continuously deployed, the data collection procedure we introduce here can pose a dynamic challenge that allows for never-ending learning.", "To paraphrase the great bard (Shakespeare, 1603), there is something rotten in the state of the art .", "We propose Human-And-Model-in-the-Loop Enabled Training (HAMLET), a training procedure to automatically mitigate problems with current dataset collection procedures (see Figure 1).", "In our setup, our starting point is a base model , trained on NLI data.", "Rather than employing automated adversarial methods, here the model's ad-versary is a human annotator.", "Given a context (also often called a premise in NLI), and a desired target label , we ask the human writer to provide a hypothesis that fools the model into misclassifying the label.", "One can think of the writer as a white hat hacker, trying to identify vulnerabilities in the system.", "For each human-generated example that is misclassified, we also ask the writer to provide a reason why they believe it was misclassified.", "For examples that the model misclassified, it is necessary to verify that they are actually correct i.e., that the given context-hypothesis pairs genuinely have their specified target label.", "The best way to do this is to have them checked by another human.", "Hence, we provide the example to human verifiers .", "If two human verifiers agree with the writer, the example is considered a good example.", "If they disagree, we ask a third human verifier to break the tie.", "If there is still disagreement between 
the writer and the verifiers, the example is discarded.", "If the verifiers disagree, they can overrule the writer's original label. [Table 1 header: Context, Hypothesis, Reason, Round, Labels, Annotations, orig.]", "Once data collection for the current round is finished, we construct a new training set from the collected data, with accompanying development and test sets, which are constructed solely from verified correct examples.", "The test set was further restricted so as to: 1) include pairs from exclusive annotators who are never included in the training data; and 2) be balanced by label classes (and genres, where applicable).", "We subsequently train a new model on this and other existing data, and repeat the procedure.", "We employed Mechanical Turk workers with qualifications and collected hypotheses via the ParlAI framework (https://parl.ai/).", "Annotators are presented with a context and a target label, either 'entailment', 'contradiction', or 'neutral', and asked to write a hypothesis that corresponds to the label.", "We phrase the label classes as definitely correct, definitely incorrect, or neither definitely correct nor definitely incorrect given the context, to make the task easier to grasp.", "Model predictions are obtained for the context and submitted hypothesis pair.", "The probability of each label is shown to the worker as feedback.", "If the model prediction was incorrect, the job is complete.", "If not, the worker continues to write hypotheses for the given (context, target-label) pair until the model predicts the label incorrectly or the number of tries exceeds a threshold (5 tries in the first round, 10 tries thereafter).", "To encourage workers, payments increased as rounds became harder.", "For hypotheses that the model predicted incorrectly, and that were verified by other humans, we paid an additional bonus on top of the standard rate.", "For the first round, we used a BERT-Large model (Devlin et al., 2018) trained on a concatenation of SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2017), and selected the best-performing model we could train as the starting point for our dataset collection procedure.", "For Round 1 contexts, we randomly sampled short multi-sentence passages from Wikipedia (of 250-600 characters) from the manually curated HotpotQA training set (Yang et al., 2018).", "Contexts are either ground-truth contexts from that dataset, or they are Wikipedia passages retrieved using TF-IDF (Chen et al., 2017) based on a HotpotQA question.", "For the second round, we used a more powerful RoBERTa model (Liu et al., 2019b) trained on SNLI, MNLI, an NLI version of FEVER (Thorne et al., 2018) (the NLI version of FEVER pairs claims with evidence retrieved by Nie et al. (2019) as (context, hypothesis) inputs), and the training data from the previous round (A1).", "After a hyperparameter search, we", "selected the model with the best performance on the A1 development set.", "Then, using the hyperparameters selected from this search, we created a final set of models by training several models with different random seeds.", "During annotation, we constructed an ensemble by randomly picking a model from the model set as the adversary each turn.", "This helps us avoid annotators exploiting vulnerabilities in one single model.", "A new non-overlapping set of contexts was again constructed from Wikipedia via HotpotQA using the same method as Round 1.",
"2.5 Round 3 For the third round, we selected a more diverse set of contexts, in order to explore robustness under domain transfer.", "In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (extracted from Common Crawl), fiction (extracted from StoryCloze (Mostafazadeh et al., 2016) and CBT (Hill et al., 2015)), formal spoken text (excerpted from court and presidential debate transcripts in the Manually Annotated Sub-Corpus (MASC) of the Open American National Corpus (anc.org/data/masc/corpus/)), and causal or procedural text, which describes sequences of events or actions, extracted from WikiHow.", "Finally, we also collected annotations using the longer contexts present in the GLUE RTE training data, which came from the RTE5 dataset (Bentivogli et al., 2009).", "We trained an even stronger RoBERTa ensemble by adding the training set from the second round (A2) to the training data.", "The ANLI dataset, comprising three rounds, improves upon previous work in several ways.", "First, and most obviously, the dataset is collected to be more difficult than previous datasets, by design.", "Second, it remedies a problem with SNLI, namely that its contexts (or premises) are very short, because they were selected from the image captioning domain.", "We believe longer contexts should naturally lead to harder examples, and so we constructed ANLI contexts from longer, multi-sentence source material.", "Following previous observations that models might exploit spurious biases in NLI hypotheses (Gururangan et al., 2018; Poliak et al., 2018), we conduct a study of the performance of hypothesis-only models on our dataset.", "We show that such models perform poorly on our test sets.", "With respect to data generation with naive annotators, Geva et al. (2019) noted that models can pick up on annotator bias, modelling annotator artefacts rather than the intended reasoning phenomenon.", "To counter this, we selected a subset of annotators (i.e., the exclusive workers) whose data would only be included in the test set.", "This enables us to avoid overfitting to the writing style biases of particular annotators, and also to determine how much individual annotator bias is present for the main portion of the data.", "Examples from each round of dataset collection are provided in Table 1.", "Furthermore, our dataset poses new challenges to the community that were less relevant for previous work, such as: can we improve performance online without having to train a new model from scratch every round, how can we overcome catastrophic forgetting, how do we deal with mixed model biases, etc.", "Because the training set includes examples that the model got right but were not verified, learning from noisy and potentially unverified data becomes an additional interesting challenge.", "The dataset statistics can be found in Table 2.",
"The number of examples we collected increases per round, starting with approximately 19k examples for Round 1, to around 47k examples for Round 2,", "to over 103k examples for Round 3. [Table 3: Model Performance. Columns: Model, Training Data, A1, A2, A3, ANLI, ANLI-E, SNLI, MNLI-m/-mm. BERT S,M (* round-1 base model): 0.0, 28.9, 28.8, 19.8, 19.9, 91.3, 86.7/86.4 | +A1: 44.2, 32.6, 29.3, 35.0, 34.2, 91.3, 86.3/86.5 | +A1+A2: 57.3, 45.2, 33.4, 44.6, 43.2, 90.9, 86.3/86.3 | +A1+A2+A3: 57.2, 49.0, 46.1, 50.5, 46.3, 90.9, 85.6/85.4 | S,M,F,ANLI: 57.4, 48.3, 43.5, 49.3, 44.2, 90.4, 86.0/85.8 | XLNet S,M,F,ANLI: 67.6, 50.7, 48.3, 55.1, 52.0, 91.8, 89.6/89.4 | RoBERTa S,M: 47.6, 25.4, 22.1, 31.1, 31.4, 92.6, 90.8/90.6 | +F: 54.0, 24.2, 22.4, 32.8, 33.7, 92.7, 90.6/90.5 | +F+A1 (* round-2 base model): 68.7, 19.3, 22.0, 35.8, 36.8, 92.8, 90.9/90.7 | +F+A1+A2 (* round-3 base model): 71.2, 44.3, 20.4, 43.7, 41.4, 92.9, 91.0/90.7 | S,M,F,ANLI: 73.8, 48.9, 44.4, 53.7, 49.7, 92.6, 91.0/90.6]", "We collected more data for later rounds not only because that data is likely to be more interesting, but also simply because the base model is better and so annotation took longer to collect good, verified correct examples of model vulnerabilities.", "For each round, we report the model error rate, both on verified and unverified examples.", "The unverified model error rate captures the percentage of examples where the model disagreed with the writer's target label, but where we are not (yet) sure if the example is correct.", "The verified model error rate is the percentage of model errors from example pairs that other annotators confirmed the correct label for.", "Note that error rate is a useful way to evaluate model quality: the lower the model error rate, assuming constant annotator quality and context difficulty, the better the model.", "We observe that model error rates decrease as we progress through rounds.", "In Round 3, where we included a more diverse range of contexts from various domains, the overall error rate went slightly up compared to the preceding round, but for Wikipedia contexts the error rate decreased substantially.", "While for the first round roughly 1 in every 5 examples were verified model errors, this quickly dropped over consecutive rounds, and the overall model error rate is less than 1 in 10.", "On the one hand, this is impressive, and shows how far we have come with just three rounds.", "On the other hand, it shows that we still have a long way to go if even untrained annotators can fool ensembles of state-of-the-art models with relative ease.", "Table 2 also reports the average number of tries, i.e., attempts made for each context until a model error was found (or the number of possible tries is exceeded), and the average time this took (in seconds).", "Again, these metrics are useful for evaluating model quality: observe that the average number of tries and average time per verified error both go up with later rounds.", "This demonstrates that the rounds are getting increasingly more difficult.", "Further dataset statistics and inter-annotator agreement are reported in Appendix C. 
4 Results Table 3 reports the main results.", "In addition to BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b), we also include XLNet (Yang et al., 2019) as an example of a strong, but different, model architecture.", "We show test set performance on the ANLI test sets per round, the total ANLI test set, and the exclusive test subset (examples from test-set-exclusive workers).", "We also show accuracy on the SNLI test set and the MNLI development set (for the purpose of comparing between different model configurations across table rows).", "In what follows, we discuss our observations.", "Base model performance is low.", "Notice that the base model for each round performs very poorly on that round's test set.", "This is the expected outcome: For round 1, the base model gets the entire test set wrong, by design.", "For rounds 2 and 3, we used an ensemble, so performance is not necessarily zero.", "However, as it turns out, performance still falls well below chance, indicating that workers did not find vulnerabilities specific to a single model, but generally applicable ones for that model class.", "Rounds become increasingly more difficult.", "As already foreshadowed by the dataset statistics, round 3 is more difficult (yields lower performance) than round 2, and round 2 is more difficult than round 1.", "This is true for all model architectures.", "Training on more rounds improves robustness.", "Generally, our results indicate that training on more rounds improves model performance.", "This is true for all model architectures.", "Simply training on more normal NLI data would not help a model be robust to adversarial attacks, but our data actively helps mitigate these.", "RoBERTa achieves state-of-the-art performance...", "We obtain state of the art performance on both SNLI and MNLI with the RoBERTa model finetuned on our new data.", "The RoBERTa paper (Liu et al., 2019b) reports a score of 90.2", "for both MNLI-matched and -mismatched dev,", "while we obtain", "91.0 and 90.7.", "The state of the art on SNLI is currently held by MT-DNN (Liu et al., 2019a),", "which reports 91.6", "compared to our 92.9.", "...but is outperformed when it is the base model.", "However, the base (RoBERTa) models for rounds 2 and 3 are outperformed by both BERT and XLNet (rows 5, 6 and 10).", "This shows that annotators found examples that RoBERTa generally struggles with, which cannot be mitigated by more examples alone.", "It also implies that BERT, XLNet, and RoBERTa all have different weaknesses, possibly as a function of their training data (BERT, XLNet and RoBERTa were trained on different data sets, which might or might not have contained information relevant to the weaknesses).", "Continuously augmenting training data does not downgrade performance.", "Even though ANLI training data is different from SNLI and MNLI, adding it to the training set does not harm performance on those tasks.", "Our results (see also rows 2-3 of Table 6) suggest the method could successfully be applied for multiple additional rounds.", "Exclusive test subset difference is small.", "We included an exclusive test subset (ANLI-E) with examples from annotators never seen in training, and find negligible differences, indicating that our models do not over-rely on annotators' writing styles.", "We examine the effectiveness of the adversarial training data in two ways.", "First, we sample from respective datasets to ensure exactly equal amounts of training data.", "Table 5 shows that the adversarial data improves 
performance, including on SNLI and MNLI when we replace part of those datasets with the adversarial data.", "This suggests that the adversarial data is more data efficient than normally collected data.", "Figure 2 shows that adversarial data collected in later rounds is of higher quality and more data-efficient.", "Second, we compared verified correct examples of model vulnerabilities (examples that the model got wrong and were verified to be correct) to unverified ones.", "Figure 3 shows that the verified correct examples are much more valuable than the unverified examples, especially in the later rounds (where the latter drops to random).", "We also test models on two recent hard NLI test sets: SNLI-Hard (Gururangan et al., 2018) and", "the NLI stress tests (Naik et al., 2018) (see Appendix A for details).", "The results are in Table 4.", "We observe that all our models outperform the models presented in original papers for these common stress tests.", "The RoBERTa models perform best on SNLI-Hard and achieve accuracy levels in the high 80s on the 'antonym' (AT), 'numerical reasoning' (NR), 'length' (LN), 'spelling error' (SE) sub-datasets, and show marked improvement on both 'negation' (NG) and 'word overlap' (WO).", "Training on ANLI appears to be particularly useful for the AT, NR, NG and WO stress tests.", "For SNLI and MNLI, concerns have been raised about the propensity of models to pick up on spurious artifacts that are present just in the hypotheses (Gururangan et al., 2018; Poliak et al., 2018).", "Here, we compare full models to models trained only on the hypothesis (marked H).", "Table 6 reports results on ANLI, as well as on SNLI and MNLI.", "The table shows that hypothesis-only models perform poorly on ANLI, and obtain good performance on SNLI and MNLI.", "Hypothesis-only performance. (Footnote 5: Obviously, without manual intervention, some bias remains in how people phrase hypotheses, e.g., contradiction might have more negation, which explains why hypothesis-only performs slightly above chance when trained on ANLI.)", "We observe that in rounds 2 and 3, RoBERTa is not much better than hypothesis-only.", "This could mean two things: either the test data is very difficult, or the training data is not good.", "To rule out the latter, we trained only on ANLI (~163k training examples): RoBERTa matches BERT when trained on the much larger, fully in-domain SNLI+MNLI combined dataset (943k training examples) on MNLI, with both getting 86 (the third row in Table 6).", "Hence, this shows that the test sets are so difficult that state-of-the-art models cannot outperform a hypothesis-only prior.", "We explore the types of inferences that fooled models by manually annotating 500 examples from each round's development set.", "A dynamically evolving dataset offers the unique opportunity to track how model error rates change over time.", "Since each round's development set contains only verified examples, we can investigate two interesting questions: which types of inference do writers employ to fool the models, and are base models differentially sensitive to different types of reasoning?", "The results are summarized in Table 7.", "We devised an inference ontology containing six types of inference: Numerical & Quantitative (i.e., reasoning 
Reference & Names Standard Lexical Tricky Reasoning & Facts Quality A1 38% 13% 18% 13% 22% 53% 4% A2 32% 20% 21% 21% 20% 59% 3% A3 10% 18% 27% 27% 27% 63% 3% Average 27% 17% 22% 22% 23% 58% 3% Table 7: Analysis of 500 development set examples per round and on average. ing about cardinal and ordinal numbers, inferring dates and ages from numbers, etc.), Reference & Names (coreferences between pronouns and forms of proper names, knowing facts about name gender, etc.), Standard Inferences (conjunctions, negations, cause-and-effect, comparatives and superlatives etc.), Lexical Inference (inferences made possible by lexical information about synonyms, antonyms, etc.), Tricky Inferences (wordplay, linguistic strategies such as syntactic transformations/reorderings, or inferring writer intentions from contexts), and reasoning from outside knowledge or additional facts (e.g., You can't reach the sea directly from Rwanda).", "The quality of annotations was also tracked; if a pair was ambiguous or a label debatable (from the expert annotator's perspective), it was flagged.", "Quality issues were rare at 3-4% per round.", "Any one example can have multiple types, and every example had at least one tag.", "We observe that both round 1 and 2 writers rely heavily on numerical and quantitative reasoning in over 30% of the development setthe percentage in A2 (32%) dropped roughly 6% from A1 (38%)while round 3 writers use numerical or quantitative reasoning for only 17%.", "The majority of numerical reasoning types were references to cardinal numbers that referred to dates and ages.", "Inferences predicated on references and names were present in about 10% of rounds 1 & 3 development sets, and reached a high of 20% in round 2, with coreference featuring prominently.", "Standard inference types increased in prevalence as the rounds increased, ranging from 18%27%, as did Lexi-cal' inferences (increasing from 13%31%).", "The percentage of sentences relying on reasoning and outside facts remains roughly the same, in the mid-50s, perhaps slightly increasing over the rounds.", "For round 3, we observe that the model used to collect it appears to be more susceptible to Standard, Lexical, and Tricky inference types.", "This finding is compatible with the idea that models trained on adversarial data perform better, since annotators seem to have been encouraged to devise more creative examples containing harder types of inference in order to stump them.", "Further analysis is provided in Appendix B. 6 Related work Bias in datasets Machine learning methods are well-known to pick up on spurious statistical patterns.", "For instance, in the first visual question answering dataset (Antol et al., 2015), biases like 2 being the correct answer to 39% of the questions starting with how many allowed learning algorithms to perform well while ignoring the visual modality altogether (Jabri et al., 2016; Goyal et al., 2017).", "In NLI, Gururangan et al. (2018), Poliak et al. 
(2018) and Tsuchiya (2018) showed that hypothesis-only baselines often perform far better than chance.", "NLI systems can often be broken merely by performing simple lexical substitutions (Glockner et al., 2018), and struggle with quantifiers (Geiger et al., 2018) and certain superficial syntactic properties (McCoy et al., 2019).", "In question answering, Kaushik and Lipton (2018) showed that question- and passage-only models can perform surprisingly well, while Jia and Liang (2017) added adversarially constructed sentences to passages to cause a drastic drop in performance.", "Many tasks do not actually require sophisticated linguistic reasoning, as shown by the surprisingly good performance of random encoders (Wieting and Kiela, 2019).", "Similar observations were made in machine translation (Belinkov and Bisk, 2017) and dialogue (Sankar et al., 2019).", "Machine learning also has a tendency to overfit on static targets, even if that does not happen deliberately (Recht et al., 2018).", "In short, the field is rife with dataset bias and papers trying to address this important problem.", "This work presents a potential solution: if such biases exist, they will allow humans to fool the models, resulting in valuable training examples until the bias is mitigated.", "Dynamic datasets.", "Bras et al. (2020) proposed AFLite, an approach for avoiding spurious biases through adversarial filtering, which is a model-in-the-loop approach that iteratively probes and improves models.", "Kaushik et al. (2019) offer a causal account of spurious patterns, and counterfactually augment NLI datasets by editing examples to break the model.", "That approach is human-in-the-loop, using humans to find problems with one single model.", "In this work, we employ both human and model-based strategies iteratively, in a form of human-and-model-in-the-loop training, to create completely new examples, in a potentially never-ending loop (Mitchell et al., 2018).", "Human-and-model-in-the-loop training is not a new idea.", "Mechanical Turker Descent proposes a gamified environment for the collaborative training of grounded language learning agents over multiple rounds (Yang et al., 2017).", "The Build it Break it Fix it strategy in the security domain (Ruef et al., 2016) has been adapted to NLP (Ettinger et al., 2017) as well as dialogue safety (Dinan et al., 2019).", "The QApedia framework (Kratzwald and Feuerriegel, 2019) continuously refines and updates its content repository using humans in the loop, while human feedback loops have been used to improve image captioning systems (Ling and Fidler, 2017).", "Wallace et al. (2019) leverage trivia experts to create a model-driven adversarial question writing procedure and generate a small set of challenge questions that QA-models fail on.", "Relatedly, Lan et al. (2017) propose a method for continuously growing a dataset of paraphrases.", "There has been a flurry of work in constructing datasets with an adversarial component, such as Swag (Zellers et al., 2018) and HellaSwag (Zellers et al., 2019), CODAH (Chen et al., 2019), Adversarial SQuAD (Jia and Liang, 2017), Lambada (Paperno et al., 2016) and others.", "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART. 
7 Discussion & Conclusion In this work, we used a human-and-model-in-the-loop training method to collect a new benchmark for natural language understanding.", "The benchmark is designed to be challenging to current state-of-the-art models.", "Annotators were employed to act as adversaries, and encouraged to find vulnerabilities that fool the model into misclassifying, but that another person would correctly classify.", "We found that non-expert annotators, in this gamified setting and with appropriate incentives, are remarkably creative at finding and exploiting weaknesses.", "We collected three rounds, and as the rounds progressed, the models became more robust and the test sets for each round became more difficult.", "Training on this new data yielded the state of the art on existing NLI benchmarks.", "The ANLI benchmark presents a new challenge to the community.", "It was carefully constructed to mitigate issues with previous datasets, and was designed from first principles to last longer.", "The dataset also presents many opportunities for further study.", "For instance, we collected annotator-provided explanations for each example that the model got wrong.", "We provided inference labels for the development set, opening up possibilities for interesting more fine-grained studies of NLI model performance.", "While we verified the development and test examples, we did not verify the correctness of each training example, which means there is probably some room for improvement there.", "A concern might be that the static approach is probably cheaper, since dynamic adversarial data collection requires a verification step to ensure examples are correct.", "However, verifying examples is probably also a good idea in the static case, and adversarially collected examples can still prove useful even if they didn't fool the model and weren't verified.", "Moreover, annotators were better incentivized to do a good job in the adversarial setting.", "Our finding that adversarial data is more data-efficient corroborates this theory.", "Future work could explore a detailed cost and time trade-off between adversarial and static collection.", "It is important to note that our approach is model-agnostic.", "HAMLET was applied against an ensemble of models in rounds 2 and 3, and it would be straightforward to put more diverse ensembles in the loop to examine what happens when annotators are confronted with a wider variety of architectures.", "The proposed procedure can be extended to other classification tasks, as well as to ranking with hard negatives either generated (by adversarial models) or retrieved and verified by humans.", "It is less clear how the method can be applied in generative cases.", "Adversarial NLI is meant to be a challenge for measuring NLU progress, even for as yet undiscovered models and architectures.", "Luckily, if the benchmark does turn out to saturate quickly, we will always be able to collect a new round.", "YN interned at Facebook.", "YN and MB were sponsored by DARPA MCS Grant #N66001-19-2-4031, ONR Grant #N00014-18-1-2871, and DARPA YFA17-D17AP00022.", "Special thanks to Sam Bowman for comments on an earlier draft." ]
[ "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Toward Better Storylines with Sentence-Level Language Models Daphne Ippolito [email protected] David Grangier [email protected] Douglas Eck [email protected] Chris Callison-Burch * [email protected] Abstract We propose a sentence-level language model which selects the next sentence in a story from a finite set of fluent alternatives.", "Since it does not need to model fluency, the sentence-level language model can focus on longer range dependencies, which are crucial for multi-sentence coherence.", "Rather than dealing with individual words, our method treats the story so far as a list of pre-trained sentence embeddings and predicts an embedding for the next sentence, which is more efficient than predicting word embeddings.", "Notably this allows us to consider a large number of candidates for the next sentence during training.", "We demonstrate the effectiveness of our approach with state-of-the-art accuracy on the unsupervised Story Cloze task and with promising results on larger-scale next sentence prediction tasks.", "Computer generation of stories and other kinds of creative writing is a challenging endeavor.", "It entangles two difficult tasks: the generation of fluent natural language and the generation of a coherent storyline.", "In the recent year, neural language models have made tremendous progress with respect to fluency (Bahdanau et al., 2015; Vaswani et al., 2017; Bengio et al., 2003; Devlin et al., 2019), but coherency is still a major challenge (See et al., 2019).", "The generation of coherent stories has recently been addressed with additional conditioning: Fan et al. (2018) suggest conditioning on a story prompt, Clark et al. (2018) propose collaboration between a generative model and a human writer, and Guan et al. (2019) suggest attending to a commonsense graph relevant to the story plot.", "Conditioning based on a generated story plan (Martin et al., 2018; Fan et al., 2019; Yao et al., 2019), a seUniversity of Pennsylvania, Google quence of images (Chandu et al., 2019) or character roles (Liu et al., 2020) have also been considered.", "Our work is orthogonal to these efforts.", "Rather than considering additional conditioning, we propose a model which takes as input several sentences of context and selects the best next sentence within a large set of fluent candidate sentences.", "We leverage pre-trained BERT embeddings (Devlin et al., 2019) to build this sentence-level language model.", "Given the embeddings of the previous sentences of the story, our model learns to predict a likely embedding of the next sentence.", "This task isolates the modeling of long-range dependencies from the prediction of individual words, which has several advantages.", "First, since our model only needs to determine how well each candidate sentence would fit as a coherent continuation to the story, it does not spend capacity and time to learn fluency.", "Second, our model does not manipulate individual words but full sentences, which allows us to consider tens of thousands of candidate sentences at a time.", "This contrasts with prior work (Logeswaran and Lee, 2018) where the need to learn token-level representations limited the number of candidate next sentences that could be considered to a few hundred.", "Third, we can rely on compact model architectures that train quickly because we take advantage of strong semantic representation from a pre-trained bidirectional language model, BERT, as our sentence embeddings.", "Of course, these benefits also imply that our sentence 
representation is limited to the information extracted by the pre-trained model.", "Nevertheless, we show that our model achieves state-of-the-art accuracy among unsupervised approaches on the Story Cloze task: predicting which of two sentences coherently ends a short story.", "Our work also opens up the possibility of ranking thousands of candidate sentences from a large literature repository.", "On the ROC Stories dataset, we observe that training with a large number of candidates is key for selecting the most coherent ending among a large set of candidates at test time.", "We also show preliminary results on the efficacy of our method for ranking candidate next sentences on the Toronto Book Corpus (Kiros et al., 2015), a much larger book dataset.", "We envision that our methods for scoring many candidate next sentences by their coherence with the context might be useful to downstream generation tasks where it is possible to generate many fluent continuations of a text, but it remains an unsolved problem how to refine and choose the best of them.", "To encourage this exploration, we release our code and models.", "We propose a sentence-level language model: our model estimates $P(s_{t+1} \mid s_{1:t})$, the probability distribution for sentence $s_{t+1}$ given the $t$ previous sentences, $s_1, \ldots, s_t$.", "Since it is intractable to marginalize over all possible candidate next sentences, we consider a finite but large set of N valid, fluent sentences.", "Without loss of generality, we can consider $s_{t+1} \in \{1, \ldots, N\}$ as an integer index into that set of possible next sentences.", "This strategy resembles negative sampling in word2vec (Mikolov et al., 2013).", "Our model represents sentences with precomputed vector embeddings.", "Specifically, sentences are represented by the mean of the 768-dimensional contextual word embeddings of the second-to-last layer of BERT (Devlin et al., 2019).", "This representation has been shown to encode more transferable features compared to other layers (Liu et al., 2019).", "Alternative sentence representations were considered, including embeddings from the universal sentence encoder (Cer et al., 2018) and a weighted mean of the BERT embeddings using inverse document frequency weighting (Zhang et al., 2019).", "None of these alternatives improved our results, however.", "Motivated by simplicity, we consider a classical multi-layer perceptron (MLP) f which takes as input the context sentence embeddings concatenated into a single vector.", "At the output layer, we perform a softmax operation.", "If we represent candidate sentences $\{1, \ldots, N\}$ by the embeddings $\{e_i\}_{i=1}^{N}$, our model estimates the probability that i is the next sentence as $P(s_{t+1} = i \mid s_{1:t}) = \exp(e_i^\top h) / Z(h)$, (Footnote 1: Code for ROC Stories experiments can be found at https://github.com/google-research/google-research/tree/master/better_storylines.)", "where $h = f(s_{1:t})$ is the output of the MLP given context $s_{1:t}$, and $Z(h) = \sum_{j=1}^{N} \exp(e_j^\top h)$ is the partition function.", "At train time, the candidate set $\{1, \ldots, N\}$
 consists of the correct next sentence along with $N - 1$ distractor sentences.", "The distractors can either be static (the same set used throughout training) or dynamic (picked at random from a larger set for each train batch).", "In this case, the vocabulary of next values to choose from changes with each train step, similar to negative sampling (Mikolov et al., 2013).", "At test time, novel sentences can be embedded with BERT and scored by our model.", "Like a classical language model, we optimize for the likelihood of the true next sentence's embedding.", "However, when training we found that the sentences from the context $(s_1, \ldots, s_t)$ often ended up being given very high scores by our model.", "Inspired by work in sentence reordering (Lapata, 2003; Logeswaran and Lee, 2018), we incorporated an auxiliary loss, which we refer to as CSLoss, that only includes the context sentences $s_{1:t}$ in the distractor set.", "Lastly, we consider a residual variant of the MLP (referred to as resMLP) with skip connections between layers, as described in He et al. (2016).", "The residual model trains faster and sometimes achieves higher accuracy than the non-residual model.", "Though we experimented with recurrent (Sundermeyer et al., 2012) and self-attention (Vaswani et al., 2017) models, we did not observe improvements, perhaps because the input to our model is already the high-dimensional output of a large masked language model.", "We leave deeper architecture exploration, which will be especially critical as context length is extended, to future work.", "Our experiments use the ROC Stories dataset, which consists of stories focusing on common sense (Mostafazadeh et al., 2016).", "The training set has 98k stories, with five sentences each.", "The validation and test sets each contain 1.8k stories consisting of four sentences followed by two alternative endings: one ending is coherent with the context; the other is not.", "The dataset was introduced for the Story Cloze task, inspired by Taylor (1953), where the goal is to select the coherent ending.", "While the dataset and task were introduced as a way to probe for coherence and commonsense in models trained only on the unlabeled portion, most research derived from this dataset focuses on a supervised setting, using the validation set as a smaller, labeled training set (Chaturvedi et al., 2017; Sun et al., 2019; Cui et al., 2019; Li et al., 2019; Zhou et al., 2019).", "Our work is faithful to the original task objective.", "We train solely on the training set, i.e. the model never sees incoherent endings at training time.", "Model We consider two models, an MLP and a residual MLP.", "They take as input the previous sentences represented as the concatenation of their embeddings.", "Alternative context aggregation strategies were considered with recurrent (Sundermeyer et al., 2012) and attention (Vaswani et al., 2017) architectures, without strong empirical advantages.", "The model maps its input to a vector which is compared to a set of candidate sentence embeddings via dot product.", "The embedding of the true next sentence should receive the highest score.", "For each example, we consider all other fifth sentences in the training set (96k in total) as the candidate set.", "The input of our model is 3,072 dimensional, i.e. 
"After an architecture search, our best MLP has 3 layers of 1,024 units, and our best resMLP has a single residual layer with a hidden size of 1,024.", "Both contain just over 6M trainable parameters.", "Both apply dropout with a rate of 0.5 after each ReLU, and layer normalization is performed on the concatenated context sentence embedding passed in as input to the network and on the final predicted embedding for the next sentence.", "For the Story Cloze task, the two architectures achieve similar validation accuracy, but when considering more than two distractors, the resMLP significantly outperforms the standard MLP.", "The resMLP also converges more quickly than the MLP.", "Training to convergence takes under 2 hours for each model on a Tesla V100.", "Dataset: ROC Stories contains only self-contained five-sentence stories, focusing on everyday life scenarios.", "They contain no dialog and very little flowery, expository language.", "Ideally, our method would also be successful at scoring potential continuations of more naturally written stories.", "To this end, we test our approach on excerpts from the Toronto Book Corpus (Kiros et al., 2015), a dataset of self-published novels.", "The dataset contains over 7,000 unique books totalling over 45 million sentences.", "Since these stories are much longer than the ROC Stories ones and many of the sentences are uninformative (nearly 5% of sentences are 3 words or shorter, and 14% are 5 words or shorter), we double the context length to 8 sentences.", "Model: In addition to a residual MLP architecture similar to the one used on ROC Stories, we also ran experiments with a Transformer model (Vaswani et al., 2017).", "The residual MLP architecture contains 2 residual layers with a hidden size of 1,024 (11M parameters total).", "The Transformer has 4 self-attention layers with a hidden size of 768, a filter size of 2,048, and 8 attention heads (22M parameters total).", "While the residual MLP is trained to predict the 9th sentence given the previous 8 sentences, the Transformer is trained to predict each next sentence given the previous sentences in a sequence of 10 sentences.", "However, we only evaluate the Transformer on the task of predicting the 9th sentence, so that evaluation results are directly comparable to the residual MLP.", "For each batch during training, 2k distractors are randomly selected from the train set.", "As with ROC Stories, we experiment with an auxiliary loss where only sentences from the context are used as distractors.", "Table 3 reports the results.", "We evaluate on the Story Cloze task, a binary classification task, as well as on the task of ranking a large set of possible next sentences.", "Table 1 shows that our method outperforms unsupervised alternatives.", "The introduction of the CSLoss, which considers only context sentences as candidates, improves accuracy compared to using only a loss over all possible fifth sentences.", "For comparison, we include the accuracies of the best unsupervised methods in the literature.", "Schenk and Chiarcos (2017) construct negative examples for their binary classification task by pairing contexts with random fifth sentences selected from the training set.", "Peng et al. (2017) train a language model to predict a representation of the semantic frame, entities, and sentiment of the fifth sentence given the representations of the previous sentences, then take the more likely fifth sentence.", "We achieve higher accuracy without relying on a task-specific architecture.", "Table 1 also shows that picking the ending that is more likely according to a word-level language model, in our case GPT-2's 355M parameter model, does not yield very high accuracies, even when the language model is finetuned on ROC Stories text (Radford et al., 2019).", "Lastly, we also include the accuracy reported by Schwartz et al. (2017), where a logistic classifier is trained to combine multiple language model scores.", "It is worth noting that the state of the art on the Story Cloze task is over 90% accuracy (Li et al., 2019; Cui et al., 2019) in semi-supervised settings.", "The methods achieving this level of performance are not comparable to our unsupervised approach, as they require training on the labeled validation set.", "(Figure 1: the impact of the number of negative sentences used during training on the rank of the true ending out of 98k distractors, plotting P@10 against the number of distractors in the train loss for CSLoss weights of 0.0 and 1.0.)", "The language model approach from Schwartz et al. (2017) also falls into this category.", "For generation and suggestion scenarios, it is useful to be able to surface the best next sentence out of hundreds or thousands of candidates.", "In Table 3, we show the performance of our method on the 2018 validation set when all 98,161 fifth sentences in the training set, plus all 1,571 correct fifth sentences in the 2018 validation set, are considered as candidate endings.", "Top-10 accuracy is highest, at 10.3%, when training a residual MLP without CSLoss.", "Interestingly, strong performance on the Story Cloze task does not necessarily translate to strong performance on the large-scale ranking task.", "The CSLoss improves performance on the Story Cloze task but hurts it for large-scale ranking.", "In Figure 1, we show how large-scale ranking performance improves as the size of the train-time distractor set is increased.", "However, on the Story Cloze task, the number of training distractors has no significant impact on performance.", "Even when only a single distractor is randomly chosen at each step of training, our method achieves over 70% accuracy on the 2016 test set.", "It seems that training for the goal of detecting the true next sentence out of a very diverse candidate set is useful at test time only when the set of distractors at test time is similarly large and diverse.", "The many-distractors training regime might be less useful for the Story Cloze task since the two candidate endings are designed to be quite topically similar to each other.", "The failure examples showcase a side-effect of relying on pre-trained sentence embeddings: if common names like Becky or Laura or sports such as fishing and golf are close to each other in embedding space, our model will fail to distinguish between them.", "When evaluating with 100k distractors, about as many as our ROC Stories large-scale ranking task, P@10 is at best 7.1%, compared with 22.7% for ROC Stories.", "We suspect that this task would benefit from longer contexts and better selection of distractors.", "In particular, a qualitative evaluation of the data highlighted the presence of a large quantity of short, generic sentences among the high-ranking sentences (e.g., 'he said.' and 'Yes.').", "We see reducing the density of such sentences at training time as a potential avenue for improvement.", "In addition, further investigation is necessary into why the Transformer did not work as well as the residual MLP.", "The use of variable-sequence-length architectures like the Transformer will become more critical as the input sequence length is increased beyond what an MLP can easily handle.", "This work introduces a sentence-level language model which takes a sequence of sentences as context and predicts a distribution over a finite set of candidate next sentences.", "It takes advantage of pretrained BERT embeddings to avoid having to learn token-level fluency, allowing the model to focus solely on the coherence of the sentence sequences.", "Our results on the Story Cloze task highlight the advantage of this strategy over word-level language models.", "At train time, our model considers much larger amounts of text per update than typical token-level language models.", "We show that this strategy allows our model to surface appropriate endings to short stories out of a large set of candidates.", "As future work, we plan to further evaluate the impact of different sequential architectures, longer contexts, alternative sentence embeddings, and cleverer selection of distractors.", "Inspired by deliberation networks and automatic post-editing methods (Xia et al., 2017; Freitag et al., 2019), we ultimately want to apply our model to two-step generation, first selecting a sentence from a large set before refining it to fit the context.", "This research is based upon work supported in part by the U.S. DARPA KAIROS Program No. FA8750-19-2-1004.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "objective", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "method", "abstain", "method", "method", "result", "abstain", "result", "result", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "result", "abstain", "method", "abstain", "other", "other", "other", "other" ]
[ "We present NEWSROOM , a summarization dataset of 1.3 million articles and summaries written by authors and editors in newsrooms of 38 major news publications.", "Extracted from search and social media metadata between 1998 and 2017, these high-quality summaries demonstrate high diversity of summarization styles.", "In particular, the summaries combine abstractive and extractive strategies, borrowing words and phrases from articles at varying rates.", "We analyze the extraction strategies used in NEWSROOM summaries against other datasets to quantify the diversity and difficulty of our new data, and train existing methods on the data to evaluate its utility and challenges.", "The dataset is available online at summari.es.", "The development of learning methods for automatic summarization is constrained by the limited high-quality data available for training and evaluation.", "Large datasets have driven rapid improvement in other natural language generation tasks, such as machine translation, where data size and diversity have proven critical for modeling the alignment between source and target texts (Tiedemann, 2012).", "Similar challenges exist in summarization, with the additional complications introduced by the length of source texts and the diversity of summarization strategies used by writers.", "Access to large-scale high-quality data is an essential prerequisite for making substantial progress in summarization.", "In this paper, we present NEWSROOM , a dataset with 1.3 million news articles and human-written summaries.", "NEWSROOM 's summaries were written by authors and editors in the newsrooms of news, sports, entertainment, financial, and other publications.", "The summaries were published with articles as HTML metadata for social media services and Abstractive Summary: South African photographer Anton Hammerl, missing in Libya since April 4th , was killed in Libya more than a month ago .", "Mixed Summary: A major climate protest in New York on Sunday could mark a seminal shift in the politics of global warming, just ahead of the U.N. 
Climate Summit.", "Extractive Summary: A person familiar with the search tells The Associated Press that Texas has offered its head coaching job to Louisvilles Charlie Strong and he is expected to accept.", "search engines page descriptions.", "NEWSROOM summaries are written by humans, for common readers, and with the explicit purpose of summarization.", "As a result, NEWSROOM is a nearly two decade-long snapshot representing how single-document summarization is used in practice across a variety of sources, writers, and topics.", "Identifying large, high-quality resources for summarization has called for creative solutions in the past.", "This includes using news headlines as summaries of article prefixes (Napoles et al., 2012; Rush et al., 2015), concatenating bullet points as summaries (Hermann et al., 2015; See et al., 2017), or using librarian archival summaries (Sandhaus, 2008).", "While these solutions provide large scale data, it comes at the cost of how well they reflect the summarization problem or their focus on very specific styles of summarizations, as we discuss in Section 4.", "NEWSROOM is distinguished from these resources in its combination of size and diversity.", "The summaries were written with the explicit goal of concisely summarizing news articles over almost two decades.", "Rather than rely on a single source, the dataset includes summaries from 38 major publishers.", "This diversity of sources and time span translate into a diversity of summarization styles.", "We explore NEWSROOM to better understand the dataset and how summarization is used in practice by newsrooms.", "Our analysis focuses on a key dimension, extractivenss and abstractiveness : extractive summaries frequently borrow words and phrases from their source text, while abstractive summaries describe the contents of articles primarily using new language.", "We develop measures designed to quantify extractiveness and use these measures to subdivide the data into extractive, mixed, and abstractive subsets, as shown in Figure 1, displaying the broad set of summarization techniques practiced by different publishers.", "Finally, we analyze the performance of three summarization models as baselines for NEWSROOM to better understand the challenges the dataset poses.", "In addition to automated ROUGE evaluation (Lin, 2004a,b), we design and execute a benchmark human evaluation protocol to quantify the output summaries relevance and quality.", "Our experiments demonstrate that NEWSROOM presents an open challenge for summarization systems, while providing a large resource to enable data-intensive learning methods.", "The dataset and evaluation protocol are available online at summari.es.", "There are a several frequently used summarization datasets.", "Listed in Figure 2 are examples from four datasets.", "The examples are chosen to be representative: they have scores within 5% of their dataset average across our analysis measures (Sec-tion 4).", "To illustrate the extractive and abstractive nature of summaries, we underline multi-word phrases shared between the article and summary, and italicize words used only in the summary.", "Datasets produced for the Document Understanding Conference (DUC) 1 are small, high-quality datasets developed to evaluate summarization systems (Harman and Over, 2004; Dang, 2006).", "DUC data consist of newswire articles paired with human summaries written specifically for DUC.", "One distinctive feature of the DUC datasets 1 http://duc.nist.gov/ DUC Example Summary: Floods hit north 
"This is a major advantage of DUC compared to other datasets, especially when evaluating with ROUGE (Lin, 2004b,a), which was designed to be used with multiple references.", "However, DUC datasets are small, which makes it difficult to use them as training data.", "DUC summaries are often used in conjunction with larger training datasets, including Gigaword (Rush et al., 2015; Chopra et al., 2016), CNN / Daily Mail (Nallapati et al., 2017; Paulus et al., 2017; See et al., 2017), or Daily Mail alone (Nallapati et al., 2016b; Cheng and Lapata, 2016).", "The data have also been used to evaluate unsupervised methods (Dorr et al., 2003; Mihalcea and Tarau, 2004; Barrios et al., 2016).", "The Gigaword Corpus (Napoles et al., 2012) contains nearly 10 million documents from seven newswire sources, including the Associated Press, New York Times Newswire Service, and Washington Post Newswire Service.", "Compared to other existing datasets used for summarization, the Gigaword corpus is the largest and most diverse in its sources.", "While Gigaword does not contain summaries, prior work uses Gigaword headlines as simulated summaries (Rush et al., 2015; Chopra et al., 2016).", "These systems are trained on Gigaword to recreate headlines given the first sentence of an article.", "When used this way, Gigaword's simulated summaries are shorter than most natural summary text.", "Gigaword, along with similar text-headline datasets (Filippova and Altun, 2013), is also used for the related sentence compression task (Dorr et al., 2003; Filippova et al., 2015).", "The New York Times Annotated Corpus (Sandhaus, 2008) is the largest summarization dataset currently available.", "It consists of carefully curated articles from a single source, The New York Times.", "The corpus contains several hundred thousand articles written between 1987 and 2007 that have paired summaries.", "The summaries were written for the corpus by library scientists, rather than at the time of publication.", "Our analysis in Section 4 reveals that the data are somewhat biased toward extractive strategies, making it particularly useful as an extractive summarization dataset.", "Despite this, limited work has used this dataset for summarization (Hong and Nenkova, 2014; Durrett et al., 2016; Paulus et al., 2017).", "The CNN / Daily Mail question answering dataset (Hermann et al., 2015) is frequently used for summarization.", "The dataset includes CNN and Daily Mail articles, each associated with several bullet point descriptions.", "When used in summarization, the bullet points are typically concatenated into a single summary (see https://github.com/abisee/cnn-dailymail).", "The dataset has been used for summarization as is (See et al., 2017), or after pre-processing for entity anonymization (Nallapati et al., 2017).", "This differing usage makes comparisons between systems using these data challenging.", "Additionally, some systems use both CNN and Daily Mail for training (Nallapati et al., 2017; Paulus et al., 2017; See et al., 2017), whereas others use only Daily Mail articles (Nallapati et al., 2016b; Cheng and Lapata, 2016).", "Our analysis shows that the CNN / Daily Mail summaries have a strong bias toward extraction (Section 4).", "Similar observations about the data were made by Chen et al. (2016) with respect to the question answering task.", "The NEWSROOM dataset was collected using social media and search engine metadata.", "To create the dataset, we performed a Web-scale crawl of over 100 million pages from a set of online publishers.", "We identify newswire articles and use the summaries provided in the HTML metadata.", "These summaries were created to be used in search engines and social media.", "We collected HTML pages and metadata using the Internet Archive (Archive.org), accessing archived pages of a large number of popular news, sports, and entertainment sites.", "Using Archive.org provides two key benefits.", "First, the archive provides an API that allows for collection of data across time, not limited to recently available articles.", "Second, the archived URLs of the dataset articles are immutable, allowing distribution of this dataset using a thin, URL-only list.", "The publisher sites we crawled were selected using a combination of Alexa.com top overall sites, as well as Alexa's top news sites (Alexa removed the extended public list in 2017; see https://web.archive.org/web/2016/https://www.alexa.com/topsites/category/News).", "We supplemented the lists with older lists published by Google of the highest-traffic sites on the Web (Google removed this list in 2013; see https://web.archive.org/web/2012/http://www.google.com/adplanner/static/top1000).", "We excluded sites such as Reddit that primarily aggregate rather than produce content, as well as publisher sites that proved to have few or no articles with summary metadata available, or that have articles primarily in languages other than English.", "This process resulted in a set of 38 publishers that were included in the dataset.", "We used two techniques to identify article pages from the selected publishers on Archive.org: the search API and an index-page crawl.", "The API allows queries using URL pattern matching, which focuses article crawling on high-precision subdomains or paths.", "We used the API to search for content from the publisher domains, using specific patterns or post-processing filtering to ensure article content.", "In addition, we used Archive.org to retrieve the historical versions of the home page for all publisher domains.", "The archive has content from 1998 to 2017 with varying degrees of time resolution.", "We obtained at least one snapshot of each page for every available day.", "For each snapshot, we retrieved all articles listed on the page.", "For both search and crawled URLs, we performed article de-duplication using URLs to control for varying URL fragments, query parameters, protocols, and ports.", "When performing the merge, we retained only the earliest article version available to prevent the collection of stale summaries that are not updated when articles are changed.", "Following identification and de-duplication, we extracted the article texts and summaries and further cleaned and filtered the dataset.", "Article Text: We used Readability (https://pypi.org/project/readability-lxml/0.6.2/) to extract HTML body content.", "Readability uses HTML heuristics to extract the main content and title of a page, producing article text without extraneous HTML markup and images.", "Our preliminary testing, as well as a comparison by Peters (2015), found Readability to be one of the highest-accuracy content extraction algorithms available.", "To exclude inline advertising and image captions sometimes present in extractions, we applied additional filtering of paragraphs with fewer than five words.", "We excluded articles with no body text extracted.", "Summary Metadata: We extracted the article summaries from the metadata available in the HTML pages of articles.", "These summaries are often written by newsroom editors and journalists to appear in social media distribution and search results.", "While there is no standard metadata format for summaries online, common fields are often present in the page's HTML.", "Popular metadata field types include og:description, twitter:description, and description.", "In cases where different metadata summaries were available, and were different, we used the first field available according to the order above.", "We excluded articles with no summary text of any type.", "We also removed article-summary pairs with a high amount of precisely-overlapping text, to remove rule-based automatically-generated summaries fully copied from the article (e.g., the first paragraph).", "Our scraping and extraction process resulted in a set of 1,321,995 article-summary pairs.", "Simple dataset statistics are shown in Table 1.", "(Table 1: Dataset Statistics. Dataset Size: 1,321,995 articles; Training Set Size: 995,041 articles; Mean Article Length: 658.6 words; Mean Summary Length: 26.7 words; Total Vocabulary Size: 6,925,712 words; Words Occurring 10+ Times: 784,884.)", "The data are divided into training (76%), development (8%), test (8%), and unreleased test (8%) datasets using a hash function of the article URL.",
"We use the articles' Archive.org URLs for lightweight distribution of the data.", "Archive.org is an ideal platform for distributing the data, encouraging its users to scrape its resources.", "We provide the extraction and analysis scripts used during data collection for reproducing the full dataset from the URL list.", "NEWSROOM contains summaries from different topic domains, written by many authors, over the span of more than two decades.", "This diversity is an important aspect of the dataset.", "We analyze the data to quantify the differences in summarization styles and techniques between the different publications to show the importance of reflecting this diversity.", "In Sections 6 and 7, we examine the effect of the dataset diversity on the performance of a variety of summarization systems.", "We examine summarization strategies using three measures that capture the degree of text overlap between the summary and article, and the rate of compression of the information conveyed.", "Given an article text $A = \langle a_1, a_2, \ldots, a_n \rangle$ consisting of a sequence of tokens $a_i$ and the corresponding article summary $S = \langle s_1, s_2, \ldots, s_m \rangle$ consisting of tokens $s_i$, the set of extractive fragments $F(A, S)$ is the set of shared sequences of tokens in $A$ and $S$.", "We identify these extractive fragments of an article-summary pair using a greedy process.", "We process the tokens in the summary in order.", "At each position, if there is a sequence of tokens in the source text that is a prefix of the remainder of the summary, we mark this prefix as extractive and continue.", "We prefer to mark the longest prefix possible at each step.", "Otherwise, we mark the current summary token as abstractive.", "The set $F(A, S)$ includes all the token sequences identified as extractive.", "Figure 3 formally describes this procedure.", "(Figure 3: the greedy procedure to compute the set $F(A, S)$ of extractive phrases in summary $S$ extracted from article $A$; a runnable Python sketch of this procedure appears after this paper's label sequence below.)", "Underlined phrases of Figures 1 and 2 are examples of fragments identified as extractive.", "Using $F(A, S)$, we compute two measures: extractive fragment coverage and extractive fragment density.", "Extractive Fragment Coverage: The coverage measure quantifies the extent to which a summary is derivative of a text.", "$\mathrm{COVERAGE}(A, S)$ measures the percentage of words in the summary that are part of an extractive fragment with the article: $\mathrm{COVERAGE}(A, S) = \frac{1}{|S|} \sum_{f \in F(A,S)} |f|$.", "For example, a summary with 10 words that borrows 7 words from its article text and includes 3 new words will have $\mathrm{COVERAGE}(A, S) = 0.7$.", "Extractive Fragment Density: The density measure quantifies how well the word sequence of a summary can be described as a series of extractions.", "For instance, a summary might contain many individual words from the article and therefore have a high coverage.", "However, if arranged in a new order, the words of the summary could still be used to convey ideas not present in the article.", "We define $\mathrm{DENSITY}(A, S)$ as the average length of the extractive fragment to which each word in the summary belongs.", "The density formulation is similar to the coverage definition but uses the square of the fragment length: $\mathrm{DENSITY}(A, S) = \frac{1}{|S|} \sum_{f \in F(A,S)} |f|^2$.", "For example, an article with a 10-word summary made of two extractive fragments of lengths 3 and 4 would have $\mathrm{COVERAGE}(A, S) = 0.7$ and $\mathrm{DENSITY}(A, S) = 2.5$.", "Compression Ratio: We use a simple dimension of summarization, the compression ratio, to further characterize summarization strategies.", "We define $\mathrm{COMPRESSION}$ as the word ratio between the article and summary: $\mathrm{COMPRESSION}(A, S) = |A| / |S|$.", "Summarizing with higher compression is challenging, as it requires capturing more precisely the critical aspects of the article text.", "We use density, coverage, and compression to understand the distribution of human summarization techniques across different sources.", "Figure 4 shows the distributions of summaries for different domains in the NEWSROOM dataset, along with three major existing summarization datasets: DUC 2003-2004 (combined), CNN / Daily Mail, and the New York Times Corpus.", "Publication Diversity: Each NEWSROOM publication shows a unique distribution of summaries mixing extractive and abstractive strategies in varying amounts.", "For example, the third entry on the top row shows the summarization strategy used by BuzzFeed.", "The density (y-axis) is relatively low, meaning BuzzFeed summaries are unlikely to include long extractive fragments.", "While the coverage (x-axis) is more varied, BuzzFeed's coverage tends to be lower, indicating that it frequently uses novel words in summaries.", "The publication plots in the figure are sorted by median compression ratio.", "We observe that publications with lower compression ratio (top-left of the figure) exhibit higher diversity along both dimensions of extractiveness.", "However, as the median compression ratio increases, the distributions become more concentrated.", "(Figure 4: density and coverage distributions across the different domains and existing datasets. DUC 2003-2004: n = 4,214, compression 47:1; CNN / Daily Mail: n = 287,227, compression 14:1; New York Times: n = 457,006, compression 12:1; NEWSROOM: n = 995,041, compression 17:1.)", "Dataset Diversity: Figure 4 demonstrates how DUC, CNN / Daily Mail, and the New York Times exhibit different human summarization strategies.", "DUC summarization is fairly similar to the high-compression newsrooms shown in the lower publication plots in Figure 4.", "However, DUC's median compression ratio is much higher than that of all other datasets and NEWSROOM publications.", "The figure shows that CNN / Daily Mail and the New York Times are skewed toward extractive summaries with lower compression ratios.", "CNN / Daily Mail shows higher coverage and density than all other datasets and publishers in our data.", "Compared to existing datasets, NEWSROOM covers a much larger range of summarization styles, ranging from highly extractive to highly abstractive.", "We train and evaluate several summarization systems to understand the challenges of NEWSROOM and its usefulness for training systems.", "We evaluate three systems, each using a different summarization strategy with respect to extractiveness: fully extractive (TextRank), fully abstractive (Seq2Seq), and mixed (pointer-generator).", "We further study the performance of the pointer-generator model on NEWSROOM by training three systems using different dataset configurations.", "We compare these systems to two rule-based systems that provide a baseline (Lede-3) and an extractive oracle (Fragments).", "Extractive: TextRank: TextRank is a sentence-level extractive summarization system.",
"The system was originally developed by Mihalcea and Tarau (2004) and was later further developed and improved by Barrios et al. (2016).", "TextRank uses an unsupervised sentence-ranking approach similar to Google PageRank (Page et al., 1999).", "TextRank picks a sequence of sentences from a text for the summary, up to a maximum allowable length.", "While this maximum length is typically preset by the user, in order to optimize ROUGE scoring we tune this parameter to optimize ROUGE-1 F1 score on the NEWSROOM training data.", "We experimented with values between 1 and 200, and found the optimal value to be 50 words.", "We use the tuned TextRank in Tables 2 and 3 and in the supplementary material.", "Abstractive: Seq2Seq / Attention: Sequence-to-sequence models with attention (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014) have been applied to various language tasks, including summarization (Chopra et al., 2016; Nallapati et al., 2016a).", "The process by which the model produces tokens is abstractive, as there is no explicit mechanism to copy tokens from the input text.", "We train a TensorFlow implementation of the Rush et al. (2015) model (https://github.com/tensorflow/models/tree/f87a58/research/textsum) using NEWSROOM.", "Mixed: Pointer-Generator: The pointer-generator model (See et al., 2017) uses abstractive token generation and extractive token copying using a pointer mechanism (Vinyals et al., 2015; Gulcehre et al., 2016), keeping track of extractions using coverage (Tu et al., 2016).", "We evaluate three instances of this model by varying the training data: (1) Pointer-C, trained on the CNN / Daily Mail dataset; (2) Pointer-N, trained on the NEWSROOM dataset; and (3) Pointer-S, trained on a random subset of the NEWSROOM training data the same size as the CNN / Daily Mail training set.", "The last instance aims to understand the effects of dataset size and summary diversity.", "Lower Bound: Lede-3: A common automatic summarization strategy of online publications is to copy the first sentence, first paragraph, or first k words of the text and treat this as the summary.", "Following prior work (See et al., 2017; Nallapati et al., 2017), we use the Lede-3 baseline, in which the first three sentences of the text are returned as the summary.", "Though simple, this baseline is competitive with state-of-the-art systems.", "Extractive Oracle: Fragments: This system has access to the reference summary.", "Given an article $A$ and its summary $S$, the system computes $F(A, S)$ (Section 4).", "Fragments concatenates the fragments in $F(A, S)$ in the order they appear in the summary, representing the best possible performance of an ideal extractive system.", "Only systems that are capable of abstractive reasoning can outperform the ROUGE scores of Fragments.", "We study model performance on NEWSROOM, CNN / Daily Mail, and the combined DUC 2003 and 2004 datasets.", "We use the five systems described in Section 5, including the extractive oracle.", "We also evaluate the systems using subsets of NEWSROOM to characterize the sensitivity of systems to different levels of extractiveness in reference summaries.", "(Table 2: ROUGE-1, ROUGE-2, and ROUGE-L scores (R-1/R-2/R-L) for baselines and systems on the combined DUC 2003 & 2004 datasets, the CNN / Daily Mail dataset, and the released (T) and unreleased (U) test sets of NEWSROOM. DUC: Lede-3 12.99/3.89/11.44; Fragments 87.04/68.45/87.04; TextRank 15.75/4.06/13.02; Abs-N 2.44/0.04/2.37; Pointer-C 12.40/2.88/10.74; Pointer-S 15.10/4.55/12.42; Pointer-N 17.29/5.01/14.53. CNN / Daily Mail: Lede-3 38.64/17.12/35.13; Fragments 93.36/83.19/93.36; TextRank 29.06/11.14/24.57; Abs-N 5.07/0.16/4.80; Pointer-C 32.51/11.90/28.95; Pointer-S 34.33/13.79/28.42; Pointer-N 31.61/11.70/27.23. NEWSROOM (T): Lede-3 30.49/21.27/28.42; Fragments 88.46/76.03/88.46; TextRank 22.77/9.79/18.98; Abs-N 5.88/0.39/5.32; Pointer-C 20.25/7.32/17.30; Pointer-S 24.50/12.60/20.33; Pointer-N 26.02/13.25/22.43. NEWSROOM (U): Lede-3 30.63/21.41/28.57; Fragments 88.48/76.06/88.48; TextRank 22.76/9.80/18.97; Abs-N 5.90/0.43/5.36; Pointer-C 20.29/7.33/17.31; Pointer-S 24.48/12.52/20.30; Pointer-N 26.04/13.24/22.45.)", "We use the F1-score variants of ROUGE-1, ROUGE-2, and ROUGE-L to account for different summary lengths.", "ROUGE scores are computed with the default configuration of the Lin (2004b) ROUGE v1.5.5 reference implementation.", "Input article text and reference summaries for all systems are tokenized using the Stanford CoreNLP tokenizer (Manning et al., 2014).", "Table 2 shows results for summarization systems on DUC, CNN / Daily Mail, and NEWSROOM.", "In nearly all cases, the fully extractive Lede-3 baseline produces the most successful summaries, with the exception of the relatively abstractive, high-compression DUC.", "Among the models, the NEWSROOM-trained Pointer-N performs best on all datasets other than CNN / Daily Mail, an out-of-domain dataset.", "Pointer-S, which has access to only a limited subset of NEWSROOM, performs worse than Pointer-N on average.", "However, despite not being trained on CNN / Daily Mail, Pointer-S outperforms Pointer-C on its own data under ROUGE-N and is competitive under ROUGE-L.", "Finally, both Pointer-N and Pointer-S outperform the other systems and baselines on DUC, whereas Pointer-C does not outperform Lede-3.", "Table 3 shows development results on the NEWSROOM data for different levels of extractiveness.", "Pointer-N outperforms the remaining models across all extractive subsets of NEWSROOM and, in the case of the abstractive subset, exceeds the performance of Lede-3.", "The success of Pointer-N and Pointer-S in generalizing and outperforming models on DUC and CNN / Daily Mail indicates the usefulness of NEWSROOM in generalizing to out-of-domain data.", "Similar subset analyses for our other two measures, coverage and compression, are included in the supplementary material.", "ROUGE scores systems using frequencies of shared n-grams.", "Evaluating systems with ROUGE alone biases scoring against abstractive systems, which rely more on paraphrasing.", "To overcome this limitation, we provide human evaluation of the different systems on NEWSROOM.", "While human evaluation is still uncommon in summarization work, developing a benchmark dataset presents an opportunity for developing an accompanying protocol for human evaluation.", "Our evaluation method is centered around three objectives: (1) distinguishing between syntactic and semantic summarization quality, (2) providing a reliable (consistent and replicable) measurement, and (3) allowing for portability such that the protocol can be reused for other summarization tasks and datasets.", "(Table 4 lists the exact prompt given to raters for each dimension; e.g., Informativeness: 'How well does the summary capture the key points of the article?')", "We select two semantic and two syntactic dimensions for evaluation based on experiments with evaluation tasks by Paulus et al. (2017) and Tan et al. (2017).", "The two semantic dimensions, summary informativeness (INF) and relevance (REL), measure whether the system-generated text is useful as a summary and appropriate for the source text, respectively.", "The two syntactic dimensions, fluency (FLU) and coherence (COH), measure whether individual sentences or phrases of the summary are well written and whether the summary as a whole makes sense, respectively.", "Evaluation was performed on 60 summaries, 20 from each of the extractive, mixed, and abstractive NEWSROOM subsets.", "Each system-article pair was evaluated by three unique raters.", "Exact prompts given to raters for each dimension are shown in Table 4.", "Table 5 shows the mean score given to each system under each of the four dimensions, as well as the mean overall score (rightmost column).", "No summarization system exceeded the scores given to the Lede-3 baseline.", "However, the extractive oracle designed to maximize n-gram-based evaluation performed worse than the majority of systems under human evaluation.", "While the fully abstractive Abs-N model performed very poorly under automatic evaluation, it fared slightly better when scored by humans.", "TextRank received the highest overall score.", "TextRank generates full sentences extracted from the article, and raters preferred TextRank primarily for its fluency and coherence.", "The pointer-generator models do not have this advantage, and raters did not find the pointer-generator models to be as syntactically sound as TextRank.", "However, raters preferred the informativeness and relevance of the Pointer-S and Pointer-N models, though not the Pointer-C model, over TextRank.", "We present NEWSROOM, a dataset of articles and their summaries written in the newsrooms of online publications.", "NEWSROOM is the largest summarization dataset available to date, and it exhibits a wide variety of human summarization strategies.", "Our proposed measures and the analysis of strategies used by different publications and articles suggest new directions for evaluating the difficulty of summarization tasks and for developing future summarization models.", "We show that the dataset's diversity of summaries presents a new challenge to summarization systems.", "Finally, we find that using NEWSROOM to train an existing state-of-the-art mixed-strategy summarization model results in performance improvements on out-of-domain data.", "The NEWSROOM dataset is available online at summari.es.", "This work is funded by Oath as part of the Connected Experiences Laboratory and by a Google Research Award.", "We thank the anonymous reviewers for their feedback." ]
[ "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "abstain", "other", "other" ]
[ "We propose a multi-task learning framework to learn a joint Machine Reading Comprehension (MRC) model that can be applied to a wide range of MRC tasks in different domains.", "Inspired by recent ideas of data selection in machine translation, we develop a novel sample re-weighting scheme to assign sample-specific weights to the loss.", "Empirical study shows that our approach can be applied to many existing MRC models.", "Combined with contextual representations from pre-trained language models (such as ELMo), we achieve new state-of-the-art results on a set of MRC benchmark datasets.", "We release our code at https://github.com/ xycforgithub/MultiTask-MRC .", "Machine Reading Comprehension (MRC) has gained growing interest in the research community (Rajpurkar et al., 2016; Yu et al., 2018).", "In an MRC task, the machine reads a text passage and a question, and generates (or selects) an answer based on the passage.", "This requires the machine to possess strong comprehension, inference and reasoning capabilities.", "Over the past few years, there has been much progress in building end-to-end neural network models (Seo et al., 2016) for MRC.", "However, most public MRC datasets (e.g., SQuAD, MS MARCO, TriviaQA) are typically small (less than 100K) compared to the model size (such as SAN (Liu et al., 2018c,b) with around 10M parameters).", "To prevent over-fitting, recently there have been some studies on using pre-trained word embeddings (Pennington et al., 2014) and contextual embeddings in the MRC model training, as well as back-translation approaches (Yu et al., 2018) for data augmentation.", "Multi-task learning (Caruana, 1997) is a widely studied area in machine learning, aiming at better model generalization by combining training datasets from multiple tasks.", "In this work, we explore a multi-task learning (MTL) framework to enable the training of one universal model across different MRC tasks for better generalization.", "Intuitively, this multi-task MRC model can be viewed as an implicit data augmentation technique, which can improve generalization on the target task by leveraging training data from auxiliary tasks.", "We observe that merely adding more tasks cannot provide much improvement on the target task.", "Thus, we propose two MTL training algorithms to improve the performance.", "The first method simply adopts a sampling scheme, which randomly selects training data from the auxiliary tasks controlled by a ratio hyperparameter; The second algorithm incorporates recent ideas of data selection in machine translation (van der Wees et al., 2017).", "It learns the sample weights from the auxiliary tasks automatically through language models.", "Prior to this work, many studies have used upstream datasets to augment the performance of MRC models, including word embedding (Pen-nington et al., 2014), language models (ELMo) (Peters et al., 2018) and machine translation (Yu et al., 2018).", "These methods aim to obtain a robust semantic encoding of both passages and questions.", "Our MTL method is orthogonal to these methods: rather than enriching semantic embedding with external knowledge, we leverage existing MRC datasets across different domains, which help make the whole comprehension process more robust and universal.", "Our experiments show that MTL can bring further performance boost when combined with contextual representations from pre-trained language models, e.g., ELMo (Peters et al., 2018).", "To the best of our knowledge, this is the first work that systematically 
explores multi-task learning for MRC.", "In previous methods that use language models and word embedding, the external embedding/language models are pre-trained separately and remain fixed during the training of the MRC model.", "Our model, on the other hand, can be trained with more flexibility on various MRC tasks.", "MTL is also faster and easier to train than embedding/LM methods: our approach requires no pre-trained models, whereas back translation and ELMo both rely on large models that would need days to train on multiple GPUs (Jozefowicz et al., 2016; Peters et al., 2018).", "We validate our MTL framework with two state-of-the-art models on four datasets from different domains.", "Experiments show that our methods lead to a significant performance gain over single-task baselines on SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017) and Who-Did-What (Onishi et al., 2016), while achieving state-of-the-art performance on the latter two.", "For example, on NewsQA (Trischler et al., 2017), our model surpassed human performance by 13.4 (46.5 vs 59.9) and 3.2 (72.6 vs 69.4) absolute points in terms of exact match and F1.", "The contribution of this work is three-fold.", "First, we apply multi-task learning to the MRC task, which brings significant improvements over single-task baselines.", "Second, the performance gain from MTL can be easily combined with existing methods to obtain further performance gain.", "Third, the proposed sampling and re-weighting scheme can further improve the multi-task learning performance.", "Studies in machine reading comprehension mostly focus on architecture design of neural networks, such as bidirectional attention (Seo et al., 2016), dynamic reasoning (Xu et al., 2017), and parallelization (Yu et al., 2018).", "Some recent work has explored transfer learning that leverages out-domain data to learn MRC models when no training data is available for the target domain (Golub et al., 2017).", "In this work, we explore multi-task learning to make use of the data from other domains, while we still have access to target domain training data.", "Multi-task learning (Caruana, 1997) has been widely used in machine learning to improve generalization using data from multiple tasks.", "For natural language processing, MTL has been successfully applied to low-level parsing tasks (Collobert et al., 2011), sequence-to-sequence learning (Lu-ong et al., 2015), and web search (Liu et al., 2015).", "More recently, (McCann et al., 2018) proposes to cast all tasks from parsing to translation as a QA problem and use a single network to solve all of them.", "However, their results show that multi-task learning hurts the performance of most tasks when tackling them together.", "Differently, we focus on applying MTL to the MRC task and show significant improvement over single-task baselines.", "Our sample re-weighting scheme bears some resemblance to previous MTL techniques that assign weights to tasks (Kendall et al., 2018).", "However, our method gives a more granular score for each sample and provides better performance for multitask learning MRC.", "We call our model Multi-Task-SAN (MT-SAN), which is a variation of SAN (Liu et al., 2018c) model with two main differences:", "i) we add a highway network layer after the embedding layer, the encoding layer and the attention layer;", "ii) we use exponential moving average (Seo et al., 2016) during evaluation.", "The SAN architecture and our modifications are briefly described below and in Section 5.2, and detailed 
description can be found in (Liu et al., 2018c).", "For most tasks we consider, our MRC model takes a triplet ( Q, P, A ) as input, where Q = ( q 1 , ..., q m ) , P = ( p 1 , ..., p n ) are the word index representations of a question and a passage, respectively , and A = ( a begin , a end ) is the index of the answer span.", "The goal is to predict A given ( Q, P ) .", "We map the word indices of P and Q into their 300-dim Glove vectors (Pennington et al., 2014).", "We also use the following additional information for embedding words:", "i) 16-dim part-of-speech (POS) tagging embedding;", "ii) 8-dim named-entity-recognition (NER) embedding;", "iii) 3-dim exact match embedding: f exact match ( p i ) = I ( p i Q ) , where matching is determined based on the original word, lower case, and lemma form, respectively;", "iv) Question enhanced passage word embeddings: f align ( p i ) = (cid:80) j i,j h ( GloVe ( q j )) , where i,j = exp( h ( GloVe ( p j )) ,h ( GloVe ( q i ))) (cid:80) j (cid:48) exp( h ( GloVe ( p j (cid:48) )) ,h ( GloVe ( q i ))) (1) is the similarity between word p j and q i , and g ( ) is a 300-dim single layer neural net with Recti-fied Linear Unit (ReLU) g ( x ) = ReLU ( W 1 x ) ;", "v) Passage-enhanced question word embeddings: the same as", "iv) but computed in the reverse direction.", "To reduce the dimension of the input to the next layer, the 624-dim input vectors of passages and questions are passed through a ReLu layer to reduce their dimensions to 125.", "After the ReLU network, we pass the 125-dim vectors through a highway network (Srivastava et al., 2015), to adapt to the multi-task setting: g i = sigmoid ( W 2 p ti ) , p ti = ReLU ( W 3 p ti ) (cid:12) g i + g i (cid:12) p t i , where p t i is the vector after ReLU transformation.", "Intuitively, the highway network here provides a neuron-wise weighting, which can potentially handle the large variation in data introduced by multiple datasets.", "Both the passage and question encodings go through a 2-layer Bidirectional Long-Short Term Memory (BiLSTM, Hochreiter and Schmidhuber, 1997) network in this layer.", "We append a 600-dim CoVe vector (McCann et al., 2017) to the output of the lexicon encoding layer as input to the contextual encoders.", "For the experiments with ELMo, we also append a 1024-dim ELMo vector.", "Similar to the lexicon encoding layer, the outputs of both layers are passed through a highway network for multi-tasking.", "Then we concatenate the output of the two layers to obtain H q R 2 d m for the question and H p = R 2 d n the passage, where d is the dimension of the BiLSTM.", "We fuse H p and H q through cross attention and generate a working memory in this layer.", "We adopt the attention function from (Vaswani et al., 2017) and compute the attention matrix as C = dropout (cid:16) f attention ( H q , H p ) (cid:17) R m n .", "We then use C to compute a question-aware passage representation as U p = concat ( H p , H q C ) .", "Since a passage usually includes several hundred tokens, we use the method of (Lin et al., 2017) to apply self attention to the representations of passage to rearrange its information: U p = U p drop diag ( f attention ( U p , U p )) , where drop diag means that we only drop diagonal elements on the similarity matrix (i.e., attention with itself).", "Then, we concatenate U p and U p and pass them through a BiLSTM: M = BiLSTM ([ U p ]; U p ]) .", "Finally, output of the BiLSTM (after concatenating two directions) goes through a highway layer to produce the memory.", 
"3.5 Answer Module The base answer module is the same as SAN, which computes a distribution over spans in the passage.", "Firstly, we compute an initial state s 0 by self attention on H q : s 0 Highway (cid:18)(cid:80) j exp( w 4 H qj ) (cid:80) j (cid:48) exp w 4 H qj (cid:48) H qj (cid:19) .", "The final answer is computed through T time steps.", "At step t { 1 , ..., T 1 } , we compute the new state using a Gated Recurrent Unit (GRU, Cho et al., 2014) s t = GRU ( s t 1 , x t ) , where x t is computed by attention between M and s t 1 : x t = (cid:80) j j M j , j = softmax ( s t 1 W 5 M ) .", "Then each step produces a prediction of the start and end of answer spans through a bilinear function: P begin t = softmax ( s t W 6 M ) , P end t = softmax ( s t W 7 M ) .", "The final prediction is the average of each time step: P begin = 1 T (cid:80) t P begin t , P end = 1 T (cid:80) t P end t .", "We randomly apply dropout on the step level in each time step during training, as done in (Liu et al., 2018c).", "During training, the objective is the log-likelihood of the ground truth: l ( Q, P, A ) = log P begin ( a begin ) + log P end ( a end ) .", "We describe our MTL training algorithms in this section.", "We start with a very simple and straightforward algorithm that samples one task and one mini-batch from that task at each iteration.", "To improve the performance of MTL on a target dataset, we propose two methods to re-weight samples according to their importance.", "The first proposed method directly lowers the probability of sampling from a particular auxiliary task; however, this probability has to be chosen using grid search.", "We then propose another method that avoids such search by using a language model.", "Suppose we have K different tasks, the simplest version of our MTL training procedure is shown in Algorithm", "1. In each epoch, we take all the mini-batches from all datasets and shuffle them for Algorithm 1 Multi-task Learning of MRC Input: k different datasets D 1 , ..., DK , max epoch 1: Initialize the model M 2: for epoch = 1 , 2 , ... 
"We describe our MTL training algorithms in this section.", "We start with a very simple and straightforward algorithm that samples one task and one mini-batch from that task at each iteration.", "To improve the performance of MTL on a target dataset, we propose two methods to re-weight samples according to their importance.", "The first proposed method directly lowers the probability of sampling from a particular auxiliary task; however, this probability has to be chosen using grid search.", "We then propose another method that avoids such search by using a language model.", "Suppose we have $K$ different tasks; the simplest version of our MTL training procedure is shown in Algorithm 1:
Algorithm 1 (Multi-task learning of MRC).
Input: $K$ different datasets $D_1, \dots, D_K$; max_epoch.
1: Initialize the model $M$.
2: for epoch $= 1, 2, \dots,$ max_epoch do
3:   Divide each dataset $D_k$ into $N_k$ mini-batches $D_k = \{b_{k1}, \dots, b_{kN_k}\}$, $1 \le k \le K$.
4:   Put all mini-batches together and randomly shuffle their order to obtain a sequence $B = (b_1, \dots, b_L)$, where $L = \sum_k N_k$.
5:   for each mini-batch $b \in B$ do
6:     Perform a gradient update on $M$ with loss $l(b) = \sum_{(Q,P,A) \in b} l(Q, P, A)$.
7:   end for
8:   Evaluate development set performance.
9: end for
Output: the model with the best evaluation performance.", "In each epoch, we take all the mini-batches from all datasets and shuffle them for model training, and the same set of parameters is used for all tasks.", "Perhaps surprisingly, as we will show in the experiment results, this simple baseline method can already lead to a considerable improvement over the single-task baselines.", "One observation is that the performance of our model using Algorithm 1 starts to deteriorate as we add more and more data from other tasks into our training pool.", "We hypothesize that the external data will inevitably bias the model towards auxiliary tasks instead of the target task.", "To avoid such an adverse effect, we introduce a mixture-ratio parameter $\alpha$ during training.", "The training algorithm with the mixture ratio is presented in Algorithm 2, with $D_1$ being the target dataset:
Algorithm 2 (Multi-task learning of MRC with mixture ratio, targeting $D_1$).
Input: $K$ different datasets $D_1, \dots, D_K$; max_epoch; mixture ratio $\alpha$.
1: Initialize the model $M$.
2: for epoch $= 1, 2, \dots,$ max_epoch do
3:   Divide each dataset $D_k$ into $N_k$ mini-batches $D_k = \{b_{k1}, \dots, b_{kN_k}\}$, $1 \le k \le K$.
4:   $S \leftarrow \{b_{11}, \dots, b_{1N_1}\}$.
5:   Randomly pick $\lfloor \alpha N_1 \rfloor$ mini-batches from $\bigcup_{k=2}^{K} D_k$ and add them to $S$.
6:   Arrange the mini-batches in $S$ in a random order to obtain a sequence $B = (b_1, \dots, b_L)$, where $L = N_1 + \lfloor \alpha N_1 \rfloor$.
7:   for each mini-batch $b \in B$ do
8:     Perform a gradient update on $M$ with loss $l(b) = \sum_{(Q,P,A) \in b} l(Q, P, A)$.
9:   end for
10:  Evaluate development set performance.
11: end for
Output: the model with the best evaluation performance.", "In each epoch, we use all mini-batches from $D_1$, while only a ratio $\alpha$ of mini-batches from the external datasets are used to train the model.", "In our experiments, we use hyperparameter search to find the best $\alpha$ for each dataset combination.", "This method resembles previous multi-task learning methods that weight losses differently (e.g., Kendall et al., 2018), and is very easy to implement.", "In our experiments, we use Algorithm 2 to train our network when we only use 2 datasets for MTL.",
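A compact sketch of the epoch-level batch schedule shared by Algorithms 1 and 2 (names are illustrative; Algorithm 1 is the special case that simply pools and shuffles every mini-batch):

```python
import random

def epoch_schedule(target_batches, aux_batches, alpha=None):
    """One epoch's mini-batch sequence.

    alpha=None reproduces Algorithm 1 (use every mini-batch); a float alpha
    reproduces Algorithm 2: all target batches plus floor(alpha * N_1)
    randomly chosen auxiliary batches (assumes enough auxiliary batches).
    """
    if alpha is None:
        schedule = target_batches + aux_batches
    else:
        n_aux = int(alpha * len(target_batches))          # floor(alpha * N_1)
        schedule = target_batches + random.sample(aux_batches, n_aux)
    random.shuffle(schedule)
    return schedule  # perform one gradient update per mini-batch, in order
```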
"The mixture ratio (Algorithm 2) dramatically improves the performance of our system.", "However, it requires finding an ideal ratio via hyperparameter search, which is time-consuming.", "Furthermore, the ratio gives the same weight to every auxiliary sample, but the relevance of individual data points to the target task can vary greatly.", "We develop a novel re-weighting method to resolve these problems, using ideas inspired by data selection in machine translation (Axelrod et al., 2011; van der Wees et al., 2017).", "We use $(Q^k, P^k, A^k)$ to represent a data point from the $k$-th task for $1 \le k \le K$, with $k = 1$ being the target task.", "Since the passage styles are hard to evaluate, we only evaluate data points based on $Q^k$ and $A^k$.", "Note that only data from auxiliary tasks ($2 \le k \le K$) is re-weighted; target task data always have weight 1.", "Our scores consist of two parts, one for questions and one for answers.", "For questions, we create language models (detailed in Section 5.2) using the questions from each task, which we denote $LM_k$ for the $k$-th task.", "For each question $Q^k$ from an auxiliary task, we compute a cross-entropy score $$H_{C,Q}(Q^k) = -\frac{1}{m} \sum_{w \in Q^k} \log LM_C(w), \quad (2)$$ where $C \in \{1, k\}$ is the target or auxiliary task, $m$ is the length of question $Q^k$, and $w$ iterates over all words in $Q^k$.", "It is hard to build language models for answers since they are typically very short (e.g., answers on SQuAD include only one or two words in most cases).", "We instead just use the length of answers as a signal for the scores.", "Let $l^k_a$ be the length of $A^k$; the cross-entropy answer score is defined as $$H_{C,A}(A^k) = -\log \mathrm{freq}_C(l^k_a), \quad (3)$$ where $\mathrm{freq}_C$ is the frequency of answer lengths in task $C \in \{1, k\}$.", "The cross-entropy scores are then normalized over all samples in task $C$ to create a comparable metric across all auxiliary tasks: $$H'_{C,Q}(Q^k) = \frac{H_{C,Q}(Q^k) - \min(H_{C,Q})}{\max(H_{C,Q}) - \min(H_{C,Q})} \quad (4)$$ $$H'_{C,A}(A^k) = \frac{H_{C,A}(A^k) - \min(H_{C,A})}{\max(H_{C,A}) - \min(H_{C,A})} \quad (5)$$ for $C \in \{1, 2, \dots, K\}$.", "For $C \in \{2, \dots, K\}$, the maximum and minimum are taken over all samples in task $k$.", "For $C = 1$ (the target task), they are taken over all available samples.", "Intuitively, $H'_{C,Q}$ and $H'_{C,A}$ represent the similarity of the text $Q, A$ to task $C$; a low $H'_{C,Q}$ (resp. $H'_{C,A}$) means that $Q^k$ (resp. $A^k$) is easy to predict and similar to $C$, and vice versa.", "We would like samples that are most similar to data in the target domain (low $H'_1$) and most different (informative) from data in the auxiliary task (high $H'_k$).", "We thus compute the following cross-entropy difference for each external data point: $$\mathrm{CED}(Q^k, A^k) = \big(H'_{1,Q}(Q^k) - H'_{k,Q}(Q^k)\big) + \big(H'_{1,A}(A^k) - H'_{k,A}(A^k)\big) \quad (6)$$ for $k \in \{2, \dots, K\}$.", "Note that a low CED score indicates high importance.", "Finally, we transform the scores into weights by negating them and normalizing to $[0, 1]$: $$\mathrm{CED}'(Q^k, A^k) = 1 - \frac{\mathrm{CED}(Q^k, A^k) - \min(\mathrm{CED})}{\max(\mathrm{CED}) - \min(\mathrm{CED})}. \quad (7)$$", "Here the maximum and minimum are taken over all available samples and tasks.", "Our training algorithm is the same as Algorithm 1, but for mini-batch $b$ we instead use the loss $$l(b) = \sum_{(P,Q,A) \in b} \mathrm{CED}'(Q, A)\, l(P, Q, A) \quad (8)$$ in step 6.",
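A sketch of how Eqs. (4)-(7) turn raw cross-entropy scores into sample weights. It assumes the score arrays have already been grouped so that each min-max normalization matches the per-task convention stated above.

```python
import numpy as np

def ced_weights(hq_target, hq_aux, ha_target, ha_aux):
    # Each argument: 1-D array of cross-entropy scores for auxiliary-task
    # samples; *_target computed under the target task's LM / answer-length
    # statistics, *_aux under the sample's own auxiliary task.
    def minmax(x):
        return (x - x.min()) / (x.max() - x.min())    # Eqs. (4)-(5)

    ced = (minmax(hq_target) - minmax(hq_aux)) + \
          (minmax(ha_target) - minmax(ha_aux))        # Eq. (6); lower = more important
    return 1.0 - minmax(ced)                          # Eq. (7): weights in [0, 1]
```

The returned weights multiply the per-example losses as in Eq. (8).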
"Our experiments are designed to answer the following questions on multi-task learning for MRC: 1. Can we improve the performance of existing MRC systems using multi-task learning? 2. How does multi-task learning affect the performance if we combine it with other external data? 3. How does the learning algorithm change the performance of multi-task MRC? 4. How does our method compare with existing MTL methods?", "We first present our experiment details and results for MT-SAN.", "Then, we provide a comprehensive study on the effectiveness of various MTL algorithms in Section 5.4.", "At last, we provide some additional results on combining MTL with DrQA (Chen et al., 2017) to show the flexibility of our approach.", "We conducted experiments on SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017), MS MARCO (v1; Nguyen et al., 2016) and WDW (Onishi et al., 2016).", "Dataset statistics are shown in Table 1:
Table 1 (dataset statistics).
Dataset: SQuAD (v1) / NewsQA / MS MARCO (v1) / WDW.
# Training questions: 87,599 / 92,549 / 78,905 / 127,786.
Text domain: Wikipedia / CNN News / Web Search / Gigaword Corpus.", "Although similar in size, these datasets are quite different in domains, lengths of text, and types of task.", "In the following experiments, we will validate whether including external datasets as additional input information (e.g., a language model pre-trained on these datasets) helps boost the performance of MRC systems.", "We mostly focus on span-based datasets for MT-SAN, namely SQuAD, NewsQA, and MS MARCO.", "We convert MS MARCO into an answer-span dataset to be consistent with SQuAD and NewsQA, following (Liu et al., 2018c); a sketch of this conversion is given at the end of this subsection.", "For each question, we search for the best span by ROUGE-L score over all passage texts and use that span to train our model.", "We exclude questions whose maximal ROUGE-L score is less than 0.5 during training.", "For evaluation, we use our model to find a span in all passages.", "The prediction score is multiplied with a ranking score, trained following Liu et al. (2018a)'s method, to determine the final answer.", "We train our networks using the algorithms in Section 4, with SQuAD as the target task.", "For experiments with two datasets, we use Algorithm 2; for experiments with three datasets we find the re-weighting mechanism of Section 4.2 to have better performance (a detailed comparison is presented in Section 5.4).", "For generating sample weights, we build an LSTM language model on questions following the implementation of Merity et al. (2017) with the same hyperparameters.", "We only keep the 10,000 most frequent words, and replace the other words with a special out-of-vocabulary token.", "Parameters of MT-SAN are mostly the same as in the original paper (Liu et al., 2018c).", "We utilize spaCy to tokenize the text and generate part-of-speech and named-entity labels.", "We use a 2-layer BiLSTM with 125 hidden units as the BiLSTM throughout the model.", "During training, we drop the activation of each neuron with probability 0.3.", "For optimization, we use Adamax (Kingma and Ba, 2014) with a batch size of 32 and a learning rate of 0.002.", "For prediction, we compute an exponential moving average (EMA; Seo et al., 2016) of the model parameters with a decay rate of 0.995 and use it to compute the model performance.", "For experiments with ELMo, we use the model implemented by AllenNLP.", "We truncate passages to contain at most 1,000 tokens during training and eliminate data whose answers are located after the 1,000th token.", "Training converges in around 50 epochs for models without ELMo (similar to the single-task SAN); for models with ELMo, convergence is much faster (around 30 epochs).", "In the following sub-sections, we report our results on the SQuAD and MARCO development sets.",
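The MS MARCO span conversion referenced above can be sketched as follows. The helper is hypothetical (the real preprocessing follows Liu et al., 2018c), and bounding the candidate span length with max_len is our assumption for tractability.

```python
def rouge_l_f1(x, y):
    # ROUGE-L F1 between token sequences x and y, via longest common subsequence.
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x[i] == y[j] else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return 2 * p * r / (p + r)

def best_answer_span(passage, answer, max_len=30, threshold=0.5):
    # Return the (start, end) token span of `passage` with the highest ROUGE-L
    # overlap with `answer`, or None if the best overlap is below `threshold`
    # (such questions are excluded from training, as described above).
    best, best_score = None, 0.0
    for i in range(len(passage)):
        for j in range(i + 1, min(i + max_len, len(passage)) + 1):
            score = rouge_l_f1(passage[i:j], answer)
            if score > best_score:
                best, best_score = (i, j), score
    return best if best_score >= threshold else None
```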
"The multi-task learning results of SAN on SQuAD are summarized in Table 2.", "By using MTL on SQuAD and NewsQA, we can improve the exact-match (EM) and F1 scores by (2%, 1.5%), respectively, both with and without ELMo.", "The similar gain indicates that our method is orthogonal to ELMo.", "Note that our single-model performance is slightly higher than the original SAN, by incorporating EMA and highway networks.", "Incorporating multi-task learning further improves the performance.", "The performance gain from adding MARCO is relatively smaller, with 1% in EM and 0.5% in F1.", "We conjecture that MARCO is less helpful due to its differences in both question and answer style.", "For example, questions in MS MARCO are real web search queries, which are short and may have typos or abbreviations, while questions in SQuAD and NewsQA are more formal and well written.", "Using all 3 datasets together provides another marginal improvement.", "Our model obtains the best results among existing methods that do not use a large language model (e.g., ELMo).", "Our ELMo version also outperforms all other models under the same setting.", "We note that BERT (Devlin et al., 2018) uses a much larger model than ours (around 20x), and we leave the performance of combining BERT with MTL as interesting future work.", "The results of multi-task learning on NewsQA are in Table 3.", "The performance gain with multi-task learning is even larger on NewsQA, with over 2% in both EM and F1.", "Experiments with and without ELMo give similar results.", "What is worth noting is that our approach not only achieves new state-of-the-art results by a large margin but also surpasses human performance on NewsQA.", "Finally, we report MT-SAN performance on MS MARCO in Table 4.", "Multi-tasking on SQuAD and NewsQA provides a similar performance boost in terms of BLEU-1 and ROUGE-L scores as in the case of NewsQA and SQuAD.", "Our method does not achieve very high performance compared to previous work, probably because we do not apply common techniques like yes/no classification.", "(The official submissions for SQuAD v1.1 and MARCO v1.1 are closed, so we report results on the development set; according to the leaderboards, performances on the development and test sets are usually similar.)", "We also test the robustness of our algorithm by performing another set of experiments on SQuAD and WDW.", "WDW is much more different from the other three datasets (SQuAD, NewsQA, MS MARCO): WDW guarantees that the answer is always a person, whereas the percentage of such questions in SQuAD is 12.9%.", "Table 5 (SQuAD EM/F1 and WDW accuracy).
MT-SAN (single task): SQuAD 76.8, 84.5; WDW 77.5.
MT-SAN (S+W): SQuAD 77.6, 85.1; WDW 78.5.
SOTA (Yang et al., 2016).", "Moreover, WDW is a cloze dataset, whereas in SQuAD and NewsQA answers are spans in the passage.", "We use a task-specific answer layer in this experiment and use Algorithm 2; the WDW answer module is the same as in AS Reader (Kadlec et al., 2016), which we describe in the appendix for completeness.", "Despite these large differences between the datasets, our results (Table 5) show that MTL can still provide a moderate performance boost when jointly training on SQuAD (around 0.7%) and WDW (around 1%).", "Comparison of methods using external data.", "As a method of data augmentation, we compare our approach to previous methods for MRC in Table 6.", "Our model achieves better performance than back translation.", "We also observe that language models such as ELMo obtain a higher performance gain than multi-task learning; however, combining it with multi-task learning leads to the most significant performance gain.",
gain.", "This validates our assump-tion that multi-task learning is more robust and is different from previous methods such as language modeling.", "In this section, we provide ablation studies as well as comparisons with other existing algorithms on the MTL strategy.", "We focus on MT-SAN without Model Performance SQuAD + MARCO EM,F1 Simple Combine (Alg. 1) 77.1, 84.6 Loss Uncertainty 77.3, 84.7 Mixture Ratio 77.8, 85.2 Sample Re-weighting 77.9,85.3 SQuAD + NewsQA + MARCO Simple Combine (Alg. 1) 77.6, 85.2 Loss Uncertainty 78.2, 85.6 Mixture Ratio 78.4, 85.7 Sample Re-weighting 78.8 , 86.0 Table 7: Comparison of different MTL strategies on MT-SAN.", "Table 7 compares different multi-task learning strategies for MRC.", "Both the mixture ratio (Sec 4.1) and sample re-weighting (Sec 4.2) improves over the naive baseline of simply combining all the data (Algorithm 1).", "On SQuAD+MARCO, they provide around 0.6% performance boost in terms of both EM and F1, and around 1% on all 3 datasets.", "We note that this accounts for around a half of our overall improvement.", "Although sample re-weighting performs similar as mixture ratio, it significantly reduces the amount of training time as it eliminates the need for a grid searching the best ratio.", "Kendal et al., (2018) use task uncertainty to weight tasks differently for MTL; our experiments show that this has some positive effect, but does not perform as well as our proposed two techniques.", "We note that Kendal et al. (as well as other previous MTL methods) optimizes the network to perform well for all the tasks, whereas our method focuses on the target domain which we are interested in, e.g., SQuAD.", "Sensitivity of mixture ratio.", "We also investigate the effect of mixture ratio on the model performance.", "We plot the EM/F1 score on SQuAD dev set vs. 
"We plot the EM/F1 score on the SQuAD dev set vs. the mixture ratio $\alpha$ in Figure 1 for MT-SAN when trained on all three datasets.", "The curve peaks at $\alpha = 0.4$; however, if we use $\alpha = 0.2$ or $\alpha = 0.5$, the performance drops by around 0.5%, well behind the performance of sample re-weighting.", "This shows that the performance of MT-SAN is sensitive to changes in $\alpha$, making the hyperparameter search even more difficult.", "Such sensitivity suggests a preference for our sample re-weighting technique.", "On the other hand, the ratio method is easier to implement.", "Analysis of sample weights.", "Table 8 (examples and statistics of sample weights; columns: Samples/Groups, CED', HQ, HA; e.g., a NewsQA example: Q: Where is the drought hitting?).", "Dataset comparisons in Table 1 and the performance in Table 2 suggest that NewsQA shares more similarity with SQuAD than MARCO does.", "Therefore, an MTL system should weight NewsQA samples more than MARCO samples for higher performance.", "We try to verify this in Table 8 by showing examples and statistics of the sample weights.", "We present the CED' scores, as well as normalized versions of the question and answer scores (resp. $(H'_{1,Q} - H'_{k,Q})$ and $(H'_{1,A} - H'_{k,A})$ in (6), negated and then normalized over all samples in NewsQA and MARCO in the same way as in (7)).", "A high HQ score indicates high importance of the question, and a high HA score of the answer; CED' is a summary of the two.", "We first show one example from NewsQA and one from MARCO.", "The NewsQA question is a natural question (similar to SQuAD) with a short answer, leading to high scores for both the question and the answer.", "The MARCO question is a phrase with a very long answer, leading to lower scores.", "From the overall statistics, we also find that samples in NewsQA have a higher score than those in MARCO.", "However, if we look at MARCO questions that start with when or who (i.e., probably natural questions with short answers), the scores go up dramatically.", "We proposed a multi-task learning framework to train MRC systems using datasets from different domains and developed two approaches to re-weight the samples for multi-task learning on MRC tasks.", "Empirical results demonstrated that our approaches outperform existing MTL methods as well as the single-task baselines.", "Interesting future directions include combining with larger language models such as BERT, and MTL with broader tasks such as language inference (Liu et al., 2019) and machine translation.", "Yichong Xu has been partially supported by DARPA (FA8750-17-2-0130)." ]
[ "objective", "objective", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "method", "abstain", "result", "result", "objective", "objective", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "objective", "objective", "abstain", "other" ]
[ "Sentence embeddings are an important component of many natural language processing (NLP) systems.", "Like word embeddings, sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks, such as clustering and retrieval.", "Unlike word embeddings, the highest performing solutions for learning sentence embeddings require labelled data, limiting their usefulness to languages and domains where labelled data is abundant.", "In this paper, we present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.", "Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data.", "When used to extend the pretraining of transformer-based language models, our approach closes the performance gap between unsupervised and supervised pretraining for universal sentence encoders.", "Importantly, our experiments suggest that the quality of the learned embeddings scale with both the number of trainable parameters and the amount of unlabelled training data.", "Our code and pretrained models are publicly available and can be easily adapted to new domains or used to embed unseen text.", "1 1 Introduction Due to the limited amount of labelled training data available for many natural language processing (NLP) tasks, transfer learning has become ubiquitous (Ruder et al., 2019).", "For some time, transfer learning in NLP was limited to pretrained word embeddings (Mikolov et al., 2013; Pennington et al., 1 https://github.com/JohnGiorgi/DeCLUTR 2014).", "Recent work has demonstrated strong transfer task performance using pretrained sentence embeddings.", "These fixed-length vectors, often referred to as universal sentence embeddings, are typically learned on large corpora and then transferred to various downstream tasks, such as clustering (e.g. topic modelling) and retrieval (e.g. 
semantic search).", "Indeed, sentence embeddings have become an area of focus, and many supervised (Con-neau et al., 2017), semi-supervised (Subramanian et al., 2018; Phang et al., 2018; Cer et al., 2018; Reimers and Gurevych, 2019) and unsupervised (Le and Mikolov, 2014; Jernite et al., 2017; Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) approaches have been proposed.", "However, the highest performing solutions require labelled data, limiting their usefulness to languages and domains where labelled data is abundant.", "Therefore, closing the performance gap between unsupervised and supervised universal sentence embedding methods is an important goal.", "Pretraining transformer-based language models has become the primary method for learning textual representations from unlabelled corpora (Radford et al., 2018; Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019; Liu et al., 2019; Clark et al., 2020).", "This success has primarily been driven by masked language modelling (MLM).", "This self-supervised token -level objective requires the model to predict the identity of some randomly masked tokens from the input sequence.", "In addition to MLM, some of these models have mechanisms for learning sentence -level embeddings via self-supervision.", "In BERT (Devlin et al., 2019), a special classification token is prepended to every input sequence, and its representation is used in a binary classification task to predict whether one textual segment follows another in the training corpus, denoted Next Sentence Prediction (NSP).", "However, recent work has called into question the effectiveness of NSP (Conneau and Lample, 2019; You et al., 1904; Joshi et al., 2020).", "In RoBERTa (Liu et al., 2019), the authors demonstrated that removing NSP during pretraining leads to unchanged or even slightly improved performance on downstream sentence-level tasks (including semantic text similarity and natural language inference).", "In ALBERT (Lan et al., 2020), the authors hypothesize that NSP conflates topic prediction and coherence prediction, and instead propose a Sentence-Order Prediction objective (SOP), suggesting that it better models inter-sentence coherence.", "In preliminary evaluations, we found that neither objective produces good universal sentence embeddings (see Appendix A).", "Thus, we propose a simple but effective self-supervised, sentence-level objective inspired by recent advances in metric learning.", "Metric learning is a type of representation learning that aims to learn an embedding space where the vector representations of similar data are mapped close together, and vice versa (Lowe, 1995; Mika et al., 1999; Xing et al., 2002).", "In computer vision (CV), deep metric learning (DML) has been widely used for learning visual representations (Wohlhart and Lepetit, 2015; Wen et al., 2016; Zhang and Saligrama, 2016; Bucher et al., 2016; Leal-Taixe et al., 2016; Tao et al., 2016; Yuan et al., 2020; He et al., 2018; Grabner et al., 2018; Yelamarthi et al., 2018; Yu et al., 2018).", "Generally speaking, DML is approached as follows: a pretext task (often self-supervised, e.g. colouriza-tion or inpainting) is carefully designed and used to train deep neural networks to generate useful feature representations.", "Here, useful means a representation that is easily adaptable to other downstream tasks, unknown at training time.", "Downstream tasks (e.g. 
"The most successful approach to date has been to design a pretext task for learning with a pair-based contrastive loss function.", "For a given anchor data point, contrastive losses attempt to make the distance between the anchor and some positive data points (those that are similar) smaller than the distance between the anchor and some negative data points (those that are dissimilar) (Hadsell et al., 2006).", "The highest-performing methods generate anchor-positive pairs by randomly augmenting the same image (e.g., using crops, flips and colour distortions); anchor-negative pairs are randomly chosen, augmented views of different images (Bachman et al., 2019; Tian et al., 2020; He et al., 2020; Chen et al., 2020).", "In fact, Kong et al. (2020) demonstrate that the MLM and NSP objectives are also instances of contrastive learning.", "Inspired by this approach, we propose a self-supervised, contrastive objective that can be used to pretrain a sentence encoder.", "Our objective learns universal sentence embeddings by training an encoder to minimize the distance between the embeddings of textual segments randomly sampled from nearby in the same document.", "We demonstrate our objective's effectiveness by using it to extend the pretraining of a transformer-based language model and obtain state-of-the-art results on SentEval (Conneau and Kiela, 2018), a benchmark of 28 tasks designed to evaluate universal sentence embeddings.", "Our primary contributions are: We propose a self-supervised sentence-level objective that can be used alongside MLM to pretrain transformer-based language models, inducing generalized embeddings for sentence- and paragraph-length text without any labelled data (subsection 5.1).", "We perform extensive ablations to determine which factors are important for learning high-quality embeddings (subsection 5.2).", "We demonstrate that the quality of the learned embeddings scales with model and data size.", "Therefore, performance can likely be improved simply by collecting more unlabelled text or using a larger encoder (subsection 5.3).", "We open-source our solution and provide detailed instructions for training it on new data or embedding unseen text (https://github.com/JohnGiorgi/DeCLUTR).", "2 Related Work Previous works on universal sentence embeddings can be broadly grouped by whether or not they use labelled data in their pretraining step(s), which we refer to simply as supervised or semi-supervised and unsupervised, respectively.", "Supervised or semi-supervised The highest performing universal sentence encoders are pretrained on the human-labelled natural language inference (NLI) datasets Stanford NLI (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018).", "NLI is the task of classifying a pair of sentences (denoted the hypothesis and the premise) into one of three relationships: entailment, contradiction or neutral.", "The effectiveness of NLI for training universal sentence encoders was demonstrated by the supervised method InferSent (Conneau et al., 2017).",
"Universal Sentence Encoder (USE) (Cer et al., 2018) is semi-supervised, augmenting an unsupervised, Skip-Thoughts-like task (Kiros et al., 2015; see section 2) with supervised training on the SNLI corpus.", "The recently published Sentence Transformers (Reimers and Gurevych, 2019) method fine-tunes pretrained, transformer-based language models like BERT (Devlin et al., 2019) using labelled NLI datasets.", "Unsupervised Skip-Thoughts (Kiros et al., 2015) and FastSent (Hill et al., 2016) are popular unsupervised techniques that learn sentence embeddings by using an encoding of a sentence to predict words in neighbouring sentences.", "However, in addition to being computationally expensive, this generative objective forces the model to reconstruct the surface form of a sentence, which may capture information irrelevant to the meaning of a sentence.", "QuickThoughts (Logeswaran and Lee, 2018) addresses these shortcomings with a simple discriminative objective; given a sentence and its context (adjacent sentences), it learns sentence representations by training a classifier to distinguish context sentences from non-context sentences.", "The unifying theme of unsupervised approaches is that they exploit the distributional hypothesis, namely that the meaning of a word (and by extension, a sentence) is characterized by the word context in which it appears.", "Our overall approach is most similar to Sentence Transformers (we extend the pretraining of a transformer-based language model to produce useful sentence embeddings), but our proposed objective is self-supervised.", "Removing the dependence on labelled data allows us to exploit the vast amount of unlabelled text on the web without being restricted to languages or domains where labelled data is plentiful (e.g., English Wikipedia).", "Our objective most closely resembles QuickThoughts; some distinctions include: we relax our sampling to textual segments of up to paragraph length (rather than natural sentences), we sample one or more positive segments per anchor (rather than strictly one), and we allow these segments to be adjacent, overlapping or subsuming (rather than strictly adjacent; see Figure 1, B).", "Figure 1: Overview of the self-supervised contrastive objective.", "Our method learns textual representations via a contrastive loss by maximizing agreement between textual segments (referred to as spans in the rest of the paper) sampled from nearby in the same document.", "Illustrated in Figure 1, this approach comprises the following components: A data loading step randomly samples paired anchor-positive spans from each document in a minibatch of size $N$.", "Let $A$ be the number of anchor spans sampled per document, $P$ be the number of positive spans sampled per anchor, and $i \in \{1 \dots AN\}$ be the index of an arbitrary anchor span.", "We denote an anchor span and its corresponding $p \in \{1 \dots P\}$ positive spans as $s_i$ and $s_{i+pAN}$, respectively.",
"This procedure is designed to maximize the chance of sampling semantically similar anchor-positive pairs (see subsection 3.2).", "An encoder $f(\cdot)$ maps each token in the input spans to an embedding.", "Although our method places no constraints on the choice of encoder, we chose $f(\cdot)$ to be a transformer-based language model, as this represents the state-of-the-art for text encoders (see subsection 3.3).", "A pooler $g(\cdot)$ maps the encoded spans $f(s_i)$ and $f(s_{i+pAN})$ to fixed-length embeddings $e_i = g(f(s_i))$ and the corresponding mean positive embedding $$e_{i+AN} = \frac{1}{P} \sum_{p=1}^{P} g(f(s_{i+pAN})).$$", "Similar to Reimers and Gurevych (2019), we found that choosing $g(\cdot)$ to be the mean of the token-level embeddings (referred to as mean pooling in the rest of the paper) performs well (see Appendix, Table 4).", "A contrastive loss function defined for a contrastive prediction task.", "Given a set of embedded spans $\{e_k\}$ including a positive pair of examples $e_i$ and $e_{i+AN}$, the contrastive prediction task aims to identify $e_{i+AN}$ in $\{e_k\}_{k \neq i}$ for a given $e_i$: $$\ell(i, j) = -\log \frac{\exp(\mathrm{sim}(e_i, e_j)/\tau)}{\sum_{k=1}^{2AN} \mathbb{1}_{[i \neq k]} \exp(\mathrm{sim}(e_i, e_k)/\tau)},$$ where $\mathrm{sim}(u, v) = u^T v / \lVert u \rVert_2 \lVert v \rVert_2$ denotes the cosine similarity of two vectors $u$ and $v$, $\mathbb{1}_{[i \neq k]} \in \{0, 1\}$ is an indicator function evaluating to 1 iff $i \neq k$, and $\tau > 0$ denotes the temperature hyperparameter.", "We pair each anchor embedding with the mean of multiple positive embeddings.", "This strategy was proposed by Saunshi et al. (2019), who demonstrated theoretical and empirical improvements compared to using a single positive example for each anchor.", "During training, we randomly sample mini-batches of $N$ documents from the train set and define the contrastive prediction task on anchor-positive pairs $e_i$, $e_{i+AN}$ derived from the $N$ documents, resulting in $2AN$ data points.", "As proposed in (Sohn, 2016), we treat the other $2(AN - 1)$ instances within a minibatch as negative examples.", "The cost function takes the following form: $$\mathcal{L}_{contrastive} = \sum_{i=1}^{AN} \big(\ell(i, i + AN) + \ell(i + AN, i)\big).$$", "This is the InfoNCE loss used in previous works (Sohn, 2016; Wu et al., 2018; Oord et al., 2018) and denoted normalized temperature-scaled cross-entropy loss, or NT-Xent, in (Chen et al., 2020).", "To embed text with a trained model, we simply pass batches of tokenized text through the model, without sampling spans.", "Therefore, the computational cost of our method at test time is the cost of the encoder $f(\cdot)$ plus the cost of the pooler $g(\cdot)$, which is negligible when using mean pooling.",
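A minimal PyTorch sketch of this loss, for illustration only (the authors use the implementation from the PyTorch Metric Learning library):

```python
import torch
import torch.nn.functional as F

def nt_xent(anchors, positives, temperature=0.05):
    # anchors, positives: (AN, d) span embeddings, where positives[i] is the
    # mean positive embedding paired with anchors[i]; all other in-batch
    # embeddings act as negatives.
    z = F.normalize(torch.cat([anchors, positives]), dim=1)   # (2AN, d)
    sim = z @ z.T / temperature                               # cosine similarities
    sim.fill_diagonal_(float("-inf"))                         # exclude self-pairs
    an = anchors.shape[0]
    targets = torch.cat([torch.arange(an, 2 * an), torch.arange(0, an)])
    # Cross-entropy over rows realizes l(i, i+AN) + l(i+AN, i), averaged over 2AN.
    return F.cross_entropy(sim, targets)
```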
"We start by choosing a minimum and maximum span length; in this paper, $\ell_{min} = 32$ and $\ell_{max} = 512$, the maximum input size for many pretrained transformers.", "Next, a document $d$ is tokenized to produce a sequence of $n$ tokens, $x^d = (x_1, x_2, \dots, x_n)$.", "To sample an anchor span $s_i$ from $x^d$, we first sample its length $\ell_{anchor}$ from a beta distribution and then randomly (uniformly) sample its starting position $s_i^{start}$: $$\ell_{anchor} = \lfloor p_{anchor} \cdot (\ell_{max} - \ell_{min}) + \ell_{min} \rfloor, \quad s_i^{start} \sim \{0, \dots, n - \ell_{anchor}\}, \quad s_i^{end} = s_i^{start} + \ell_{anchor}, \quad s_i = x^d_{s_i^{start} : s_i^{end}}.$$", "We then sample $p \in \{1 \dots P\}$ corresponding positive spans $s_{i+pAN}$ independently, following a similar procedure: $$\ell_{positive} = \lfloor p_{positive} \cdot (\ell_{max} - \ell_{min}) + \ell_{min} \rfloor, \quad s_{i+pAN}^{start} \sim \{s_i^{start} - \ell_{positive}, \dots, s_i^{end}\}, \quad s_{i+pAN}^{end} = s_{i+pAN}^{start} + \ell_{positive}, \quad s_{i+pAN} = x^d_{s_{i+pAN}^{start} : s_{i+pAN}^{end}},$$ where $p_{anchor} \sim \mathrm{Beta}(\alpha = 4, \beta = 2)$, which skews anchor sampling towards longer spans, and $p_{positive} \sim \mathrm{Beta}(\alpha = 2, \beta = 4)$, which skews positive sampling towards shorter spans (Figure 1, C); a code sketch follows below.", "In practice, we restrict the sampling of anchor spans from the same document such that they are a minimum of $2 \cdot \ell_{max}$ tokens apart.", "In Appendix B, we show examples of text that has been sampled by our method.", "We note several carefully considered decisions in the design of our sampling procedure: Sampling span lengths from a distribution clipped at $\ell_{min} = 32$ and $\ell_{max} = 512$ encourages the model to produce good embeddings for text ranging from sentence- to paragraph-length.", "At test time, we expect our model to be able to embed up-to-paragraph-length texts.", "We found that sampling longer lengths for the anchor span than for the positive spans improves performance in downstream tasks (we did not find performance to be sensitive to the specific choice of $\alpha$ and $\beta$).", "The rationale for this is twofold.", "First, it enables the model to learn global-to-local view prediction as in (Hjelm et al., 2019; Bachman et al., 2019; Chen et al., 2020) (referred to as the subsumed view in Figure 1, B).", "Second, when $P > 1$, it encourages diversity among positive spans by lowering the amount of repeated text.",
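The span sampler above can be sketched as follows (an illustrative NumPy version; clamping the positive start at position 0 is an implementation detail not spelled out in the text, and documents are assumed longer than the sampled lengths, consistent with the 2048-token minimum used later):

```python
import numpy as np

def sample_spans(tokens, num_positives=2, l_min=32, l_max=512, rng=np.random):
    # Sample one anchor span and its positive spans from a tokenized document.
    n = len(tokens)
    l_anchor = int(rng.beta(4, 2) * (l_max - l_min) + l_min)   # skew anchors longer
    start = rng.randint(0, n - l_anchor + 1)
    anchor = tokens[start:start + l_anchor]
    positives = []
    for _ in range(num_positives):
        l_pos = int(rng.beta(2, 4) * (l_max - l_min) + l_min)  # skew positives shorter
        # A positive may overlap, adjoin, or be subsumed by the anchor.
        p_start = rng.randint(max(0, start - l_pos), start + l_anchor + 1)
        positives.append(tokens[p_start:p_start + l_pos])
    return anchor, positives
```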
"Sampling positives nearby to the anchor exploits the distributional hypothesis and increases the chances of sampling valid (i.e., semantically similar) anchor-positive pairs.", "By sampling multiple anchors per document, each anchor-positive pair is contrasted against both easy negatives (anchors and positives sampled from other documents in a minibatch) and hard negatives (anchors and positives sampled from the same document).", "In conclusion, the sampling procedure produces three types of positives: positives that partially overlap with the anchor, positives adjacent to the anchor, and positives subsumed by the anchor (Figure 1, B), and two types of negatives: easy negatives sampled from a different document than the anchor, and hard negatives sampled from the same document as the anchor.", "Thus, our stochastically generated training set and contrastive loss implicitly define a family of predictive tasks which can be used to train a model, independent of any specific encoder architecture.", "We use our objective to extend the pretraining of a transformer-based language model (Vaswani et al., 2017), as this represents the state-of-the-art encoder in NLP.", "We implement the MLM objective as described in (Devlin et al., 2019) on each anchor span in a minibatch and sum the losses from the MLM and contrastive objectives before backpropagating: $$\mathcal{L} = \mathcal{L}_{contrastive} + \mathcal{L}_{MLM}.$$", "This is similar to existing pretraining strategies, where an MLM loss is paired with a sentence-level loss such as NSP (Devlin et al., 2019) or SOP (Lan et al., 2020).", "To make the computational requirements feasible, we do not train from scratch, but rather we continue training a model that has been pretrained with the MLM objective.", "Specifically, we use both RoBERTa-base (Liu et al., 2019) and DistilRoBERTa (Sanh et al., 2019) (a distilled version of RoBERTa-base) in our experiments.", "In the rest of the paper, we refer to our method as DeCLUTR-small (when extending DistilRoBERTa pretraining) and DeCLUTR-base (when extending RoBERTa-base pretraining).", "Dataset We collected all documents with a minimum token length of 2048 from OpenWebText (Gokaslan and Cohen, 2019), an open-access subset of the WebText corpus (Radford et al., 2019), yielding 497,868 documents in total.", "For reference, Google's USE was trained on 570,000 human-labelled sentence pairs from the SNLI dataset (among other unlabelled datasets).", "InferSent and Sentence Transformer models were trained on both SNLI and MultiNLI, a total of 1 million human-labelled sentence pairs.", "Implementation We implemented our model in PyTorch (Paszke et al., 2017) using AllenNLP (Gardner et al., 2018).", "We used the NT-Xent loss function implemented by the PyTorch Metric Learning library (Musgrave et al., 2019) and the pretrained transformer architecture and weights from the Transformers library (Wolf et al., 2020).", "All models were trained on up to four NVIDIA Tesla V100 16GB or 32GB GPUs.", "Training Unless specified otherwise, we train for one to three epochs over the 497,868 documents with a minibatch size of 16 and a temperature $\tau = 5 \times 10^{-2}$, using the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate (LR) of $5 \times 10^{-5}$ and a weight decay of 0.1.", "For every document in a minibatch, we sample two anchor spans ($A = 2$) and two positive spans per anchor ($P = 2$).", "We use the Slanted Triangular LR scheduler (Howard and Ruder, 2018) with a number of train steps equal to the number of training instances and a cut fraction of 0.1.", "The remaining hyperparameters of the underlying pretrained transformer (i.e., DistilRoBERTa or RoBERTa-base) are left at their defaults.", "All gradients are scaled to a vector norm of 1.0 before backpropagating.", "Hyperparameters were tuned on the SentEval validation sets.",
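The stated training configuration could be set up roughly as follows. This is a sketch under the assumption that a linear warmup followed by linear decay approximates AllenNLP's slanted-triangular schedule; the gradient clipping to norm 1.0 matches the text.

```python
import torch

def make_optimizer(model, num_steps, lr=5e-5, weight_decay=0.1, cut_frac=0.1):
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    warmup = max(1, int(cut_frac * num_steps))
    def slanted(step):  # rise for cut_frac of training, then decay linearly
        if step < warmup:
            return step / warmup
        return max(0.0, (num_steps - step) / max(1, num_steps - warmup))
    sched = torch.optim.lr_scheduler.LambdaLR(opt, slanted)
    return opt, sched

# Per training step: loss.backward();
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0); opt.step(); sched.step()
```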
"We evaluate all methods on the SentEval benchmark, a widely-used toolkit for evaluating general-purpose, fixed-length sentence representations.", "SentEval is divided into 18 downstream tasks (representative NLP tasks such as sentiment analysis, natural language inference, paraphrase detection and image-caption retrieval) and ten probing tasks, which are designed to evaluate what linguistic properties are encoded in a sentence representation.", "We report scores obtained by our model and the relevant baselines on the downstream and probing tasks using the SentEval toolkit (https://github.com/facebookresearch/SentEval) with default parameters (see Appendix C for details).", "Note that all the supervised approaches we compare to are trained on the SNLI corpus, which is included as a downstream task in SentEval.", "To avoid train-test contamination, we compute average downstream scores without considering SNLI when comparing to these approaches in Table 2.", "4.2.1 Baselines We compare to the highest performing, most popular sentence embedding methods: InferSent, Google's USE and Sentence Transformers.", "For InferSent, we compare to the latest model (https://dl.fbaipublicfiles.com/infersent/infersent2.pkl).", "We use the latest large USE model (https://tfhub.dev/google/universal-sentence-encoder-large/5), as it is most similar in terms of architecture and number of parameters to DeCLUTR-base.", "For Sentence Transformers, we compare to roberta-base-nli-mean-tokens (https://www.sbert.net/docs/pretrained_models.html), which, like DeCLUTR-base, uses the RoBERTa-base architecture and pretrained weights.", "The only difference is each method's extended pretraining strategy.", "We include the performance of averaged GloVe (http://nlp.stanford.edu/data/glove.840B.300d.zip) and fastText (https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M.vec.zip) word vectors as weak baselines.", "Trainable model parameter counts and sentence embedding dimensions are listed in Table 1.", "Despite our best efforts, we could not evaluate the pretrained QuickThought models against the full SentEval benchmark.", "We cite the scores from the paper directly.", "Finally, we evaluate the pretrained transformer model's performance before it is subjected to training with our contrastive objective, denoted Transformer-*.", "We use mean pooling on the pretrained transformer's token-level output to produce sentence embeddings, the same pooling strategy used in our method.", "In subsection 5.1, we compare the performance of our model against the relevant baselines.", "In the remaining sections, we explore which components contribute to the quality of the learned embeddings.", "Table 2: Results on the downstream tasks from the test set of SentEval.", "Compared to the underlying pretrained models DistilRoBERTa and RoBERTa-base, DeCLUTR-small and DeCLUTR-base obtain large boosts in average downstream performance, +4% and +6% respectively (Table 2).", "DeCLUTR-base leads to improved or equivalent performance for every downstream task but one (SST5), and DeCLUTR-small for all but three (SST2, SST5 and TREC).", "Compared to existing methods, DeCLUTR-base matches or even outperforms their average performance without using any hand-labelled training data.", "Surprisingly, we also find that DeCLUTR-small outperforms Sentence Transformers while using 34% fewer trainable parameters.",
trainable parameters.", "Probing task performance With the exception of InferSent, existing methods perform poorly on the probing tasks of SentEval (Table 3).", "Sentence Transformers, which begins with a pretrained transformer model and fine-tunes it on NLI datasets, scores approximately 10% lower on the probing tasks than the model it fine-tunes.", "In contrast, both DeCLUTR-small and DeCLUTR-base perform comparably to the underlying pretrained model in terms of average performance.", "We note that the purpose of the probing tasks is not the development of ad-hoc models that attain top performance on them (Conneau et al., 2018).", "However, it is still interesting to note that high downstream task performance can be obtained without sacrificing probing task performance.", "Furthermore, these results suggest that fine-tuning transformer-based language models on NLI datasets may discard some of the linguistic information captured by the pretrained model's weights.", "We suspect that the inclusion of MLM in our training objective is responsible for DeCLUTR's relatively high performance on the probing tasks.", "The downstream evaluation of SentEval includes supervised and unsupervised tasks.", "In the unsupervised tasks, the embeddings of the method to evaluate are used as-is without any further training (see Appendix C for details).", "Interestingly, we find that USE performs particularly well across the unsupervised evaluations in SentEval (tasks marked with a * in Table 2).", "Given the similarity of the USE architecture to Sentence Transformers and DeCLUTR and the similarity of its supervised NLI training objective to InferSent and Sentence Transformers, we suspect the most likely cause is one or more of its additional training objectives.", "These include a conversational response prediction task (Henderson et al., 2017) and a Skip-Thoughts (Kiros et al., 2015) like task.", "We ablate several components of the sampling procedure, including the number of anchors sampled per document A , the number of positives sampled per anchor P , and the sampling strategy for those positives (Figure 2).", "We note that when A = 2 , the model is trained on twice the number of spans and twice the effective batch size ( 2 AN , where N is the number of documents in a minibatch) as compared to when A = 1 .", "To control for this, all experi-Figure 2: Effect of the number of anchor spans sampled per document", "(a), the number of positive spans sampled per anchor", "(b), and the sampling strategy", "(c).", "Averaged downstream task scores are reported from the validation set of SentEval.", "Performance is computed over a grid of hyperparameters and plotted as a distribution.", "The grid is defined by all permutations of number of anchors A = { 1 , 2 } , number of positives P = { 1 , 2 , 4 } , temperatures = { 5 10 3 , 1 10 2 , 5 10 2 } and learning rates = { 5 10 5 , 1 10 4 } .", "P = 4 is omitted for DeCLUTR-base as these experiments did not fit into GPU memory.", "ments where A = 1 are trained for two epochs (twice the number of epochs as when A = 2 ) and for two times the minibatch size ( 2 N ).", "Thus, both sets of experiments are trained on the same number of spans and the same effective batch size ( 4 N ), and the only difference is the number of anchors sampled per document ( A ).", "We find that sampling multiple anchors per document has a large positive impact on the quality of learned embeddings.", "We hypothesize this is because the difficulty of the contrastive objective increases when A > 1 
.", "Recall that a minibatch is composed of random documents, and each anchor-positive pair sampled from a document is contrasted against all other anchor-positive pairs in the minibatch.", "When A > 1 , anchor-positive pairs will be contrasted against other anchors and positives from the same document, increasing the difficulty of the contrastive objective, thus leading to better representations.", "We also find that a positive sampling strategy that allows positives to be adjacent to and subsumed by the anchor outperforms a strategy that only allows adjacent or subsuming views, suggesting that the information captured by these views is complementary.", "Finally, we note that sampling multiple positives per anchor ( P > 1 ) has minimal impact on performance.", "This is in contrast to (Saunshi et al., 2019), who found both theoretical and empirical improvements when Figure 3: Effect of training objective, train set size and model capacity on SentEval performance.", "multiple positives are averaged and paired with a given anchor.", "To determine the importance of the training objectives, train set size, and model capacity, we trained two sizes of the model with 10% to 100% (1 full epoch) of the train set (Figure 3).", "Pretraining the model with both the MLM and contrastive objectives improves performance over training with either objective alone.", "Including MLM alongside the contrastive objective leads to monotonic improvement as the train set size is increased.", "We hypothesize that including the MLM loss acts as a form of regularization, preventing the weights of the pretrained model (which itself was trained with an MLM loss) from diverging too dramatically, a phenomenon known as catastrophic for-getting (McCloskey and Cohen, 1989; Ratcliff, 1990).", "These results suggest that the quality of embeddings learned by our approach scale in terms of model capacity and train set size; because the training method is completely self-supervised, scaling the train set would simply involve collecting more unlabelled text.", "In this paper, we proposed a self-supervised objective for learning universal sentence embeddings.", "Our objective does not require labelled training data and is applicable to any text encoder.", "We demonstrated the effectiveness of our objective by evaluating the learned embeddings on the SentEval benchmark, which contains a total of 28 tasks designed to evaluate the transferability and linguistic properties of sentence representations.", "When used to extend the pretraining of a transformer-based language model, our self-supervised objective closes the performance gap with existing methods that require human-labelled training data.", "Our experiments suggest that the learned embeddings' quality can be further improved by increasing the model and train set size.", "Together, these results demonstrate the effectiveness and feasibility of replacing hand-labelled data with carefully designed self-supervised objectives for learning universal sentence embeddings.", "We release our model and code publicly in the hopes that it will be extended to new domains and non-English languages.", "This research was enabled in part by support provided by Compute Ontario (https://computeontario.ca/), Compute Canada (www.computecanada.ca) and the CIFAR AI Chairs Program and partially funded by the US National Institutes of Health (NIH) [U41 HG006623, U41 HG003751)." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "objective", "objective", "objective", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "objective", "result", "abstain", "objective", "other" ]
[ "Long-range semantic coherence remains a challenge in automatic language generation and understanding.", "We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction.", "We present coherence boosting , an inference procedure that increases a LM's focus on a long context.", "We show the benefits of coherence boosting with pretrained models by distributional analyses of generated ordinary text and dialog responses.", "It is also found that coherence boosting with state-of-the-art models for various zero-shot NLP tasks yields performance gains with no additional training.", "Language models (LMs) are commonly evaluated for their ability to generate, rank, or classify coherent spans of text.", "Long-range semantic coherence is a unifying feature of modern NLP benchmarks and applications, whether they are about producing short answers to questions, ranking answer choices by their consistency with world knowledge, or generating long responses.", "Large nonspecialized LMs, such as GPT-2 and -3 (Radford et al., 2019; Brown et al., 2020), sometimes fail to understand or use the semantic link between a text and its prompt or long-range context (Fig. 1).", "Samples from these LMs have an unnaturally low density of words that require many tokens of context to predict (4.1), and the scores that the models give to completions of prompts indicate that they are oversensitive to recent context (5).", "We hypothesize that these failures arise from modeling choices and distribution shift.", "Specifically, autoregressive LMs are typically fit to a multi-objective problem: simultaneously maximizing token likelihoods conditioned on many lengths of truncated context (2.1).", "Yet, at generation or Code: github.com/zhenwang9102/coherence-boosting.", "scoring time, likelihoods are conditioned on the entire prompt or previously generated string, specifically selected to be coherent or even guaranteed to influence the output.", "The two common solutions finetuning models on one or multiple tasks (Khashabi et al., 2020; Sanh et al., 2022) and improving models or prompts to facilitate in-context learning (Brown et al., 2020; Schick and Schtze, 2021) do not directly target the problem of long-range coherence.", "This paper proposes coherence boosting , a simple inference-time procedure that increases the effect of distant words on predicted token distributions and is applicable in both generation and ranking settings.", "A pretrained model is viewed as an ensemble of experts that produce token distributions conditioned on varying lengths of context.", "These experts are log-linearly mixed to form a predictor that is superior to the base model (2).", "Coherence boosting greatly improves prediction of words that depend on a long context, as evidenced by state-of-the-art results on tasks specially meant to assess models' attention to distant words (3).", "In generation of generic text and dialog responses, we show that coherence boosting brings the frequency of occurrence of such words close to that seen in natural text (4).", "Beyond generation, we study diverse multiple-choice tasks (5), in which examples are known to be highly coherent.", "Coherence boosting does not modify the base model and depends on a single parameter than can be estimated in one pass through a validation set, yet is a competitive adaptation algorithm.", "Balance between satisfaction of short-range statistical constraints and maintenance of long-range structure was a central question 
"To compensate for the sparsity of the learning signal for long-range influences, $n$-gram models and ...", "Neural language modeling brought a need for recurrent units with better numerical properties for propagating information over long distances (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) and eventually saw the reintroduction of alignment variables (Brown et al., 1993) into generation in the form of attention (Bahdanau et al., 2015; Vaswani et al., 2017).", "Attention is at the core of Transformer LMs, including GPT.", "Language models are being trained on and adapted to ever-longer input sequences (Beltagy et al., 2020; Zaheer et al., 2020; Roy et al., 2021; Press et al., 2022), but they remain undersensitive to distant content or syntax (Khandelwal et al., 2018; Sun et al., 2021) and are easily fooled by recency bias in few-shot prompts (Zhao et al., 2021) or multi-turn conversations (Sankar et al., 2019).", "Recent work has continued to study inference-time procedures that prevent text sampled from LMs from degenerating into nonsense.", "Most of these procedures, such as tempered sampling and top-$k$/top-$p$ truncation (Fan et al., 2018; Holtzman et al., 2019), independently modify the output distribution at each generation step to decrease its entropy and diminish its low-likelihood tail.", "Holtzman et al. (2019) and Meister and Cotterell (2021) found that such local modifications increase the quality of long generated sequences; we adopt and extend their methodology in 4.1.", "For dialog systems, Li et al. (2016) propose a decoding scheme that maximizes a mutual information criterion, which explicitly optimizes for dependence of generated text on prompts, a special case of coherence boosting.", "In multiple-choice tasks, where a model must choose one of several given completions of a prompt, Brown et al. (2020) observe that selecting the completion that maximizes the conditional likelihood of the completion following the prompt often favors completions having high unconditional likelihood (likelihood following an empty or dummy prompt) and, for some tasks, choose to divide the scores of candidate answers by their unconditional likelihoods.", "This is also a special case of coherence boosting.", "Such scoring modifications are more thoroughly studied by Zhao et al. (2021); Holtzman et al. (2021).", "The latter attributes the problem to 'surface form competition': there are many variants of the correct completion that together may capture a large part of probability mass, but the form of the given answer choice alone is not the most likely.", "However, we show that other causes are at play: surface form competition is impossible when the completion is known to be a single token and the range of choices is the whole vocabulary (3), and it is not applicable to open-ended generation (4).", "In this section, $f$ is an autoregressive LM over a vocabulary $V$ with learnable parameters $\theta$, taking as input a variable number of tokens (up to a maximum context length $c_{max}$) and producing a vector of next-token likelihoods $f(x_1, \dots, x_k; \theta) \in \Delta(V)$,", "where $\Delta(V)$ is the probability simplex over $V$.", "We will write the $y$-th component of this output vector as a conditional likelihood, $f(y \mid x_1, \dots, x_k; \theta)$.", "We denote by $f_k$ the model evaluated on only the last $k$ input tokens, ignoring earlier tokens: $f_k(x_1, \dots, x_n; \theta) := f(x_{n-k+1}, \dots, x_n; \theta)$.",
, ; ) .", "Coherence boosting for next-token prediction.", "Coherence boosting for a model selects real-valued weights = ( 1 , 2 , . . . , ) and produces a new language model , defined by ( 1 , . . . , ; ) : = softmax (cid:32) (cid:213) = 1 log ( 1 , . . . , ; ) (cid:33) , (1) where log is taken element-wise, or, equivalently, ( | 1 , . . . , ; ) (cid:214) = 1 ( | 1 , . . . , ; ) .", "This is a weighted product-of-experts model, where the experts' are copies of the base model evaluated on different context lengths.", "Because evaluating is expensive, we use sparse weights , as the expression (1) depends only on those for which 0 .", "In Fig. 1 and in the experiments, we allow to have only two nonzero entries: when computing likelihoods of words following a sequence of length , we consider weighted products of max : = (the full context) and an with (a short context, either of fixed length or decided by prompt structure as in 4.2).", "boosting for multiclass classification (Friedman et al., 2000).", "However, our weak classifiers are pretrained and share all of their parameters, not obtained by an iterative procedure of training on reweighted data, and we permit negative weights.", "1 Coherence boosting for answer selection.", "In multiple-choice problems, a LM must choose the best answer following a context, which consists of a premise or passage followed by a shorter premise-free context (either a short phrase, such as An-swer:, that incites the LM to generate an answer in the right format, or a hypothesis that depends on the premise).", "The full context is the concatenation of the premise and the premise-free context (E).", "By the autoregressive factorization, the model assigns conditional likelihoods to sequences of tokens following context.", "A typical model for answer selection ranks the candidate answers (se-quences of tokens) by ( | full context ; ) and outputs the highest-ranked .", "Coherence boosting chooses a parameter and ranks the choices by: log ( | full context ; ) + + log ( | premise-free context ; ) .", "This is a log-linear combination of two models: evaluated with full context and with a partial context.", "When = 0 , ranking by (2) is equivalent to ranking by the base model.", "When = 1 , it is equivalent to dividing the base model's score by the score of each answer conditioned on the prompt (short context), and thus to maximizing pointwise mutual information between the premise and the answer conditional on the premise-free context.", "Unlike Brown et al. (2020); Holtzman et al. (2021), our formulation allows the premise-free context to include information specific to the example, not only a domain-specific dummy prompt.", "We expect coherence boosting to correct for an oversensitivity to the premise-free context, and thus the optimal will typically be negative (see 5).", "1 As for the first half of the term coherence boosting', Howcroft et al. (2020); Belz et al. (2020) found that very incoherent definitions of the word coherence' abound in the natural language evaluation literature.", "The reader is asked to forgive us for the loose definition of long-range semantic coherence' adopted in this paper.", "the predictors , which share parameters .", "Each training iteration samples a sequence (or batch of sequences) of a chosen maximum length + 1 from the data distribution D and minimizes the average negative log-likelihood (NLL) of all words following the parts of the sequence that precede them: the optimization criterion is: E 1 ... 
+ 1 D 1 (cid:213) = 1 log ( + 1 | 1 , . . . , ; ) .", "If D is uniform over all length-( + 1 ) subsequences of a training corpus, any given word is equally to likely to appear in all positions within a sampled sequence 2 , and the criterion is equal to (cid:213) = 1 1 E [ log ( + 1 | 1 , . . . , ; )] (cid:124) (cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32) (cid:123)(cid:122) (cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32)(cid:32) (cid:125) L ( ) , (3) This is a uniform scalarization of an -task problem: the -th objective L ( ) is the expected NLL of a word in the corpus following context words.", "This situation is different from that seen at generation time.", "If the text generated so far is 1 2 . . . , the distribution from which the next word + 1 is sampled is ( 1 , . . . , ; ) only the ensemble member using full context is used.", "However, if the string 1 . . . + 1 had been seen in training, would have been trained to predict + 1 given all partial contexts , with equal weight given to all prediction losses.", "Thus, is trained to make predictions on data it never sees in evaluation, and may be prevented from optimally learning to use long context: parameters that locally optimize (3) are locally Pareto-optimal for the set of prediction losses L 1 , . . . , L , but not necessarily optimal for any individual L .", "An ensemble of the ( ) may be a better predictor than alone.", "(See A for further analysis of when this occurs.)", "Undertraining.", "The parameters are shared by the predictors , and modeling power must be spread among the losses L ( ) .", "The short-context predictors are easier to fit, while sequences in which long context affects the prediction are rare.", "We expect sensitivity to long context, and precision in modeling its effect, to be especially diminished if the model is undertrained.", "2 Many authors leave unspecified the way in which training batches are formed from a corpus of input documents.", "Here we assume that all training documents are concatenated into one (very long) document separated by end-of-text tokens and ignore minute effects near the start and end of this document.", "Distribution shift.", "While the training procedure causes a bias against the influence of longer contexts on generation, we see the opposite bias in downstream tasks (question answering, natural language inference, adversarial probes for common sense): Many modern NLP benchmarks try to challenge models to use long context (3, 5).", "The LAMBADA dataset (Paperno et al., 2016) tests LMs' understanding of long-range dependencies by measuring the prediction of the final words in passages of several sentences.", "The task explicitly requires reasoning over a broad context: humans can reliably guess the last word when given a whole passage, but not when given only the last sentence.", "We perform experiments with the GPT family of models, closely replicating the evaluation setting of Radford et al. 
(2019).", "3 We predict the final word as the top-ranked token under the boosted model max , where max is the model taking the full available context and , are the chosen length and coefficient of the short context.", "To choose and , we do a grid search on the validation set and apply the best values to the testing set.", "Results.", "Table 1 shows the accuracies and optimal parameter values , .", "Coherence boosting vastly reduces prediction error for all models.", "In particular, the boosted GPT-2 Small performs better than the original GPT-3 2.7B.", "The boosted GPT-3 175B achieves a new state of the art. 3 Certain details are omitted by Radford et al. (2019).", "Based on https://github.com/openai/gpt-2/ issues/131 , we nearly match baseline accuracy by predicting the last subword token, rather than the last word.", "Other than the impressive performance gain, we highlight two observations.", "(1) The optimal is always negative, indicating that the optimal mixture of models penalizes the influence of short-range context relative to long-range context.", "(2) With increasing model size, the optimal and become closer to 0.", "This means that bigger models capture long-range coherence better than small models, as they have less need to penalize the effect of short context.", "(Fig. 2 shows the accuracy curves for all models by sweeping with a fixed . The peak clearly moves to the left as model size grows.) 4 Experiments: Language generation 4.1 Generic text The experiment in this section extends that of Holtzman et al. (2019).", "A selection of 5000 articles from WebText (Radford et al., 2019) is taken as a reference corpus of human-written text.", "A language model (for us, GPT-2 Large) is prompted to generate text conditioned only on the first sentence of each of these articles, up to a maximum of 200 tokens, yielding 5000 machine-generated texts.", "The human-written and machine-generated texts are compared by four automatic metrics: perplexity under the base LM, self-BLEU-4 (Zhu et al. (2018); the mean BLEU-4 score of a generated text with respect to all other generated texts as references), Zipf coefficient (the linear regression coefficient between log-rank and log-frequency of generated tokens) and repetition (the fraction of generated texts that end in a repeating sequence of tokens).", "It is desirable for a model and inference procedure to produce text that is as close as possible in these metrics to the human-written reference.", "To measure long-range semantic coherence in the generated text, we define three new metrics: Long-range repetition (LR ): For a whole number and document , let ( ) be the number of distinct tokens in , and let ( ) be the number of distinct tokens for which the distance between their first and last occurrence in is at least positions.", "The long-range repetition score LR of a corpus { 1 , . . . 
, 5000 } is a macro-average: LR : = (cid:205) 5000 = 1 ( ) (cid:205) 5000 = 1 ( ) .", "This simple measure of lexical coherence favors repetition of words long after they are first used, but gives lower weight to documents that degenerate into repetition of a short span.", "Long-dependent token frequency (LTF): A long-dependent token is one to which the base LM assigns a likelihood of at least 20% given its full context, but a likelihood of less than 5% given only the 20 tokens of context preceding it.", "We compute the frequency of long-dependent tokens among all generated tokens.", "Long-short likelihood difference ( ): The mean difference in likelihoods assigned to tokens by the base LM conditioned on full context and conditioned on 20 tokens of context.", "Although some choices of constants are needed to define LTF and , we intend them to be intuitive summaries of long-range coherence in the absence of established metrics.", "In particular, 20 tokens is close to the length of one sentence in typical English text.", "We sample 5000 document completions from GPT-2 Large following sampling procedures with a range of boosting schemes.", "We consider models of the form 1 max , for { 8 , 16 , 32 , 64 } and { 0 .", "4 , 0 .", "2 , 0 .", "1 , 0 .", "05 , 0 .", "025 , 0 } .", "(Such a parametrization of boosting parameters was chosen to ensure that when the context has length less than or the distant context has very little effect on the next word the boosted model becomes equivalent to the untempered max .)", "Top truncation with = 0 .", "95 is applied to all models.", "coherence (4.1).", "(Nearest-to-human values in bold , boosting models better than top sampling alone in italics .)", "Results.", "Metrics of two of the best models, with = 32 , = 0 .", "05 and = 64 , = 0 .", "1 , are shown in Table", "2. In particular, the latter model generates text that is closer to the human reference, or equally close, to the pure top sampling ( = 0 ) baseline in all metrics, with the greatest improvement seen in the coherence measures.", "Fig. 3 shows the dependence of selected metrics on and .", "Coherence boosting brings all metrics closer to those of human text.", "As increases, the optimal grows in magnitude.", "This is expected: the predictive effect of tokens more than positions away decreases with ( approaches max ).", "We also note that a simple sampling with temperature 0.9 performs better than top sampling in most of the coherence metrics.", "This suggests that the improvements accomplished by top truncation come at the cost of introducing a bias towards tokens that are predictable from a short context.", "Coherence boosting corrects this bias without sac-rificing the gains in other measures.", "An example of human, top , and coherence boosting outputs is shown in Table D.1.", "This experiment is based on the Dialog System Technology Challenge 7 (DSTC7) (Galley et al., 2019), which benchmarks generation of dialog responses", "responses conditioned on one or more turns of conversation context.", "As a base model, we use DialoGPT (Zhang et al., 2020c), a GPT-2 Small variant that demonstrated strong results on this task.", "Dialog systems' responses to the 2208 conversation prompts 4 are scored against human-written reference responses (five for each example).", "Following Zhang et al. (2020c), we use the -gram overlap metrics NIST (Doddington, 2002), BLEU (Papineni et al., 2002), and METEOR (Lavie and Agarwal, 2007), as well as two intrinsic measures of -gram diversity from Li et al. 
(2016); Zhang et al. (2018): Distinct and Entropy .", "It is desirable for a dialog system to reach scores close to those of the human responses in all metrics.", "In addition to the decoding algorithms considered by (Zhang et al., 2020c) beam search and greedy decoding we consider greedy decoding with a coherence boosting model.", "As long and short predictors, we use DialoGPT conditioned on the full conversation context and on only the (context-free) response generated so far .", "That is, if the conversation context is and the text generated so far is 1 . . . , then + 1 is predicted using the model max + 1 , evaluated on the string (cid:104) sep (cid:105) 1 . . . , where (cid:104) sep (cid:105) is the turn separa-4 The DSTC7 evaluation data, scraped from Reddit, is undisclosed; we reacquire it using officially released code.", "tor token.", "We consider { 0 , 0 .", "1 , . . . , 0 .", "8 } .", "Results.", "Table 3 shows the metrics of the boosting models that reach the peak average NIST and BLEU scores ( = 0 . 3 and = 0 . 7 ).", "Increasing the magnitude of leads to responses that are more relevant to the prompt (higher BLEU and NIST) and more diverse than those from greedy decoding.", "As grows large, the boosting model favors creative responses that are relevant to the prompt (high NIST), but simple responses that are common in the reference data become unlikely (low BLEU).", "5 We observed that the responses with = 0 .", "7 , despite the superior metrics, are more likely to be ungrammatical and innovate words in an effort to use tokens relevant to the prompt.", "In practice, improving dialog systems with coherence boosting may require techniques to prevent these side effects, such as repetition penalties or relaxation of greedy decoding to low-temperature sampling.", "Finally, we note that the learning of DialoGPT was initialized with a pretrained GPT-2 and uses GPT-2's end-of-text token as the turn separator.", "This choice may reduce DialoGPT's attention to past turns, as tokens preceding the end-of-text token are never informative in GPT-2's training data.", "We evaluate coherence boosting on zero-shot language understanding and inference tasks, where examples are expected to be highly coherent.", "We study 15 datasets in 5 categories of tasks.", "(1) Cloze tasks : StoryCloze (Mostafazadeh et al., 2016), HellaSwag (Zellers et al., 2019), and COPA (Roemmele et al., 2011).", "(2) Question answering : CommonsenseQA (CsQA) (Talmor et al., 2019), OpenBookQA (OBQA) (Mihaylov et al., 5 Galley et al. 
"Galley et al. (2019) argue that NIST and diversity metrics are more informative measures than BLEU for multi-reference scoring, since BLEU favors systems that often produce responses with little relation to the prompt (e.g., 'I don't know').",
"(3) Text classification: SST-2/5 (Socher et al., 2013), TREC (Voorhees and Tice, 2000), AGNews (Zhang et al., 2015).",
"(4) Natural language inference: RTE (Dagan et al., 2005), CB (De Marneffe et al., 2019), and BoolQ (Clark et al., 2019).",
"(5) Fact knowledge retrieval: LAMA (Petroni et al., 2019).",
"All tasks except LAMA are formulated as multiple-choice problems.",
"We convert text classification and inference tasks to multiple-choice tasks by choosing meaningful answer words, e.g., True/False.",
"The prediction is made by selecting the choice with the highest LM likelihood.",
"For in-context learning of GPT models, prompt formats greatly impact performance.",
"We follow previous work (Brown et al., 2020; Zhao et al., 2021; Holtzman et al., 2021) to create natural prompts to increase the effectiveness of in-context learning, but we do not aim to optimize the full and context-free prompt format: our goal is to evaluate coherence boosting models with a fixed prompt.",
"The prompt formats we use are listed in Table E.1.",
"As described in 2, within each prompt we identify a premise-free context, which is used as the context for the short-range model in coherence boosting.",
"For each dataset, we pick the optimal value of the parameter $\alpha$ on the validation set and report the accuracy on the test set.",
"(If no test set is publicly available, we choose $\alpha$ on a subset of the training set and report the final number on the validation set.)",
"Across all experiments, we do not put any few-shot examples in the prompt.",
"For the knowledge retrieval task, we follow Zhao et al. (2021)'s data split of LAMA and evaluate GPT models on facts whose missing answers are at the end of the sentence (to fit the nature of autoregressive language models).",
"We limit the prompt length to be larger than 5 tokens and rerun the model from Zhao et al. (2021) on the new data.",
"Results: Multiple-choice tasks.",
"Results of three representative base models on all multiple-choice tasks are presented in Table 4.",
"(Results for all models are in Tables F.1 and F.2.) We compare our best model with two baselines, $\alpha = 0$ ($f_{\max}$) and $\alpha = -1$.",
"The former is the original full-context model, while the latter is, for most tasks, a form of unconditional probability normalization as performed by Brown et al. (2020); Holtzman et al. (2021).",
"We also compare our best model with other inference methods (Holtzman et al., 2021; Min et al., 2021) in Tables F.3 and F.4.",
"By comparing the third column with the first two columns within each model in Table 4, we can see that our method with the selected $\alpha$ generally improves the accuracy on all tasks.",
"Some of the improvements are dramatic, where boosted GPT-2 Small outperforms GPT-2 XL's base model (e.g., CsQA, OBQA, ARC-C) and is even comparable with GPT-3 175B's base model (e.g., SST-2, SST-5, RTE).",
"We make similar conclusions when comparing coherence boosting with other inference methods in Tables F.3 and F.4.",
"We observe that the optimal $\alpha$ depends on tasks and models (fourth column within each model), which means that $\alpha$ cannot be heuristically set to 0 or $-1$ as in past work.",
"This finding suggests the necessity of searching for an optimal $\alpha$.",
"We visualize the accuracy curve by varying $\alpha$ on the testing set of all datasets.",
"We show the curve for StoryCloze in Fig. 4 and present similar figures for all tasks in Figs. F.1 and F.2.",
"Consistent with the results on LAMBADA (3), the optimal $\alpha$ is usually negative, and its absolute value tends to decrease with the model size.",
"We selected the optimal $\alpha$ on the validation set, but future work may explore automatic and adaptive methods for setting this parameter.",
"Notice that all experiments required only a single pass through the data to compute answer likelihoods conditioned on the full and premise-free contexts.",
"[Table 5: Accuracies (%) of GPT models on LAMA. Columns: GPT-2 125M, 350M, 760M, 1.6B; GPT-3 2.7B, 6.7B, 13B, 175B. $f_{\max}$: 8.48, 14.78, 13.88, 14.29, 17.33, 19.42, 22.06, 26.76. Zhao et al. (2021): 17.45, 22.87, 23.90, 23.97, 26.30, 30.57, 31.96, 34.78. CB ($\alpha = \alpha^*$, $k = k^*$): 19.85, 22.87, 25.74, 25.43, 28.75, 32.25, 35.02, 37.57, with $|\alpha^*| = 0.5$ (0.4 for GPT-3 175B) and $k^* = 1, 2, 3, 3, 1, 1, 1, 2$.]",
"Results: Knowledge retrieval.",
"Unlike LAMBADA, where long contexts are required for inferring the last word, LAMA contains much shorter sentences for knowledge facts, i.e., (subject, relation, object).",
"A recent study (Cao et al., 2021) shows that the prediction is biased by the relation in the short context, i.e., the answer to a prompt (e.g., 'Dante was born in ___') can be induced by the relation ('was born in') without the subject.",
"Coherence boosting mitigates the influence of those short contexts by making the prediction dependent on a longer context containing the subject.",
"We present results for all models on LAMA in Table 5.",
"We also compare our model with contextual calibration (CC) (Zhao et al., 2021), which processes the LM's output probabilities with a log-linear model.",
"Coherence boosting with the selected $\alpha$ and $k$ outperforms both the base model and CC by significant margins.",
"We suggest three promising research directions:",
"Coherence tuning.",
"The need to evaluate the base LM with multiple contexts in coherence boosting introduces cost and complexity at inference time.",
"It may be desirable instead to modify the weights of the base model to improve long-range coherence properties.",
"In B, we describe a 'self-tuning' algorithm that achieves this without training on any data created for this purpose.",
"[...] text, but future work should consider other architectures and target domains.",
"In C, we give preliminary results on the text summarization domain.",
"Although we expect recency bias to be less pronounced in LMs that use separate attention modules to process the prompt and the output, such as encoder-decoder models for translation or summarization, procedures inspired by coherence boosting may prove effective in domains where a strong causal link between prompt and output is known to exist.",
"Such domains include language generation conditioned on structured data (Yao et al., 2020; Mager et al., 2020; Moosavi et al., 2021) and model-guided reasoning in formal languages, such as proof or program synthesis (Polu and Sutskever, 2020; Chen et al., 2021; Li et al., 2022).",
"Efficient search proposals.",
"Procedures that force LMs to be more focused on a prompt, or a specific part of it, when generating or ranking tokens can benefit algorithms that search for combinations of words through sampling.",
"It would be interesting to use coherence boosting in non-autoregressive text generation algorithms, such as to accelerate the mixing of MCMC methods for constrained text generation (Miao et al., 2019; Zhang et al., 2020b; Malkin et al., 2021).",
"We have illustrated the hyposensitivity of pretrained language models to long-range context and proposed a simple inference-time remedy.",
"We hope to see coherence boosting used as a simple alternative or complement to finetuning procedures in zero-shot applications of pretrained LMs.",
"The authors are grateful to Sudha Rao, Matt Richardson, and Huan Sun for valuable discussions about this project.",
"We thank the anonymous reviewers for their comments and suggestions.",
"We hope and expect to see a nonnegative net societal impact from better text generation and ranking algorithms in general and from this work in particular.",
"As we have shown, there is room to improve the inference procedures used with small language models, which incur lower costs than training and evaluation of large models.",
"However, researchers should bear in mind the risks and potential misuse of automatic generation of long-form text." ]
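The two-expert form of Eq. (1) above is simple enough to sketch in a few lines. The following is a minimal illustration, assuming the HuggingFace transformers library with GPT-2 as the base model; the function name, the choices k = 10 and alpha = -0.5, and the example prompt are ours, not taken from the released code at github.com/zhenwang9102/coherence-boosting.

```python
# Minimal sketch of coherence boosting for next-token prediction (Eq. 1),
# restricted to two experts: f_max (full context) and f_k (last k tokens).
# Assumes HuggingFace transformers; names and constants are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def boosted_next_token_logprobs(context: str, k: int = 10, alpha: float = -0.5):
    """Log-linear mixture log f_max + alpha * log f_k, renormalized by softmax."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        full_logits = model(ids).logits[0, -1]           # f_max: all available context
        short_logits = model(ids[:, -k:]).logits[0, -1]  # f_k: only the last k tokens
    log_full = torch.log_softmax(full_logits, dim=-1)
    log_short = torch.log_softmax(short_logits, dim=-1)
    return torch.log_softmax(log_full + alpha * log_short, dim=-1)

# A negative alpha penalizes tokens that the short context alone already makes
# likely, shifting probability toward tokens that depend on the distant context.
logp = boosted_next_token_logprobs("A long passage whose last word depends on its opening", k=10)
print(tokenizer.decode(int(logp.argmax())))
```

Note that the weights (1, alpha) on (f_max, f_k) correspond to the f_max f_k^alpha parametrization used in the LAMBADA experiments above.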
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "abstain", "abstain", "abstain" ]
[ "Highlighting while reading is a natural behavior for people to track salient content of a document.", "It would be desirable to teach an extractive summarizer to do the same.", "However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth.", "Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results.", "In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards.", "We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts.", "The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering.", "Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.", "Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017), these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018).", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer.", "The victim filed a complaint after seeing images of herself on his phone last year.", "Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "Table 1 : An example extractive summary bolded in the article (top).", "Highlighted sections indicate salient segments useful for answering fill-in-the-blank questions generated from human abstracts (bottom).", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1.", "A primary challenge faced by extractive sum-marizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Wood-send and Lapata, 2010).", "E.g., a source word is 
"Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily optimal, since summary saliency cannot be easily captured with a rule-based categorization.",
"Considering that human abstracts involve generalization, paraphrasing, and can contain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.",
"In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.",
"We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.",
"The question-answer pairs can be conveniently developed from human abstracts.",
"Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.",
"To answer all questions (≈ human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.",
"In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.",
"To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.",
"The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.",
"This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.",
"The contributions of this research can be summarized as follows: we describe a novel framework for generating extractive summaries by selecting consecutive sequences of words from source documents.",
"This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase-level data.",
"Such a framework has not been thoroughly investigated in the past; we conduct a methodical empirical evaluation from the point of view of information saliency.",
"Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.",
"Our summaries compare favorably with the state of the art on automatic metrics, and show promising results against baselines when evaluated by humans for question answering.",
"Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011).",
"The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013, 2014; Hong et al., 2014; Yogatama et al., 2015).",
"A majority of the methods are unsupervised.",
"They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content, and its relationship with other sentences.",
"The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.",
"Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018).",
"Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018).",
"These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.",
"A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.",
"These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018).",
"However, sentence extraction can be coarse, and in many cases only a part of the sentence is worth adding to the summary.",
"In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.",
"Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite extractive.",
"Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Celikyilmaz et al., 2018).",
"The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.",
"See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.",
"Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; Song et al., 2018).",
"It thus raises concerns as to whether such systems can be used in real-world scenarios to summarize materials such as legal documents.",
"In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.",
"Our proposed method is inspired by the work of Lei et al. (2016), who seek to identify rationales from textual input to support sentiment classification and question retrieval.",
"Distinct from this previous work, we focus on generating generic document summaries.",
"We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.",
"Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.",
"We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth question-answer pairs can be derived from human abstracts.",
"In the following section we describe our proposed approach in detail.",
"Let $\mathcal{S}$ be an extractive summary consisting of text segments selected from a source document $x$.",
"The summary can be mapped to a sequence of binary labels $y$ assigned to document words.",
"In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.",
"We have made our code and models available at https://github.com/ucfnlp/summ_qa_rewards.",
"How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.",
"A natural choice is to use words as extraction units.",
"However, this choice ignores the cohesiveness of text.",
"A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.",
"In this paper we experiment with both schemes, using either words or chunks as extraction units.",
"When a text chunk is selected in the summary, all its constituent words are selected.",
"We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.",
"A chunk thus can contain from 1 to 5 words.",
"Additionally, word-level modeling can be considered a special case of chunks where the length of each phrase is 1.",
"It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.",
"The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014), among others.",
"A recent study by Khandelwal et al. (2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.",
"We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.",
"Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq. (1)).",
"The representation of the $t$-th source word, $\mathbf{h}^e_t = [\overrightarrow{\mathbf{h}}^e_t \| \overleftarrow{\mathbf{h}}^e_t]$, is the concatenation of the hidden states in both directions.",
"A chunk is similarly denoted by the concatenation $[\mathbf{h}^e_t \| \mathbf{h}^e_{t+n}]$, where $t$ and $t+n$ are the indices of its beginning and ending words.",
"In both cases, a fixed-length vector ($\mathbf{h}^e_t \in \mathbb{R}^m$) is created for the word/chunk.",
"Further, our CNN encoder (Eq. (2)) uses a sliding window of {1, 3, 5, 7} words, corresponding to the kernel sizes, to scan through the source document.",
"We apply a number of filters to each window size to extract local features.",
"The $t$-th source word is represented by the concatenation of feature maps (an $m$-dimensional vector).",
"To obtain the chunk vector we perform max-pooling over the representations of its constituent words (from $t$ to $t+n$).",
"In the following we use $\mathbf{h}^e_t$ to denote the vector representation of the $t$-th extraction unit, whether it be a word or a chunk, generated using either encoder.",
"It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.",
"These segments collectively form an extractive summary to be highlighted on the source text.",
"The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.",
"It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the $t$-th extraction unit ($y_t$) depends on all previous labels ($y_{<t}$).",
"Given this hypothesis, we build a framework to extract summary units where the importance of the $t$-th source unit is characterized by its informativeness (encoded in $\mathbf{h}^e_t$), its position in the document, and relationship with the partial summary.",
"The details are presented below.",
"We use a positional embedding ($\mathbf{g}_t$) to signify the position of the $t$-th text unit in the source document.",
"The position corresponds to the index of the source sentence containing the $t$-th unit, and further, all text units belonging to the same sentence share the same positional embedding.",
"We apply sinusoidal initialization to the embeddings, following Vaswani et al. (2017).",
"Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.",
"Next, we build a representation for the partial summary to aid the system in selecting future text units.",
"The representation $\mathbf{s}_t$ is expected to encode the extraction decisions up to time $t-1$ and it can be realized using a unidirectional LSTM network (Eq. (3)).",
"The $t$-th input to the network is represented as $y_{t-1} \odot \mathbf{h}^e_{t-1}$, where $y_{t-1}$ is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit ($\mathbf{h}^e_{t-1}$) is to be included in the summary ($\odot$ corresponds to elementwise product).",
"During training, we apply teacher forcing and $y_{t-1}$ is the ground-truth extraction label for the $(t-1)$-th unit; at test time, $y_{t-1}$ is generated on-the-fly by obtaining the label yielding the highest probability according to Eq. (5).",
"[Figure 1: A unidirectional LSTM (blue, Eq. (3)) encodes the partial summary, while the multilayer perceptron network (orange, Eqs. (4-5)) utilizes the text unit representation ($\mathbf{h}^e_t$), its positional embedding ($\mathbf{g}_t$), and the partial summary representation ($\mathbf{s}_t$) to determine if the $t$-th text unit is to be included in the summary. Best viewed in color.]",
"In the previous work of Cheng and Lapata (2016) and Nallapati et al. (2017), similar auto-regressive models are developed to identify summary sentences.",
"Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.",
"Given the partial summary representation ($\mathbf{s}_t$), and representation of the text unit ($\mathbf{h}^e_t$) and its positional encoding ($\mathbf{g}_t$), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.",
"This process is described by Eqs. (4-5) and further illustrated in Figure 1.",
"Our model parameters include $\{\mathbf{W}^a, \mathbf{b}^a, \mathbf{w}^y, b^y\}$ along with those required by $f^{\text{Bi-LSTM}}_1$, $f^{\text{CNN}}_2$ and $f^{\text{Uni-LSTM}}_3$.",
"It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.",
"We generate ground-truth labels for source text units as follows.",
"A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).",
"This heuristic aims to label consecutive source words (2 or more) as summary-worthy, as opposed to picking single words which can be less informative.",
"A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.",
"Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.",
"Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.",
"Such labels can be ineffective in providing supervision.",
"In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.",
"Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.",
"We train the extractive summarizer to identify source text units necessary for answering questions, and the question-answer (QA) pairs can be conveniently developed from human abstracts.",
"To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.",
"For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair.",
"When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.",
"It is important to note that at least one QA pair should be extracted from each sentence of the abstract.",
"Because a system summary is trained to contain content useful for answering all questions (≈ human abstract), any missing QA pair is likely to cause the summary to be insufficient.",
"We collect answer tokens using the following methods:",
"(a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit (Manning et al., 2014);",
"(b) we also identify the ROOT word of each sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if it exists), then add them to the collection of answer tokens.",
"Further, we prune the answer space by excluding those which appear fewer than 5 times overall.",
"Having several methods for question construction allows us to explore the answer space properly.",
"In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.",
"Given an extractive summary $\mathcal{S}$ containing a set of source text units, and a collection of question-answer pairs $\mathcal{P} = \{(Q_k, e_k)\}_{k=1}^{K}$ related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.",
"We first encode each question $Q_k$ to a vector representation ($\mathbf{q}_k$).",
"This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq. (6)).",
"Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the $k$-th question.",
"Given the attention mechanism, an extractive summary $\mathcal{S}$ can be used to answer multiple questions related to the document.",
"We define $\alpha_{t,k}$ to be the semantic relatedness between the $t$-th source text unit and the $k$-th question.",
"Following Chen et al. (2016a), we introduce a bilinear term to characterize their relationship ($\alpha_{t,k} \propto \mathbf{h}^{e\top}_t \mathbf{W}^{\alpha} \mathbf{q}_k$; see Eq. (7)).",
"In this process, we consider only those source text units selected in summary $\mathcal{S}$.",
"Using $\alpha_{t,k}$ as weights, we then compute a context vector $\mathbf{c}_k$ condensing summary content related to the $k$-th question (Eq. (8)).",
"To predict the most probable answer, we construct a fully-connected network as the output layer.",
"The input to the network includes a concatenation of the context vector ($\mathbf{c}_k$), question vector ($\mathbf{q}_k$), absolute difference ($|\mathbf{c}_k - \mathbf{q}_k|$) and element-wise product ($\mathbf{c}_k \odot \mathbf{q}_k$) of the two vectors (Eq. (9)).",
"A softmax function is used to estimate a probability distribution over the space of candidate answers: $P(e_k \mid \mathcal{S}, Q_k) = \mathrm{softmax}(\mathbf{W}^e f_{\text{ReLU}}(\mathbf{W}^u \mathbf{u}_k + \mathbf{b}^u))$.",
"Such a fully-connected output layer has achieved success on natural language inference (Mou et al., 2016; Chen et al., 2018); here we test its efficacy on answer selection.",
"The model parameters include $\{\mathbf{W}^{\alpha}, \mathbf{W}^e, \mathbf{W}^u, \mathbf{b}^u\}$ and those of $f^{\text{Bi-LSTM}}_4$.",
"In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.",
"Our reward function consists of four components, whose interpolation weights $\alpha$, $\beta$, and $\gamma$ are tuned on the dev set.",
"We define QA competency (Eq. (10)) as the average log-likelihood of correctly answering questions using the system summary ($\mathbf{y}$).",
"A high-quality system summary is expected to resemble the reference summary by using similar wording.",
"The adequacy metric (Eq. (11)) measures the percentage of overlapping unigrams between the system summary ($\mathbf{y}$) and the reference summary ($\mathbf{y}^*$).",
"The fluency criterion (Eq. (12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., $|y_t - y_{t-1}|$).",
"Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold $\delta$ (Eq. (13)).",
"QA competency: $R_c(\mathbf{y}) = \frac{1}{K}\sum_{k=1}^{K} \log P(e_k \mid \mathbf{y}, Q_k)$ (10).",
"Adequacy: $R_a(\mathbf{y}) = \frac{1}{|\mathbf{y}|}\,\mathcal{U}(\mathbf{y}, \mathbf{y}^*)$ (11); Fluency: $R_f(\mathbf{y}) = -\sum_{t=2}^{|\mathbf{y}|} |y_t - y_{t-1}|$ (12); Length: $R_l(\mathbf{y}) = -\big|\,\delta - \frac{1}{|\mathbf{y}|}\sum_t y_t\,\big|$ (13). The reward function $R(\mathbf{y})$ successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with an extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008).",
"A reinforcement learning agent finds a policy $P(\mathbf{y} \mid \mathbf{x})$ to maximize the expected reward $\mathbb{E}_{P(\mathbf{y}|\mathbf{x})}[R(\mathbf{y})]$.",
"Training the system with policy gradient (Eq. (14)) involves repeatedly sampling an extractive summary $\mathbf{y}$ from the source document $\mathbf{x}$.",
"At time $t$, the agent takes an action by sampling a decision based on $p(y_t \mid y_{<t}, \mathbf{x})$ (Eq. (5)) indicating whether the $t$-th source text unit is to be included in the summary.",
"Once the full summary sequence $\mathbf{y}$ is generated, it is compared to the ground-truth sequence to compute the reward $R(\mathbf{y})$.",
"In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.",
"At inference time, rather than sampling actions from $p(y_t \mid y_{<t}, \mathbf{x})$, we choose the $y_t$ that yields the highest probability to generate the system summary $\mathbf{y}$.",
"This process is deterministic and no QA is required.",
"$\nabla \mathbb{E}_{P(\mathbf{y}|\mathbf{x})}[R(\mathbf{y})] = \mathbb{E}_{P(\mathbf{y}|\mathbf{x})}\big[R(\mathbf{y}) \nabla \log P(\mathbf{y} \mid \mathbf{x})\big] \approx \frac{1}{N}\sum_{n=1}^{N} R(\mathbf{y}^{(n)}) \nabla \log P(\mathbf{y}^{(n)} \mid \mathbf{x})$ (14). We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.",
"Our goal is to build an extractive summarizer identifying important textual segments from source articles.",
"To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al. (2017).",
"The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.",
"E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a).",
"On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.",
"We report results respectively for the CNN and DM portion of the dataset.",
"Our hyperparameter settings are as follows.",
"We set the hidden state dimension of the LSTM to be 256 in either direction.",
"A bidirectional LSTM $f^{\text{Bi-LSTM}}_1(\cdot)$ produces a 512-dimensional vector for each content word.",
"Similarly, $f^{\text{Bi-LSTM}}_4(\cdot)$ generates a question vector $\mathbf{q}_k$ of the same size.",
"Our CNN encoder $f^{\text{CNN}}_2(\cdot)$ uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.",
"$\mathbf{h}^e_t$ is thus a 512-dimensional vector using either CNN or LSTM encoder.",
"We set the hidden state dimension of $\mathbf{s}_t$ to be 128.",
"We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.",
"The maximum article length is set to 400 words.",
"Compared to the study of Arumae and Liu (2018), we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RL-based summarizers.",
"We associate each article with at most 10 QA pairs ($K = 10$) and use them to guide the extraction of summary segments.",
"We apply mini-batch training with the Adam optimizer (Kingma and Ba, 2014), where a mini-batch contains 128 articles and their QA pairs.",
"Table 2: Summarization results on CNN test set.",
"Summaries are evaluated at their full length by ROUGE F1 scores.",
"Table 3: Summarization results on DM test set.",
"To ensure a fair comparison, we follow the convention to report ROUGE recall scores evaluated at 75 bytes.",
"The summary ratio is set to 0.15, yielding extractive summaries of about 60 words.",
"Following Arumae and Liu (2018), we set hyperparameter $\gamma = 2$; $\alpha$ and $\beta$ are tuned on the dev set using grid search.",
"Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.",
"We consider non-neural approaches that extract sentences from the source article to form a summary.",
"These include LexRank (Radev et al., 2004), SumBasic (Vanderwende et al., 2007), and KLSum (Haghighi and Vanderwende, 2009).",
"Such methods treat sentences as bags of words, and then select sentences containing topically important words.",
"We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.",
"The method has been shown to be a strong baseline for summarizing news articles.",
"[...] then performing extraction based on the learned representations.",
"Cheng et al. (2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.",
(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoder-decoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3, evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004), which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from ground-truth labels ( 3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( 3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3, our QASumm methods with reinforcement learning (+ROOT, NoText QASumm+NoQ GoldSumm FullText Train Dev Gap Train Dev Gap Train Dev Gap Train Dev Gap SUBJ/OBJ 49.7 24.4 25.3 55.9 31.2 24.7 69.3 48.6 20.7 67.6 43.3 24.3 ROOT 68.1 34.9 33.2 71.6 36.3 35.3 76.9 44.9 32.0 76.0 35.7 40.3 NER 61.0 15.8 45.2 66.0 32.7 33.3 85.2 54.0 31.2 82.4 46.3 36.1 Table 4 : Question-answering accuracies using different types of QA pairs (ROOT, SUBJ/OBJ, NER) and different source input (NoText, QASumm+NoQ, GoldSumm, and FullText) as the basis for predicting answers.", "+SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and 
spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3, we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the smallest number of unique answers.", "They are often the main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the largest number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to the others.", "Note that all answer tokens have been filtered by frequency; those appearing fewer than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on the DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain content similar to the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry semantic content similar to the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system (§3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include", "(a) QASumm+NoQ, which extracts summary chunks without requiring QA pairs; and", "(b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component (§3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4, we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using FullText as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for answers in a succinct summary can be more efficient than searching in a full article.", "Moreover, we observe that the performance of QASumm+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests 
that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA [Figure 2: Summarization results using the $f_1^{\text{LSTM}}$ or $f_2^{\text{CNN}}$ encoder with word/chunk as the extraction unit.]", "pairs work the best for both GoldSumm and FullText, likely because the source texts contain the entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, even though they have a large answer space.", "Based on this analysis, we suggest that future work consider NER-based QA pairs, as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units (§3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or fewer.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8%, respectively.", "We compare the bidirectional LSTM ($f_1^{\text{LSTM}}$) and CNN ($f_2^{\text{CNN}}$) encoders for their effectiveness in generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs better, and combining chunks with LSTM representations yields the highest scores.", "The usefulness of an extractive system driven by reading comprehension is not inherently captured by automatic metrics (i.e., ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm, we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly chosen to be either the root word, the subject or object", "of the sentence, or a named entity.", "We compare our reinforced extractive summary (presented as a bold overlay on the document) against our supervised method (§3.2), abstractive summaries generated by See et al. 
(2017), and the human abstracts in full.", "Additionally, we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion, the data was analyzed manually for accuracy, since turkers entered each answer as free text, and to remove any meaningless data points.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar performance times.", "However, we observe a large margin in QA accuracy for our full system compared to the abstractive and our supervised approaches.", "Although participants rated the informativeness of the summaries to be about the same, our systems yielded higher performance.", "This strongly indicates that having a system which makes use of document comprehension has a tangible effect when applied towards a real-world task.", "We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ]
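The sampled policy-gradient estimate in Eq. (14) of the sentence list above is straightforward to implement. The following sketch is illustrative only, not the authors' code; the variable names and the use of PyTorch are assumptions.

```python
import torch

def reinforce_loss(log_probs, rewards):
    """Monte Carlo estimate of the policy gradient in Eq. (14).

    log_probs: (N,) tensor; each element is the summed log P(y_t | x, y_<t)
               of one sampled extraction sequence y^(n).
    rewards:   (N,) tensor; reward R(y^(n)) of each sample (e.g., the
               QA-based reward described in the text).
    Minimizing the returned scalar follows the gradient
    -(1/N) * sum_n R(y^(n)) * grad log P(y^(n) | x).
    """
    # Rewards are treated as constants: gradients flow only through log_probs.
    return -(rewards.detach() * log_probs).mean()

# Usage: sample N candidate summaries, score them with the QA-based reward,
# then: loss = reinforce_loss(log_probs, rewards); loss.backward()
```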
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "objective", "objective", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective" ]
[ "Pre-trained multilingual language models, e.g. , multilingual-BERT, are widely used in cross-lingual tasks, yielding the state-of-the-art performance.", "However, such models suffer from a large performance gap between source and target languages, especially in the zero-shot setting, where the models are fine-tuned only on English but tested on other languages for the same task.", "We tackle this issue by incorporating language-agnostic information, specifi-cally, universal syntax such as dependency relations and POS tags, into language models, based on the observation that universal syntax is transferable across different languages.", "Our approach, named COunterfactual SYntax (COSY), includes the design of SYntax-aware networks as well as a COunterfactual training method to implicitly force the networks to learn not only the semantics but also the syntax.", "To evaluate COSY, we conduct cross-lingual experiments on natural language inference and question answering using mBERT and XLM-R as network backbones.", "Our results show that COSY achieves the state-of-the-art performance for both tasks, without using auxiliary dataset.", "1 1 Introduction With the emergence of BERT (Devlin et al., 2019), large-scale pre-trained language models have become an indispensable component in the solutions to many natural language processing (NLP) tasks.", "Recently, large-scale multilingual transformer-based models, such as mBERT (Devlin et al., 2019), XLM (Lample and Conneau, 2019) and XLM-R (Conneau et al., 2020a), have been widely deployed as backbones in cross-lingual NLP tasks (Wu and Dredze, 2019; Pires et al., 2019; Keung et al., 2019).", "However, these models trained 1 Our code is publicly available on GitHub: https:// github.com/PluviophileYU/COSY English : I bought two new laptops yesterday .", "on a single resource-rich language, e.g. , English, all suffer from a large drop of performance when tested on different target languages, e.g. 
 Chinese and German, where the setting is called zero-shot cross-lingual transfer.", "For example, on the XQUAD dataset, mBERT achieves a 24-percentage-point lower exact match score on the target language Chinese than on the training language English (Hu et al., 2020).", "This indicates that this model has seriously overfitted English.", "An intuitive way to tackle this is to introduce language-agnostic information, the most transferable feature across languages, which is lacking in existing multilingual language models (Choenni and Shutova, 2020).", "In our work, we propose to exploit reliable language-agnostic information: syntax, in the form of universal dependency relations and universal POS tags (de Marneffe et al., 2014; Nivre et al., 2016; Zhou et al., 2019, 2021).", "As illustrated in Figure 1, the sentences in Chinese and English share the same meaning but have different", "word orders. [Figure 2: factual syntax vs. dependency relation-level and POS tag-level counterfactual syntax for the example \"I bought two new laptops yesterday.\"]", "The order difference hampers the transferability between English and Chinese in conventional language models (with sequential words as input).", "In contrast, it is clear from Figure 1 that the two sentences share identical dependency relations and POS tags.", "Thus, we can incorporate such universal syntax² information to enhance the transferability across different languages.", "To achieve this learning objective in deep models, we design syntax-aware networks that incorporate the encodings of dependency relations and POS tags into the encoding of semantics.", "However, we find empirically that the conventional attention-based incorporation of syntax, e.g., relational graph attention networks (Ishiwatari et al., 2020), has little effect on improving the model.", "One possible reason is that the learning process may be dominated by the pre-trained language models due to their strength in semantic representation learning, which leads to an overfitted model.", "This raises the question: how do we induce the model to focus more on syntax while maintaining its original capability of representing semantics?", "To this end, we propose a novel COunterfactual SYntax (COSY) method, inspired by causal inference (Roese, 1997; Pearl et al., 2009) and contrastive learning (He et al., 2020).", "The intuition behind COSY is to create copies of training instances with their syntactic features altered (see the counterfactual syntax in Figure 2), and to force the encodings of the counterfactual instances [footnote 2: in the rest of this paper, syntax denotes universal syntax for simplicity]", "to be different from the encodings of their corresponding factual instances.", "In this way, the model would learn to put more emphasis on the syntactic information when learning how to encode an instance, and such encodings are likely to perform well across languages.", "We evaluate our COSY method on both question answering (QA) and natural language inference (NLI) under cross-lingual settings.", "Experimental results show that, without using any additional data, COSY is superior to the state-of-the-art methods.", "Contributions: 1) we develop a syntax-aware network that incorporates transferable syntax into language models; 2) we propose a novel counterfactual training method that addresses the technical challenge of emphasizing syntax; and 3) extensive experiments on three benchmarks demonstrate the effectiveness of our method for cross-lingual tasks.", "Cross-lingual Transfer.", "Large-scale pre-trained 
language models (Devlin et al., 2019; Liu et al., 2019) have achieved great success in various natural language processing tasks.", "Recent studies (Lample and Conneau, 2019; Conneau et al., 2020a) extend the pre-trained language models to multilingual tasks and demonstrate their prominent capability for cross-lingual knowledge transfer, even under the zero-shot scenario (Wu and Dredze, 2019; Pires et al., 2019; Hsu et al., 2019).", "Motivated by the success of multilingual language models on cross-lingual transfer, several works explore how these models work and what their bottleneck is.", "On the one hand, some studies find that the shared sub-words (Wu and Dredze, 2019; Dufter and Schutze, 2020) and the parameters of top layers (Conneau et al., 2020b) are crucial for cross-lingual transfer.", "On the other hand, the bottleneck is attributed to two issues:", "(i) catastrophic forgetting (Keung et al., 2020; Liu et al., 2020), where knowledge learned in the pre-training stage is forgotten in downstream fine-tuning;", "(ii) lack of language-agnostic features (Choenni and Shutova, 2020; Zhao et al., 2020) or linguistic discrepancy between the source and the target languages (Wu and Dredze, 2019; Lauscher et al., 2020).", "In this work, we aim to tackle zero-shot and few-shot cross-lingual transfer by focusing on the second issue.", "Existing works can be roughly divided into two groups.", "The first proposes to modify the language model by aligning languages with parallel data (Zhao et al., 2020) or strengthening the sentence-level representation (Wei et al., 2020).", "The second group focuses on the learning paradigm for fine-tuning on downstream tasks.", "For instance, some methods adopt meta-learning (Nooralahzadeh et al., 2020; Yan et al., 2020) or intermediate task training (Phang et al., 2020) to learn cross-lingual knowledge.", "Our COSY belongs to the second group and fills the gap of using syntactic information in zero-shot (few-shot) cross-lingual understanding.", "Counterfactual Analysis.", "Counterfactual analysis aims to evaluate the causal effect of a variable by considering its counterfactual scenario.", "Counterfactual analysis has been widely studied in epidemiology (Rothman and Greenland, 2005) and social science (Steel, 2004).", "Recently, counterfactual reasoning has motivated studies in applications.", "In the community of computer vision, counterfactual analysis has been successfully applied in explanation (Goyal et al., 2019a,b), long-tailed classification (Tang et al., 2020a), scene graph generation (Tang et al., 2020b), and visual question answering (Chen et al., 2020; Niu et al., 2020; Abbasnejad et al., 2020).", "In the community of natural language processing, counterfactual methods are also emerging recently in text classification (Choi et al., 2020), story generation (Qin et al., 2019), dialog systems (Zhu et al., 2020), gender bias (Vig et al., 2020; Shin et al., 2020), question answering (Yu et al., 2020), and sentiment bias (Huang et al., 2020).", "To the best of our knowledge, we are the first to conduct counterfactual analysis in cross-lingual understanding.", "Different from previous works (Zhu et al., 2020; Qin et al., 2019) that generate word-level or sentence-level counterfactual samples, our counterfactual analysis dives into the syntax level, which is more controllable than text and free from a complex language generation module.", "COSY aims to leverage the syntactic information, e.g.,
 dependency relations and POS tags, to increase the transferability of cross-lingual language models.", "Specifically, COSY implicitly forces the networks to learn to encode the input not only based on semantic features but also based on syntactic features, through syntax-aware networks and a counterfactual training method.", "As illustrated in Figure 3, COSY consists of three branches, with each branch based on syntax-aware networks (SAN) and indicated by a distinct color.", "The main branch (in black) is the factual branch that uses factual syntax as input.", "The red and blue branches are counterfactual branches using counterfactual dependency relations and counterfactual POS tags as input, respectively.", "The counterfactual training method guides the black branch to put more emphasis on syntactic information with the help of the other two branches.", "Note that the red and blue branches work for counterfactual training, and only the prediction from the black branch is used in testing.", "Below, we first elaborate on the modules of SAN in Section 3.1, and then introduce the counterfactual training method in Section 3.2.", "As shown in Figure 3, SAN contains four major modules: a set of feature extractors, a relational graph attention network (RGAT), fusion projection, and a classifier.", "In this section, we use the route in the black branch as an example to elaborate on each module.", "The set of feature extractors includes three components: a pre-trained language model, a dependency graph constructor, and a POS tags extractor.", "Pre-trained Language Model.", "Following previous work (Hu et al., 2020), we deploy a pre-trained multilingual language model, e.g., mBERT (Devlin et al., 2019), to encode each input sentence into contextual features.", "Given a sequence of tokens with a length of $S$, we denote the derived contextual features as $H = [h_1, ..., h_S] \in \mathbb{R}^{S \times d}$, where $d$ is the dimensionality of each hidden vector.", "Dependency Graph Constructor.", "We use it to construct the (factual) dependency graph for each input sentence.", "In this work, the Stanza toolkit (Qi et al., 2020) is used to extract the universal dependency relations as the first step.", "Then, the dependency graph can be represented as $G = \{V, R, E\}$, where the nodes $V$ are tokens, the edges $E$ denote the existence of dependency relations, and the set $R$ contains the relation types for $E$.", "Each edge $e_{ij} \in E$ consists of a triplet $(v_i, v_j, r)$ where $v_i, v_j \in V$ and $r \in R$.", "As shown in Figure 3, we define three kinds of relation types in $R$: 1) a forward syntactic relation, e.g., love $\xrightarrow{\text{OBJ}}$ apples; 2) an inverse syntactic relation, e.g.,
 apples $\xrightarrow{\text{OBJ}^{-1}}$ love; and 3) a self loop SELF that allows the information to flow from a node to itself [Figure 3 input sentence: \"I love apples.\"].", "Note that we regard the ROOT relation as a self-loop.", "In this way, we obtain 75 different types of relations in total, and thus denote the embedding matrix as $R \in \mathbb{R}^{75 \times d'}$.", "POS Tags Extractor.", "We deploy the same Stanza toolkit (Qi et al., 2020) to assign (factual) POS tags $P$ for all tokens.", "We obtain 17 different types of POS tags and denote the embedding matrix as $T \in \mathbb{R}^{17 \times d'}$.", "Relational Graph Attention Networks (RGAT).", "RGAT is one of the standard backbones to incorporate the dependency graph (Ishiwatari et al., 2020; Linmei et al., 2019).", "Given the (factual) dependency graph $G$ with the contextual features of each node, RGAT can generate the relation-aware features (for each node).", "Details are given below.", "Suppose $e_{ij}$ is the directed edge from node $v_i$ to node $v_j$ with dependency relation $r$.", "The importance score of $v_j$ from $v_i$ is computed as: $s(v_i, v_j) = \text{Concat}(e^s_{ij}, e^r_{ij}) W_{\text{Attn}}$, (1) where $W_{\text{Attn}} \in \mathbb{R}^{(d/2 + d') \times 1}$ maps a vector to a scalar, $e^r_{ij}$ is the embedding of the dependency relation between $v_i$ and $v_j$ from $R$, and $e^s_{ij}$ is computed by element-wise multiplication between $v_i$ and $v_j$: $e^s_{ij} = (h_i W_Q) \odot (h_j W_K)$, (2) where $W_K \in \mathbb{R}^{d \times d/2}$ and $W_Q \in \mathbb{R}^{d \times d/2}$ are the learnable parameters for key and query projections (Vaswani et al., 2017), and $h_i$ and $h_j$ denote their contextual features extracted from pre-trained language models.", "Then, the importance scores are normalized across $N_j$ to obtain the attention score of $v_j$ from $v_i$: $\alpha(v_i, v_j) = \frac{\exp(s(v_i, v_j))}{\sum_{k \in N_j} \exp(s(v_k, v_j))}$, (3) where $N_j$ denotes the set of nodes pointing to $v_j$.", "The relation-aware features of $v_j$ are computed as the weighted sum of all nodes in $N_j$ with corresponding attention scores.", "After computing all nodes, we get the relation-aware features $\tilde{H} = [\tilde{h}_1, ..., \tilde{h}_S] \in \mathbb{R}^{S \times d}$.", "Fusion Projection.", "We fuse the relation-aware features $\tilde{H}$ with the (factual) POS tag information before feeding them into the classifier.", "Given POS tags $P$, the fused features for each token are represented by $f_j = \text{Concat}(\tilde{h}_j, p_j) W_F$, (4) where $W_F \in \mathbb{R}^{(d + d') \times d}$ are learnable parameters of the fusion projection and $p_j$ is the corresponding embedding of the POS tag of the $j$-th token from $T$.", "The fused features of the entire sequence are denoted as $F = [f_1, ..., f_S] \in \mathbb{R}^{S \times d}$.", "Recall that the challenge in the effective utilization of syntax is how to induce the model to focus more on syntax while maintaining its original representation capability of semantics.", "Inspired by counterfactual analysis (Pearl et al., 2009; Pearl, 2010; Pearl and Mackenzie, 2018) and contrastive learning (Hadsell et al., 2006), we propose a counterfactual training method by incorporating counterfactual syntax (a counterfactual dependency graph and counterfactual POS tags) on the red and blue branches in Figure 3.", "Each branch is designed to guide the model to focus on one type of syntax, i.e.,
 the dependency graph or POS tags.", "Counterfactual Dependency Graph is utilized on the red branch with factual POS tags in Figure 3.", "We build a counterfactual dependency graph by maintaining the graph structure and nodes, and replacing each type of relation (except for a self-loop SELF) with a randomized (counterfactual) type.", "We name it $G'$.", "We feed $G'$ and $H$ into RGAT to obtain the counterfactual relation-aware features, denoted as $\tilde{H}'$.", "Then, we fuse $\tilde{H}'$ with the factual POS tags to derive the counterfactual features $F^{cf_1} = [f^{cf_1}_1, ..., f^{cf_1}_S]$ on the red branch.", "Finally, we can calculate the similarity between the factual and the counterfactual features by leveraging the dot-product operation, as follows: $L_{cf_1} = \frac{1}{S} \sum_{i}^{S} f_i \cdot f^{cf_1}_i$. (5)", "Counterfactual POS Tags are utilized with the factual dependency graph on the blue branch in Figure 3.", "We create counterfactual POS tags $P'$ from the factual POS tags $P$ by randomly selecting a POS tag for each token.", "Accordingly, we replace each embedding $p_i$ by $p'_i$.", "Given the relation-aware features $\tilde{H}$ from the black branch, we then feed the embeddings of counterfactual POS tags in Eq.", "4 and get the counterfactual features as $F^{cf_2} = [f^{cf_2}_1, ..., f^{cf_2}_S]$.", "Finally, we can calculate the similarity between the factual and the counterfactual features (on the blue branch) by leveraging the dot-product operation, as follows: $L_{cf_2} = \frac{1}{S} \sum_{i}^{S} f_i \cdot f^{cf_2}_i$. (6)", "This counterfactual loss forces the model to emphasize the syntactic information related to POS tags.", "The overall loss function used in training is as follows: $L = L_{task} + \lambda (L_{cf_1} + L_{cf_2})$, (7) where $L_{task}$ is the task-specific loss, i.e., a cross-entropy loss, and $\lambda$ is a scale to balance between the task-specific loss and our proposed counterfactual losses.", "In this section, we evaluate our COSY method for cross-lingual understanding under both zero-shot and few-shot settings.", "For the zero-shot setting, we use English for training and evaluate the model on different target languages.", "For the few-shot setting, we follow the implementation in (Nooralahzadeh et al., 2020) and use the development set of the target languages for model fine-tuning³.", "We evaluate our method on the natural language inference (NLI) and question answering (QA) tasks.", "We briefly introduce the datasets used in our experiments as follows.", "Natural Language Inference (NLI).", "Given two sentences, NLI asks for the relationship between the two sentences, which can be entailment, contradiction, or neutral.", "We conduct experiments on XNLI (Conneau et al., 2018) and evaluate our method on 13 target languages⁴.", "Question Answering (QA).", "In this paper, we consider the QA task that asks the model to locate the [footnote 3: All the results and analyses are under the zero-shot setting by default, except for Table 2.]", "answer from a passage given a question.", "We conduct experiments on MLQA (Lewis et al., 2019) and XQUAD (Artetxe et al., 2020).", "COSY is evaluated on 7 languages on MLQA and 10 languages on XQUAD (with Thai excluded).", "In data preprocessing, we feed the same syntactic information to each of the subwords in the same word after tokenization.", "Our implementation of the pre-trained language models (mBERT and XLM-R) is based on Hugging Face's Transformers (Wolf et al., 2020).", "We select the checkpoint and set hyper-parameters, e.g.,
 the learning rate and $\lambda$ in the loss function, based on the performance on the corresponding development sets.", "We select the learning rate from {7.5e-6, 1e-5, 3e-5} and fix the batch size to 32.", "We select the dimension $d'$ from {100, 300}.", "$\lambda$ in the counterfactual loss is set to 0.1 (see Figure 4).", "A linear warm-up strategy for the learning rate is adopted over the first 10% of optimization steps.", "Adam (Kingma and Ba, 2014) is adopted as the optimizer.", "All experiments are conducted on a workstation with dual NVIDIA V100 32GB GPUs.", "We compare our method with naive fine-tuning and the state-of-the-art methods.", "The overall results on three benchmarks are presented in Table 1 (zero-shot setting).", "Comparison with Naive Fine-tuning.", "Naive Fine-tuning (Wu and Dredze, 2019; Liang et al., 2020; Hu et al., 2020) directly fine-tunes the pre-trained language model on downstream tasks as in (Devlin et al., 2019).", "From Table 1 and Table 2, we can observe that COSY consistently outperforms the naive fine-tuning method on all datasets, e.g., by an average of 1.9 percentage points (accuracy) and 2.9 percentage points (F1) on XNLI and XQUAD with XLM-R large in the zero-shot setting.", "These observations demonstrate the effectiveness of COSY and suggest that universal syntax, as a language-agnostic feature, can enhance transferability for cross-lingual understanding.", "Furthermore, the results show that COSY is able to work with different backbones and thus is model-agnostic.", "Comparison with the State of the Art.", "We first outline the SOTA zero-shot (few-shot) cross-lingual methods we compared with as follows: (1) XMAML-One (Nooralahzadeh et al., 2020) borrows the idea from meta-learning.", "Specifically, XMAML-One utilizes auxiliary-language development data in training, e.g., using the development set of Spanish in training to assist German on MLQA.", "XMAML-One reports the results based on the most beneficial auxiliary language.", "(2) STILT (Phang et al., 2020) augments intermediate task training before fine-tuning on the target task, e.g., adding training on HellaSwag (Zellers et al., 2019) before training on the NLI task.", "STILT also reports results with the most beneficial intermediate task.", "(3) LAKM (Yuan et al., 2020) first mines knowledge phrases along with passages from the Web.", "These Web data are then used to enhance the phrase boundaries through a masked language model objective.", "Note that LAKM is only evaluated on three languages of MLQA.", "On the one hand, we observe that COSY surpasses the compared SOTA methods over all evaluation metrics.", "Although meta-learning methods (Finn et al., 2017; Gu et al., 2018; Sun et al., 2019) advance the state-of-the-art performance for few-shot learning, our COSY still outperforms the meta-learning-based method, i.e.,
 XMAML-One, by 1.1 percentage points in the few-shot setting.", "On the other hand, the superiority of COSY is also reflected in other aspects, which are shown in Table 1.", "Specifically, COSY does not require additional datasets or a cumbersome data selection process, which makes it more convenient and resource-saving.", "Ablation Study.", "In Table 3, we show the MLQA, XQUAD and XNLI results in 4 ablative settings, to evaluate the approach when we (1) only utilize the SAN-Black branch; (2) utilize the SAN-Black branch with an intuitive gate mechanism to control the information of the pre-trained language model and syntax; (3) utilize the SAN-Black branch and the SAN-Red branch; (4) utilize the SAN-Black branch and the SAN-Blue branch.", "formance in all settings.", "Syntax features are incorporated into the models in (1)-(5), and all of them outperform the naive fine-tuning method, which demonstrates the effectiveness of universal syntax.", "By analyzing the settings one by one, we can observe that SAN-Black only attains limited improvement compared to naive fine-tuning, since syntax is incorporated in the model but overlooked.", "The gate mechanism (2) fails to solve the overlooking issue.", "Both (3) and (4), with counterfactual training, are able to bring gains compared to (1), and the results indicate that dependency relations are more effective than POS labels.", "We also observe that our full method (5) does not accumulate the gains from (3) and (4).", "One explanation could be that part of the information provided by the dependency relations and POS labels overlaps.", "For instance, if we see an edge of relation word$_a$ $\xrightarrow{\text{AMOD}}$ word$_b$, we may infer that word$_a$ is NOUN and word$_b$ is ADJ.", "Effect of $\lambda$.", "We now study the impact of the scale value $\lambda$ on the counterfactual losses.", "For clarity, we show the results with different values of $\log \lambda$ in Figure", "4. We can observe that COSY attains the [Figure 5: F1-measure drop (%) with a standard normal distribution perturbation on MLQA and XQUAD (mBERT).]", "highest results when $\lambda = 0.1$ on both MLQA and XNLI.", "As the $\lambda$ value drops, the effect of the counterfactual loss is also smaller and the performance gets closer to that of naive fine-tuning (red dotted line).", "If a large value of $\lambda$ is applied, e.g., $\lambda = 1$, the model begins to over-emphasize the syntax and semantics are overlooked, which leads to a significant decrease in performance.", "Effect of COSY.", "In this part, we first study whether the counterfactual training method indeed guides the model to focus more on syntactic information.", "We conduct this analysis on COSY and SAN-Black.", "Since it is non-trivial to measure the utilization of syntax in a straightforward way, we adopt a standard way to measure the importance of the neurons in deep models (Kadar et al., 2017).", "Specifically, we perturb the syntactic features with Gaussian noise on the test data and check whether our model is more easily affected by the syntax perturbation.", "If so, then it verifies that our model indeed relies more on", "syntax.", "The results are shown in Figure 5.", "
We observe that the performance drop of COSY is larger than that of SAN-Black.", "Meanwhile, we also explore whether COSY is beneficial for yielding more meaningful syntax embeddings than SAN-Black.", "Specifically, we compute the correlation score (absolute cosine similarity) between the embedding of a syntactic relation and the corresponding inverse relation of the [Table 4: Results of different ways of generating counterfactual syntax, with mBERT as backbone. Columns are MLQA EM/F1 and XQUAD EM/F1: (1) 44.8/61.7 and 52.2/67.3; (2) 45.1/62.0 and 53.1/68.1; (3) 44.9/61.9 and 52.7/67.8; (4) 45.0/62.0 and 53.2/68.0; Current 45.2/62.1 and 53.2/68.1.]", "same type.", "For COSY, we observe that the score of the related types is 42.4 times larger than that of two randomly selected embeddings (averaged over 10000 runs).", "However, for SAN-Black, its score is only 1.4 times larger than that of two randomly selected embeddings.", "This demonstrates that COSY attains more meaningful syntax representations than SAN-Black.", "Counterfactual Syntax Generation.", "Here we analyze alternative ways of counterfactual syntax generation.", "Specifically, we design the following variants and report the results in Table 4: (1) we not only replace edge types, but also replace connections, for counterfactual dependency graph construction; (2) for each input sequence, we create 5 counterfactual dependency graphs and 5 sets of counterfactual POS tags, and the counterfactual loss is the average over the 5 sets; (3) we replace the factual syntax with a fixed type, e.g., a padding type, instead of a random type from all types; (4) in each generating process, we only replace 50% of the factual syntax.", "Comparing (1) with the result of SAN-Black, Blue in Table 3, we can see that (1) does not work.", "We believe that randomly changing connections in $G'$, e.g., creating an edge from the first token to the last token in a long passage, may have a significant effect on $\tilde{H}'$, which is undesirable for further optimization of the counterfactual loss.", "Results from (2) and (4) suggest that the number of generated counterfactual syntax sets and the ratio of randomization do not play an important role in COSY.", "It is also discovered that randomizing over all types is better than simple replacement with a fixed type.", "We study how to effectively plug in syntactic information for cross-lingual understanding.", "Specifically, we propose a novel counterfactual-syntax-based approach to emphasize the importance of syntax in cross-lingual models.", "We conduct extensive experiments on three cross-lingual benchmarks, and show that our approach can outperform the SOTA methods without additional datasets.", "For future work, we will combine our approach with other orthogonal methods, e.g., meta-learning, to further improve its effectiveness.", "This research is supported by the National Research Foundation, Singapore under its Strategic Capabilities Research Centres Funding Initiative, and partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME YIRG Grant (Project No. A20E6c0101), and its AME Programmatic Fund (Project No: A18A1b0045 and No: A18A2b0046).", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore." ]
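The relational attention of Eqs. (1)-(3) in the preceding sentence list can be sketched as below. This is a simplified reading of the equations, not the authors' released code; the dense (S, S) relation-embedding tensor and the omission of an edge mask are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def rgat_attention(h, W_Q, W_K, W_attn, rel_emb):
    """Relational graph attention scores of Eqs. (1)-(3).

    h:       (S, d) contextual token features from the language model
    W_Q/W_K: (d, d//2) query/key projections
    W_attn:  (d//2 + d_rel, 1) maps concatenated edge features to a scalar
    rel_emb: (S, S, d_rel) embedding of the relation on each edge (i -> j)
    Returns (S, S) attention weights alpha(v_i, v_j).
    """
    q = h @ W_Q                               # (S, d//2)
    k = h @ W_K                               # (S, d//2)
    e_s = q.unsqueeze(1) * k.unsqueeze(0)     # Eq. (2): e^s_ij, (S, S, d//2)
    edge = torch.cat([e_s, rel_emb], dim=-1)  # Concat(e^s_ij, e^r_ij)
    s = (edge @ W_attn).squeeze(-1)           # Eq. (1): (S, S)
    # Eq. (3): normalize over the nodes N_j pointing to v_j; a real
    # implementation would mask non-edges to -inf before the softmax.
    return F.softmax(s, dim=0)
```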
[ "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "other", "other" ]
[ "The variational autoencoder (VAE) imposes a probabilistic distribution (typically Gaussian) on the latent space and penalizes the KullbackLeibler (KL) divergence between the posterior and prior.", "In NLP, VAEs are extremely difficult to train due to the problem of KL collapsing to zero.", "One has to implement various heuristics such as KL weight annealing and word dropout in a carefully engineered manner to successfully train a VAE for text.", "In this paper, we propose to use the Wasserstein autoencoder (WAE) for probabilistic sentence generation, where the encoder could be either stochastic or deterministic.", "We show theoretically and empirically that, in the original WAE, the stochastically encoded Gaussian distribution tends to become a Dirac-delta function, and we propose a variant of WAE that encourages the stochasticity of the encoder.", "Experimental results show that the latent space learned by WAE exhibits properties of continuity and smoothness as in VAEs, while simultaneously achieving much higher BLEU scores for sentence reconstruction.", "1 1 Introduction Natural language sentence generation in the deep learning regime typically uses a recurrent neural network (RNN) to predict the most probable next word given previous words (Mikolov et al., 2010).", "Such RNN architecture can be further conditioned on some source information, for example, an input sentence, resulting in a sequence-to-sequence (Seq2Seq) model.", "Traditionally, sentence generation is accomplished in a deterministic fashion, i.e., the model uses a deterministic neural network to encode an 1 Our code is availabe at https://github.com/ HareeshBahuleyan/probabilistic_nlg A preliminary version of this paper was preprinted at https://arxiv.org/abs/1806.08462 input sentence to some hidden representations, from which it then decodes an output sentence using another deterministic neural network.", "Bowman et al. 
(2016) propose to use the variational autoencoder (VAE, Kingma and Welling, 2014) to map an input sentence to a probabilistic continuous latent space.", "VAE makes it possible to generate sentences from a distribution, which is desired in various applications.", "For example, in an open-domain dialog system, the information of an utterance and its response is not necessarily a one-to-one mapping, and multiple plausible responses could be suitable for a given input.", "Probabilistic sentence generation makes the dialog system more diversified and more meaningful (Serban et al., 2017; Bahuleyan et al., 2018).", "Besides, probabilistic modeling of the hidden representations serves as a way of posterior regularization (Zhang et al., 2016), facilitating interpolation (Bowman et al., 2016) and manipulation of the latent representation (Hu et al., 2017).", "However, training VAEs in NLP is more difficult than in the image domain (Kingma and Welling, 2014).", "The VAE training involves a reconstruction loss and a Kullback-Leibler (KL) divergence between the posterior and prior of the latent space.", "In NLP, the KL term tends to vanish to zero during training, leading to an ineffective latent space.", "Previous work has proposed various engineering tricks to alleviate this problem, including KL annealing and word dropout (Bowman et al., 2016).", "In this paper, we address the difficulty of training VAE sentence generators by using a Wasserstein autoencoder (WAE, Tolstikhin et al., 2018).", "WAE modifies VAE in that it requires the integration of the posterior to be close to its prior, where the closeness is measured with empirical samples drawn from the distributions.", "In this way, the encoder could be either stochastic or deterministic, but the model still retains probabilistic properties.", "Moreover, we show both theoretically and empirically that the stochastic Gaussian encoder in the original form tends to be a Dirac-delta function.", "We thus propose a WAE variant that encourages the encoder's stochasticity by penalizing an auxiliary KL term.", "Experiments show that the sentences generated by WAE exhibit properties of continuity and smoothness as in VAE, while achieving a much higher reconstruction performance.", "Our proposed variant further encourages the stochasticity of the encoder.", "More importantly, WAE is robust to hyperparameters and much easier to train, without the need for KL annealing or word dropout as in VAE.", "In a dialog system, we demonstrate that WAEs are capable of generating better-quality and more diverse sentences than VAE.", "Base Model: Deterministic Autoencoder (DAE).", "DAE encodes an input sentence with a recurrent neural network (RNN) and then decodes the same sentence through another RNN.", "For the encoder, the hidden state of the last word serves as the latent representation of the input sentence $x$.", "The latent representation is denoted as $z$.", "We feed $z$ to the decoder RNN, which predicts one word at a time using a softmax layer, given by $p(x_t \mid z, x_{<t})$.", "The training objective for DAE is the sequence-aggregated cross-entropy loss, given by $J = -\sum_{n=1}^{N} \sum_{t=1}^{|x^{(n)}|} \log p(x^{(n)}_t \mid z^{(n)}, x^{(n)}_{<t})$, (1) where the superscript $(n)$ indicates the $n$-th data point among $1, \ldots, N$.", "In DAE, the latent space is encoded and then decoded in a deterministic way, i.e., there is no probabilistic modeling of the hidden space.", "The hidden representations of data may be located on an arbitrary manifold (Figure 1a), which is not 
suitable for probabilistic generation.", "Variational Autoencoder (VAE).", "VAE extends DAE by imposing a prior distribution $p(z)$ on the latent variable $z$, which is typically set to the standard normal $\mathcal{N}(0, I)$ (Kingma and Welling, 2014).", "Given an input sentence $x$, we would like to model the posterior of $z$ by another normal distribution, $q(z|x) = \mathcal{N}(\mu_{\text{post}}, \text{diag}\,\sigma^2_{\text{post}})$, where $\mu_{\text{post}}$ and $\sigma^2_{\text{post}}$ are the outputs of the encoder.", "In the training of VAE, $z$ is sampled from $q(z|x)$, and the training objective is to maximize a variational lower bound of the likelihood of the data.", "This is equivalent to minimizing the (expected) reconstruction loss similar to (1), while being regularized by the KL divergence between $q(z|x)$ and $p(z)$, given by $J = \sum_{n=1}^{N} \big[ -\mathbb{E}_{z^{(n)} \sim q} \sum_{t=1}^{|x^{(n)}|} \log p(x^{(n)}_t \mid z^{(n)}, x^{(n)}_{<t}) + \lambda_{\text{VAE}} \, \text{KL}(q(z^{(n)}|x^{(n)}) \,\|\, p(z)) \big]$, (2) where in the expectation $z^{(n)}$ is sampled from $q(z|x^{(n)})$, and $\lambda_{\text{VAE}}$ is a hyperparameter balancing the two terms.", "Since VAE penalizes the divergence of $z$'s posterior from its prior, it serves as a way of posterior regularization, making it possible to generate sentences from the continuous latent space.", "However, the two objectives in (2) are contradictory to each other, as argued by Tolstikhin et al. (2018).", "VAE pushes the posterior of $z$, given any input $x^{(n)}$, to be close to its prior, i.e., every blue ellipse in Figure 1b should be close to the red one.", "This makes perfect reconstruction impossible.", "Further, VAE is difficult to train in NLP due to the problem of KL collapse, where the KL term tends to be zero, meaning that the encoder captures no information and the decoder learns an unconditioned language model.", "This phenomenon is observed in variational auto-regressive decoders using RNN.", "To alleviate this problem, existing tricks include KL annealing and word dropout (Bowman et al., 2016), but both require extensive engineering.", "An alternative way of posterior regularization is to impose a constraint that the aggregated posterior of $z$ should be the same as its prior (Tolstikhin et al.,", "2018), i.e., $q(z) \overset{\text{def}}{=} \sum_x q(z|x)\, p_D(x) \overset{\text{set}}{=} p(z)$, where $p_D$ is the data distribution.", "This is also demonstrated in Figure 1c.", "By contrast, VAE requires that $q(z|x)$ should be close to $p(z)$ for every input sentence $x$.", "For computational purposes, Tolstikhin et al. 
(2018) relax the above constraint by penalizing the Wasserstein distance between $q(z)$ and $p(z)$.", "In particular, it is computed by the Maximum Mean Discrepancy (MMD), defined as $\text{MMD} = \big\| \int k(z, \cdot)\, dP(z) - \int k(z, \cdot)\, dQ(z) \big\|_{\mathcal{H}_k}$, where $P(z)$ and $Q(z)$ are cumulative density functions.", "$\mathcal{H}_k$ refers to the reproducing kernel Hilbert space defined by the kernel $k$, which is often chosen as the inverse multiquadratic kernel $k(x, y) = \frac{C}{C + \|x - y\|_2^2}$ for high-dimensional Gaussians.", "One advantage of the Wasserstein distance is that it can be estimated by empirical samples as $\widehat{\text{MMD}} = \frac{1}{N(N-1)} \sum_{n \neq m} k(z^{(n)}, z^{(m)}) + \frac{1}{N(N-1)} \sum_{n \neq m} k(\tilde{z}^{(n)}, \tilde{z}^{(m)}) - \frac{2}{N^2} \sum_{n,m} k(z^{(n)}, \tilde{z}^{(m)})$, (3) where $\tilde{z}^{(n)}$ is a sample from the prior $p(z)$, and $z^{(n)}$ is a sample from the aggregated posterior $q(z)$, which is obtained by sampling $x^{(n)}$ from the data distribution and then sampling $z^{(n)}$ from $q(z|x^{(n)})$.", "In summary, the training objective of WAE is $J_{\text{WAE}} = -\sum_{n=1}^{N} \sum_{t=1}^{|x^{(n)}|} \log p(x^{(n)}_t \mid z^{(n)}, x^{(n)}_{<t}) + \lambda_{\text{WAE}} \, \widehat{\text{MMD}}$, (4) where $\lambda_{\text{WAE}}$ balances the MMD penalty and the reconstruction loss.", "Alternatively, the dual form (adversarial loss) can also be used for WAE (Zhao et al., 2018).", "In our preliminary experiments, we found MMD similar to but slightly better than the adversarial loss.", "The difference between our work and Zhao et al. (2018), who extend the original WAE to sequence generation, is that we address the KL annealing problem of VAE and further analyze the stochasticity of WAE from a theoretical perspective, as follows.", "WAE with Auxiliary Loss.", "In WAE, the aggregated posterior $q(z)$ involves an integration over the data distribution, which allows using a deterministic function to encode $z$ as $z = f_{\text{encode}}(x)$, as suggested by Tolstikhin et al. (2018).", "This would largely alleviate the training difficulties as in VAE, because backpropagating the gradient into the encoder no longer involves a stochastic layer.", "The stochasticity of the encoder, however, is still a desired property in some applications, for example, generating diverse responses in a dialog system.", "We show both theoretically and empirically that a dangling Gaussian stochastic encoder could possibly degrade to a deterministic one.", "Theorem 1. 
Suppose we have a Gaussian family $\mathcal{N}(\mu, \text{diag}\,\sigma^2)$, where $\mu$ and $\sigma$ are parameters.", "The covariance is diagonal, meaning that the variables are independent.", "If the gradient of $\sigma$ completely comes from the sample gradient and $\sigma$ is small at the beginning of training, then the Gaussian converges to a Dirac delta function with stochastic gradient descent, i.e., $\sigma \rightarrow 0$.", "(See Appendix A for the proof.)", "To alleviate this problem, we propose a simple heuristic that encourages the stochasticity of the encoder.", "In particular, we penalize, for every data point, a KL term between the predicted posterior $q(z|x) = \mathcal{N}(\mu_{\text{post}}, \text{diag}\,\sigma^2_{\text{post}})$ and a Gaussian with covariance $I$ centered at the predicted mean, i.e., $\mathcal{N}(\mu_{\text{post}}, I)$.", "This is shown in Figure 1d, where each posterior is encouraged to stretch with covariance $I$.", "Formally, the loss is $J = J_{\text{rec}} + \lambda_{\text{WAE}} \, \widehat{\text{MMD}} + \lambda_{\text{KL}} \sum_n \text{KL}\big(\mathcal{N}(\mu^{(n)}_{\text{post}}, \text{diag}(\sigma^{(n)}_{\text{post}})^2) \,\big\|\, \mathcal{N}(\mu^{(n)}_{\text{post}}, I)\big)$ (5) While our approach appears heuristic, the next theorem shows its theoretical justification.", "Theorem 2. Objective (5) is a relaxed optimization of the WAE loss (4) with a constraint on $\sigma_{\text{post}}$.", "(See Appendix B for the proof.)", "We will show empirically that such an auxiliary loss enables us to generate smoother and more diverse sentences in WAE.", "It, however, does not suffer from KL collapse as in VAEs.", "The auxiliary KL loss that we define for stochastic WAE is computed against a target distribution $\mathcal{N}(\mu^{(n)}_{\text{post}}, I)$ for each data sample $x^{(n)}$.", "Here, the predicted posterior mean itself is used in the target distribution.", "As a result, this KL term does not force the model to learn the same posterior for all data samples (as in VAE), and thus, the decoder does not degrade to an unconditioned language model.", "We evaluate WAE in sentence generation on the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) as well as dialog response generation.", "All models use a single-layer RNN with long short-term memory (LSTM) units for both the encoder and decoder.", "Appendix C details our experimental settings.", "VAE training.", "VAE is notoriously difficult to train in the RNN setting.", "While different researchers have their own practice of training VAE, we follow our previous experience (Bahuleyan et al., 2018) and adopt the following tricks to stabilize the training: (1) $\lambda_{\text{VAE}}$ was annealed in a sigmoid manner.", "We monitored the value of KL and stopped annealing once it reached its peak value, known as peaking annealing.", "(2) For word dropout, we started with no dropout, and gradually increased the dropout rate by 0.05 every epoch until it reached a value of 0.5.", "The effect of KL annealing is further analyzed in Appendix D. 
3.1 SNLI Generation The SNLI sentences are written by crowdsourced human workers in an image captioning task.", "It is a massive corpus but with comparatively simple sentences (examples shown in Table 4).", "This task could be thought of as domain-specific sentence generation, analogous to handwritten digit generation in computer vision.", "In Table 1, we compare all methods in two aspects.", "(1) We evaluate by BLEU (Papineni et al., 2002) how well an autoencoder preserves input information in a reconstruction task.", "(2) We also evaluate the quality of probabilistic sentence generation from the latent space.", "Although there is no probabilistic modeling of the latent space in DAE, we nevertheless draw samples from $\mathcal{N}(0, I)$, which could serve as a non-informative prior.", "Perplexity (PPL) evaluates how fluent the generated sentences are.", "This is given by a third-party n-gram language model trained on the Wikipedia dataset.", "The unigram-KL (UniKL) evaluates if the word distribution of the generated sentences is close to that of the training corpus.", "Other surface metrics (entropy of the word distribution and average sentence length) also measure the similarity of the latent-space-generated sentence set to that of the corpus.", "We see that DAE achieves the best BLEU score, which is not surprising because DAE directly optimizes the maximum likelihood of data as a surrogate of word prediction accuracy.", "Consequently, DAE performs poorly for probabilistic sentence generation as indicated by the other metrics.", "VAE and WAE have additional penalties that depart from the goal of reconstruction.", "However, we see that WAEs, when trained with appropriate hyperparameters ($\lambda_{\text{WAE}}$, $\lambda_{\text{KL}}$), achieve close performance to DAE, outperforming VAE by 40 BLEU points.", "This is because VAE encodes each input's posterior to be close to the prior, from which it is impossible to perfectly reconstruct the data.", "Comparing the deterministic and stochastic encoders in WAE, we observe the same trade-off between reconstruction and sampling.", "However, our proposed stochastic encoder, with $\lambda_{\text{KL}} = 0.1$ for WAE, consistently outperforms VAE in the contradictory metrics BLEU and PPL.", "The hyperparameters $\lambda_{\text{WAE}} = 10.0$ and $\lambda_{\text{KL}} = 0.01$ appear to have the best balance between reconstruction, sentence fluency, and similarity to the original corpus.", "Moreover, all our WAEs are trained without annealing or word dropout.", "This is significantly simpler than training a VAE, whose KL annealing typically involves a number of engineering tricks, such as the time step when KL is included, the slope of annealing, and the stopping criterion for annealing.", "We extend WAE to an encoder-decoder framework (denoted by WED) and evaluate it on the DailyDialog corpus (Li et al., 2017).²", "We follow Bahuleyan et al. 
(2018), using the encoder to capture an utterance and the decoder to generate a reply.", "Table 2 shows that WED with a deterministic encoder (WED-D) is better than the variational encoder-decoder (VED) in BLEU scores, but the generated sentences lack variety, which is measured by output entropy and the percentage of distinct unigrams and bigrams (Dist-1/Dist-2, Li et al., 2016), evaluated on the generated test set responses.", "We then applied our stochastic encoder for WED and see that, equipped with our KL-penalized stochastic encoder, WED-S outperforms DED, VED, and WED-D in all diversity measures.", "WED-S also outperforms VED in generation quality, consistent with the results in Table 1. 4 Conclusion In this paper, we address the difficulty of training VAE by using a Wasserstein autoencoder (WAE) for probabilistic sentence generation.", "WAE implementation can be carried out with either a deterministic encoder or a stochastic one.", "The deterministic version achieves high reconstruction performance, but lacks diversity for generation.", "The stochastic encoder in the original form may collapse to a Dirac delta function, shown by both a theorem and empirical results.", "We thus propose to encourage stochasticity by penalizing a heuristic (footnote 2: In our pilot experiment, we obtained a BLEU-4 score of 6 by training a pure Seq2Seq model with LSTM units for 200 epochs, whereas Li et al. (2017) report 0.009 BLEU-4 and Luo et al. (2018) report 2.84 BLEU-4.", "Due to our unreasonably high performance, we investigated this in depth and found that the training and test sets of the DailyDialog corpus have overlaps.", "For the results reported in our paper, we have removed duplicate data in the test set, which is also available on our website (Footnote 1).", "To the best of our knowledge, we are the first to figure out the problem, which, unfortunately, makes comparison with previous work impossible.)", "KL loss for WAE, which turns out to be a relaxed optimization of the Wasserstein distance with a constraint on the posterior family.", "We evaluated our model on both SNLI sentence generation and dialog systems.", "We see that WAE achieves reconstruction performance as high as DAE, while retaining the probabilistic property of VAE.", "Our KL penalty further improves the stochasticity of WAE, as we achieve the highest performance in all diversity measures.", "We would like to acknowledge Yiping Song and Zhiliang Tian for their independent investigation on the DailyDialog corpus.", "We also thank Yanran Li, one of the authors who released DailyDialog, for discussion on this issue.", "This work was supported in part by the NSERC grant RGPIN-261439-2013 and an Amazon Research Award." ]
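The empirical MMD penalty of Eq. (3) with the inverse multiquadratic kernel can be computed as in the sketch below; it is written from the equations in the text rather than taken from the released code, and the value of C is an assumption.

```python
import torch

def imq_kernel(x, y, C=2.0):
    """Inverse multiquadratic kernel k(x, y) = C / (C + ||x - y||^2),
    computed for all pairs of rows in x (N, d) and y (M, d)."""
    return C / (C + torch.cdist(x, y, p=2) ** 2)

def mmd_penalty(z_post, z_prior):
    """Empirical MMD of Eq. (3): z_post are N samples from the aggregated
    posterior q(z); z_prior are N samples from the prior p(z)."""
    n = z_post.size(0)
    off_diag = 1.0 - torch.eye(n, device=z_post.device)
    k_pp = imq_kernel(z_post, z_post)
    k_qq = imq_kernel(z_prior, z_prior)
    k_pq = imq_kernel(z_post, z_prior)
    # Within-sample sums exclude the diagonal (n != m in Eq. (3)).
    return (k_pp * off_diag).sum() / (n * (n - 1)) \
         + (k_qq * off_diag).sum() / (n * (n - 1)) \
         - 2.0 * k_pq.sum() / (n * n)
```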
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "other", "other", "other" ]
[ "Fast and reliable evaluation metrics are key to R&D progress.", "While traditional natural language generation metrics are fast, they are not very reliable.", "Conversely, new metrics based on large pretrained language models are much more reliable, but require significant computational resources.", "In this paper, we propose FrugalScore, an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance.", "Experiments with BERTScore and MoverScore on summarization and translation show that FrugalScore is on par with the original metrics (and sometimes better), while having several orders of magnitude less parameters and running several times faster.", "On average over all learned metrics, tasks, and variants, FrugalScore retains 96.8% of the performance, runs 24 times faster, and has 35 times less parameters than the original metrics.", "We make our trained metrics publicly available 1 and easily accessible via Hugging Face, to benefit the entire NLP community and in particular researchers and practitioners with limited resources.", "Automatic evaluation metrics are the only way to monitor the training of, evaluate, and compare across models in a systematic, large-scale way, and are thus a critical component of the research and development ecosystem in machine learning.", "To get adopted in practice, evaluation metrics need to be both reliable and affordable, i.e., fast and easy to compute.", "While some metrics meet these criteria, such as precision and recall in information retrieval, root mean square error in regression, etc., finding suitable metrics is still an open problem in the field of Natural Language Generation (NLG) (Novikova et al., 2017).", "*Equal contribution 1 https://github.com/moussaKam/FrugalScore Indeed, historical n -gram matching metrics such as ROUGE (Lin, 2004) for summarization, BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) for translation, while affordable, are not very reliable, as they are based on surface-form matching only, i.e., lexical similarity, and have thus no sense of semantic similarity.", "For instance, it makes little sense to use ROUGE for the evaluation of abstractive summarization systems (which are becoming the norm), or whenever the generated text paraphrases the original text.", "Following the advent of transfer learning in NLP, new NLG metrics based on large pretrained language models have recently been proposed, such as BERTScore (Zhang et al., 2019) and MoverScore (Zhao et al., 2019).", "By relying on contextual embeddings, these metrics capture semantics and are therefore much more reliable.", "However, due to the sheer size of the underlying models, these metrics pose environmental issues (Strubell et al., 2019), take time to compute, and require access to significant computational resources, so they are not accessible by everyone in the NLP community.", "For example, we were not able to run some of the best variants of BERTScore 2 , based on DeBERTa-Large and DeBERTa-XLarge (He et al., 2020) on a 12GB GPU.", "Even when enough GPU memory is available, relying on such large models is still associated with extended runtimes, which can impede the progress of experiments when used once or more per epoch for validation and monitoring purposes.", "To address this problem, we propose in this paper FrugalScore, an approach to learn a lightweight version of BERTScore, MoverScore, and more generally any metric based on a large pretrained language model.", "Our 
contributions can be summarized as follows: 1) Our compact models have several orders of magnitude fewer parameters than the original metrics and run several times faster, while retaining most of their original performance.", "Footnote 2: From BERTScore's authors: https://tinyurl.com/8cwyter2", "We even outperform the original metrics in some cases (Footnote 3).", "Footnote 3: Hence the name FrugalScore, as frugal engineering is defined as achieving more with fewer resources.", "2) Our metrics are faster not only because of the much smaller number of parameters, but also because they do not rely on any similarity function.", "3) Regardless of how expensive the original metric is, querying our trained metrics always has the same low, fixed cost.", "This decoupling is a major advantage, as the size of pretrained language models has recently been growing tremendously (e.g., Brown et al. (2020)).", "Related work falls into two categories: unsupervised and supervised metrics.", "To address the limitations of ROUGE and BLEU, variants based on static word embeddings (Mikolov et al., 2013) were developed, e.g., ROUGE-WE (Ng and Abrecht, 2015), BLEU2VEC (Tättar and Fishel, 2017), and MEANT 2.0 (Lo, 2017).", "While using word vectors is progress over strict n-gram matching, static embeddings are still very limited, as they do not capture polysemy, i.e., the fact that words have different meanings in different contexts.", "More recently, the focus has shifted to harnessing the power of the contextualized embeddings produced by large pretrained language models.", "For instance, the Sentence Mover's Similarity (Clark et al., 2019) represents sentences as the average of their ELMo word embeddings (Peters et al., 2018) and measures the minimum cost of transforming one summary into the other, using a modified version of the Word Mover's Distance (Kusner et al., 2015).", "BERTR (Mathur et al., 2019) computes approximate recall based on the pairwise cosine similarity between the BERT embeddings (Devlin et al., 2018) of the words in automatic and reference translations.", "Mark-Evaluate (Mordido and Meinel, 2020) is a family of metrics that treat contextualized word or sentence embeddings derived from BERT as population samples, evaluating language generation with population estimation methods used in ecology.", "Finally, the recently introduced BERTScore (Zhang et al., 2019) and MoverScore (Zhao 
et al., 2019) are general-purpose NLG evaluation metrics that are becoming widely used.", "The main difference between BERTScore and MoverScore lies in the function used to compute the similarity between the representations of the two sequences $x = x_1, \ldots, x_k$ and $y = y_1, \ldots, y_l$.", "We experimented with these two metrics, so we provide more details about them in what follows.", "BERTScore first computes the pairwise cosine similarity between the representations of the tokens in each sequence, and uses greedy matching to match each token to the most similar one in the other sequence.", "Given two pre-normalized vector sequences $x$ and $y$, BERTScore computes: $R_{\text{BERT}} = \frac{1}{|x|} \sum_{x_i \in x} \max_{y_j \in y} x_i^\top y_j$ (1) and: $P_{\text{BERT}} = \frac{1}{|y|} \sum_{y_i \in y} \max_{x_j \in x} y_i^\top x_j$ (2) The F1-score is classically obtained as: $F_{\text{BERT}} = \frac{2 P_{\text{BERT}} R_{\text{BERT}}}{P_{\text{BERT}} + R_{\text{BERT}}}$ (3)", "MoverScore uses an n-gram generalization of the Word Mover's Distance (WMD) (Kusner et al., 2015) as its (dis)similarity function.", "More specifically, it solves for the optimal transportation flow matrix $F \in \mathbb{R}^{|x| \times |y|}$ between the two weighted sequences of n-grams: $\text{WMD}(x, y) = \min_{F} \langle C, F \rangle$ s.t. $F\mathbf{1} = f_x$, $F^\top\mathbf{1} = f_y$ (4), where $C$ is the transportation cost matrix ($C_{ij}$ is the Euclidean distance between $x_i$ and $y_j$), and $f_x \in \mathbb{R}^{|x|}_+$ and $f_y \in \mathbb{R}^{|y|}_+$ are the n-gram weight vectors.", "Note that by directly learning BERTScore's and MoverScore's full internal mapping (from sequence pairs to final scalar scores), FrugalScore internalizes their similarity functions.", "This not only provides a speedup at inference time, but also improves performance, as shown in section 5.", "Also related to our work are supervised metrics, which are directly trained on human evaluations.", "ROSE (Conroy and Dang, 2008) is a linear combination model of different variants of ROUGE using canonical correlation.", "BEER (Stanojević and Sima'an, 2014) is a learning-to-rank approach using word and character n-gram matching, and token ordering, as features to maximize correlation with human rankings of machine translation systems.", "$S^3$ (Peyrard et al., 2017) trains a regression model that takes the evaluation scores of several existing metrics and many hand-crafted features as input, and learns the best combination of them to approximate human summary judgments.", "DPMFcomb (Yu et al., 2015) and Blend (Ma et al., 2017) are combined metrics incorporating a vast number of lexical, syntactic, and semantic translation evaluation metrics, using ranking and regression SVMs, respectively.", "RUSE (Shimanaka et al., 2018) evaluates machine translation with a neural regressor based on universal sentence embeddings (e.g., InferSent (Conneau et al., 2017)).", "NUBIA (Kane et al., 2020) consists of three modules: a feature extractor based on RoBERTa (Liu et al., 2019) and GPT-2 (Radford et al., 2019) fine-tuned on language evaluation tasks, an aggregator trained to predict the quality of the hypothesis given the reference using the extracted features, and a calibrator mapping all predictions between 0 and 1.",
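To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of BERTScore's greedy matching over precomputed token embeddings (an illustration only, not the official implementation, which also offers IDF weighting and baseline rescaling):

```python
import numpy as np

def bertscore_f1(x_emb: np.ndarray, y_emb: np.ndarray) -> float:
    """Greedy-matching F1 for token embedding matrices of shape [k, d] and [l, d]."""
    # L2-normalize rows so that dot products are cosine similarities.
    x = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    y = y_emb / np.linalg.norm(y_emb, axis=1, keepdims=True)
    sim = x @ y.T                       # pairwise cosine similarities, [k, l]
    recall = sim.max(axis=1).mean()     # Eq. (1): each x_i takes its best y_j
    precision = sim.max(axis=0).mean()  # Eq. (2): each y_j takes its best x_i
    return 2 * precision * recall / (precision + recall)  # Eq. (3)
```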
"Differences.", "Like the aforementioned efforts, FrugalScore is a learned metric.", "However, it does not rely on any intermediate or handcrafted features, and, most importantly, it does not require training on human annotations.", "Supervision in FrugalScore is conducted on a synthetic dataset, as a trick to expose and learn the internal mapping of the unsupervised metrics of interest.", "Last but not least, unlike all the aforementioned methods, compression is central to FrugalScore, which is based on miniature versions of the models used by the original metrics.", "Knowledge distillation (KD) (Hinton et al., 2015) is the process of transferring knowledge from a large teacher model to a smaller student model to accomplish model compression (Buciluǎ et al., 2006).", "It was originally proposed in the domains of computer vision and speech recognition, then successfully adapted to NLP (Sanh et al., 2019).", "Distillation can be accomplished in three ways: (1) offline, where a teacher is first pre-trained and a student is then trained under the guidance of the teacher (Hinton et al., 2015); (2) online, where the student and the teacher are trained simultaneously (Zhang et al., 2018); and (3) self-distillation, where the same model plays the roles of both student and teacher, e.g., transferring the knowledge of a later exit layer into an earlier one of the same multi-exit network (Phuong and Lampert, 2019).", "Previous studies on KD mainly focused on classification problems (Gou et al., 2021).", "A few attempts have been made on regression problems (Chen et al., 2017; Saputra et al., 2019; Takamoto et al., 2020), in which special losses were proposed to train the student with respect to both the teacher's regression outputs and ground-truth scores.", "Unlike conventional distillation, our work is closer to data-free KD (Kang and Kang, 2021), where the student is trained in the absence of the dataset used to train the teacher.", "To transfer knowledge, we first create a synthetic dataset by annotating sequence pairs with a large model (the teacher), and then train a miniature model (the student) on that dataset, in an offline, regression setting.", "A work closely related to ours is BLEURT (Sellam et al., 2020).", "However, there are a number of significant differences from our approach.", "First, BLEURT continues the pretraining of an already pretrained BERT-based model on a synthetic dataset in a self-supervised way, whereas FrugalScore is directly trained to learn the scores of the metric of interest, in a supervised fashion.", "Also, BLEURT's synthetic dataset is made by perturbing Wikipedia sentences with mask-filling, backtranslation, and word dropping, whereas we use data sources other than Wikipedia, such as summarization and translation datasets, and only NLG models to induce perturbations.", "When creating its synthetic dataset, BLEURT automatically annotates the (original, perturbed) sequence pairs with numerical and categorical signals: BLEU, ROUGE, BERTScore, backtranslation likelihood, textual entailment (probability of three labels: entail, contradict, and neutral, given by BERT fine-tuned on MNLI), and a backtranslation flag.", "On the other hand, FrugalScore simply and directly annotates the sequence pairs with the metric to be learned.", "Furthermore, BLEURT requires a final fine-tuning phase on human ratings, like the supervised metrics described in subsection 2.2.", "BLEURT does not learn to generate a scalar until that final fine-tuning phase, so it cannot be used as a metric before that.", "Conversely, FrugalScore is trained from the start to be a metric, and the fine-tuning 
phase is optional.", "Also, BLEURT was designed for the evaluation of translation.", "The authors only test whether it can be applied to a different task by experimenting on the WebNLG (data-to-text) dataset (Gardent et al., 2017).", "Conversely, we focus on learning general text similarity metrics (e.g., BERTScore and MoverScore), so FrugalScore is task-agnostic by design.", "Finally, and above all, the objective of FrugalScore is model compression, whereas that of BLEURT is metric learning.", "FrugalScore proceeds in three phases, the last of which is optional.", "Phase 1.", "We create a synthetic dataset (see subsection 3.1) by sampling pairs of more or less related sequences and annotating them with the expensive metrics to be learned.", "This is a one-time operation that does not need to be repeated regardless of the model used in Phase 2.", "Phase 2.", "We continue the pretraining (see subsection 3.2) of a miniature pretrained language model on the synthetic dataset built in Phase 1.", "Here, the miniature model learns the internal mapping of the expensive metric, including any similarity function applied to the representations.", "Note that a different miniature is trained for each metric to be learned (we leave learning metric combinations as future work).", "The miniature can then be used in inference mode to generate scores for any never-seen pair of sequences.", "Phase 3 (optional).", "We fine-tune the miniature on human annotations, which, as shown in section 6, can boost performance.", "The objective here was to generate pairs of sequences mimicking the (reference, candidate) pairs found in NLG datasets, which are usually semantically related and in many cases paraphrase one another.", "We sampled our sequences from a variety of data sources, listed next.", "Summarization.", "For each document in the well-known CNN/DailyMail dataset (Nallapati et al., 2016), our goal was to generate several summaries differing in structure and quality.", "For this purpose, we used different pretrained seq2seq summarization models: BART-base and BART-large (Lewis et al., 2019), mBART (Liu et al., 2020), and BARThez (Kamal Eddine et al., 2021).", "BART is a seq2seq autoencoder with a Transformer architecture.", "The four models were fine-tuned for one epoch on 50k examples randomly sampled from the training set of CNN/DM, and were used to generate summaries for the whole training set of 287,112 documents, using greedy decoding.", "Note that we kept the 50k documents used for fine-tuning in the final generation pool, in order to create quality differences among summaries.", "Indeed, models are expected to better summarize the documents used for training than never-seen documents.", "We also used the human reference summaries, so that in the end, each document was associated with 5 summaries, resulting in 10 pairs of summaries per document.", "Backtranslation.", "We also generated paraphrases with backtranslation, by sampling sentences from the OpenSubtitles English monolingual corpus (Lison and Tiedemann, 2016) and translating them to French, Arabic, and German with OPUS-MT (Tiedemann and Thottingal, 2020), before translating them back to English; a sketch of this round trip is given below.", "We used OPUS-MT because of its ready-to-use checkpoints available for many language pairs.", "We ended up with 4 variants of each sentence (including the original one), resulting in 6 paraphrase pairs per sentence.",
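The backtranslation round trip described above can be sketched with the public OPUS-MT checkpoints in Hugging Face Transformers (generation settings are left at their defaults here, which may not match the authors' exact configuration):

```python
from transformers import MarianMTModel, MarianTokenizer

def backtranslate(sentences, pivot="fr"):
    """Round-trip English -> pivot language -> English with OPUS-MT."""
    fwd_name = f"Helsinki-NLP/opus-mt-en-{pivot}"
    bwd_name = f"Helsinki-NLP/opus-mt-{pivot}-en"
    fwd_tok = MarianTokenizer.from_pretrained(fwd_name)
    fwd = MarianMTModel.from_pretrained(fwd_name)
    bwd_tok = MarianTokenizer.from_pretrained(bwd_name)
    bwd = MarianMTModel.from_pretrained(bwd_name)

    def translate(texts, tok, model):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        return tok.batch_decode(model.generate(**batch), skip_special_tokens=True)

    return translate(translate(sentences, fwd_tok, fwd), bwd_tok, bwd)

# One paraphrase per pivot language (fr/ar/de) yields 4 variants per sentence.
paraphrase = backtranslate(["I have not seen him since last year."])
```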
"Denoising.", "To avoid bias towards summarization and translation, we also generated pairs of related sequences such that the first element in the pair was a Wikipedia segment and the second element was a BART-denoised version of it (Lewis et al., 2019).", "More precisely, we sampled 2M segments from Wikipedia such that the number of unigrams in these segments was uniformly distributed between 1 and 200.", "Our assumption was that enforcing variations in sequence length would help the learned metric to generalize.", "We then applied BART's text infilling and sentence permutation perturbation strategies to each segment.", "That is, multiple text spans were sampled and replaced with a [MASK] special token.", "The lengths of the spans were sampled from a Poisson distribution ($\lambda = 3$).", "50% of the tokens within the input segment were masked, and 20% of the masked text was replaced with random tokens (creating pathological examples to increase the robustness of the learned metric).", "The sentences in the input segment were then shuffled.", "We finally used a BART-Base checkpoint (Footnote 4) from the Fairseq library (Ott et al., 2019) to try to reconstruct the perturbed versions of the original sequences, hence creating variants of them.", "Footnote 4: https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz", "Annotating pairs.", "We sampled 4.5M sequence pairs uniformly from each of the aforelisted sources.", "These pairs were then annotated with the metrics to be learned.", "Note that this is a one-time operation that does not need to be repeated regardless of which models are trained downstream.", "In this work, we experimented with two recent expensive NLG metrics that rely on large pretrained language models, BERTScore (Zhang et al., 2019) and MoverScore (Zhao et al., 2019), presented in section 2.", "However, it is important to note that our method can be used with any other NLG metric.", "Note that for BERTScore, we used the F1 score $F_{\text{BERT}}$, as recommended by the authors (Zhang et al., 2019).", "For MoverScore, still following the authors (Zhao et al., 2019), we used the variant operating on unigrams, with IDF used to compute the vectors of weights.", "We continue the pretraining of three BERT miniatures (Footnote 5) on our synthetic dataset: BERT-Tiny ($L=2$, $H=128$), BERT-Small ($L=4$, $H=512$), and BERT-Medium ($L=8$, $H=512$), where $L$ is the number of layers and $H$ is the dimension of the embedding space.", "Footnote 5: https://huggingface.co/google", "These models have respectively 25 times, 3.78 times, and 2.64 times fewer parameters than BERT-Base.", "The concept of BERT miniatures was introduced by Turc et al. (2019) to test whether pretraining small models from scratch is competitive with distilling very large models.", "The miniature models have already been pretrained with masked language modeling and next sentence prediction objectives.", "We continue pretraining using the standard method introduced by Devlin et al. (2018).", "We concatenate the two sequences $x = x_1, \ldots, x_k$ and $y = y_1, \ldots, y_l$ in a given pair, separating them with a special [SEP] token.", "A special [CLS] token is also added at the beginning of the resulting sequence.", "The sequence of contextualized embeddings $z_{[CLS]}, x_1, \ldots, x_k, z_{[SEP]}, y_1, \ldots, y_l$ is then obtained.", "We finally add a fully connected layer on top that linearly projects the $z_{[CLS]}$ vector to a scalar $\hat{s}$.", "The model is trained to minimize the mean squared error (MSE) between the predicted score $\hat{s}_i$ and the metric to be learned $s_i$ (i.e., the annotation of the pair): $\ell = \frac{1}{N}\sum_{i=1}^{N} \lVert \hat{s}_i - s_i \rVert^2$ (5)",
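As a concrete illustration of this setup, here is a minimal training and scoring sketch with the Hugging Face Transformers library (the checkpoint is the public BERT-Tiny miniature; the example pair and its annotation are hypothetical, and real training would of course loop over the 4.5M synthetic pairs):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# num_labels=1 adds a single linear head on the [CLS] vector; with float
# labels, Transformers applies an MSE loss, matching Eq. (5).
name = "google/bert_uncased_L-2_H-128_A-2"  # BERT-Tiny
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

batch = tokenizer("a man is playing guitar", "someone plays the guitar",
                  return_tensors="pt")
target = torch.tensor([0.87])  # hypothetical expensive-metric annotation

out = model(**batch, labels=target)
out.loss.backward()  # gradients for one training step

# At inference time, scoring an unseen pair is a single forward pass:
model.eval()
with torch.no_grad():
    score = model(**batch).logits.squeeze().item()
```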
"When pretraining is over, the models can be further fine-tuned on smaller human-annotated datasets, as shown in section 6, or directly used to generate scores for unseen examples, as shown in section 4.", "Setup.", "We use a batch size of 32 and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of $3 \times 10^{-5}$, linear decay, and a warm-up over 6% of the total training steps, and we train each model for three epochs.", "We conducted the pretraining on a single TITAN RTX GPU (24GB).", "It took 10, 24, and 33 hours for the tiny, small, and medium miniatures, respectively.", "We rely on the Transformers library (Wolf et al., 2019) for all pretraining and fine-tuning experiments.", "In this section, FrugalScore is used in inference mode to generate scores directly after pretraining, i.e., no fine-tuning is performed (see section 6 for fine-tuning results).", "We evaluate on two text generation tasks: summarization and translation.", "We use evaluation datasets containing (reference, candidate) sequence pairs annotated with human scores assessing the quality of the candidates given the references.", "We measure the effectiveness of FrugalScore by computing the Pearson correlation of its scores with the human judgments and comparing it to that of the original metrics.", "We also take the number of parameters and the runtime into account.", "Text summarization.", "We use 4 multi-document summarization datasets from the Text Analysis Conference (TAC): TAC-2008, TAC-2009, TAC-2010, and TAC-2011.", "These datasets contain 48, 44, 46, and 44 clusters of documents, respectively, with 58, 55, 43, and 51 systems used to generate summaries.", "Each cluster forms a topic to be summarized and has 4 reference summaries.", "There are approximately 10k pairs in each dataset.", "Each pair is annotated with two human judgment scores: the Pyramid score (Harnly et al., 2005) and Responsiveness (Dang et al., 2008).", "The former measures the proportion of important semantic units (SCUs) in the reference summaries captured by the system summary, while the latter reflects the content coverage and readability of each summary.", "Machine translation.", "Our evaluation corpus is from the WMT-2019 shared task (Li et al., 2019).", "We consider all the to-English pairs: Chinese, Czech, German, Finnish, Russian, Lithuanian, and Kazakh to English.", "For each language, we use the test set, which contains several thousand reference-candidate pairs annotated with human ratings assessing translation quality.", "Table 1 reports the results averaged over the 4 TAC datasets and the 7 WMT to-English language pairs.", "Details are provided in Appendices A and B.",
"We benchmarked the metrics in terms of Pearson correlation with human scores, runtime, and number of parameters.", "We used two approaches to compute the Pearson correlations: summary-level (or segment-level) and system-level.", "In the former, a score is attributed to each of the output candidates, while in the latter, one single overall score is attributed to the system (by averaging its individual scores); both are sketched in code below.", "Rows a to c correspond to BERTScore with BERT miniatures as the underlying model.", "They are simple baselines added for the sake of comparison, to see what we get when BERTScore is used with the same number of parameters as FrugalScore.", "Rows d to g correspond to the expensive metrics that are learned by FrugalScore (in the respective sections of the bottom half of the table).", "They are BERTScore and MoverScore metrics where the underlying model is a large pretrained language model: BERT-Base ($L=12$, $H=768$), RoBERTa-Large ($L=24$, $H=1024$) (Liu et al., 2019), and DeBERTa-XLarge ($L=24$, $H=1536$) (He et al., 2020).", "Finally, rows i to xii correspond to FrugalScore.", "Subscripts refer to row labels and indicate which metric-model combination was used to annotate pairs.", "For instance, FrugalScore_d learned the metric of row d, i.e., BERTScore with BERT-Base.", "First, the results show that all FrugalScores, regardless of which metric they learned, significantly outperform the BERTScores with miniature models.", "These results suggest that FrugalScore is a better approach than using an existing metric with a lightweight underlying model.", "The reason is probably that in FrugalScore, the knowledge of the original unsupervised metric (based on a large model) is explicitly transferred to the miniature via the continuation of its pretraining on the synthetic dataset.", "That is, the miniature is actually learning a metric.", "By contrast, plugging a compressed version of a general-purpose language model into the original unsupervised metric simply makes it lose expressiveness and capacity.", "Second, we can clearly see that FrugalScore retains most of the performance of the original metric, while running several times faster and reducing the number of parameters by several orders of magnitude.", "On average over all metrics, tasks, and miniatures, FrugalScore retains 96.8% of the original performance, runs 24 times faster, and has 35 times fewer parameters.", "More precisely, on average across all metrics, FrugalScore-Tiny retains 97.7/94.7% of the original performance on TAC (pyramid score/responsiveness), while running 54 times faster and having 84 times fewer parameters.", "Its small and medium versions retain near-full performance in terms of responsiveness (98% and 97.7%) and even slightly outperform the original metrics in terms of pyramid score, while at the same time reducing the runtime and the number of parameters by 32 (resp. 21) and 13 (resp. 9) times.",
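The two correlation granularities described above can be computed as follows (a schematic sketch; the score containers are hypothetical placeholders for per-system metric scores and human ratings):

```python
import numpy as np
from scipy.stats import pearsonr

# scores[s] and human[s]: aligned arrays of metric scores and human ratings
# for the candidates produced by system s.
def summary_level(scores, human):
    m = np.concatenate([scores[s] for s in scores])
    h = np.concatenate([human[s] for s in scores])
    return pearsonr(m, h)[0]

def system_level(scores, human):
    m = [np.mean(scores[s]) for s in scores]
    h = [np.mean(human[s]) for s in scores]
    return pearsonr(m, h)[0]
```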
"On WMT, FrugalScore-Tiny retains 88.58% of the performance of the original metrics while running 14 times faster (and still having 84 times fewer parameters), and the small and medium versions of FrugalScore retain 95.71% and 98.06% of the original performance while still offering a 32-times (resp. 21-times) speedup and having 13 times (resp. 9 times) fewer parameters, on average.", "Interestingly, FrugalScore even improves on the performance of the original metrics in some cases.", "For example, on TAC, FrugalScore_g with BERT-Tiny (row x) improves the performance of the original MoverScore metric based on BERT-Base (row g) from 66.5 to 67.3 in terms of pyramid score, while reducing the number of parameters by 25 times and running 50 times faster.", "Other examples, also for TAC with the pyramid score, include FrugalScore_f with BERT-Small (row viii, +1.5 points) and FrugalScore_f with BERT-Medium (row ix, +1 point).", "Finally, the results of FrugalScore for different miniature sizes show that, on WMT, using larger models always improves performance (e.g., rows x, xi, xii).", "Interestingly, however, this observation does not hold on TAC (e.g., rows vi, viii, ix), and sometimes FrugalScore with the smallest miniature (BERT-Tiny) is superior (e.g., rows i and x).", "This finding suggests that the impact of the pretrained language model's size is task-dependent.", "To sum up, the results clearly show the effectiveness of FrugalScore in learning a cheaper, lighter, and faster version of the original metrics, while retaining most of their original performance.", "The system-level correlations, provided in Appendices C and D, corroborate these positive results.", "We also provide the correlations between the original and the learned metrics in Appendices E and F.", "It is interesting to note that a greater correlation with the original metric is not always associated with better performance.", "E.g., the tiny version of FrugalScore_g is the best (row x), even though it is the least correlated with the original metric.", "We test two hypotheses in this section: (1) whether fine-tuning on a human-annotated dataset is beneficial, and (2) when fine-tuning on human annotations, whether continuing pretraining on our synthetic dataset is useful.", "Because we cannot use the same dataset for fine-tuning and evaluation, we fine-tune a BERT-Small on each year of TAC 2008-2011 for 4 epochs, using two other years as the validation set and the remaining year as the test set.", "The best epoch is selected based on validation performance.", "We use a batch size of 32 and a learning rate of 2e-5 that linearly decreases to zero.", "Finally, we experiment with two scenarios: fine-tuning the miniature directly, without continuing its pretraining on our synthetic dataset, and fine-tuning it after the pretraining continuation (with annotations generated by BERTScore-BERT-Base); a sketch of this fine-tuning setup is given below.", "[Table 2, cross-tabulating pretraining continuation (yes/no) against the TAC-2008 to TAC-2011 test years, appears here.]", "Results.", "Results are reported in Table 2 in terms of summary-level Pearson correlations with human evaluations (Pyramid), averaged over 3 runs with different random seeds.", "First, it is clear that, in every case, continuing the pretraining on our synthetic dataset leads to a significant boost in performance.", "This is in accordance with Sellam et al. (2020), who found that pretraining was beneficial even in a supervised setting.",
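A sketch of the fine-tuning scenario just described, using the Transformers Trainer (the model is the miniature with its regression head from the earlier sketch; tac_train and tac_val are hypothetical placeholders for the TAC year splits):

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="frugalscore-small-tac",
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="linear",   # decays linearly to zero
    num_train_epochs=4,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,  # select the best epoch on validation
    metric_for_best_model="eval_loss",
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tac_train, eval_dataset=tac_val)
trainer.train()
```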
"Second, even if a direct comparison is not possible, when looking at the TAC Pyramid score of row ii in Table 1 (FrugalScore_d-BERT-Small), we can remark that fine-tuning after pretraining seems very beneficial too.", "Indeed, after fine-tuning, we reach on average 71, 67.5, 68.7, and 69.2 (depending on the split), which represents an overall gain of 4.4 points over the non-fine-tuned model (score of 64.7).", "To test the importance of each data source introduced in subsection 3.1, we created a training set containing sequence pairs uniformly and equally sampled from each source.", "We annotated these pairs with the BERTScore-BERT-Base metric and used them to continue the pretraining of a BERT-Small miniature.", "We also considered pairs drawn at random from the pairs generated with the other strategies.", "The motivation for random pairs was to sample negative examples, as seeing only positive examples (pairs of related sequences) could bias the learned metric.", "We then continued the pretraining of the BERT-Small miniature four times, each time excluding the pairs coming from a specific data source.", "We evaluated the learned metric on TAC-2008 to 2011 and on WMT-2019.", "Figure 1 shows the average improvements in the Pearson correlation with human judgments relative to training a model on all sources.", "Note that when training on all four sources, we sampled 30k pairs from each source (120k total), and when excluding a source, we sampled 40k pairs from each remaining source (120k total).", "We can clearly see that excluding the random pairs improves performance, while excluding any of the other data sources decreases performance.", "In other words, all our data sources are beneficial, and it is not necessary to add negative examples.", "We hypothesise that this is because NLG datasets typically do not contain completely unrelated pairs of sentences.", "Interestingly, the pairs generated with the backtranslation strategy have the greatest impact on performance.", "We proposed FrugalScore, an approach to learn a fixed, low-cost version of any expensive NLG evaluation metric.", "Experiments on summarization and translation tasks show that our FrugalScore versions of BERTScore and MoverScore retain most of the original performance in terms of correlation with human judgments, while running several times faster and having several orders of magnitude fewer parameters.", "On average over all learned metrics, tasks, and variants, FrugalScore retains 96.8% of the performance, runs 24 times faster, and has 35 times fewer parameters than the original metrics.", "This work was supported by the SUMM-RE project (ANR-20-CE23-0017)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "result", "abstain", "method", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "other" ]
[ "There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks.", "Building on the PROMPTTUNING approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPOT : S oft P r o mpt T ransfer.", "SPOT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task.", "We show that SPOT significantly boosts the performance of PROMPTTUNING across many tasks.", "More remarkably, across all model sizes, SPOT matches or outperforms standard MODELTUNING (which fine-tunes all model parameters) on the SUPERGLUE benchmark, while using up to 27,000 fewer task-specific parameters.", "To understand where SPOT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer.", "Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.", "The past few years have seen the rapid development of ever larger pre-trained language models, where it has repeatedly been shown that scaling up the model size is a key ingredient for achieving the best performance (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020).", "While this trend has continued to push the boundaries of possibility across various NLP benchmarks, the sheer size of these models presents a challenge for their practical application.", "For 100B+ parameter models, fine-tuning and deploying a separate instance (cid:70) Work done during an internship at Google Research.", "of the model for each downstream task would be prohibitively expensive.", "To get around the infeasibility of fine-tuning, Brown et al. (2020) propose PROMPTDESIGN , where every downstream task is cast as a language modeling task and the frozen pretrained model performs different tasks by conditioning on manual text prompts provided at inference time.", "They demonstrate impressive few-shot performance with a single frozen GPT-3 model, although its performance depends highly on the choice of the prompt (Zhao et al., 2021) and still lags far behind state-of-the-art fine-tuning results.", "More recent work explores methods for learning soft prompts (Liu et al., 2021b; Qin and Eisner, 2021; Li and Liang, 2021; Lester et al., 2021), which can be seen as additional learnable parameters injected into the language model.", "Lester et al. 
(2021) propose PROMPTTUNING , a simple method that learns a small task-specific prompt (a sequence of tunable tokens prepended to each example) for each downstream task during adaptation, to condition the frozen language model to perform the task; a schematic sketch of this mechanism is given below.", "Figure 2: An illustration of our generic (left) and targeted (right) SPOT approaches.", "Strikingly, as model capacity increases, PROMPTTUNING becomes competitive with MODELTUNING , which fine-tunes the entire model on each downstream task.", "Nevertheless, at smaller model sizes (below 11B parameters), there are still large gaps between PROMPTTUNING and MODELTUNING .", "In this paper, we propose SPOT: Soft Prompt Transfer, a novel transfer learning approach in the context of prompt tuning.", "SPOT first trains a prompt on one or more source tasks, and then uses the resulting prompt to initialize the prompt for a target (downstream) task.", "Our experiments show that SPOT offers significant improvements over PROMPTTUNING across tasks and model sizes.", "For instance, on the SUPERGLUE benchmark (Wang et al., 2019b), we obtain +10.1 and +2.4 point average accuracy improvements using the T5 BASE (220M parameter) and T5 XXL (11B parameter) models (Raffel et al., 2020), respectively.", "More importantly, SPOT is competitive with or outperforms MODELTUNING across all model sizes (see Figure 1).", "Motivated by these results, we investigate transferability between tasks through the lens of soft task prompts.", "Our goal is to answer two questions:", "(a) For a given target task, when does initializing the prompt from a source task boost performance?", "(b) Can we use task prompts to efficiently predict which source tasks will transfer well onto a novel target task?", "To answer (a), we conduct a systematic study of the T5 model using 26 NLP tasks in 160 combinations of source and target tasks.", "Our results indicate that many tasks can benefit each other via prompt transfer.", "To address (b), we interpret the learned task prompts as task embeddings to construct a semantic space of tasks and formalize the similarity between tasks.", "We design an efficient retrieval algorithm that measures task embedding similarity, allowing practitioners to identify source tasks that will likely yield positive transfer.", "To summarize, our main contributions are: (1) We propose SPOT , a novel prompt-based transfer learning approach, and show that scale is not necessary for PROMPTTUNING to match the performance of MODELTUNING ; on SUPERGLUE , SPOT matches or beats MODELTUNING across all model sizes.", "(2) We conduct a large-scale and systematic study on task transferability, demonstrating conditions under which tasks can benefit each other via prompt transfer.", "(3) We propose an efficient retrieval method that interprets task prompts as task embeddings to construct a semantic space of tasks, and measures task embedding similarity to identify which tasks could benefit each other.", "(4) To facilitate future work on prompt-based learning, we will release our library of task prompts and pretrained models, and provide practical recommendations for adapting our library to NLP practitioners at https://github.com/google-research/prompt-tuning/tree/main/prompt_tuning/spot .", 
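The prompt-tuning mechanism is straightforward to express in code; the following is a schematic PyTorch sketch of a soft prompt (illustrative only, and not the released implementation, which is in JAX/FLAX):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepends a sequence of tunable prompt vectors to the embedded input.
    During prompt tuning these are the only trainable parameters; the
    backbone language model stays frozen."""
    def __init__(self, prompt_len: int = 100, embed_dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.5)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: [batch, seq_len, embed_dim]
        batch_size = input_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([p, input_embeds], dim=1)  # [batch, L + seq_len, E]
```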
"To improve the performance of PROMPTTUNING on a target task, SPOT introduces source prompt tuning, an intermediate training stage between language model pre-training and target prompt tuning (Figure 2, left), to learn a prompt on one or more source tasks (while still keeping the base model frozen), which is then used to initialize the prompt for the target task (Footnote 1).", "Footnote 1: The target task can be treated as one of the source tasks being mixed together.", "Our approach retains all the computational benefits of PROMPTTUNING : for each target task, it only requires storing a small task-specific prompt, enabling the reuse of a single frozen pretrained model across all tasks.", "In this section, we present a generic SPOT approach, where a single transferred prompt is reused for all target tasks.", "In Section 3, we explore a targeted approach that retrieves different source prompts for different target tasks.", "Our frozen models are built on top of the pretrained T5 checkpoints of all sizes: SMALL , BASE , LARGE , XL , and XXL , with 60M, 220M, 770M, 3B, and 11B parameters, respectively.", "In our experiments with SPOT , we leverage the LM-adapted version of T5 (Footnote 2), which was found to be easier to optimize for PROMPTTUNING (Lester et al., 2021).", "Footnote 2: T5 1.1 checkpoints trained for an additional 100K steps using the prefix LM objective (Raffel et al., 2020), available at https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md", "PROMPTTUNING : The vanilla prompt tuning approach of Lester et al. (2021), where an independent prompt is directly trained on each target task.", "MODELTUNING & MULTI-TASKMODELTUNING : We compare prompt tuning approaches to MODELTUNING , the standard fine-tuning approach (Devlin et al., 2019; Raffel et al., 2020), where all model parameters are fine-tuned on each target task separately.", "For an apples-to-apples comparison, we include MULTI-TASKMODELTUNING , a more competitive baseline that first fine-tunes the entire model on the same mixture of source tasks used for SPOT before fine-tuning it on individual target tasks (Footnote 3).", "Footnote 3: In preliminary experiments, we found that using the original version of T5 1.1 (which was pre-trained exclusively on span corruption) for model tuning approaches results in better performance than using the LM-adapted version.", "We therefore report results corresponding to the original T5 1.1 for MODELTUNING and MULTI-TASKMODELTUNING .", "2.1.2 Evaluation datasets We study downstream performance on a diverse set of tasks from the GLUE (Wang et al., 2019c) and SUPERGLUE (Wang et al., 2019b) benchmarks (Footnote 4).", "We train for a fixed number of steps and report results on the validation set associated with each dataset (Footnote 5).", "2.1.3 Data for source prompt tuning As with language model pre-training, the choice of training data is crucial for successful prompt transfer.", "To investigate the impact of source training data on downstream performance, we compare a diverse set of source tasks.", "A single unsupervised learning task: We first consider training the prompt on a fraction of the C4 (Colossal Clean Crawled Corpus) dataset (Raffel et al., 2020) using the prefix LM objective discussed in Raffel et al. 
(2020).", "Although this task was already used to pre-train our frozen T5 models, it could still be helpful for learning a general-purpose prompt.", "A single supervised learning task: Alternatively, we can train the prompt using a supervised task.", "We use either MNLI (Williams et al., 2018) or SQUAD (Rajpurkar et al., 2016) as a single source task.", "MNLI was shown to be helpful for many sentence-level classification tasks (Phang et al., 2019), while SQUAD was found to generalize well to QA tasks (Talmor and Berant, 2019).", "A multi-task mixture: So far, we have considered using a single source task.", "An alternative approach is multi-task training.", "Within T5's unified text-to-text framework, this simply corresponds to mixing different datasets together.", "We explore mixing datasets from different NLP benchmarks or families of tasks, including GLUE , SUPERGLUE , natural language inference ( NLI ), paraphrasing/semantic similarity, sentiment analysis, question answering ( QA ) on MRQA (Fisch et al., 2019), commonsense reasoning on RAINBOW (Lourie et al., 2021), machine translation, summarization, and natural language generation on GEM (Gehrmann et al., 2021) (Footnote 6).", "Footnote 4: These datasets include grammatical acceptability judgments ( COLA (Warstadt et al., 2019)), sentiment analysis ( SST-2 (Socher et al., 2013)), paraphrasing/semantic similarity ( MRPC (Dolan and Brockett, 2005), STS-B (Cer et al., 2017), QQP (Iyer et al., 2017)), natural language inference ( MNLI (Williams et al., 2018), QNLI (Wang et al., 2019c), RTE (Dagan et al., 2005, et seq.), CB (De Marneffe et al., 2019)), coreference resolution ( WSC (Levesque et al., 2012)), sentence completion ( COPA (Roemmele et al., 2011)), word sense disambiguation ( WIC (Pilehvar and Camacho-Collados, 2019)), and question answering ( MULTIRC (Khashabi et al., 2018), RECORD (Zhang et al., 2018), BOOLQ (Clark et al., 2019)).", "We exclude the problematic WNLI (Levesque et al., 2012) dataset from GLUE , following Devlin et al. (2019).", "Footnote 6: See Appendix B for details about datasets.", "We create a mixture of source tasks from each of the NLP benchmarks/families of tasks above, and a mixture comprising all datasets (C4 + 55 labeled datasets), using the examples-proportional mixing strategy in Raffel et al. (2020) with an artificial dataset size limit of $K = 2^{19}$ examples; a sketch of this mixing scheme follows below.",
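A minimal sketch of examples-proportional mixing with the size cap (illustrative; the dataset names and sizes in the usage line are placeholders):

```python
def mixing_rates(dataset_sizes: dict, K: int = 2**19) -> dict:
    """Examples-proportional mixing (Raffel et al., 2020): each dataset is
    sampled in proportion to its size, capped at the artificial limit K."""
    capped = {name: min(size, K) for name, size in dataset_sizes.items()}
    total = sum(capped.values())
    return {name: c / total for name, c in capped.items()}

# Large datasets are capped at K, so small datasets keep a non-trivial share:
rates = mixing_rates({"mnli": 392_702, "squad": 87_599, "cb": 250})
```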
For source prompt tuning, the prompt token embeddings are initialized from sampled vocabulary (i.e., the 5,000 most common tokens).", "During target prompt tuning, we save a checkpoint every 500 steps and report results on the checkpoint with the highest validation performance.", "Appendix C contains training details for PROMPTTUNING and model tuning approaches.", "We compare the results of SPOT and other approaches in Table 1 and Figure 1.", "Below, we summarize and analyze each of our findings in detail.", "SPOT significantly improves performance and stability of PROMPTTUNING : Our results on the GLUE and SUPERGLUE benchmarks with T5 BASE (Table 1) suggest that prompt transfer provides an effective means of improving performance for PROMPTTUNING .", "For example, the best-performing variant of SPOT outperforms the vanilla PROMPTTUNING approach on both GLUE and SUPERGLUE by a substantial margin, obtaining +4.4 and +10.1 point average accuracy improvements, respectively.", "Our 6 See Appendix B for details about datasets.", "7 We use the Adafactor optimizer (Shazeer and Stern, 2018) with default parameters except with a constant learning rate of 0.3, weight decay of 1 e 5 , and parameter scaling turned off.", "We train with a batch size of 32.", "The dropout probability is always kept at 0 .", "1 .", "All of our models are implemented using JAX (Bradbury et al., 2018) and FLAX (Heek et al., 2020).", "ablation study indicates that longer tuning is also an important ingredient for achieving the best performance, and is complementary to prompt transfer.", "Additionally, when longer tuning is omitted, we observe that SPOT improves stability across runs.", "Within SPOT , we can compare the effectiveness of different source mixtures (see Table 1).", "Source prompt tuning on GLUE performs best on both GLUE and SUPERGLUE , obtaining average scores of 82.8 and 73.2, respectively.", "8 Interestingly, unsupervised source prompt tuning on C4 (the same task used to pre-train our frozen models) still yields considerable improvements, even outperforming using SUPERGLUE for SUPERGLUE tasks.", "Using MNLI or SQUAD as a single source dataset is also particularly helpful across target tasks.", "Other source mixtures can lead to significant gains, with some families of tasks (e.g., NLI and paraphrasing/semantic similarity) showing more benefit than others.", "Mixing all the datasets together does not yield the best results, possibly due to task interference/negative transfer issues, where achieving good performance on one or more source tasks can hurt performance on a target task.", "SPOT helps close the gap with MODELTUNING across all model sizes: Figure 1 shows our SUPERGLUE results across model sizes (see Appendix A for full results).", "As shown in Lester et al. 
(2021), PROMPTTUNING becomes more competitive with scale, and at the XXL size, it nearly matches the performance of MODELTUNING .", "However, at smaller model sizes, there are still large gaps between the two approaches.", "We show that SPOT helps close these gaps and even exceeds MODELTUNING's performance by a large margin at several model sizes, while retaining all the computational benefits conferred by PROMPTTUNING .", "Finally, at the XXL size, SPOT achieves the best average score of 91.2, +1.1 points better than the strong MULTI-TASKMODELTUNING baseline, despite having 27,000 times fewer task-specific parameters.", "As a final test of SPOT's effectiveness, we submitted our XXL model's predictions to the SUPERGLUE leaderboard, achieving a score of 89.2.", "This far exceeds all previous submissions using parameter-efficient adaptation, such as GPT-3 (71.8), and almost matches fully fine-tuned T5 XXL (89.3) (Footnote 9), despite tuning 27,000 times fewer parameters.", "Footnote 9: Note that the T5 submission uses the original version of T5 (which was pre-trained on a multi-task mixture of unsupervised and supervised tasks) while we use T5 1.1 (which was pre-trained on C4 only, without mixing in supervised tasks).", "To the best of our knowledge, SPOT is the first parameter-efficient adaptation approach that is competitive with methods that tune billions of parameters.", "See Appendix D for details.", "So far, we have seen that soft prompt transfer can significantly boost the performance of prompt tuning, but it is critical to pick the right source tasks for transfer.", "For instance, through an extensive search, we found that GLUE and MNLI provide excellent source tasks for transferring to individual GLUE and SUPERGLUE tasks.", "But what about a resource-constrained scenario where a user is not able to exhaustively search over a set of source tasks?", "Can we predict which tasks will best transfer onto a novel target task without testing them one by one?", "To investigate this, we conduct a large-scale empirical study with 26 NLP tasks.", "We first measure transferability across all task combinations (Section 3.1).", "Next, we show that by interpreting task prompts as task embeddings, we can construct a semantic space of tasks, wherein similar tasks cluster together (Section 3.2).", "Based on this observation, we propose a retrieval algorithm (Section 3.3) that leverages task embedding similarity to choose which source tasks to use for a given novel target task (Figure 2, right).", "Our proposed approach can eliminate 69% of the source task search space while keeping 90% of the best-case quality gain.", "We study a diverse set of 16 source datasets and 10 target datasets (see Table 2) (Footnote 10).", "We consider all 160 possible source-target pairs, and perform transfer from each source task to each target task.", "All source tasks are data-rich or have been shown to yield positive transfer in prior work.", "To simulate a realistic scenario, we use low-resource tasks (less than 10K training examples) as target tasks (Footnote 11).", "Footnote 10: Beyond the datasets from Section 2, we use DOCNLI (Yin et al., 2021), YELP-2 (Zhang et al., 2015), CXC (Parekh et al., 2021), DROP (Dua et al., 2019), WINOGRANDE (Sakaguchi et al., 2020), HELLASWAG (Zellers et al., 2019), COSMOSQA (Huang et al., 2019), RACE (Lai et al., 2017), and CR (Hu and Liu, 2004).", "Footnote 11: The source tasks comprise one unsupervised task ( C4 ) and 15 supervised tasks covering natural language inference ( NLI ), paraphrasing/semantic similarity, sentiment analysis, question answering ( QA ), and commonsense 
reasoning.", "The target tasks additionally include grammatical acceptability, word sense disambiguation, and coreference resolution.", "To limit computational costs, we use T5 BASE in all of our task transferability experiments.", "We perform 262,144 prompt tuning steps on each source task.", "The prompt checkpoint with the highest source task validation performance is selected to initialize prompts for target tasks.", "Since the target datasets are small, we only perform 100K prompt tuning steps on each target task.", "We repeat each experiment three times with different random seeds.", "Other training details match Section 2.1.4.", "Tasks benefiting each other via prompt transfer: Figure 3 shows a heatmap of our results (see Appendix E for full results).", "In many cases, prompt transfer provides a significant gain on the target task.", "The transfer MNLI → CB yields the largest relative error reduction of 58.9% (from an average score of 92.7 to 97.0), followed by MNLI → COPA (29.1%) and RECORD → WSC (20.0%).", "Using the best source prompt (out of 48) for each target task dramatically improves the average score across our 10 target tasks from 74.7 to 80.7.", "Overall, our results show effective transfer from large source tasks that involve high-level reasoning about semantic relationships among sentences (e.g., MNLI), or when the source and target tasks are similar (e.g., CXC → STS-B).", "Interestingly, positive transfer can occur between relatively dissimilar tasks (e.g., RECORD → WSC, SQUAD → MRPC, CXC → WIC) (Footnote 12).", "Footnote 12: Table 7 in Appendix E contains more cases.", "3.2 Defining task similarity through prompts Since only prompt parameters are updated during prompt tuning on specific tasks, the learned prompts likely encode task-specific knowledge.", "This suggests that they could be used to reason about the nature of tasks and their relationships.", "To test this idea, we interpret task prompts as task embeddings and construct a semantic space of tasks.", "More concretely, we define a task's embedding as the prompt checkpoint after training for 10K steps on that task (Footnote 13).", "Note that using early checkpoints allows for quick computation of task embeddings for novel target tasks.", "We estimate the similarity between two tasks $t_1$, $t_2$ by measuring the similarity between their corresponding task embeddings $e_1$, $e_2$, using the following two metrics, both of which are sketched in code below.", "COSINESIMILARITY OFAVERAGETOKENS: We compute the cosine similarity between the average-pooled representations of the prompt tokens: $\mathrm{sim}(t_1, t_2) = \cos\left(\frac{1}{L}\sum_i e_{1i}, \frac{1}{L}\sum_j e_{2j}\right)$, where $e_{1i}$, $e_{2j}$ denote the respective prompt tokens of $e_1$, $e_2$, and $\cos$ denotes the cosine similarity.", "PER-TOKENAVERAGECOSINESIMILARITY: We compute the average cosine similarity between every prompt token pair $(e_{1i}, e_{2j})$: $\mathrm{sim}(t_1, t_2) = \frac{1}{L^2}\sum_i \sum_j \cos(e_{1i}, e_{2j})$.", "Footnote 13: Our preliminary experiments with other checkpoint alternatives (in the range 1K to 100K) yielded worse performance.", "We also found that measuring task similarity using task embeddings derived from a fixed prompt checkpoint (10K steps) gave better results than using those derived from the best-performing prompt checkpoint per task.", "This suggests that prompts trained for a differing number of steps may be less directly comparable than those trained for the same duration.", "Task embeddings capture task relationships: Figure 4 shows a hierarchically-clustered heatmap of cosine similarities between the task embeddings using the COSINESIMILARITY OFAVERAGETOKENS metric (Footnote 14).",
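Both similarity metrics reduce to a few lines over the prompt matrices (a sketch; e1 and e2 stand for the [L, E] prompt checkpoints used as task embeddings):

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sim_average_tokens(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine similarity of the average-pooled prompt tokens."""
    return cosine(e1.mean(axis=0), e2.mean(axis=0))

def sim_per_token(e1: np.ndarray, e2: np.ndarray) -> float:
    """Average cosine similarity over all L^2 prompt-token pairs."""
    a = e1 / np.linalg.norm(e1, axis=1, keepdims=True)
    b = e2 / np.linalg.norm(e2, axis=1, keepdims=True)
    return float((a @ b.T).mean())
```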
OFAVERAGETOKENS metric.", "14 We observe that our learned task embeddings capture many intuitive task relationships.", "Specifically, similar tasks group together into clusters, including QA ( SQUAD , RECORD , and DROP ; MULTIRC and BOOLQ ), sentiment analysis ( YELP -2 , SST-2 , and CR ), NLI ( MNLI and CB ; DOCNLI and RTE ), semantic similarity ( STS-B and CXC ), paraphrasing ( MRPC and QQP ), and commonsense reasoning ( WINOGRANDE , HELLASWAG , and COSMOSQA ).", "We note that QNLI , which is an NLI task built from the SQUAD dataset, is not closely linked to SQUAD ; this suggests that our task embeddings are more sensitive to the type of task than domain similarity.", "Interestingly, they also capture the unintuitive case of RECORD 's high transferability to WSC .", "Additionally, task embeddings that are derived from different prompts of the same task have high similarity scores (see Appendix F).", "We leverage our task embeddings to predict and exploit task transferability.", "Specifically, we explore methods to predict the most beneficial source tasks for a given target task and then make use of the source task prompts to improve performance on the target task.", "To enlarge our set of source prompts, we use the prompts from each of the three different prompt tuning runs on each source task, resulting in 48 source prompts.", "Given a target task t with task embedding e t , we rank all the source prompts s with associated embeddings e s in descending order by similarity, sim ( e s , e t ) .", "We denote the ranked list of source prompts as s r , where r denotes the rank ( r = 1 , 2 , . . . , 48) .", "We experiment with three methods for using the ranked source prompts: BEST OFTOPk : We select the topk source prompts and use each of them individually to initialize the target prompt.", "This procedure requires prompt tuning k times on the target task t .", "The best individual result is used for evaluating the effectiveness of this method.", "14 To obtain the highest resolution of similarity between two tasks, we use the average of cosine similarities between their task embeddings obtained with all the three different prompt tuning runs (9 combinations).", "source prompts (cid:80) kr =1 r s r so that we only perform prompt tuning on the target task t once.", "The weights r are computed as: r = sim ( e s r , e t ) (cid:80) kl =1 sim ( e s l , e t ) , where e s r denotes the corresponding task embedding of s r .", "TOPk MULTI-TASKMIXTURE : We first identify the source tasks whose prompts are in the topk prompts and mix their datasets and the target dataset together, using the examples-proportional mixing strategy of Raffel et al. 
(2020).", "Then, we perform source prompt tuning on this multi-task mixture and use the final prompt checkpoint to initialize the target prompt.", "We report the average score across all target tasks achieved by each method.", "For comparison, we measure the absolute and relative improvements over BASELINE prompt tuning on each target task from scratch (i.e., without any prompt transfer).", "15 Additionally, we include ORACLE the oracle results achieved by a brute-force search to identify 15 For each target task t , we report the average and standard deviation of performance across three prompt tuning runs.", "Correlation between task similarity and task transferability: Figure 5 shows how the relative error reduction on a target task changes as a function of the similarity between the source and target task embeddings.", "Overall, we observe a significant positive correlation between task embedding similarity and task transferability on four (out of 10) target tasks, including STS-B ( p < 0 . 001 ), CB ( p < 0 . 001 ), WSC ( p < 0 . 01 ), and RTE ( p < 0 . 05 ), while it is less significant on the other tasks.", "16 In some cases (e.g., on BOOLQ ), we observe a large relative error reduction (19.0%, achieved by a source prompt of MNLI ) despite a low cosine similarity (0.4).", "This suggests that factors other than task similarity (data size, task difficulty, domain similarity, etc.) may also play a role in determining transferability.", "Retrieving targeted source tasks via task embeddings is helpful: Table 3 compares different methods for identifying which source prompts could be beneficial for a given target task.", "Overall, our results show the effectiveness of BEST OFTOPk .", "Simply choosing the source prompt with the highest task embedding similarity to the target task using PER-TOKENAVERAGECOSINESIMILARITY improves over the baseline by a large margin (from an average score of 74.7 to 76.7, a 12.1% average relative error reduction).", "Trying all the top-3 (out of 48) source prompts for each target task yields an average score of 77.5.", "With larger values of k , we can retain most of the benefits of oracle selection (80% of the gain in terms of average score with k = 9 and 90% with k = 15 ), while still eliminating over 2/3 of the candidate source prompts.", "TOPk WEIGHTEDAVERAGE has similar average performance to BEST OFTOPk with k = 1 , but achieves lower variance.", "Thus, this may be an appealing alternative to BEST OFTOPk in scenarios where trying multiple prompt tuning runs on the target task is computationally prohibitive.", "Finally, TOPk MULTITASKMIXTURE also provides a means of obtaining strong performance with an average score of 77.8, even outperforming BEST OFTOPk with k 3 .", "Large-scale pre-trained language models have been shown", "to exhibit remarkable performance on many NLP tasks (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2019; Lan et al., 2020; Raffel et al., 2020; Brown et al., 2020; He et al., 2021).", "To improve practical applicability of these models, early work introduces compression techniques (Sanh et al., 2019; Jiao et al., 2020; Fan et al., 2020; Sanh et al., 2020) to obtain lightweight models.", "Other work explores updating only small parts of the model (Za-ken et al., 2021) or task-specific modules, such as adapters (Houlsby et al., 2019; Karimi Mahabadi et al., 2021) or low-rank structures (Mahabadi et al., 2021; Hu et al., 2021), while keeping the rest of the model fixed.", "Recently, Brown et al. 
(2020) demonstrate impressive few-shot performance with PROMPTDESIGN, where their model is conditioned on a manual text prompt at inference time to perform different tasks.", "Several efforts have since focused on developing prompt-based learning approaches with carefully handcrafted prompts (Schick and Schütze, 2021), prompt mining and paraphrasing (Jiang et al., 2020b), gradient-based search for improved prompts (Shin et al., 2020), and automatic prompt generation (Gao et al., 2021).", "The use of hard prompts, however, was found to be sub-optimal and sensitive to the choice of the prompt (Zhao et al., 2021; Liu et al., 2021b).", "As such, more recent work has shifted toward learning soft prompts (Liu et al., 2021b; Qin and Eisner, 2021; Li and Liang, 2021; Lester et al., 2021), which can be seen as learnable parameters injected into the model.", "We refer readers to Liu et al. (2021a) for a recent survey on prompt-based learning research.", "In concurrent work, Gu et al. (2021) also explore the effectiveness of prompt transfer.", "Their method uses hand-crafted pre-training tasks tailored to specific types of downstream tasks, and is therefore less extensible to novel downstream tasks.", "In contrast, we use existing tasks as source tasks and show that prompt transfer can confer benefits even when there are mismatches (e.g., in task type or input/output format) between the source and target.", "Task transferability: We also build on existing work on task transferability (Wang et al., 2019a; Liu et al., 2019a; Talmor and Berant, 2019; Pruksachatkun et al., 2020; Vu et al., 2020, 2021).", "Prior work shows effective transfer from data-rich source tasks (Phang et al., 2019), those that require complex reasoning and inference (Pruksachatkun et al., 2020), or those that are similar to the target task (Vu et al., 2020).", "There have also been efforts to predict task transferability (Bingel and Søgaard, 2017; Vu et al., 2020; Poth et al., 2021).", "Vu et al. (2020) use task embeddings derived from either the input text or the diagonal Fisher information matrix of the model, while Poth et al.
(2021) explore adapter-based alternatives.", "Here, our use of the same model (without task-specific components) with a unifying text-to-text format allows us to more easily model the space of tasks.", "Additionally, prompt-based task embeddings are comparatively cheaper to obtain.", "As other parameter-efficient adaptation methods (see 4) may outperform PROMPTTUNING in specific situations, it would be interesting to test whether an approach similar to SPOT could extend successfully to these methods.", "At the same time, we believe that PROMPTTUNING has its own merits.", "As pre-trained language models become larger and larger, some advantages of PROMPTTUNING over other methods are: (1) Among current methods with learnable parameters, PROMPTTUNING is the most parameter efficient, requiring less than 0.01% task-specific parameters for most model sizes.", "(2) PROMPTTUNING is simpler than other methods, as it does not modify the internal model architecture (cf.", "the PREFIXTUNING method of Li and Liang (2021), which adds a prefix to each layer of both the Transformer encoder and decoder); as such, PROMPTTUNING allows mixed-task inference and facilitates transfer learning between tasks.", "(3) As model capacity increases, PROMPTTUNING becomes more competitive with MODELTUNING ; to the best of our knowledge, this has not been shown for other methods.", "(4) Soft prompts could possibly be interpreted as natural language instructions.", "Additionally, since our prompt-based task embedding approach does not capture all of the factors that influence task transferability, we leave further exploration of other task embedding methods to future work.", "In this paper, we study transfer learning in the context of prompt tuning.", "We show that scale is not necessary for PROMPTTUNING to match the performance of MODELTUNING .", "On SUPERGLUE , our SPOT approach matches or even exceeds the performance of MODELTUNING by a large margin across model sizes while being more parameter-efficient.", "Our large-scale study on task transferability indicates that tasks can benefit each other via prompt transfer in various scenarios.", "Finally, we demonstrate that task prompts can be interpreted as task embeddings to formalize the similarity between tasks.", "We propose a simple yet efficient retrieval approach that measures task similarity to identify which source tasks could confer benefits to a novel target task.", "Taken as a whole, we hope that our work will spur more research into prompt-based transfer learning.", "We thank Mohit Iyyer, Sebastian Ruder, Kalpesh Krishna, Thang Luong, Quoc Le, and the members of the Descartes team and the UMass NLP group for helpful discussion and feedback.", "We would also like to thank Grady Simon, Lucas Dixon, Slav Petrov, Nader Akoury, Haw-Shiuan Chang, Katherine Thai, Marzena Karpinska, and Shufan Wang for their comments on this manuscript.", "Finally, we are grateful to Vamsi Aribandi for his work on preprocessing several datasets used in our experiments." ]
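The two prompt-based task similarity metrics defined in the record above (cosine similarity of average tokens, and per-token average cosine similarity) can be sketched in a few lines of NumPy. This is only an illustration under our own naming, not the paper's code; prompt matrices are assumed to have shape (L, d), one row per prompt token.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sim_average_tokens(e1, e2):
    # CosineSimilarityOfAverageTokens: cosine between mean-pooled prompt tokens.
    return cosine(e1.mean(axis=0), e2.mean(axis=0))

def sim_per_token_average(e1, e2):
    # PerTokenAverageCosineSimilarity: average cosine over all token pairs.
    total = sum(cosine(ei, ej) for ei in e1 for ej in e2)
    return total / (e1.shape[0] * e2.shape[0])
```

Note that mean-pooling first (the former metric) is much cheaper than comparing all token pairs, which may matter when ranking many source prompts against a new target task.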
[ "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "result", "abstain", "objective", "method", "objective", "objective", "objective", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "method", "other", "other", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "abstain", "objective", "objective", "method", "other", "other", "other" ]
[ "Globally normalized neural sequence models are considered superior to their locally normalized equivalents because they may ameliorate the effects of label bias.", "However, when considering high-capacity neural parametrizations that condition on the whole input sequence, both model classes are theoretically equivalent in terms of the distributions they are capable of representing.", "Thus, the practical advantage of global normalization in the context of modern neural methods remains unclear.", "In this paper, we attempt to shed light on this problem through an empirical study.", "We extend an approach for search-aware training via a continuous relaxation of beam search (Goyal et al., 2017b) in order to enable training of globally normalized recurrent sequence models through simple backpropagation.", "We then use this technique to conduct an empirical study of the interaction between global normalization, high-capacity encoders, and search-aware optimization.", "We observe that in the context of inexact search, globally normalized neural models are still more effective than their locally normalized counterparts.", "Further, since our training approach is sensitive to warm-starting with pre-trained models, we also propose a novel initialization strategy based on self-normalization for pretraining globally normalized models.", "We perform analysis of our approach on two tasks: CCG supertagging and Machine Translation, and demonstrate the importance of global normalization under different conditions while using search-aware training.", "Neural encoder-decoder models have been tremendously successful at a variety of NLP tasks, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), parsing (Dyer et al., 2016, 2015), summarization (Rush et al., 2015),", "dialog generation (Serban et al., 2015), and image captioning (Xu et al., 2015).", "With these models, the target sequence is generated in a left-to-right step-wise manner with the predictions at every step being conditioned on the input sequence and the whole prediction history.", "This long-distance memory precludes exact search for the maximally scoring sequence according to the model and therefore, approximate algorithms like greedy search or beam search are necessary in practice during decoding.", "In this scenario, it is natural to resort to search-aware learning techniques for these models which makes the optimization objective sensitive to any potential errors that could occur due to inexact search in these models.", "This work focuses on comparison between search-aware locally normalized sequence models that involve projecting the scores of items in the vocabulary onto a probability simplex at each step and globally normalized/unnormalized sequence models that involve scoring sequences without explicit normalization at each step.", "When conditioned on the the full input sequence and the entire prediction history, both locally normalized and globally normalized conditional models should have same expressive power under a high-capacity neural parametrization in theory, as they can both model same set of distributions over all finite length output sequences (Smith and Johnson, 2007).", "However, locally normalized models are constrained in how they respond to search errors during training since the scores at each decoding step must sum to one.", "To let a search-aware training setup have the most flexibility, abandoning this constraint may be useful for easier optimization.", "In this paper, we demonstrate that 
the interaction between approximate inference and non-convex parameter optimization results in more robust training and better performance for models with global normalization compared to those with the more common locally normalized parametrization.", "We posit that this difference is due to label bias (Bottou, 1991) arising from the interaction of approximate search and search-aware optimization in locally normalized models.", "A commonly understood source of label bias in locally normalized sequence models is an effect of conditioning only on partial input (for example, only the history of the input) at each step during decoding (Andor et al., 2016; Lafferty et al., 2001; Wiseman and Rush, 2016).", "We discuss another potential source of label bias arising from approximate search with locally normalized models that may be present even with access to the full input at each step.", "To this end, we train search-aware globally and locally normalized models in an end-to-end (sub)-differentiable manner using a continuous relaxation to the discontinuous beam search procedure introduced by Goyal et al. (2017b).", "This approach requires initialization with a suitable globally normalized model to work in practice.", "Hence, we also propose an initialization strategy based upon self-normalization for pre-training globally normalized models.", "We demonstrate the effect of both sources of label bias through our experiments on two common sequence tasks: CCG supertagging and machine translation.", "We find that label bias can be eliminated both by using a powerful encoder and by using a globally normalized model.", "We observe that global normalization yields performance gains over local normalization and is able to ameliorate label bias especially in scenarios that involve a very large hypothesis space.", "We now introduce the notation that we will use in the remainder of the paper for describing locally and globally normalized neural sequence-to-sequence models.", "We are interested in the probability of an output sequence, y, conditioned on an input sequence, x.", "Let s(x, y_{1:i-1}, y_i) be a non-negative score of output label y at time-step i for the input x and the prediction history y_{1:i-1}, let V be the label space, and let Y_x be the space of all finite sequences for x.", "A neural encoder (e.g., a bidirectional LSTM) encodes information about x and a recurrent neural decoder generates the output y. (For notational convenience we suppress the dependence of the score s on model parameters \theta.)", "Under a locally normalized model M_L, the probability of y given x is:", "p_{M_L}(y | x) = \prod_{i=1}^{n} p(y_i | x, y_{1:i-1}) = \prod_{i=1}^{n} s(x, y_{1:i-1}, y_i) / Z_{L,i}(x, y_{1:i-1}),", "where Z_{L,i}(x, y_{1:i-1}) = \sum_{y' \in V} s(x, y_{1:i-1}, y') is the local normalizer at each time step and n is the number of prediction steps.", "Since the local normalizer is easy to compute, likelihood-maximization-based training is a standard approach for training these models.", "Under a globally normalized model M_G, p_{M_G}(y | x) = \prod_{i=1}^{n} s(x, y_{1:i-1}, y_i) / Z_G(x), where Z_G(x) = \sum_{y \in Y_x} \prod_{i=1}^{n} s(x, y_{1:i-1}, y_i) is the global normalizer.", "Z_G(x) is intractable to estimate for most problems of interest due to the large search space; therefore, an exact likelihood maximization training approach is intractable for these models.", "It was shown in Andor et al. (2016) and Lafferty et al.
(2001) that locally normalized conditional models with access to only partial input, x_{1:i-1}, at each decoding step are biased towards labeling decisions with low-entropy transition probabilities at each decoding step and, as a result, suffer from a weakened ability to revise previous decisions based upon future input observations.", "This phenomenon has been referred to as label bias, and presents itself as an arbitrary allocation of probability mass to unlikely or undesirable label sequences despite the presence of well-formed sequences in training data.", "Andor et al. (2016) prove that this class of locally normalized models that relies on the structural assumption of access to only left-to-right partial input at each step, \prod_{i=1}^{n} p(y_i | x, y_{1:i-1}) = \prod_{i=1}^{n} p(y_i | x_{1:i-1}, y_{1:i-1}), is strictly less expressive than its globally normalized counterpart.", "However, the standard sequence-to-sequence models used most often in practice and presented in this paper actually condition the decoder on a summary representation of the entire input sequence, x, computed by a neural encoder.", "Hence, depending on the power of the encoder, it is commonly thought that such models avoid this type of label bias.", "For these models, both locally normalized and globally normalized conditional models are equally expressive, in principle, with a sufficiently powerful encoder.", "However, as we suggest in the next section and show empirically in experiments, this does not necessarily mean that both parametrizations are equally amenable to gradient-based training in practice, particularly when the search space is large and search-aware training techniques are used.", "We will argue that they suffer from a related, but distinct, form of bias introduced by inexact decoding.", "To improve performance with inexact decoding methods (e.g. beam search), search-aware training techniques take into account the decoding procedure that will be used at test time and adjust the parameters of the model to maximize prediction accuracy under the decoder.", "Because of the popularity of beam search as a decoding procedure for sequence models, in this paper we focus on beam search-aware training.", "While many options are available, including beam-search optimization (BSO) (Wiseman and Rush, 2016), in Section 3.1 we will describe the particular search-aware training strategy we use in experiments (Goyal et al., 2017b), chosen for its simplicity.", "We illustrate via example how optimization of locally normalized models may suffer from a new kind of label bias when using beam search-aware training, and point to reasons why this issue might be mitigated by the use of globally normalized models.", "While the scores of successors of a single candidate under a locally normalized model are constrained to sum to one, scores of successors under a globally normalized model need only be positive.", "(Figure 1: Illustrative example of bias arising in locally normalized models due to beam search.)", "Intuitively, during training, this gives the globally normalized model more freedom to avoid search errors.", "In the example beam search decoding problem in Figure 1, we compare the behavior of locally and globally normalized models at a single time step for a beam size of two.", "In this example, we assume that the score for beams in both the models is exactly the same until the step shown in Figure", "1.
Suppose that the lower item on the beam (X2) is correct, and thus, for more effective search, we would prefer the model's scores to be such that only successors of the lower beam item are present on the beam at the next step.", "However, since the scores at each step for a locally normalized model are constrained to sum to one, the upper beam item (X1) generates successors with scores comparable to those of the lower beam item.", "As we see in the example, due to the normalization constraint, search-aware training of the locally normalized model might find it difficult to set the parameters to prevent extension of the poorer candidate.", "In contrast, because the scores of a globally normalized model are not constrained to sum to one, the parameters of the neural model can be set such that all the successors of the bad candidate have a very low score and thus do not compete for space on the beam.", "This illustrates a mechanism by which search-aware training of globally normalized models in a large search space might", "be more effective. (Figure 2: Left: computing LSTM hidden states at a subsequent step using the continuous relaxation to beam search for a beam size of 2; Right: the soft-k-argmax computation via a peaked-softmax over candidate successor scores.)", "However, as discussed earlier, if we can perform exact search then this label bias ceases to exist because both the models have the same expressive power with a search-agnostic optimization scheme.", "In experiments, we will explore this trade-off empirically.", "In order to conduct an empirical study with meaningful comparisons, we devise an extension of the relaxed beam-search based optimization proposed by Goyal et al. (2017b) that allows us to train both the search-aware globally and locally normalized models in a similar manner with the same underlying architecture.", "Following Goyal et al. (2017b), we train a beam-search aware model by optimizing a continuous surrogate approximation to a direct loss objective, J, defined as a function of the output of beam search and the ground truth sequence y: J(x, \theta, y) = (\ell \circ Beam)(x, M(\theta), y).", "Here \ell is a function that computes the loss of the model's prediction produced by beam search, and M refers to the model parametrized by \theta.", "While this objective is search-aware, it is discontinuous and difficult to optimize because beam search involves discrete k-argmax operations.", "Therefore, Goyal et al. (2017b) propose a continuous surrogate, \tilde{J}, by defining a continuous approximation (soft-k-argmax) of the discrete k-argmax and using this to compute an approximation to a composition of the loss function and the beam search function.", "min_\theta \tilde{J}(x, \theta, y) \approx min_\theta (\ell \circ Beam)(x, M(\theta), y). The soft-k-argmax procedure involves computing distances between the scores of the successors and the k-th-max score and using the temperature-based argmax operation (Maddison et al., 2017; Jang et al., 2016; Goyal et al., 2017a) to get an output peaked on the k-th-max value, as shown in the right panel of Figure", "2.
The temperature is a hyperparameter which is typically annealed toward producing low-entropy distributions during optimization.", "As shown in the left panel of Figure 2, the soft candidate vectors and the soft backpointers are computed at every decoding step using this soft-k-argmax operation in order to generate the embeddings and recurrent hidden states of the LSTM at each step of the soft beam search procedure.", "With a locally decomposable loss like Hamming loss, both soft loss and soft scores for the relaxed procedure are iteratively computed so that the end-to-end objective computation can be described by a computation graph that is amenable to backpropagation.", "Goyal et al. (2017b) demonstrated empirically that optimizing the surrogate objective \tilde{J} (which can be accomplished via simple backpropagation for decomposable losses like Hamming distance) leads to improved performance at test time.", "In experiments, for training locally normalized models, we use log-normalized successor scores.", "However, for training globally normalized models, we will directly use unnormalized scores, which are in R+.", "Goyal et al. (2017b) reported that initialization with a locally normalized model pre-trained with teacher-forcing was important for their continuous beam search based approach to be stable and hence they used the locally normalized log-scores for their search-aware training model.", "In this work, we experimented with the unnormalized candidate successor scores and found that initializing the optimization for a globally normalized objective with a cross-entropy trained locally normalized model resulted in unstable training.", "This is expected because the locally normalized models are parametrized in a way such that using the scores before the softmax normalization results in a very different outcome than using scores after local normalization.", "For example, the locally normalized Machine Translation model in Table 1 gives a BLEU score of 27.62", "when decoded with beam search using locally normalized scores, but results in a BLEU of 4.30", "when beam search decoding is performed with unnormalized scores.", "Pretraining a truly globally normalized model for initialization is not straightforward because no exact likelihood maximization techniques exist for globally normalized models, as the global normalizer is intractable to compute.", "Therefore, we propose a new approach to initialization for search-aware training of globally normalized models: we pre-train a locally normalized model that is parametrized like a globally normalized model.", "More specifically, we train a locally normalized model with its distribution over the output sequences denoted by p_L(Y) such that we can easily find a globally normalized model with a distribution p_G(Y) that matches p_L(Y).", "Following the notation in Section 2, for a locally normalized model, the log-probability of a sequence is: \sum_{i=1}^{n} [log s(x, y_{1:i-1}, y_i) - log Z_{L,i}(x, y_{1:i-1})], and for a globally normalized model it is: [\sum_{i=1}^{n} log s(x, y_{1:i-1}, y_i)] - log Z_G(x). 3.2.1 Self Normalization: One way to find a locally normalized model that is parametrized like a globally normalized model is to ensure that the local normalizer at each step, log Z_{L,i}(x, y_{1:i-1}), is 0.", "With the local normalizer being zero it is straightforward to see that the log probability of a sequence under a locally normalized model can easily be interpreted as the log probability of the sequence
under a globally normalized model with the global log-normalizer log Z_G(x) = 0.", "This training technique is called self-normalization (Andreas and Klein, 2015) because the resulting model's unnormalized score at each step lies on a probability simplex.", "A common technique for training self-normalized models is L2-regularization of the local log normalizer, which encourages learning a model with log Z = 0 and was found to be effective for learning a language model by Devlin et al. (2014).", "The L2-regularized cross entropy objective is given by: min_\theta \sum_{(x, y) \in D} \sum_{i=1}^{n} [-log p(y_i | x, y_{1:i-1}) + (log Z_{L,i}(x, y_{1:i-1}))^2]. In Table 1, we report the mean and variance of the local log normalizer on the two different tasks using L2-regularization (L2) based self normalization and no self normalization (CE).", "We observe that L2 models are competitive performance-wise to the cross-entropy trained locally normalized models while resulting in a much smaller local log-normalizer on average.", "Although we could not minimize log Z exactly to 0, we observe in Section 4 that this is sufficient to train a reasonable initializer for the search-aware optimization of globally normalized models.", "It is important to note that these approaches yield a globally normalized model that is equivalent to a locally normalized model trained via teacher-forcing and hence these are only used to warm-start the search-aware optimization of globally normalized models.", "(Noise Contrastive Estimation (Mnih and Teh, 2012; Gutmann and Hyvärinen, 2010) is also an alternative to train unnormalized models, but our experiments with NCE were unstable and resulted in worse models.)", "Our search-aware training approach is free to adjust the parameters of the models such that the final globally normalized model has a nonzero log-normalizer Z_G over the data.", "Other possible approaches to project locally normalized models onto globally normalized models include distribution matching via knowledge distillation (Hinton et al., 2015).", "We leave exploration of warm-starting of search-aware optimization with this approach to future work.", "To empirically analyze the interaction between label bias arising from different sources, search-aware training, and global normalization, we conducted experiments on two tasks with vastly different sizes of output space: CCG supertagging and Machine Translation.", "As described in the next section, the task of tagging allows us to perform controlled experiments which explicitly study the effect of the amount of input information available to the decoder at each step; we analyze the scenarios in which search-aware training and global normalization are expected to improve the model performance.", "In all our experiments, we report results on training with standard teacher forcing optimization and self-normalization as our baselines.", "We report results with both search-aware locally and globally normalized models (Section 3.1) after warm starting with both cross entropy trained models and self-normalized models to study the effects of search-aware optimization and global normalization.", "We follow Goyal et al.
(2017b) and use the decomposable Hamming loss approximation with search-aware optimization for both the tasks, and decode via the soft beam search decoding method, which involves continuous beam search with soft backpointers for the LSTM beam search dynamics as described in Section 3, but using identifiable backpointers and labels (using MAP estimates of soft backpointers and labels) to decode.", "We tune hyperparameters like the learning rate and annealing schedule by observing performance on development sets for both the tasks.", "We performed at least three random restarts for each model class and report results based on best development performance.", "We used the standard splits of CCGbank (Hockenmaier and Steedman, 2002) for training, development, and testing.", "The label space of supertags is 1,284 and the labels are correlated with each other based on their syntactic relations.", "The distribution of supertag labels in the training data exhibits a long tail.", "This task is sensitive to long-range sequential decisions because it encodes rich syntactic information about the sentence.", "Hence, this task is ideal for analyzing the effects of label bias and search effects.", "We perform minor preprocessing on the data similar to the preprocessing in Vaswani et al. (2016).", "For experiments related to search-aware optimization, we report results with a beam size of 5 (we observed similar results with beam size 10).", "4.1.1 Tagging model for ablation study: We changed the standard sequence-to-sequence model to be more suitable for the tagging task.", "This change also lets us perform controlled experiments pertaining to the amount of input sequence information available to the decoder at each time step.", "In a standard encoder-decoder model with attention, the initial hidden state of the decoder is often some function of the final encoder state so that the decoder's predictions can be conditioned on the full input.", "For our tagging experiments, instead of influencing the initial decoder state with the encoder, we set it to a vector of zeros.", "Thus the information about the input for prediction is only available via the attention mechanism.", "In addition to the change above, we also forced the model to attend to only the i-th input representation while predicting the i-th label.", "This is enforceable because the output length is equal to the input length and it is also a more suitable structure for a tagging model.", "With these changes in the decoder, we can precisely control the amount of information about the input available to the decoder at each prediction step.", "For example, with a unidirectional LSTM encoder, the decoder at the i-th step only has access to the input up to the i-th token and the prediction history: p(y_i | x, y_{1:i-1}) = p(y_i | x_{1:i}, y_{1:i-1}). This setting lets us clearly explore the classical notion of label bias arising out of access to partial input at each prediction step (Section 2.3).", "A bidirectional LSTM encoder, however, provides access to all of the input information to the decoder at all the prediction steps.", "We use the same dataset (the German-English portion of the IWSLT 2014 machine translation evaluation campaign (Cettolo et al., 2014)), preprocessing and data splits as Ranzato et al.
(2016) for our Machine Translation experiments.", "The output label/vocabulary size is 32,000 and, unlike tagging, the length of output sequences cannot be deterministically determined from the length of the input sequence.", "Moreover, the output sequence does not necessarily align monotonically with the input sequence.", "Hence the output sequence space for MT is much larger than that for tagging and the effects of inexact search on optimization are expected to be even more apparent for MT. We use a standard LSTM-based encoder/decoder model with a standard attention mechanism (Bahdanau et al., 2016) for our MT experiments.", "For search-aware optimization ... (Table 4: BLEU results on de-en Machine Translation. Columns: Init-scheme, Regular, Self-normalized. Rows: pretrain-greedy 26.24, 25.42; pretrain-beam 27.62, 26.63; locally-normalized 29.28, 27.71; globally-normalized 26.24, 29.27.)", "The results reported in Tables 2, 3 and 4 allow us to analyze the effect of the interaction of label bias, inexact search and global normalization in detail.", "First, we analyze the effect of label bias that arises from conditioning on partial input (Section 2.3) during decoding on the optimization of the models.", "The unidirectional-encoder-based tagging experiments suggest that conditioning on partial input during decoding results in poor models when trained with cross-entropy-based methods.", "Interestingly, all techniques improve upon this:", "(i) search-aware locally and globally normalized models are able to train for accuracy directly and eliminate exposure bias that arises out of the mismatch between train-time and test-time prediction methods, and,", "(ii) the bidirectional tagging model which provides access to all of the input is powerful enough to learn a complex relationship between the decoder and the input representations for the search space of the CCG supertagging task and results in a much better performance.", "Next, we analyze the importance of appropriate initialization of search-aware optimization with pretrained models.", "Across all the results in Tables 2, 3 and 4, we observe that search-aware optimization for locally normalized models always improves upon the pre-trained locally normalized models used for initialization.", "(We observed similar results with a beam size of 5.)", "But when the search-aware optimization for globally normalized models is initialized with locally normalized CE models, the improvement is not as pronounced, and in the case of MT the performance is actually hurt by the improper initialization for training globally normalized models, probably a consequence of the large search space associated with MT and the incompatibility between the unnormalized scores used for search-aware optimization and the locally normalized scores of the CE model used for pre-training.", "When the self-normalized models are used for initialization, optimization for globally normalized models always improves upon the pre-trained self-normalized model.", "It is interesting to note that we see improvements for the globally normalized models even when log Z is not exactly reduced to 0, indicating that the scores used for search-aware training initially are comparable to the scores of the pre-trained self-normalized model.", "We also observe that self-normalized models perform slightly worse than CE-trained models but search-aware training for globally normalized models improves the performance significantly.", "Next, we analyze the effect of search-aware optimization on the performance of the models.", "Search-aware training with locally normalized models
improves the performance significantly in all our experiments, which indicates that accounting for exposure bias and optimizing for predictive performance directly is important.", "We also observe that the bidirectional model for tagging is quite powerful and seems to account for both exposure bias and label bias to a large extent.", "We reckon that this may be because the greedy decoding itself is very close to exact search for this well-trained tagging model over a search space that is much simpler than that associated with MT. Therefore, the impact of search-aware optimization on the bidirectional tagger is marginal.", "However, it is much more pronounced on the task of MT. 4.3.4 Global normalization and label bias: We analyze the importance of training globally normalized models.", "In the specific setup for tagging with the unidirectional encoder, globally normalized models are actually more expressive than the locally normalized models (Andor et al., 2016), as described in Section 2.3, and this is reflected in our experiments (Table 3) with tagging.", "The globally normalized model (warm-started with a self-normalized model) performs the best among all the models in the unidirectional tagger case, which indicates that it is ameliorating something beyond the exposure bias that is fixed by the search-aware locally normalized model.", "For MT (Table 4), both globally normalized and locally normalized models are equally expressive in theory because the decoder is conditioned on the full input information at each step, but we still observe that the globally normalized model improves significantly over the self-normalized pre-trained model and the search-aware locally normalized model.", "This indicates that it might be ameliorating the label bias associated with inexact search (discussed in Section 2.5).", "As discussed in Section 3.2, the globally normalized model, when initialized with a CE-trained model, performs worse because of improper initialization of the search-aware training.", "The self-normalized model starts off 1 BLEU point worse than the CE model, but global normalization, initialized with the self-normalized model, improves the performance and is competitive with the best model for MT. This suggests that a better technique for initializing the optimization for globally normalized models should be helpful in improving the performance.", "In Tables 5 and 6, we analyze the source of improvement from global normalization for MT.
In Table 5, we report the n-gram overlap scores and the ratio of prediction length to reference length for the case when the search-aware training is initialized with a self-normalized model.", "We observe that the globally normalized model produces longer predictions than the locally normalized model.", "More interestingly, it seems to have better 3-gram and 4-gram overlap and slightly worse unigram and bigram overlap scores than the locally normalized model.", "These observations suggest that globally normalized models are better able to take longer-range effects into account and are also cautious about predicting the end-of-sentence symbol too soon.", "Moreover, in Table 6, we observe ... (Table 5: Breakdown of BLEU results on the de-en Machine Translation dev set. Columns: n-gram overlap (1/2/3/4-gram), length ratio. Rows: pretrain-beam 63.5/35.7/21.8/13.7, 0.931; locally-normalized 66.9/39.4/22.7/14.0, 0.918; globally-normalized 65.0/39.1/23.2/14.7, 0.959.)", "Much of the existing work on search-aware training of globally normalized neural sequence models uses some mechanism like early updates (Collins and Roark, 2004) that relies on explicitly tracking if the gold sequence falls off the beam and is not end-to-end continuous.", "Andor et al. (2016) describe a method for training globally normalized neural feedforward models, which involves optimizing a CRF-based likelihood where the normalizer is approximated by the sum of the scores of the final beam elements.", "They describe label bias arising out of conditioning on partial input and hence focused on the scenario in which locally normalized models can be less expressive than globally normalized models, whereas we also consider another source of label bias which might be affecting the optimization of equally expressive locally and globally normalized conditional models.", "Wiseman and Rush (2016) also propose a beam-search-based training procedure that uses unnormalized scores similar to our approach.", "Their models achieve good performance over CE baselines, a pattern that we observe in our results as well.", "In this work, we attempt to empirically analyze the factors affecting this boost in performance with end-to-end continuous search-aware training (Goyal et al., 2017b) for globally normalized models.", "Smith and Johnson (2007) proved that locally normalized conditional PCFGs and unnormalized conditional WCFGs are equally expressive for finite-length sequences and posit that Maximum Entropy Markov Models (MEMMs) are weaker than CRFs because of the structural assumptions involved with MEMMs that result in label bias.", "Recently, energy-based neural structured prediction models (Amos et al., 2016; Belanger and McCallum, 2016; Belanger et al., 2017) were proposed that define an energy function over the candidate structured output space and use gradient-based optimization to form predictions, making the overall optimization search-aware.", "These models are designed to model global interactions between the output random variables without specifying strong structural assumptions.", "We performed an empirical analysis of the interaction between label bias, search-aware optimization and global normalization in various scenarios.", "We proposed an extension to the continuous relaxation to beam search proposed by Goyal et al.
(2017b) to train search-aware globally normalized models and comparable locally normalized models.", "We find that in the context of inexact search over large output spaces, globally normalized models are more effective than the locally normalized models in spite of them being equivalent in terms of their expressive power.", "This project is funded in part by the NSF under grant 1618044.", "We thank the three anonymous reviewers for their helpful feedback." ]
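To make the peaked-softmax and soft-k-argmax operations in the record above concrete, here is a minimal NumPy sketch under our own naming. It follows the description of pushing negative squared distances to the k-th max score through a temperature-scaled softmax; the exact parametrization in Goyal et al. (2017b) may differ.

```python
import numpy as np

def peaked_softmax(x, temperature):
    # Softmax over x / temperature; annealing the temperature toward 0
    # produces increasingly low-entropy (peaked) distributions.
    z = x / temperature
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_k_argmax(scores, k, temperature=0.1):
    # Continuous relaxation of selecting the k-th best successor: scores whose
    # distance to the k-th max is smallest receive most of the probability mass.
    kth_max = np.sort(scores)[-k]
    return peaked_softmax(-(scores - kth_max) ** 2, temperature)
```

The resulting soft weights can then be used to form the soft candidate vectors and soft backpointers, keeping the whole beam recurrence differentiable end to end.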
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "other", "method", "abstain", "abstain", "method", "other", "other", "other", "method", "abstain", "result", "other", "other" ]
[ "Coreference resolution over semantic graphs like AMRs aims to group the graph nodes that represent the same entity.", "This is a crucial step for making document-level formal semantic representations.", "With annotated data on AMR coreference resolution, deep learning approaches have recently shown great potential for this task, yet they are usually data hungry and annotating data is costly.", "We propose a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data.", "Experiments on benchmarks show that the pretraining approach achieves performance gains of up to 6% absolute F1 points.", "Moreover, our model signifi-cantly improves on the previous state-of-the-art model by up to 11% F1 points.", "Abstract Meaning Representation (AMR) is a way to preserve the semantic meaning of a sentence in a graph (Banarescu et al., 2013).", "As shown in Figure 1, AMRs are directed and acyclic graphs where the nodes and edges indicate concepts and their semantic relations.", "As a sentence-level semantic representation, AMRs have been shown to be effective in many NLP tasks, including text summarization (Liu et al., 2015; Dohare et al., 2018), information extraction (Rao et al., 2017; Li et al., 2020b; Zhang and Ji, 2021), and machine translation (Song et al., 2019; Pham et al., 2020).", "More recently, the NLP tasks that are beyond the single-sentence level (Nallapati et al., 2016; Rajpurkar et al., 2016; Li et al., 2017; Chen et al., 2021) are attracting rising attention, and thus representing multiple sentences with AMR becomes important.", "To expand AMRs to represent multiple Work done as an intern at Tencent AI Lab.", "sentences, the task of AMR coreference resolution (O'Gorman et al., 2018) has been proposed, aiming at recognizing the concepts from multiple AMRs that represent the same entity.", "Figure 1 illustrates the AMR graphs of two consecutive sentences in a news article.", "Given them as the input, an AMR coreference resolver needs to group police and they (colored with blue), as well as shop and the implicit mention shop (dashed and colored with pink).", "Unlike text-based coreference resolution, where dense textual information is available, AMR coreference resolution deals with sparsely connected graphs and implicit graph nodes.", "More importantly, only a handful of annotated data (around 8K AMRs) exists for AMR coreference resolution.", "Furthermore, annotating such coreference information and sentence AMRs requires linguists, making the annotation very costly .", "Both situations add extra difficulties to this task.", "Early attempts on AMR coreference resolution adopt rule-based methods.", "For instance, Liu et al. (2015) only consider the nodes that represent entities (e.g., police in Figure 1), and they rely on string match to detect coreference.", "This method can cause errors, as concepts with the same surface string may not point to the same entity.", "It also fails 2790 to recognize any situations that involve a pronoun (e.g., police and they ).", "Anikina et al. 
(2020) build a pipeline system that uses a textual coreference resolution model (Lee et al., 2017) and a text-to-AMR aligner (Flanigan et al., 2014).", "Though this system can theoretically resolve many situations, in fact, it suffers from severe error propagation (Fu et al., 2021).", "With the availability of recent human-annotated data (O'Gorman et al., 2018) on AMR coreference resolution, later work starts exploring data-driven models.", "Fu et al. (2021) extend a standard text-based coreference model (Lee et al., 2017) on AMRs by replacing the LSTM encoder with a graph neural network (GNN).", "They show a significant performance boost over previous rule-based methods, and their generated document-level AMRs can help a downstream neural summarization system, demonstrating the potential of this task.", "However, the performance is still far from satisfactory, and they find that the main reason is the lack of annotated data.", "This calls for approaches that can leverage cheap and/or existing supervision signals to make further improvements.", "In this paper, we propose a model and a corresponding pretraining method based on Variational Graph Autoencoder (VGAE) (Kipf and Welling, 2016b).", "Our model extends AMRCoref (Fu et al., 2021), the current state-of-the-art model, by replacing the core GNN encoder with an improved VGAE encoder.", "Our model can leverage the reconstruction loss and variational restriction from the VGAE module as additional supervision at no extra cost.", "Since the loss by our VGAE model can work on any AMR graphs, we also study pretraining our model on the full AMR bank (https://catalog.ldc.upenn.edu/LDC2020T02) with gold or automatically parsed annotations.", "In this way, the training signal can be further enriched; thus, the data hunger issue can be alleviated.", "Though there exists some work applying VAEs and VGAEs on concept knowledge graphs (Li et al., 2020a), corpus-level graphs (Xie et al., 2021) and text (Su et al., 2018), we are the first to study VGAE on a graph-based formal semantic representation, to the best of our knowledge.", "Experiments on the MS-AMR benchmark (O'Gorman et al., 2018) show that our model outperforms the previous state-of-the-art system by 11 absolute F1-score points.", "Besides, we find that pretraining with a larger AMR bank is helpful regardless of whether gold or silver AMR annotations are used.", "(Figure 2: AMRCoref framework (Fu et al., 2021): a document AMR graph passes through an input representation layer, a GRN graph encoder, a concept identification module (FFNN and softmax), and a coreference clustering module (FFNN and softmax) that predicts clusters.)", "This indicates another potential boost on the performance if more automatically annotated data can be used.", "Code and pretrained models are made public.", "We take the end-to-end AMR coreference resolution model ( AMRCoref , Fu et al.
2021) as our baseline system.", "Generally, it adapts a text-based end-to-end coreference model (Lee et al., 2017) on AMRs by clustering AMR nodes instead of text spans.", "Another major difference is that they also consider omitted AMR nodes (e.g., the dashed node shop in Figure 1), which are represented by their parent nodes and the corresponding relation (e.g., depart-01 and :ARG1 ).", "As illustrated in Figure 2, AMRCoref consists of four essential modules: input representation, graph encoding, node type identification, and antecedent prediction.", "As the first step of AMRCoref, it calculates the embedding h_i^{(0)} for each AMR node x_i from its character-level embedding e_i^c, token-level embedding e_i^t and fixed embedding e_i^{bert} generated by a pretrained BERT model: h_i^{(0)} = W_{concept} [e_i^c; e_i^t; e_i^{bert}] + b_{concept} (Eq. 1),", "where W_{concept} and b_{concept} are model parameters.", "The character-level and token-level embeddings can be learned from scratch.", "One can choose to eliminate the BERT embedding e_i^{bert} as a simple base model.", "Next, the representations H^{(0)} = [h_1^{(0)}, ..., h_N^{(0)}] of all AMR nodes X = [x_1, ..., x_N] are sent to a graph encoder together with the AMR edges.", "Since the input AMRs are disconnected (each AMR alone represents a sentence), Fu et al. (2021) heuristically connect the root nodes of these sentence AMRs to make a connected graph G.", "Specifically, G = (X, A), where the edge set A consists of both the original AMR edges and the added ones between pairs of roots.", "The graph encoder, f_GRN, is based on the Graph Recurrent Network (GRN, Song et al. 2018; Beck et al. 2018).", "It utilizes the gated operations of an LSTM (Hochreiter and Schmidhuber, 1997) step to simultaneously update each node representation h_i by exchanging information from its incoming neighbors N_i^{in} and outgoing neighbors N_i^{out} that can be easily obtained from the edge set A: m_{in}^{(l-1)} = \sum_{j \in N_i^{in}} [h_j^{(l-1)}; r_{ij}], m_{out}^{(l-1)} = \sum_{j \in N_i^{out}} [h_j^{(l-1)}; r_{ij}], h_i^{(l)} = LSTM(h_i^{(l-1)}, [m_{in}^{(l-1)}; m_{out}^{(l-1)}]) (Eq. 2), where each r_{ij} represents the embedding of the edge from x_i to x_j.", "After L steps of information exchange, z_i = [h_i^{(0)}; h_i^{(L)}] is used as the representation of node x_i for the next step.", "The concept identification subtask is to determine the type for each AMR node from 6 predefined candidate types.", "Taking Figure 1 as an example, these types are: func (functional node like and ), ent (entity node like police ), ver (regular verbal node like report-01 ), and ver_x (x \in {0, 1, 2}) (verbal node with implicit argument like depart-01 ).", "Given the node representation z_i from the graph encoder, a feed-forward network (FFNN_type) with softmax activation is adopted to calculate the probability distribution for its node type p_i^{type}: p_i^{type} = softmax(FFNN_type(z_i)) (Eq. 3).", "This subtask is introduced for detecting implicit mentions as shown in Figure 1, and it can also provide additional supervision defined by the cross-entropy loss: L_type = -(1/N) \sum_{i=1}^{N} log p_i^{type}[t_i] (Eq. 4), where t_i is the index of the correct node type for node x_i.", "In the last step, coreference clusters are predicted by finding the antecedent for each AMR node.", "Taking node x_i for example, the score of a precedent node x_j being its antecedent is defined as: s(x_j, x_i) = f_m(x_j) + f_m(x_i) + f_ant(x_j, x_i), with f_m(x_i) = FFNN_m([z_i; p_i^{type}]) and f_ant(x_j, x_i) = FFNN_ant([z_j; z_i]) (Eq. 5), where FFNN_m classifies whether the given node is involved in a
"The concept identification subtask determines a type for each AMR node out of 6 predefined candidate types.", "Taking Figure 1 as an example, these types are: func (functional nodes like and), ent (entity nodes like police), ver (regular verbal nodes like report-01), and ver$_x$ ($x \in \{0, 1, 2\}$; verbal nodes with an implicit argument, like depart-01).", "Given the node representation $z_i$ from the graph encoder, a feed-forward network ($\mathrm{FFNN}_{type}$) with a softmax activation computes the probability distribution over node types: $p_i^{type} = \mathrm{softmax}(\mathrm{FFNN}_{type}(z_i))$ (3).", "This subtask is introduced to detect implicit mentions, as shown in Figure 1, and it also provides additional supervision in the form of a cross-entropy loss: $\mathcal{L}_{type} = -\frac{1}{N}\sum_{i=1}^{N} \log p_i^{type}[t_i]$ (4), where $t_i$ is the index of the correct node type of node $x_i$.", "In the last step, coreference clusters are predicted by finding an antecedent for each AMR node.", "Taking node $x_i$ as an example, the score of a preceding node $x_j$ being its antecedent is defined as: $s(x_j, x_i) = f_m(x_j) + f_m(x_i) + f_{ant}(x_j, x_i)$, with $f_m(x_i) = \mathrm{FFNN}_m([z_i; p_i^{type}])$ and $f_{ant}(x_j, x_i) = \mathrm{FFNN}_{ant}([z_j; z_i])$ (5), where $\mathrm{FFNN}_m$ scores whether a given node is involved in any coreference link, and $\mathrm{FFNN}_{ant}$ scores whether a given node pair forms a coreference relation.", "Next, the scores are normalized into a probability distribution via a softmax layer, so the probability of $x_j$ being the antecedent of $x_i$ is: $p_{x_j, x_i} = \frac{e^{s(x_j, x_i)}}{\sum_{x' \in Y(x_i)} e^{s(x', x_i)}}$ (6), where $Y(x_i)$ denotes all precedents of $x_i$.", "The antecedent loss is a marginal log-likelihood over all correct antecedents of all nodes; given the gold cluster $\mathrm{GOLD}(x_i)$ of node $x_i$, it is: $\mathcal{L}_{ant} = -\log \prod_{i=1}^{N} \sum_{x \in Y(x_i) \cap \mathrm{GOLD}(x_i)} p_{x, x_i}$ (7).", "Finally, the training loss combines the antecedent loss and the node type prediction loss: $\mathcal{L} = \mathcal{L}_{ant} + \mathcal{L}_{type}$ (8).",
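"As an illustration of Eq. 5-6, the following PyTorch sketch scores antecedent candidates pairwise; the FFNN depth, hidden sizes and the dummy-antecedent handling (standard in Lee et al. 2017-style models) are our assumptions rather than the paper's exact implementation:",
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ffnn(in_dim: int, hidden: int, out_dim: int) -> nn.Module:
    # Two-layer feed-forward net; depth and sizes are our guesses.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class AntecedentScorer(nn.Module):
    # Pairwise antecedent scoring in the spirit of Eq. 5-6 (sketch only).
    def __init__(self, z_dim: int, num_types: int, hidden: int = 128):
        super().__init__()
        self.ffnn_m = ffnn(z_dim + num_types, hidden, 1)  # mention score f_m
        self.ffnn_ant = ffnn(2 * z_dim, hidden, 1)        # pair score f_ant

    def forward(self, z: torch.Tensor, p_type: torch.Tensor) -> torch.Tensor:
        # z: [N, z_dim] node representations; p_type: [N, num_types]
        n = z.size(0)
        f_m = self.ffnn_m(torch.cat([z, p_type], dim=-1)).squeeze(-1)  # [N]
        # pair[i, j] = [z_j ; z_i], the input of FFNN_ant in Eq. 5
        pair = torch.cat([z.unsqueeze(0).expand(n, -1, -1),
                          z.unsqueeze(1).expand(-1, n, -1)], dim=-1)
        f_ant = self.ffnn_ant(pair).squeeze(-1)                        # [N, N]
        # s[i, j] = f_m(x_j) + f_m(x_i) + f_ant(x_j, x_i)
        s = f_m.unsqueeze(0) + f_m.unsqueeze(1) + f_ant
        # Y(x_i): only preceding nodes (j < i) are candidate antecedents
        valid = torch.tril(torch.ones(n, n, dtype=torch.bool), diagonal=-1)
        s = s.masked_fill(~valid, float('-inf'))
        # Dummy antecedent (score 0) so cluster-initial nodes are well-defined
        s = torch.cat([torch.zeros(n, 1), s], dim=1)
        return F.log_softmax(s, dim=-1)  # log-probabilities over antecedents

# Hypothetical usage with random features:
# scorer = AntecedentScorer(z_dim=512, num_types=6)
# logp = scorer(torch.randn(10, 512), torch.softmax(torch.randn(10, 6), -1))
```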
"This section describes our proposed model (VG-AMRCoref), which adopts a Variational Graph Autoencoder (VGAE) to enable the cheap supervision of graph reconstruction.", "For a fair comparison, we replace the original graph encoder of AMRCoref (Figure 2) with our optimized VGAE module.", "By doing so, we also make it possible to pretrain our model on other standard AMR data for stronger robustness and generalizability.", "We illustrate the model framework in Figure 3.", "After the input representation step (Sec. 2.1), a VGAE graph encoder is applied to further encode the input graph nodes into representations with more contextual information.", "VGAE consists of a local graph encoder and a local graph decoder.", "Local Graph Encoder The local graph encoder functions as a typical graph neural network, where the node features of the $l$-th layer are defined as: $H^{(l)} = f(H^{(l-1)}, A)$ (9).", "A typical VGAE model applies a Graph Convolutional Network (GCN; Kipf and Welling 2016a) as its local graph encoder $f_{GCN}$, so Eq. 9 can be instantiated as: $f_{GCN}(H^{(l-1)}, A) = \sigma(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l-1)} W^{(l-1)})$ (10), where $\sigma(\cdot)$ is the sigmoid activation function, $\tilde{A} = A + I$ with identity matrix $I$, and $\tilde{D}$ is the diagonal node degree matrix of $\tilde{A}$.", "We study equipping the vanilla VGAE model with other major graph encoders, such as the Graph Attention Network (GAT; Velickovic et al. 2017) and the Graph Recurrent Network (GRN; Beck et al. 2018; Song et al. 2018), to better capture the contextual information of each node.", "The GAT encoder $f_{GAT}$ attends over the neighbors: $f_{GAT}(H^{(l-1)}, A) = \sigma(\sum \alpha\, W^{(l-1)} H^{(l-1)})$ with $\alpha = \mathrm{Attention}(H^{(l-1)})$ (11), and the definition of the GRN encoder $f_{GRN}$ is given in Eq. 2.", "This local graph encoder also stacks $L$ layers.", "As with the baseline (Sec. 2.2), we take the hidden features after encoding, $Z = [H^{(0)}; H^{(L)}]$, for the next step.", "In addition, $Z$ serves as the stochastic latent variable of the VGAE, modeled with a Gaussian prior $p(Z) = \prod_i \mathcal{N}(z_i \mid 0, I)$.", "For each $z_i \in Z$, the approximate posterior is: $q(z_i \mid X, A) = \mathcal{N}(z_i \mid \mu_i, \mathrm{diag}(\sigma_i^2))$ (12), where $\mu = f_{\mu}(X, A)$ and $\log \sigma = f_{\sigma}(X, A)$.", "Local Graph Decoder The hidden representation $Z$ is also fed into the local graph decoder of VGAE.", "This decoder reconstructs the edge set $A$ from $Z$.", "Typically, it is computed by a dot product: $A' = \sigma(Z Z^{T})$, with $p(A' \mid Z) = \prod_{i=1}^{N}\prod_{j=1}^{N} p(A'_{ij} \mid z_i, z_j)$ (13).", "The loss of the VGAE module, $\mathcal{L}_{VGAE}$, combines the reconstruction loss on the edge set, $\mathcal{L}_{edge}$, and the variational restriction on the hidden parameters, $\mathcal{L}_{var}$: $\mathcal{L}_{VGAE} = \mathcal{L}_{edge} + \mathcal{L}_{var} = -\mathbb{E}_{q(Z \mid X, A)}[\log p(A' \mid Z)] + \mathrm{KL}[q(Z \mid X, A)\,\|\,p(Z)]$ (14), where $\mathrm{KL}[q(\cdot)\,\|\,p(\cdot)]$ is the Kullback-Leibler divergence between $q$ and $p$.", "Next, the encoded AMR node representations $Z$ from Eq. 12 are sent to the concept identification and coreference clustering steps, which are described in Sec. 2.3 and 2.4.", "As shown in Figure 3, the overall loss $\mathcal{L}$ has three parts: the VGAE loss $\mathcal{L}_{VGAE}$, the concept type loss $\mathcal{L}_{type}$ and the antecedent loss $\mathcal{L}_{ant}$, from Eq. 14, 4 and 7, respectively: $\mathcal{L} = \mathcal{L}_{VGAE} + \mathcal{L}_{type} + \mathcal{L}_{ant}$ (15).", "Eq. 14 shows that VGAE can be trained in a self-supervised way, requiring only the node features $X$ and the edge set $A$.", "We therefore propose to pretrain the VGAE graph encoder on data where only AMR graphs (and no coreference annotation) are available.", "In this pretraining stage, the loss function $\mathcal{L}_{pt}$ is simply: $\mathcal{L}_{pt} = \mathcal{L}_{VGAE}$ (16).", "After pretraining, the VGAE graph encoder is fine-tuned on the downstream coreference resolution task.",
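"Putting Eq. 10 and Eq. 12-14 together, here is a minimal self-contained PyTorch sketch of the VGAE module; the single shared GCN layer, the dense adjacency representation and all sizes are our own simplifications, not the original code:",
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    # GCN trunk -> (mu, log sigma) -> sampled Z -> inner-product decoder.
    def __init__(self, in_dim: int, hid_dim: int, z_dim: int):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)       # shared layer
        self.w_mu = nn.Linear(hid_dim, z_dim, bias=False)      # f_mu head
        self.w_logsig = nn.Linear(hid_dim, z_dim, bias=False)  # f_sigma head

    @staticmethod
    def norm_adj(a: torch.Tensor) -> torch.Tensor:
        # D^{-1/2} (A + I) D^{-1/2}, the normalised adjacency of Eq. 10
        a_tilde = a + torch.eye(a.size(0), device=a.device)
        d_inv_sqrt = a_tilde.sum(-1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)

    def forward(self, x: torch.Tensor, a: torch.Tensor):
        # x: [N, in_dim] node features; a: [N, N] dense 0/1 adjacency matrix
        a_hat = self.norm_adj(a)
        h = torch.sigmoid(a_hat @ self.w0(x))         # GCN layer (Eq. 10)
        mu = a_hat @ self.w_mu(h)                     # mu = f_mu(X, A)
        logsig = a_hat @ self.w_logsig(h)             # log sigma = f_sigma(X, A)
        z = mu + torch.randn_like(mu) * logsig.exp()  # reparameterised Eq. 12
        logits = z @ z.t()                            # A' = sigma(Z Z^T), Eq. 13
        # L_edge: -E[log p(A'|Z)]; L_var: KL[q(Z|X,A) || N(0, I)] (Eq. 14)
        l_edge = F.binary_cross_entropy_with_logits(logits, a)
        l_var = -0.5 * torch.mean(torch.sum(
            1 + 2 * logsig - mu.pow(2) - (2 * logsig).exp(), dim=1))
        return z, l_edge + l_var                      # L_VGAE = L_edge + L_var

# Pretraining (Eq. 16) simply minimises the returned loss on AMR graphs:
# model = VGAE(in_dim=512, hid_dim=256, z_dim=128)
# z, loss = model(torch.randn(10, 512), torch.bernoulli(torch.full((10, 10), 0.2)))
```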
"Datasets Following previous work, we choose the MS-AMR benchmark (O'Gorman et al., 2018), which provides manually annotated coreference information over gold AMRs.", "It contains 273 documents for training, 9 for development and 9 for testing.", "In addition to this in-domain test set, we also evaluate on the Little Prince data (LP), annotated by Fu et al. (2021), for out-of-domain evaluation.", "For pretraining, we choose AMR bank 3.0 (LDC2020T02), the largest AMR corpus, which contains only regular sentence-level AMRs.", "Note that these AMRs are manually labeled but carry no comprehensive document-level coreference annotation, so they cannot be used for task training.", "We refer to this dataset as AMR-gold.", "To reduce the reliance on annotated data, we construct another setting, AMR-silver: we take the sentences of the AMR-gold dataset and apply a well-trained neural AMR parser (Van Noord and Bos, 2017) to generate silver AMR graphs.", "A few documents failed due to postprocessing issues, so AMR-silver differs slightly from AMR-gold, which we consider acceptable.", "The Smatch F1 score (Cai and Knight, 2013) of the silver graphs is 0.71, indicating acceptable AMR parsing quality.", "We show the dataset statistics in Table 1.", "Evaluation Metrics To be consistent with previous work (Fu et al., 2021), we report three evaluation metrics and their average F1: MUC F1 (Vilain et al., 1995), B$^3$ F1 (Bagga and Baldwin, 1998) and CEAF$_{\phi 4}$ F1 (Luo, 2005).", "Hyperparameters For all experiments, we follow Fu et al. (2021) in setting hyperparameters, to ensure a fair comparison.", "For instance, the character embedding and concept type dimensions are 32, and the concept embedding dimension is 256.", "The pretrained BERT-base-cased model is used.", "We set the number of local graph encoder layers in VGAE to 3, an empirical value following Fu et al. (2021); more details are given in the Ablation Study below.", "The optimizer is Adam (Kingma and Ba, 2017).", "We report results averaged over 5 runs with different random seeds.", "Baselines We compare with the following four models.", "Rule-based (Liu et al., 2015) merges entity nodes with the same surface string to build document AMRs.", "Pipeline (Anikina et al., 2020) combines a pretrained text-based coreference model with an AMR-to-text aligner, projecting the text-based coreference resolution results onto AMRs via the alignments.", "AMRCoref and AMRCoref+bert are the baselines (Section 2) without and with BERT features, respectively.", "Since the local graph encoder has multiple choices (GRN, GCN and GAT, as described in Eq. 9), we compare their performance on the development set in Table 2 to select the best setting.", "The results show that our model performs best with GAT, so we adopt this setting in the main experiments.", "VG-AMRCoref learns all embeddings from scratch; VG-AMRCoref+pretrain first pretrains the VGAE encoder on AMR-gold and then fine-tunes on the task; VG-AMRCoref+pretrain+bert additionally adds pretrained BERT embeddings.", "These three models use GAT as the graph encoder.", "To compare with Fu et al. (2021), who apply a GRN as the graph encoder, we also run VG-AMRCoref (GRN) with the same encoder.", "Both VG-AMRCoref (GRN) and VG-AMRCoref can be fairly compared with AMRCoref, given that they use the same training data under the same setting (without BERT).", "With GRN, our model obtains about 8.6% and 3.2% average F1 gains in-domain and out-of-domain, respectively.", "With GAT, the improvements are substantially larger: 20.7% and 9.1% average F1 in-domain and out-of-domain.", "With pretraining, VG-AMRCoref+pretrain outperforms VG-AMRCoref, improving the average F1 score by 1.8% and 5.8%.", "This shows that our graph pretraining approach, which learns from external data, is effective, especially out of domain.", "Finally, integrating BERT knowledge yields only small additional gains in the two domains.", "A possible reason is that only fixed BERT embeddings are applied.", "Since AMRCoref is undertrained, BERT improves its F1 scores by a large margin there.", "Overall, our best model outperforms the best baseline by around 11.1% and 3.4% in-domain and out-of-domain.", "Moreover, although there is a performance gap between the in-domain and out-of-domain test sets, our model improves on both.", "One may notice a significant gap between the dev and test results when comparing Tables 2 and 3; this was also reported by Fu et al. (2021).",
"After a careful check of the data, we find that the average cluster sizes of the dev and test sets are 3.6 and 5.6, respectively.", "Since a prediction counts as correct when the predicted antecedent lies in the same cluster as the current mention, a larger cluster size gives better chances of making correct decisions.", "We also compute the average distance between a mention and its closest antecedent: the values for the dev and test sets are 7.1 and 5.8, respectively.", "This further indicates that the dev set is the more difficult one.", "VGAE Loss We first study how the VGAE loss from Eq. 14 affects model performance.", "We start from a basic setting that applies GAT as the graph encoder (GAT Encoder).", "Then we add the variational restriction (+VGAE $\mathcal{L}_{var}$) as well as the reconstruction loss on the edge set (+VGAE $\mathcal{L}_{edge}$).", "We show the results on the MS-AMR test set in Table 4.", "With the variational loss $\mathcal{L}_{var}$, we see an improvement of about 2.4% average F1.", "With the edge set reconstruction loss $\mathcal{L}_{edge}$, the average F1 increases by another 1.4%.", "In total, the VGAE loss brings an overall improvement of 3.6%.", "Number of Graph Layers More graph layers do not necessarily improve performance (Zhou et al., 2020; Fu et al., 2021), due to the over-smoothing issue caused by message passing over multiple layers of the graph.", "We compare 1 to 5 graph layers in the VGAE encoder and show the average F1 scores of the two domains (test sets) in Figure 4.", "With 3 layers, the model achieves the best performance both in-domain and out-of-domain.", "The performance increases from 1 to 3 layers and decreases from 3 to 5.", "This observation is consistent with the AMRCoref model.", "Pretraining Data Size Our main results have shown that pretraining on the AMR-gold dataset makes a significant difference, especially out of domain.", "We further investigate whether our model can also benefit from silver AMR data.", "We compare the average F1 scores under different pretraining data sizes of AMR-gold and AMR-silver in Figure 5.",
"In both domains, the x-axis shows the amount of pretraining data.", "The gold and silver datasets show the same trend: more pretraining data leads to better performance.", "Although pretraining on the silver dataset is slightly worse than on the gold dataset, our model still improves.", "Notably, even though the applied AMR parser (Van Noord and Bos, 2017) is not the current state of the art, the results show that using silver data is beneficial.", "In the future, we plan to experiment with better AMR parsers and larger datasets to see whether silver data can achieve even better results than the gold dataset.", "To further understand the predicted results of our model, we compare our best-performing model (VG-AMRCoref+pretrain+bert) and the best baseline model (AMRCoref+bert) on two case studies.", "We keep a part of the content and highlight the coreference cluster tokens with different colors to indicate the ground truth, the base model prediction and our model prediction.", "Note that we show both the AMRs and the original sentences for better context, although the sentences do not directly participate in training or testing.", "This content piece shows a dialogue between two characters: me and the little prince.", "In the ground truth, the coreference cluster refers to the little prince, which can easily be recognized from the token prince in S1 and the token he in S5 and S6.", "However, to decide whether the token I in S3 belongs to this cluster, one needs to read from S1.", "Because dialogues proceed in turns, it is important to figure out which character said S3.", "Here, the answer is the little prince (the token I refers to himself), so I should be included in the cluster.", "This is challenging because it requires a deep understanding of the preceding content as well as handling a long-distance dependency.", "Our model successfully recognizes the coreference tokens in this situation.", "[Figure 6: Case study on the LP test set; S1 reads: For the little prince asked me abruptly - as if seized by a grave doubt - It is true, isn't it, that sheep eat little bushes?, shown together with its AMR graph.]", "We illustrate another example, from the MS-AMR test set, in Figure 7.", "As can be observed from the ground truth, the highlighted tokens indicate the coreference cluster of the main character of this article, I.", "The base model predicts a wrong answer in S1 (who) and misses the correct token I in that sentence.", "Both models ignore the token I in S2 and S3.",
"Consistent with the previous example, both models tend to predict only the part of the ground truth they are most confident about, in order to maintain reasonably good overall performance.", "While automatic evaluation only reflects overall performance, our case studies provide some interesting observations.", "Both the base model and our model tend to predict fewer coreference nodes than the ground truth, but our model captures larger and more accurate coreference clusters than the base model.", "Encoding AMRs using Graph Neural Networks Recently, graph neural networks (GNNs) have shown their simplicity and effectiveness in many NLP tasks, especially for encoding graph-structured input such as knowledge graphs and other task-specific graphs (Li et al., 2020a; Xiong and Gao, 2019; Yin et al., 2019; Song et al., 2020b).", "Several methods have been proposed to encode AMR graphs.", "For example, Graph Convolutional Networks (GCNs; Kipf and Welling 2016a) and their variants are well studied for AMRs (Zhang et al., 2020; Cai and Lam, 2020).", "[Figure 7: Case study on the MS-AMR test set; S1 reads: ...Well I might have signs of something on the autism spectrum but who doesn't have one or two?]", "On the other hand, relation-aware self-attention (Shaw et al., 2018), a variant of GAT (Velickovic et al., 2017), was recently proposed and has been shown to be more effective than other GNN variants at representing AMRs for text generation (Zhu et al., 2019; Song et al., 2020a).", "We have similar observations: GAT gives better results than GCN and GRN for encoding AMRs for AMR coreference resolution.", "Graph Pretraining Previous work shows that pretraining a model can bring better generalizability and performance gains, as with the pretrained language model BERT (Devlin et al., 2018).", "There is limited research on pretraining graph neural networks.", "Hu et al. (2019) propose two methods to pretrain GNNs at both the individual node level and the entire graph level.", "Although there have been a few attempts to pretrain GNNs in a BERT-like way, e.g., Graph Transformer (Dwivedi and Bresson, 2020) and knowledge graph pretraining (Yu et al., 2020), graph pretraining remains under-explored for other NLP tasks.", "Our work fills this gap by taking advantage of knowledge learned from external data.", "Coreference Resolution Coreference resolution has long been an active research topic in NLP.", "Clark and Manning (2016) propose a reinforcement learning approach to optimize a neural mention-ranking model for coreference.", "The first end-to-end neural coreference resolution method (Lee et al., 2017) builds span embeddings from context-dependent boundary representations using a head-finding attention mechanism.", "Later, Kantor and Globerson (2019) propose an Entity Equalization mechanism to capture mentions in clusters with a neural network.", "Applying these textual coreference methods to AMR graphs requires extra AMR-to-text alignment, which can cause severe error propagation.", "To promote multi-sentence AMR coreference resolution, O'Gorman et al. (2018) annotated the MS-AMR dataset, which covers coreference, implicit role coreference and bridging relations.", "Very recent work by Fu et al. (2021) presents the first end-to-end AMR coreference resolution model for multi-sentence AMRs.",
"This model achieves better and more robust performance than the selected baselines.", "This work proposed a new model, VG-AMRCoref, that is capable of self-supervised training for multi-sentence AMR coreference resolution.", "It applies VGAEs to encode document-level AMRs, significantly improving performance, by up to 11% on the F1 score.", "We further proposed a simple but effective graph pretraining method using VGAEs, which simultaneously boosts in-domain and out-of-domain performance.", "Our analysis suggests that further performance gains are possible if more automatically parsed AMR data becomes available.", "One line of future work is to apply larger-scale silver AMR datasets for pretraining to improve AMR coreference resolution.", "Another future direction is to investigate the generated document-level AMRs in more downstream tasks, such as question answering and dialogue understanding." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "abstain", "abstain", "abstain" ]